arXiv ID: 2310.01309
Title: Optimistic Online Caching for Batched Requests
Authors: Francescomaria Faticanti, Giovanni Neglia
Published: 2023-10-02T16:16:30Z
Link: http://arxiv.org/abs/2310.01309v1

# Optimistic Online Caching for Batched Requests
###### Abstract
In this paper we study online caching problems where predictions of future requests, e.g., provided by a machine learning model, are available. Typical online optimistic policies are based on the Follow-The-Regularized-Leader algorithm and have a higher computational cost than classic ones like LFU and LRU, as each update of the cache state requires solving a constrained optimization problem. In this work we analyse the behaviour of two different optimistic policies in a _batched_ case, i.e., when the cache is updated less frequently in order to amortize the update cost over time or over multiple requests. Experimental results show that such an optimistic batched approach outperforms classical caching policies both on stationary and real traces.
keywords: Caching, Online Optimization, Predictions, Batched Requests

Footnote †: journal: Computer Networks
## 1 Introduction
Caching systems represent one of the most deeply studied research areas, spanning from the design of CPU hardware to the development of caching services in cloud computing, e.g., elastic caching systems for cloud and edge [2; 3]. The main objective of such systems is to reduce specific costs for the users, the network operator or the caching service provider. Caching policies have been studied under various assumptions on the arrival process of file requests. Recently, online learning theory has been proposed to deal with caching settings where requests do not exhibit a regular pattern and can be thought of as selected by an adversary [4; 5; 6]. Such an approach to request modeling stands in contrast to traditional stochastic models, which can fail, e.g., in the case of small user populations [7].
Online caching has been studied in the online convex optimization (OCO) framework [8] starting from the work [4]. In this setting, the main objective is to design algorithms that minimize the _regret_, i.e., the difference between the cost incurred by the proposed solution and the cost of the optimal offline static solution that has complete knowledge of future requests over a fixed time horizon. Later contributions analyzed other online learning algorithms [9] and provided new lower bounds on the regret [5].
Nowadays, thanks to the huge availability of data and resources in cloud systems, reliable predictions for future requests can be generated by machine learning (ML) models [10; 11]. Online algorithms that rely on such predictions are called _optimistic_ [12; 13]. References [12; 14] provide examples of optimistic online algorithms based on the Follow-The-Regularized-Leader (FTRL) and Online Mirror Descent (OMD) frameworks [8]. Mhaisen et al. [13] presented one of the first applications of optimistic online algorithms to a caching problem. They proved that predictions, even if not perfectly accurate, can improve the performance of online algorithms. They designed an optimistic FTRL algorithm that operates on single requests, requiring the cache to be updated each time a new file request is received. These updates are computationally very expensive, as they require solving a constrained optimization problem, and can limit the applicability of online caching policies. To amortize the update cost over time and over multiple requests, a _batched_ approach can be adopted, where the caching system serves each request as it arrives, but updates the cache less frequently on the basis of the batch of requests collected since the last update [9]. We stress that the batched approach does not cause any additional delay for the user.
The novelty of this work resides in the study of optimistic online caching policies able to work on batches of requests. Our main contributions are the following:
_1)_ We present a batched version of the optimistic caching policy in [13] and prove that it still enjoys sublinear regret.
_2)_ We introduce a new optimistic batched caching policy based on the per-component-based algorithm in [12].
_3)_ We analytically characterize under which conditions each of these two caching policies outperforms the other.
_4)_ We determine when a batched operation provides better performance in terms of regret under different models for the predictions' error.
_5)_ We design optimistic versions of classical caching policies like LFU and LRU.
_6)_ We experimentally show, both on stationary traces and real ones, that our optimistic batched online caching policies outperform classical caching policies like LRU and LFU, achieving both a smaller service cost and a smaller per-request computational cost.
The remainder of this paper is organized as follows. The next section discusses the main related works. Section 3 introduces the system model and the problem description. In Section 4 we describe the optimistic caching framework and we present
the main algorithms that take into account predictions: the one presented in [13] and the one we propose. Section 5 presents an analysis of the regret bounds achieved by the two algorithms and a comparison between the single-request operation and the batched one. Experimental results are presented in Section 6. Finally, Section 7 concludes the paper.
## 2 Related Work
Caching optimization problems have been deeply studied in the literature from both the offline and the online perspective [15]. Several works have explored the offline static allocation of files under the assumption of knowing the requests [16; 17; 18]. From the online perspective, caching policies based on gradient methods have been studied under the assumption of stochastic requests [19; 20]. In these works, the proposed algorithms have been evaluated under various performance metrics. We consider adversarial requests, i.e., requests thought of as generated by an adversary trying to deteriorate the system's performance, and the regret as the main performance metric, following the recent regret-based research on caching [5; 6; 21; 22; 9]. In this context, the main goal is to design algorithms with sublinear regret with respect to the time horizon, leading to algorithms that behave on average as the optimal static solution in hindsight. Such online policies are called _no-regret_ algorithms [4].
Adversarial requests are considered in caching since Sleator and Tarjan's paper [23] through the _competitive ratio_ metric. However, as proved in [24], algorithms that ensure constant competitive ratio do not necessarily guarantee sublinear regret.
The main optimization framework adopted in this paper is Online Convex Optimization (OCO). It was first introduced by Zinkevich [25], who showed that projected gradient descent achieves sublinear regret bounds in the online setting. The works by Paschos et al. [4; 15] were the first to apply the OCO framework to caching problems, providing no-regret algorithms for the online caching problem. Bhattacharjee et al. [5] extended the work of Paschos et al., showing tighter lower bounds for the regret and proposing new online caching policies for the networked scenario based on the Follow-The-Perturbed-Leader (FTPL) algorithm. In our case, we consider the single-cache scenario and analyse the Follow-The-Regularized-Leader (FTRL) framework, which has proved to be one of the most promising approaches for taking predictions into account in the online learning setting [12]. Indeed, as shown in [26], the optimistic version of FTRL benefits more from the use of predictions than the optimistic FTPL.
The combination of predictions and caching has recently drawn attention given the significant usage of machine learning (ML) models for the computation of such predictions. The idea of exploiting predictions in the decision process has led to the design of so-called _optimistic_ online algorithms. Some works have already incorporated predictions in stochastic optimization [27; 28], assuming the requests and system perturbations to be stationary. In our work we do not make any assumption on the quality of the predictions, which can also be thought of as generated by an adversary. Mohri et al. [12] studied the regret performance of FTRL algorithms in adversarial settings including predictions, proving sublinear regret bounds. To the best of the authors' knowledge, Mhaisen et al. [13] were the first to apply optimistic online algorithms in the caching framework under adversarial settings. They proposed FTRL-based algorithms that, at each new request, update the cache state based on the previously incurred costs and the prediction for the next request. However, such algorithms imply the application of computationally expensive operations, such as the projection on the domain set of the cache states [29; 30], at each new request. To amortize the computational cost over time we propose to collect a _batch_ of requests before deciding the new cache state, leading to less frequent updates of the cache state. Theoretical analysis confirms that the size of such a batch does not affect the regret guarantees of the presented algorithms. A batched approach in caching has been presented in [29], but without taking into account predictions for future requests. Other optimistic online algorithms for caching are proposed in [26]. However, the proposed policies update the cache state at each new request, and the files are entirely stored in the cache, whilst, in line with recent works [4; 13], we assume that the cache can store arbitrary fractions of files.
The novelty of this work is in studying the performance of optimistic versions of FTRL-based algorithms dealing with batches of requests. Considering batched requests also reinforces the use of predictions in the optimization process: when the predictions come from ML models, it is indeed reasonable for the predictions to cover a set of possible future requests rather than a single future request. We show that the optimistic online batched algorithms introduced in this work achieve the best performance in terms of final miss ratio and computational cost with respect to the most practical and widely implemented caching policies.
## 3 System Description and Problem Formulation
### System Model
We consider the same system setting described in [9]. The system receives requests for equal-size files in the catalog \(\mathcal{N}=\{1,2,\ldots,N\}\). File requests are served by a single local cache or by a remote server. In particular, a request for a file \(i\in\mathcal{N}\) can be served by the cache for free or by a remote server, incurring a per-file cost \(w_{i}\in\mathbb{R}^{+}\) (more details about our cost model below). This cost can be related to the time needed to retrieve the file from a remote server, or be a monetary cost due to the utilisation of a third-party infrastructure for the file retrieval. We do not make any assumption on the request arrival process, i.e., we analyse the system in an adversarial online setting where the requests can be thought of as generated by an adversary trying to deteriorate the system's performance.
**Cache State**. The local cache has finite capacity \(k\in\{1,\ldots,N\}\), and it can store arbitrary fractions of files from the catalog as in [31; 4; 13]. We denote by \(x_{t,i}\in[0,1]\) the fraction of file \(i\) stored in the cache at time \(t\). The cache state, at time \(t\), is then
represented by the vector \(\mathbf{x}_{t}=[x_{t,i}]_{i\in\mathcal{N}}\) belonging to the set
\[\mathcal{X}=\left\{x\in[0,1]^{N}|\sum_{i\in\mathcal{N}}x_{i}=k\right\}.\]
The set \(\mathcal{X}\) is the capped simplex defined by the capacity constraint of the local cache. It is sometimes convenient to express the cache capacity as a fraction of the catalog size, i.e., \(k=\alpha N\), where \(\alpha\in[0,1]\).
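Every policy below must output a point of this capped simplex, and several of the updates discussed later reduce to Euclidean projections onto it. As a concrete reference point, here is a minimal Python sketch (ours, not part of the paper; the function name is hypothetical) of that projection, computed by bisection on the KKT multiplier; later sketches reuse this routine.

```python
import numpy as np

def project_capped_simplex(c, k, iters=60):
    """Euclidean projection of c onto X = {x in [0,1]^N : sum(x) = k}.

    KKT conditions give x_i = clip(c_i - lam, 0, 1) for a scalar lam chosen
    so that the coordinates sum to k; we find lam by bisection, since
    sum(clip(c - lam, 0, 1)) is non-increasing in lam.
    """
    lo, hi = c.min() - 1.0, c.max()  # sum is >= k at lo and <= k at hi
    for _ in range(iters):
        lam = (lo + hi) / 2
        if np.clip(c - lam, 0.0, 1.0).sum() > k:
            lo = lam
        else:
            hi = lam
    return np.clip(c - (lo + hi) / 2, 0.0, 1.0)
```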
**Cache Updates.** Caching decisions are taken after batches (potentially of different sizes) of requests have been served. Formally, at each time-slot \(t=1,\ldots,T\) the system collects \(R_{t}\) requests from the users and then it may update the cache state. The request process can then be represented as a sequence of vectors \(\mathbf{r}_{t}=(r_{t,i}\in\mathbb{N}:i\in\mathcal{N}),\forall t\), where \(r_{t,i}\) denotes the number of requests for file \(i\) in the \(t\)-th time-slot. The request process then belongs to the set
\[\mathcal{R}=\left\{\mathbf{r}_{t}\in\mathbb{N}^{N},t=1,\ldots,T|\sum_{i\in \mathcal{N}}r_{t,i}=R_{t}\right\}.\]
For some results we will rely on the following additional assumption (already proposed in [9]):
_Assumption 1_.: Every batch contains the same number of requests (i.e., \(R_{t}=R\) for all \(t\in\{1,\ldots,T\}\)) and the number of requests for each file within the batch is bounded by \(h\) (i.e., \(r_{t,i}\in\{0,\ldots,h\}\)).
**Cost Function**. For each new batch of requests \(\mathbf{r}_{t}\) the system pays a cost proportional to the fraction \((1-x_{t,i})\) of each requested file \(i\in\mathcal{N}\) missing from the local cache. More formally:
\[f_{\mathbf{r}_{t}}(\mathbf{x}_{t})=\sum_{i=1}^{N}w_{i}r_{t,i}(1-x_{t,i}). \tag{1}\]
The sum is weighted by the cost \(w_{i}\) and by the number of times \(r_{t,i}\) that file \(i\) is requested in the batch \(\mathbf{r}_{t}\).
**Predictions**. Predictions for the next batch of requests can be the output of an ML model such as a neural network. Such prediction models can be similar to those used in streaming services like Netflix to provide recommendations to users on the basis of their viewing history [10]. We assume that the predictor provides an estimate of the number of requests for each file in the next time-slot. We denote by \(\tilde{r}_{t+1,i}\) the prediction of the number of requests for file \(i\) at time \(t+1\). It is then possible to directly estimate the gradient of the cost function in that time-slot. More formally, we denote by \(\tilde{\mathbf{g}}_{t+1}\) the prediction of \(\mathbf{g}_{t+1}=\nabla f_{\mathbf{r}_{t+1}}(\mathbf{x}_{t+1})\), the gradient of the cost function at time \(t+1\), where \(\tilde{g}_{t+1,i}=-w_{i}\tilde{r}_{t+1,i}\).
### Online Caching Problem
We can fit our caching problem in the _Online Convex Optimization_ (OCO) framework [25; 32], where a learner (in our case the caching system) has to take a decision \(\mathbf{x}_{t}\) from a convex set \(\mathcal{X}\) at each time slot \(t\) before the adversary selects the cost function \(f_{\mathbf{r}_{t}}\), i.e., the learner changes the cache state before experiencing the cost. Hence, the main objective is to devise a caching policy \(\mathcal{A}\) that, at each time-slot \(t\), computes the cache state \(\mathbf{x}_{t+1}\) for the next time-slot given the current cache state \(\mathbf{x}_{t}\), the whole history up to time \(t\) (\((\mathbf{x}_{1},\mathbf{r}_{1}),\ldots,(\mathbf{x}_{t},\mathbf{r}_{t})\)), and possibly the predictions for the next time-slot. As is common in online learning, the main performance metric for the caching policy \(\mathcal{A}\) is the regret, defined as
\[R_{T}(\mathcal{A})=\sup_{\{\mathbf{r}_{1},\ldots,\mathbf{r}_{T}\}}\left\{\sum_{t=1}^{T}f_{\mathbf{r}_{t}}(\mathbf{x}_{t})-\sum_{t=1}^{T}f_{\mathbf{r}_{t}}(\mathbf{x}^{\star})\right\}. \tag{2}\]
This function denotes the difference between the total cost obtained by the online policy \(\mathcal{A}\) over a time horizon \(T\), and the total cost of the best caching state \(\mathbf{x}^{\star}\) in hindsight, i.e., \(\mathbf{x}^{\star}=\arg\min_{x\in\mathcal{X}}\sum_{t=1}^{T}f_{\mathbf{r}_{t}}( \mathbf{x})\). The supremum in (2) indicates an adversarial setting for the regret definition, i.e., the regret is measured against an adversary that generates requests trying to deteriorate the performance of the caching system. The main goal in this setting is to design a caching policy \(\mathcal{A}\) that achieves sublinear regret, \(R_{T}(\mathcal{A})=o(T)\). This ensures a zero average regret as \(T\) grows implying that the designed policy behaves on average as the optimal static one.
In what follows, given a sequence of vectors \((\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{t},\ldots)\), we denote their aggregate sum up to time \(t\) as \(\mathbf{y}_{1:t}\triangleq\sum_{s=1}^{t}\mathbf{y}_{s}\).
## 4 Optimistic Caching
As highlighted in [13], an optimistic caching policy can exploit, at each time-slot \(t\), predictions for the requests at time \(t+1\) in order to compute the caching state \(\mathbf{x}_{t+1}\). The general scheme for optimistic online caching is described in Algorithm 1. Given an initial feasible solution \(\mathbf{x}_{1}\in\mathcal{X}\), the cache operates at each time-slot \(t\) as follows: i) the new batch of requests \(\mathbf{r}_{t}\) is revealed; ii) based on the current cache state \(\mathbf{x}_{t}\), the cache incurs the cost \(f_{\mathbf{r}_{t}}(\mathbf{x}_{t})\); iii) the cache receives the prediction \(\tilde{\mathbf{g}}_{t+1}\) for the next time-slot, and iv) based on such predictions and on all the history up to time \(t\) (\((\mathbf{x}_{1},r_{1}),\ldots,(\mathbf{x}_{t},\mathbf{r}_{t})\)), it computes the next cache state \(\mathbf{x}_{t+1}\).
In the OCO literature, algorithms exploiting predictions are usually variants of the _Follow-The-Regularized-Leader_ (FTRL) algorithm [33; 12]. The classic _Follow-The-Leader_ (FTL) algorithm [34] greedily selects the next state in order to minimize the aggregate cost over the past, i.e.,
\[\mathbf{x}_{t+1}:=\arg\min_{\mathbf{x}\in\mathcal{X}}\sum_{s=1}^{t}f_{\mathbf{r}_{s}}(\mathbf{x})=\arg\min_{\mathbf{x}\in\mathcal{X}}\mathbf{g}_{1:t}^{\intercal}\mathbf{x},\]
where the last equality follows from the linearity of the cost functions. The linearity of the problem leads FTL to commit to storing some files entirely (i.e., \(\mathbf{x}_{t+1}\in\{0,1\}^{N}\)), but this can be exploited by the adversary and leads to a linear regret. The FTRL algorithm improves the performance of FTL by adding a non-linear proximal regularization term, which leads to more cautious updates. Let \(r_{t}(\mathbf{x})\) be the regularization function used
at time \(t\) (to be specified later). The FTRL algorithm's update step is given by
\[\mathbf{x}_{t+1}:=\arg\min_{\mathbf{x}\in\mathcal{X}}\left\{r_{1:t}(\mathbf{x})+ (\mathbf{g}_{1:t}+\tilde{\mathbf{g}}_{t+1})^{\top}\mathbf{x}\right\}. \tag{3}\]
As we are going to see, the function to minimize in (3) is a quadratic function. Problem (3) can then be solved through popular solvers like CVX, but the presence of the constraint \(\mathbf{x}\in\mathcal{X}\) makes the update a potentially expensive operation, motivating the batched operation we propose.
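To make the cost of a single update concrete, the following minimal sketch (ours, not from the paper; all names are hypothetical) writes update (3) as a generic constrained quadratic program in cvxpy, the Python counterpart of the CVX solvers mentioned above, assuming a regularizer given as a sum of quadratics centred at past states, as in the instances described next.

```python
import cvxpy as cp
import numpy as np

def ftrl_update(x_hist, sigma_hist, g_cum, g_pred, k):
    """One FTRL update (3): minimize r_{1:t}(x) + (g_{1:t} + g~_{t+1})^T x
    over the capped simplex, solved as a generic QP."""
    N = len(g_cum)
    x = cp.Variable(N)
    reg = sum(s / 2 * cp.sum_squares(x - xs)
              for s, xs in zip(sigma_hist, x_hist))
    objective = cp.Minimize(reg + (g_cum + g_pred) @ x)
    constraints = [x >= 0, x <= 1, cp.sum(x) == k]
    cp.Problem(objective, constraints).solve()
    return x.value
```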
In what follows, we describe two particular FTRL instances applied to our caching problem. The two instances differ in the specific regularization function used in (3) for updating the cache state (line 5 of Algorithm 1).
### Optimistic Bipartite Caching (OBC)
The first algorithm is called _Optimistic Bipartite Caching_ (OBC) and was introduced in [13] for a bipartite caching system with a single request at each time-slot. OBC adopts as proximal regularizer
\[r_{t}(\mathbf{x})=\frac{\sigma_{t}}{2}\|\mathbf{x}-\mathbf{x}_{t}\|^{2},t\geq 1, \tag{4}\]
with the following parameters
\[\sigma_{t}=\sigma(\sqrt{h_{1:t}}-\sqrt{h_{1:t-1}}),\quad\text{where}\quad h_{ t}=\|\mathbf{g}_{t}-\tilde{\mathbf{g}}_{t}\|^{2}, \tag{5}\]
and \(\sigma\geq 0\). The regularizer \(r_{1:t}(\mathbf{x})\) is 1-strongly convex with respect to the norm \(\|\mathbf{x}\|_{(t)}=\sqrt{\sigma_{1:t}}\|\mathbf{x}\|\), whose dual norm we denote by \(\|\mathbf{x}\|_{(t),\star}\). The regularizer depends on the Euclidean distance between the actual gradient \(\mathbf{g}_{t}\) and the predicted one \(\tilde{\mathbf{g}}_{t}\). Qualitatively, if predictions are very accurate, \(r_{1:t}(\mathbf{x})\) is small and then the update in (3) will focus on minimizing the (predicted) aggregate cost \((\mathbf{g}_{1:t}+\tilde{\mathbf{g}}_{t+1})^{\top}\mathbf{x}\). On the contrary, if predictions are not accurate, the regularizer will lead to more cautious updates. The regularization function can then be interpreted as an implicit adaptive learning rate [12]: as gradient predictions become more accurate, the algorithm _accelerates_ towards the minimum of the aggregate cost \((\mathbf{g}_{1:t}+\tilde{\mathbf{g}}_{t+1})^{\top}\mathbf{x}\).
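For this particular regularizer the expensive generic solve is avoidable: since \(r_{1:t}\) is a sum of quadratics centred at past states, the minimizer of (3) is the Euclidean projection of \((\sum_{s\leq t}\sigma_{s}\mathbf{x}_{s}-\mathbf{g}_{1:t}-\tilde{\mathbf{g}}_{t+1})/\sigma_{1:t}\) onto \(\mathcal{X}\). A minimal sketch of the resulting OBC loop (ours, reusing `project_capped_simplex` from the sketch in Section 3.1; all names are hypothetical):

```python
import numpy as np

# Assumes project_capped_simplex from the earlier sketch is in scope.
class OBC:
    """Sketch of OBC: with r_{1:t}(x) = sum_s sigma_s/2 ||x - x_s||^2, the
    minimizer of (3) is the projection onto X of
    (sum_s sigma_s x_s - g_{1:t} - g~_{t+1}) / sigma_{1:t}."""
    def __init__(self, N, k, sigma):
        self.k, self.sigma = k, sigma
        self.x = np.full(N, k / N)   # any feasible initial state
        self.g_cum = np.zeros(N)     # g_{1:t}
        self.h_cum = 0.0             # h_{1:t}
        self.sx_cum = np.zeros(N)    # sum_s sigma_s x_s
        self.s_cum = 0.0             # sigma_{1:t}
        self.g_pred = np.zeros(N)    # prediction g~_t for the current slot

    def step(self, w, r_t, r_pred_next):
        g = -w * r_t                                 # realized gradient of (1)
        h_t = float(np.sum((g - self.g_pred) ** 2))  # h_t = ||g_t - g~_t||^2
        sigma_t = self.sigma * (np.sqrt(self.h_cum + h_t) - np.sqrt(self.h_cum))
        self.h_cum += h_t
        self.sx_cum += sigma_t * self.x
        self.s_cum += sigma_t
        self.g_cum += g
        self.g_pred = -w * r_pred_next               # g~_{t+1}
        if self.s_cum > 0:
            z = (self.sx_cum - self.g_cum - self.g_pred) / self.s_cum
            self.x = project_capped_simplex(z, self.k)
        # if s_cum == 0 (perfect predictions so far) keep the current state
        return self.x
```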
In the next section, we present theoretical guarantees on the OBC's regret for the batched setting considered in this paper.
### Per-Coordinate Optimistic Caching (PCOC)
Mohri et al. [12, Corollary 2] proposed an FTRL algorithm where the regularization function decomposes over the coordinates and thus the acceleration occurs on a per-coordinate basis. In this case, if gradient predictions are more accurate on certain coordinates, the algorithm will accelerate the convergence of such coordinates. Here we present a generalization of this algorithm, called _Per-Coordinate Optimistic Caching_ (PCOC), which introduces a generic parameter \(\sigma\) in the definition of the regularization function:
\[r_{1:t}(\mathbf{x})=\sum_{i=1}^{N}\sum_{s=1}^{t}\frac{\sigma_{s,i}}{2}(x_{i}-x_{s,i})^{2}, \tag{6}\]
where \(\sigma_{s,i}=\sigma(\Delta_{s,i}-\Delta_{s-1,i})\), and \(\Delta_{s,i}=\sqrt{\sum_{u=1}^{s}(g_{u,i}-\tilde{g}_{u,i})^{2}}\). The function \(r_{1:t}(\mathbf{x})\) is 1-strongly convex with respect to2
Footnote 2: With some abuse of notation we use the same symbols (resp. \(\|\mathbf{x}\|_{(t)}\) and \(\|\mathbf{x}\|_{(t),\star}\)) to denote the norms and the dual norms for OBC and PCOC. The interpretation of the symbols should be clear from the context.
\[\|\mathbf{x}\|_{(t)}^{2}=\sum_{i=1}^{N}\sigma_{1:t,i}\,x_{i}^{2},\quad\text{with}\quad\|\mathbf{x}\|_{(t),\star}^{2}=\sum_{i=1}^{N}\frac{x_{i}^{2}}{\sigma_{1:t,i}}. \tag{7}\]
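With the per-coordinate regularizer the same reduction to a projection holds coordinate-wise, but the projection becomes weighted by \(\sigma_{1:t,i}\). A minimal sketch (ours; names hypothetical) of this weighted projection, again via bisection on the KKT multiplier; a PCOC loop would maintain per-coordinate running sums exactly as the OBC sketch above does:

```python
import numpy as np

def weighted_capped_projection(z, wts, k, iters=60):
    """Minimize sum_i wts_i/2 * (x_i - z_i)^2 over {x in [0,1]^N : sum(x) = k}.

    KKT gives x_i = clip(z_i - mu / wts_i, 0, 1); bisect on the multiplier mu.
    For PCOC, wts_i = sigma_{1:t,i} and z_i collects the running sums
    (sum_s sigma_{s,i} x_{s,i} - g_{1:t,i} - g~_{t+1,i}) / sigma_{1:t,i}.
    """
    span = float((wts * (np.abs(z) + 1.0)).max()) + 1.0
    lo, hi = -span, span  # sum(x) >= k at lo and <= k at hi
    for _ in range(iters):
        mu = (lo + hi) / 2
        if np.clip(z - mu / wts, 0.0, 1.0).sum() > k:
            lo = mu
        else:
            hi = mu
    return np.clip(z - (lo + hi) / 2 / wts, 0.0, 1.0)
```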
## 5 Performance Analysis
Here we prove theoretical guarantees on the regret bounds of the algorithms presented in the previous section in the case of a single cache and multiple requests at each time-slot. We show that both algorithms enjoy sublinear regret even if gradient predictions are inaccurate.
### Regret bound of OBC with single cache and R requests
We extend the regret bound in [13, Theorem 1] to the case of batched requests, and we also improve the coefficients by taking into account the capacity constraint.
**Theorem 5.1**.: _The regret of OBC is bounded as follows:_
\[R_{T}(OBC)\leq 2\sqrt{2\min\{k,N-k\}\cdot\sum_{t=1}^{T}\|\mathbf{g}_{t}- \tilde{\mathbf{g}}_{t}\|^{2}}. \tag{8}\]
Proof.: We start from the inequality in [12, Theorem 1],
\[R_{T}\leq r_{1:T}(\mathbf{x}^{\star})+\sum_{t=1}^{T}\|\mathbf{g}_{t}-\tilde{ \mathbf{g}}_{t}\|_{(t),\star}^{2},\quad\forall\mathbf{x}^{\star}\in\mathcal{X}. \tag{9}\]
Substituting the regularization functions we obtain
\[R_{T}\leq\frac{\sigma}{2}\sum_{t=1}^{T}(\sqrt{h_{1:t}}-\sqrt{h_{1:t-1}})\| \mathbf{x}^{\star}-\mathbf{x}_{t}\|^{2}+\sum_{t=1}^{T}\frac{h_{t}}{\sigma \sqrt{h_{1:t}}} \tag{10}\]
In our case, as highlighted in [4], the Euclidean diameter of \(\mathcal{X}\) is upper bounded by \(\Delta\)
\[\|\mathbf{x}-\mathbf{x}_{t}\|^{2}\leq\Delta^{2}\triangleq\min\{2k,2(N-k)\}, \forall\mathbf{x},\mathbf{x}_{t}\in\mathcal{X}. \tag{11}\]
Introducing \(\Delta\) in (10) and using [35, Lemma 3.5], it follows that
\[\begin{split}& R_{T}\leq\frac{\sigma}{2}\Delta^{2}\sum_{t=1}^{T}(\sqrt{h_{1:t}}-\sqrt{h_{1:t-1}})+\sum_{t=1}^{T}\frac{h_{t}}{\sigma\sqrt{h_{1:t}}}\\ &\leq\frac{\sigma}{2}\Delta^{2}\sqrt{h_{1:T}}+\frac{2}{\sigma}\sqrt{h_{1:T}}=\Big{(}\frac{\sigma}{2}\Delta^{2}+\frac{2}{\sigma}\Big{)}\sqrt{h_{1:T}}.\end{split} \tag{12}\]
Setting \(\sigma=2/\Delta\) we obtain the desired bound.
Theorem 5.1 shows that the regret bound depends on the cache size, and on the accuracy in the predictions. The algorithm enjoys a zero regret if the cache is able to store the complete catalog, i.e., \(k=N\), or if predictions are perfect, i.e., \(\tilde{g}_{t}=g_{t}\). On the other hand, even if predictions are imperfect, OBC may guarantee sublinear regret, as shown by the following corollary.
**Corollary 1**.: _Under Assumption 1,_
\[R_{T}\leq 2\|w\|_{\infty}\sqrt{2\min\{k,N-k\}TRh}=O(\sqrt{T}). \tag{13}\]
The proof easily follows from \(\|\mathbf{g}_{t}-\tilde{\mathbf{g}}_{t}\|^{2}\leq\|w\|_{\infty}^{2}Rh\) under _Assumption 1_.
### Regret bound of PCOC
The following proof follows the steps in [12, Corollary 2], introducing the adjustable parameter \(\sigma\geq 0\) in the definition of the regularizer (6) and taking into account that \(x_{i}\in[0,1]\) for our caching application.
**Theorem 5.2**.: _The regret of PCOC is bounded as follows_
\[R_{T}(PCOC)\leq 2\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}. \tag{14}\]
Proof.: From [12, Theorem 3], applying the regularization function defined in (6) and the norms defined in (7), we obtain
\[R_{T} \leq\frac{\sigma}{2}\sum_{i=1}^{N}\sum_{s=1}^{T}(\Delta_{s,i}-\Delta_{s-1,i})(x_{i}-x_{s,i})^{2}+\sum_{t=1}^{T}\|\mathbf{g}_{t}-\tilde{\mathbf{g}}_{t}\|_{(t),\star}^{2}\] \[\stackrel{{\text{\tiny(a)}}}{{\leq}}\frac{\sigma}{2}\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}+\sum_{t=1}^{T}\|\mathbf{g}_{t}-\tilde{\mathbf{g}}_{t}\|_{(t),\star}^{2}\] \[\stackrel{{\text{\tiny(b)}}}{{\leq}}\frac{\sigma}{2}\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}+\frac{2}{\sigma}\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}\] \[=\left(\frac{\sigma}{2}+\frac{2}{\sigma}\right)\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}, \tag{15}\]
where (a) follows from \((x_{i}-x_{s,i})^{2}\leq 1\) and the telescoping sum \(\sum_{s=1}^{t}(\Delta_{s,i}-\Delta_{s-1,i})\), and (b) from the application of [35, Lemma 3.5] to \(\sum_{t=1}^{T}\|\mathbf{g}_{t}-\tilde{\mathbf{g}}_{t}\|_{(t),\star}^{2}\) once the definition of the dual norm in (7) has been applied. To minimize the regret bound we can set \(\sigma=2\).
Similar to OBC, PCOC has zero regret under perfect predictions, and sublinear regret under _Assumption 1_.
**Corollary 2**.: _Under Assumption 1,_
\[R_{T}\leq 2Nh\|w\|_{\infty}\sqrt{T}=O(\sqrt{T}). \tag{16}\]
The proof follows from \((g_{t,i}-\tilde{g}_{t,i})^{2}\leq\|w\|_{\infty}^{2}h^{2}\) under _Assumption 1_.
### Comparison between the two regret bounds
We compare the two bounds presented above in two specific scenarios for the prediction error: i) a constant error on each component of the gradient, and ii) a prediction error proportional to the popularity of the files in the catalog.
In the first case, OBC presents a better bound than the one obtained by PCOC. In fact, if \(|g_{t,i}-\tilde{g}_{t,i}|=\epsilon\) for each \(i\) and \(t\), then \(R_{T}(OBC)=2\sqrt{2\min\{k,N-k\}NT\epsilon^{2}}\leq 2N\sqrt{T\epsilon^{2}}=R_{T}(PCOC)\).
In the second case, PCOC may perform better because it specifically takes into account the heterogeneity of the prediction error across the components. We deviate here from the adversarial request model and consider that 1) requests arrive according to a Poisson process with rate \(\lambda\), and 2) each request is for file \(i\) with probability \(p_{i}\) independently from the past [36]. Moreover, we assume the algorithm is executed every time unit, and per-file costs equal to 1. In this case, \(g_{t,i}\sim\text{Poisson}(\lambda p_{i})\) for each \(i\in\mathcal{N}\). We compute the expected value of the bounds in (8) and in (14), assuming that the cache can store a fraction \(\alpha\) of the catalog (\(k=\alpha N\)), and \(\tilde{g}_{t,i}=\lambda p_{i}\), i.e., we have a perfect predictor for the expected number of future requests. For the OBC bound, we obtain
\[\mathbb{E}\left[2\sqrt{2\alpha N\sum_{t=1}^{T}\sum_{i=1}^{N}(g_ {t,i}-\tilde{g}_{t,i})^{2}}\right]\leq\] \[\leq 2\sqrt{2\alpha N\sum_{t=1}^{T}\sum_{i=1}^{N}\mathbb{E}[(g_ {t,i}-\tilde{g}_{t,i})^{2}]}=\] \[=2\sqrt{2\alpha N\sum_{t=1}^{T}\sum_{i=1}^{N}\lambda p_{i}}=2 \sqrt{2\alpha\lambda NT}\quad. \tag{17}\]
For the PCOC bound, we obtain
\[\mathbb{E}\left[2\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}(g_{t,i}-\tilde{g}_{t,i})^{2}}\right]\leq 2\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}\mathbb{E}[(g_{t,i}-\tilde{g}_{t,i})^{2}]}\] \[=2\sum_{i=1}^{N}\sqrt{\sum_{t=1}^{T}\lambda p_{i}}=2\sum_{i=1}^{N}\sqrt{T\lambda p_{i}}\quad. \tag{18}\]
Comparing the two bounds (18) and (17), we find that (18) is smaller than (17) when \(\alpha\geq\left(\sum_{i=1}^{N}\sqrt{p_{i}}\right)^{2}/(2N\sum_{i=1}^{N}p_{i})\). If \(p_{i}\) obeys a Zipf law with exponent \(\beta\), we can numerically find from the inequality the minimum value of \(\alpha\) such that the bound in (18) is tighter. In Figure 1 we can notice that the threshold for \(\alpha\) decreases as \(\beta\) increases. In the case of a uniform popularity distribution (\(\beta=0\)), OBC outperforms PCOC unless the cache can store at least half of the catalog. As the popularity distribution becomes more skewed, PCOC is expected to perform better than OBC in terms of regret bound, except for very small caches.
### Batch Selection
We maintain the Poisson assumption on the request arrival process and evaluate the effect of request batching on the regret, focusing on the bound in Theorem 5.1 (the same analysis can be carried out for the bound in Theorem 5.2). We analyse the expected value of this bound in a general batched-requests setting where caching decisions are taken every \(\tau\) time units over an overall time interval of \(\Theta\) time units, instead of at every single request. Looking at the expected value of the regret bound we have:
\[\mathbb{E}\left[R_{\Theta/\tau}\right]\leq\mathbb{E}\left[C\sqrt{ \sum_{t=1}^{\Theta/\tau}\sum_{i=1}^{N}(g_{t,i}-\tilde{g}_{t,i})^{2}}\right], \tag{19}\]
where \(C\triangleq 2\sqrt{2\min\{k,N-k\}}\). In this case we have \(g_{t,i}\sim\text{Poisson}(\lambda_{i}\tau)\). For the predictions \(\tilde{g}_{t,i}\) we consider two options: i) they coincide with the expected number of future requests, or ii) they coincide with the requests seen during the previous time-slot.
In the first case we have
\[\mathbb{E}\left[C\sqrt{\sum_{t=1}^{\Theta/\tau}\sum_{i=1}^{N}(g_{t,i}-\tilde{g}_{t,i})^{2}}\right]\overset{\text{(a)}}{\leq}C\sqrt{\sum_{t=1}^{\Theta/\tau}\sum_{i=1}^{N}\mathbb{E}\left[(g_{t,i}-\tilde{g}_{t,i})^{2}\right]}=\\ =C\sqrt{\sum_{t=1}^{\Theta/\tau}\sum_{i=1}^{N}\text{Var}(g_{t,i})}=C\sqrt{\sum_{t=1}^{\Theta/\tau}\sum_{i=1}^{N}\lambda_{i}\tau}=C\sqrt{\Theta\sum_{i=1}^{N}\lambda_{i}}, \tag{20}\]
where (a) follows from Jensen's inequality. The right hand side of (20) suggests that batching has no effect on the algorithm's regret.
In the second case, for \(t>1\), \(\tilde{g}_{t,i}=g_{t-1,i}=n_{t,i}(\tau)\sim\text{Poisson}(\lambda_{i}\tau)\), where \(n_{t,i}(\tau)\) is the number of arrivals within the interval \([(t-2)\tau,(t-1)\tau]\). The initial prediction is given by \(\tilde{g}_{1,i}=\frac{\tau}{\tau_{0}}n_{i}(\tau_{0})\), where \(n_{i}(\tau_{0})\) is the number of arrivals for file \(i\) during a first warm-up interval of length \(\tau_{0}\). Looking at the expectation of \((g_{t,i}-\tilde{g}_{t,i})^{2}\), we have
\[\mathbb{E}[(g_{t,i}-\tilde{g}_{t,i})^{2}]=\] \[\mathbb{E}[(g_{t,i}-\tilde{g}_{t,i}-\mathbb{E}[g_{t,i}]+\mathbb{ E}[g_{t,i}]-\mathbb{E}[\tilde{g}_{t,i}]+\mathbb{E}[\tilde{g}_{t,i}])^{2}]=\] \[\text{Var}(g_{t,i})+\text{Var}(\tilde{g}_{t,i})+(\mathbb{E}[g_{t,i}]-\mathbb{E}[\tilde{g}_{t,i}])^{2}=\] \[=\begin{cases}2\lambda_{i}\tau,&t>1\\ \lambda_{i}\tau+(\frac{\tau}{\tau_{0}})^{2}\lambda_{i}\tau_{0},&t=1.\end{cases} \tag{21}\]
Summing all the terms over \(N\) and \(\Theta/\tau\), we obtain
\[\sum_{i=1}^{N}\sum_{t=1}^{\Theta/\tau}\mathbb{E}[(g_{t,i}-\tilde{g}_{t,i})^{2}]=\sum_{i=1}^{N}\left(\frac{\Theta-\tau}{\tau}2\lambda_{i}\tau+\lambda_{i}\tau+m_{i}^{2}\tau^{2}\right), \tag{22}\]
where \(m_{i}^{2}\triangleq\frac{\lambda_{i}}{\tau_{0}}\). Under these predictions, there is indeed an optimal timescale \(\tau^{*}\) for batching, namely \(\tau^{*}=\min\{\frac{\tau_{0}}{2},\Theta\}\). Hence, in case of a good initial prediction (large \(\tau_{0}\)) we should select \(\tau=\Theta\). Otherwise, in case of a less accurate initial prediction, we should choose the smaller value \(\tau=\frac{\tau_{0}}{2}\).
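As a quick numeric sanity check (ours, not from the paper): summing (22) over files gives \(\sum_{i}\lambda_{i}(2\Theta-\tau+\tau^{2}/\tau_{0})\), whose minimizer over a grid indeed sits at \(\tau_{0}/2\) when \(\tau_{0}/2\leq\Theta\). All constants below are arbitrary illustrative values.

```python
import numpy as np

Theta, tau0 = 1000.0, 40.0
lam = np.array([3.0, 1.0, 0.5])        # arbitrary per-file rates
taus = np.linspace(1.0, Theta, 4000)   # relax the constraint that Theta/tau is an integer
bound = lam.sum() * (2 * Theta - taus + taus ** 2 / tau0)
print(taus[bound.argmin()], tau0 / 2)  # both approximately 20
```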
## 6 Numerical Results
### Experimental Settings
#### 6.1.1 Datasets
We evaluated the presented approaches on both synthetic and real traces. For the synthetic case, we generated stationary synthetic traces where individual file requests are generated i.i.d. according to a Zipf distribution with parameter \(\beta\in\{0.8,1.2,1.5\}\) from a catalog of \(N=1000\) files. We evaluate the studied solutions against state-of-the-art algorithms over a horizon of \(I=10^{5}\) requests. _Batched_ algorithms have a constant batch size, i.e., \(R_{t}=R\) with \(R\in\{100,1000,2000,5000,10000\}\) for synthetic traces, and \(R\in\{10,50,100,300,1000\}\) for the real trace. The cache size \(k\) varies in \(\{10,50,100,600\}\). The real trace contains \(2\cdot 10^{4}\) requests for the \(N=10^{3}\) most popular files as measured at a server of the CDN provider Akamai [37]. In all the experiments we set \(w_{i}=1,\forall i\in\mathcal{N}\); the cost in (1) then corresponds to the total number of misses. In Figures 2, 3, 4, 5 and 6(b), given a vector of requests over the time horizon \(T\), we report the average over 30 different runs of the predictions, and we also plot the 0.95-confidence interval of the normalized average cost and the average regret.

Figure 1: OBC vs PCOC, different regimes for the regret as a function of the Zipf exponent (\(\beta\)) and the relative cache size (\(k=\alpha N\)).

Figure 2: PCOC vs. OBC
#### 6.1.2 Predictions
For the optimistic algorithms' evaluation we considered three types of predictions:
_Type 1_: the first ones are generated according to \(\mathbf{\hat{g}}_{t}=(1-\xi)\mathbf{g}_{t}+\xi\frac{R}{N}\), with \(\xi\in[0,1]\);
_Type 2_: the second ones are generated as random permutations of the correct gradients;
_Type 3_: the third case is the same described in [26], where each prediction is assumed to be correct with a probability \(\pi\).
The first type interpolates between perfect predictions (for \(\xi=0\)) and a situation where all files appear equally popular (for \(\xi=1\)). In the second type, files' future popularities are arbitrarily ranked. In the third case, given the original vector of requests \(\mathbf{r}_{t}\), the prediction vector \(\mathbf{\hat{g}}_{t}\) is generated by requesting the original files in \(\mathbf{r}_{t}\) with probability \(\pi\) and any other random file from the catalog with probability \(1-\pi\).
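The three generators can be sketched as follows (ours; `predict_requests` and its defaults are hypothetical names, and the predictor acts on the request vector of the next batch, from which the predicted gradient is \(-w_{i}\tilde{r}_{t,i}\)):

```python
import numpy as np

def predict_requests(r, kind, R, N, xi=0.5, pi=0.7, rng=np.random.default_rng(0)):
    """Sketch of the three prediction types, given the true request vector r."""
    if kind == 1:   # Type 1: interpolate towards uniform popularity
        return (1 - xi) * r + xi * R / N
    if kind == 2:   # Type 2: randomly permute the coordinates of the correct vector
        return r[rng.permutation(N)]
    # Type 3: each request is kept with probability pi, otherwise it is
    # replaced by a request for a uniformly random file from the catalog
    files = np.repeat(np.arange(N), r.astype(int))
    keep = rng.random(files.size) < pi
    files[~keep] = rng.integers(0, N, size=int((~keep).sum()))
    return np.bincount(files, minlength=N).astype(float)
```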
#### 6.1.3 Metrics
We evaluate all the algorithms according to three metrics:
i) the _Average Miss Ratio_, i.e., the total cost over the first \(t\) iterations, normalized by \(Rt\);
ii) the _Time Average Regret_ over the first \(t\) iterations;
iii) the _Amortized Cost_, i.e., the average computational time per request.
#### 6.1.4 Online Algorithms
We compare OBC and PCOC, presented in Section 4, against classical online algorithms such as LFU, LRU, and OGD [4]. Furthermore, we designed and implemented optimistic versions of LFU and LRU.
**Optimistic Least Frequently Used (OLFU)**. The algorithm takes into account predictions for the next requests but updates the cache state at each new request according to the LFU eviction policy. At the beginning of each batch, OLFU increases the frequency counter of each file appearing in the predictions for the next batch of \(R\) requests. Upon a new request, the algorithm i) updates the cache state using LFU with the updated frequencies; ii) checks whether the requested file was in the predicted batch: if it was not, OLFU increases the frequency of that file and decreases the frequency of a random file from the catalog different from the requested one. At the end of each batch the frequencies of OLFU coincide with those computed by a classic LFU policy.
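A compact sketch of OLFU as we read the bookkeeping above (ours; all names are hypothetical, whole files are cached, and the cache is taken to be the set of the \(k\) most frequent files):

```python
import numpy as np

class OLFU:
    """Sketch of OLFU: LFU counters pre-credited with the predicted batch."""
    def __init__(self, N, k, seed=0):
        self.N, self.k = N, k
        self.freq = np.zeros(N)
        self.predicted = np.zeros(N)
        self.rng = np.random.default_rng(seed)

    def start_batch(self, r_pred):
        self.predicted = r_pred.astype(float).copy()
        self.freq += self.predicted        # credit the predicted frequencies

    def request(self, i):
        if self.predicted[i] > 0:
            self.predicted[i] -= 1         # the request was anticipated
        else:
            self.freq[i] += 1              # unanticipated: credit it ...
            j = int(self.rng.integers(self.N))
            while j == i:
                j = int(self.rng.integers(self.N))
            self.freq[j] -= 1              # ... and debit a random other file
        # the cache holds the k currently most frequent files
        return set(np.argpartition(-self.freq, self.k)[: self.k])
```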
Figure 4: Average Miss Ratio of OLRU vs. LRU
Figure 3: Average Miss Ratio of OLFU vs. LFU
**Optimistic Least Recently Used (OLRU)**. This policy considers the predictions for the next \(R\) requests and treats the files within the predicted batch as the most recently requested. For each file \(i\in\mathcal{N}\), the algorithm keeps a counter, namely _last-time-requested_, indicating the last time file \(i\) was requested. In particular, given a batch of predicted requests, OLRU sets the _last-time-requested_ counter of all the predicted files to the current time. Upon a new request, the algorithm updates the cache using LRU, i.e., evicting the least recently used file from the cache according to the counters updated through the predictions.
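Analogously, a sketch of OLRU (ours; names hypothetical), where predicted files are simply stamped as most recently used:

```python
import numpy as np

class OLRU:
    """Sketch of OLRU: predicted files are marked as most recently requested."""
    def __init__(self, N, k):
        self.k = k
        self.last = np.zeros(N)   # the last-time-requested counters
        self.clock = 0

    def start_batch(self, predicted_files):
        self.clock += 1
        self.last[list(predicted_files)] = self.clock

    def request(self, i):
        self.clock += 1
        self.last[i] = self.clock
        # the cache holds the k most recently requested files
        return set(np.argpartition(-self.last, self.k)[: self.k])
```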
### Results
First we compare the optimistic versions of LFU and LRU with their classical counterparts. Afterwards, we focus on the Follow-The-Regularized-Leader-based algorithms, evaluating their performance in terms of average regret. We then compare PCOC with OLFU and with classical policies. Finally, we evaluate the optimistic versions of the presented algorithms on the Akamai trace, showing also the trade-off between the final miss ratio and the amortized cost as the batch size varies.
**OLFU vs. LFU**. Figure 3 compares OLFU against LFU for different batch sizes and different levels of prediction accuracy, with predictions of _Type 3_. We can observe that the batch size plays an important role in the performance of OLFU as the predictions become worse. Indeed, in case of perfect predictions (Figure 3(a)), the versions of OLFU with the largest batch sizes reach a better miss ratio than LFU since, as the batch size increases, there is more accurate information about the next requests. On the other hand, with very inaccurate predictions (Figure 3(c)), the higher the batch size, the worse the miss ratio, given the incorrect information carried by the perturbed predictions.
**OLRU vs. LRU**. In contrast with OLFU, as highlighted in Figure 4, the optimistic version of LRU performs better for small batch sizes as the predictions' accuracy deteriorates. Indeed, the bigger the batch, the fewer the cache updates. In this manner, the counters of all the files within the batch are updated less frequently and become stale. Beyond this staleness, the performance of the policy deteriorates as the inaccuracy of the predictions increases.
**PCOC vs. OBC**. We compare the two algorithms for different capacities, i.e., \(k\in\{50,100,600\}\), and different exponents of the Zipf distribution, i.e., \(\beta\in\{0.8,1.5\}\), with \(R=1000\) and predictions of _Type 3_. As shown in Figure 2, the difference between the two algorithms becomes significant as the values of \(\alpha\) and \(\beta\) increase. This confirms the results of Figure 1, where the difference between the two regrets becomes more evident for higher values of the cache size and the Zipf exponent. In particular, when \(k=600\), i.e., the cache can store at least half of the catalog, PCOC clearly outperforms OBC for all the values of \(\beta\).
**PCOC vs. OLFU**. Figure 5 reports on the comparison between PCOC and OLFU for different batch sizes and levels of accuracy in predictions of _Type 3_. For all the algorithms we set the initial cache state as \(\mathbf{x}_{1}:=\arg\max_{\mathbf{x}\in\mathcal{X}}\{\tilde{\mathbf{r}}_{1}^{\mathsf{T}}\mathbf{x}\}\), i.e., we entirely store the files with the highest number of requests in the first predicted batch. We can observe that for high levels of accuracy in the predictions (Figure 5(a) and Figure 5(b)) PCOC outperforms OLFU for all the different batch sizes. When the predictions have very low accuracy (\(\pi=0.1\)) PCOC shows the same performance as OLFU for \(R=100\); however, it remains competitive, still reaching convergence even for higher values of \(R\).

Figure 5: Average Miss Ratio of PCOC vs. OLFU

Figure 6: PCOC vs. Classic Policies
**PCOC vs. Classic Policies**. Figures 6(a) and 6(b) show the performance of PCOC against classical online algorithms for \(\beta=0.9\) and \(\beta=1.2\), with \(R=100\) and predictions of _Type 1_ and _Type 2_. We can notice the benefit of including predictions in the decision process by looking at the lower miss ratio of PCOC against LFU. PCOC outperforms LFU even for a noise factor \(\xi\) as large as \(0.9\), and it is still competitive with LFU when predictions are randomly scrambled. This confirms the advantage of the optimistic nature of such algorithms.
**Akamai Trace**. Figure 7 shows the performance of PCOC on the Akamai trace for \(k=10\) with predictions of _Type 1_. Figure 7(a) compares PCOC against OGD, LFU and LRU. The latter two policies take a decision at each file request, whilst PCOC and OGD update the cache every \(R=10\) requests. Nevertheless, PCOC outperforms the classic policies. Furthermore, even in a non-stationary case, the predictions can help in reducing the miss ratio. Figure 7(b) shows the comparison between PCOC and OLFU for different batch sizes and with predictions of _Type 3_ with \(\pi=0.7\). We can notice how the difference between the two policies becomes more evident on the real trace, even for the higher batch sizes of PCOC. Finally, in Figure 7(c), we compare different versions of PCOC that update the local cache every \(R\in\{50,100,300,500,1000\}\) requests. The amortized cost vanishes as the value of \(R\) increases (since the number of projections performed in the optimization process diminishes), at the cost of a higher miss ratio. However, this confirms the applicability of such a batched method with less frequent updates, since both the final miss ratio and the time complexity reached by PCOC with \(R=300\) and \(R=500\) are better than the performance achieved by the policies most used in practice, such as LFU and LRU.
## 7 Conclusions
We presented online optimistic caching algorithms that enjoy sublinear regret in the case of batched requests. First, we studied the conditions under which PCOC has a better regret bound than OBC. Secondly, we showed that the per-component-based solution (PCOC) outperforms classic caching policies and their optimistic versions under different conditions. Finally, we showed that, on a real trace, a batched approach presents better performance in terms of final miss ratio and amortized cost compared to classical caching policies.
---

arXiv ID: 2308.12881
Title: Approximate quadratic varieties
Authors: Luka Milićević
Published: 2023-08-24T16:00:03Z
Link: http://arxiv.org/abs/2308.12881v1

# Approximate quadratic varieties
###### Abstract
A classical result in additive combinatorics, which is a combination of Balog-Szemeredi-Gowers theorem and a variant of Freiman's theorem due to Ruzsa, says that if a subset \(A\) of \(\mathbb{F}_{p}^{n}\) contains at least \(c|A|^{3}\) additive quadruples, then there exists a subspace \(V\), comparable in size to \(A\), such that \(|A\cap V|\geq\Omega_{c}(|A|)\). Motivated by the fact that higher order approximate algebraic structures play an important role in the theory of uniformity norms, it would be of interest to find higher order analogues of the mentioned result.
In this paper, we study a quadratic version of the approximate property in question, namely what it means for a set to be an approximate quadratic variety. It turns out that information on the number of additive cubes, which are 8-tuples of the form \((x,x+a,x+b,x+c,x+a+b,x+a+c,x+b+c,x+a+b+c)\), in a set is insufficient on its own to guarantee quadratic structure, and it is necessary to restrict linear structure in a given set, which is a natural assumption in this context. With this in mind, we say that a subset \(V\) of a finite vector space \(G\) is a \((c_{0},\delta,\varepsilon)\)-_approximate quadratic variety_ if \(|V|=\delta|G|\), \(\|\mathbbm{1}_{V}-\delta\|_{\mathsf{U}^{2}}\leq\varepsilon\) and \(V\) contains at least \(c_{0}\delta^{7}|G|^{4}\) additive cubes. Our main result is the structure theorem for approximate quadratic varieties, stating that such a set has a large intersection with an exact quadratic variety of comparable size.
## 1 Introduction
Approximate algebraic structures are a major topic of study in additive combinatorics. First result concerning such structures, and a cornerstone of the area, is Freiman's theorem [7]. As we shall work in vector spaces over a fixed prime field \(\mathbb{F}_{p}\) in this paper, we begin by recalling an analogue of Freiman's theorem in that setting, due to Ruzsa [22].
**Theorem 1** (Ruzsa [22]).: _Let \(G\) be a finite-dimensional vector space over a prime field \(\mathbb{F}_{p}\). Suppose that \(A\subseteq G\) satisfies \(|A+A|\leq K|A|\).1 Then there exists a subspace \(U\leq G\) of size \(|U|\leq O_{K}(|A|)\) such that \(A\subseteq U\)._
Footnote 1: This condition is called _small doubling_ and sets obeying this condition are traditionally called _approximate groups_.
Let us record here a variant of that theorem which has a condition involving additive quadruples, which are quadruples \((x,y,z,w)\) such that \(x+y=z+w\), instead of the small doubling assumption. The variant is obtained after an application of the Balog-Szemeredi-Gowers theorem [3, 8], and appears more naturally in the context of higher order Fourier analysis.
**Theorem 2**.: _Let \(G\) be a finite-dimensional vector space over a prime field \(\mathbb{F}_{p}\). Suppose that \(A\subseteq G\) and that \(A\) has at least \(c|A|^{3}\) additive quadruples. Then there exists a subspace \(U\leq G\) of size \(|U|\leq O_{c}(|A|)\) such that \(|U\cap A|\geq\Omega_{c}(|A|)\)._
We may think of the sets satisfying the assumptions of Theorem 2 as _approximate cosets_ (we choose this wording as the term _approximate subgroup_ has a standard meaning of having a small doubling). The way such a result is relevant in higher order Fourier analysis is via following consequence, which was proved by Gowers and plays an important role in his proof of an inverse theorem for \(\mathsf{U}^{3}\) uniformity norm [8] (we shall recall the definition of this norm slightly later).
**Theorem 3** (Gowers [8]).: _Let \(G\) and \(H\) be finite-dimensional vector spaces over \(\mathbb{F}_{p}\). Let \(A\subseteq G\) be a subset and let \(\phi\colon A\to H\) be a map which respects at least \(c|G|^{3}\) additive quadruples in \(A\), meaning that \(\phi(x)+\phi(y)=\phi(z)+\phi(w)\) holds for at least \(c|G|^{3}\) quadruples \((x,y,z,w)\in A^{4}\) such that \(x+y=z+w\). Then there exists an affine map \(\Phi\colon G\to H\) such that \(\phi=\Phi\) holds for at least \(\Omega_{c}(|G|)\) points in \(A\)._
Strictly speaking, Gowers obtained Theorem 3 in the case of cyclic groups, but the same proof works in \(\mathbb{F}_{p}^{n}\). We may think of this theorem as a structure theorem for approximate affine homomorphisms. This is an approximate structure in the context of maps between vector spaces.
Finally, let us mention another approximate algebraic structure. We have already mentioned in passing the uniformity norms, whose definition we now recall. Firstly, we recall that the _discrete multiplicative derivative operator_\(\boldsymbol{\Delta}_{\!a}\) for shift \(a\in G\) is defined by \(\boldsymbol{\Delta}_{\!a}f(x)=f(x+a)\overline{f(x)}\) for functions \(f\colon G\to\mathbb{C}\).
Let \(f\colon G\to\mathbb{C}\) be a function. The _Gowers uniformity norm_\(\|f\|_{\mathsf{U}^{k}}\), defined by Gowers in [9], is given by the formula
\[\|f\|_{\mathsf{U}^{k}}=\Big{(}\mathbb{E}_{x,a_{1},\ldots,a_{k}}\boldsymbol{\Delta}_{\!a_{1}}\ldots\boldsymbol{\Delta}_{\!a_{k}}f(x)\Big{)}^{1/2^{k}}.\]
These norms are defined by a combinatorial average, which in the case of \(\mathsf{U}^{3}\) norm is taken over additive cubes, which will be crucial for the rest of the paper. To be precise, an _additive cube_ is an \(8\)-tuple of the form
\[\Big{(}x,x+a,x+b,x+c,x+a+b,x+a+c,x+b+c,x+a+b+c\Big{)}\]
where \(x,a,b,c\in G\) are any \(4\) elements of \(G\).
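As a small numeric illustration (ours, not from the paper), for \(k=2\) the combinatorial average over additive quadruples can be checked against the Fourier identity \(\|f\|_{\mathsf{U}^{2}}^{4}=\sum_{r}|\hat{f}(r)|^{4}\) by brute force on a random phase function over \(\mathbb{F}_{3}^{2}\); all names below are ours.

```python
import numpy as np
from itertools import product

p, n = 3, 2
G = list(product(range(p), repeat=n))
rng = np.random.default_rng(1)
f = {x: np.exp(2j * np.pi * rng.integers(p) / p) for x in G}

def add(x, y):
    return tuple((u + v) % p for u, v in zip(x, y))

# ||f||_{U^2}^4 as the average of Delta_a Delta_b f(x) over x, a, b
u2_fourth = np.mean([
    f[add(add(x, a), b)] * np.conj(f[add(x, a)]) * np.conj(f[add(x, b)]) * f[x]
    for x in G for a in G for b in G
])

# the same quantity via the Fourier identity sum_r |f^(r)|^4
def fhat(r):
    return np.mean([f[x] * np.exp(-2j * np.pi * sum(ri * xi for ri, xi in zip(r, x)) / p)
                    for x in G])

print(u2_fourth.real, sum(abs(fhat(r)) ** 4 for r in G))  # agree up to rounding
```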
In his inverse theorem for \(\mathsf{U}^{3}\) norm, Gowers obtained a partial description of functions with large value of the norm, which was sufficient for the purposes of proving Szemeredi's theorem for arithmetic progressions of length \(4\). A qualitatively optimal inverse theorem was later obtained by Green and Tao [14] for groups of odd order. In the vector space case their result is the following.
**Theorem 4** (Green and Tao [14]).: _Suppose that \(f\colon G\to\mathbb{D}\) satisfies \(\|f\|_{\mathsf{U}^{3}}\geq c_{0}\) and assume \(p\geq 3\). Then there exists a quadratic polynomial \(q\colon G\to\mathbb{F}_{p}\) such that \(\Big{|}\operatorname{\mathbb{E}}_{x}f(x)\exp\Big{(}\frac{2\pi iq(x)}{p}\Big{)}\Big{|}\geq\Omega_{c_{0}}(1)\)._
We may thus think of functions with large \(\mathsf{U}^{3}\) norm as approximate quadratic phases.
When it comes to higher order analogues of the results above, we have inverse theorems for \(\mathsf{U}^{k}\) norms for higher \(k\). These were proved by Green, Tao and Ziegler [17] for cyclic groups, by Bergelson, Tao and Ziegler [4] for finite vector spaces over prime fields of sufficiently large characteristic, and by Tao and Ziegler [25] for all finite vector spaces. In particular, in vector spaces in the case of high characteristic where \(p\geq k\), Theorem 4 of Green and Tao generalizes to the following result.
**Theorem 5** (Bergelson, Tao, Ziegler [4]).: _Suppose that \(f\colon G\to\mathbb{D}\) satisfies \(\|f\|_{\mathsf{U}^{k}}\geq c_{0}\) and assume \(p\geq k\). Then there exists a degree \(k-1\) polynomial \(q\colon G\to\mathbb{F}_{p}\) such that \(\Big{|}\,\mathbb{E}_{x}\,f(x)\exp\Big{(}\frac{2\pi iq(x)}{p}\Big{)}\Big{|}\geq \Omega_{c_{0}}(1)\)._
Similarly to Theorem 4, we may think of this theorem as a structure theorem for approximate polynomial forms.
Returning now to Theorem 3, we remark that there are satisfactory higher order generalizations of approximate affine homomorphisms, namely approximate polynomials and their multilinear variant, Freiman multihomomorphisms, whose definitions we now recall.
Let \(G\) and \(H\) be two abelian groups, let \(A\subseteq G\) be a subset and let \(F\colon A\to H\) be a map. Similarly to \(\boldsymbol{\Delta}_{\!a}\), let us write \(\Delta_{a}F\) for the map from \(A\cap(A-a)\) to \(H\) given by \(x\mapsto F(x+a)-F(x)\). We say that \(F\) is an \(\varepsilon\)_-approximate polynomial of degree at most \(d\)_ if \(\Delta_{a_{1}}\ldots\Delta_{a_{d+1}}F(x)=0\) holds for at least \(\varepsilon|G|^{d+2}\) choices of \((d+2)\)-tuples \((a_{1},\ldots,a_{d+1},x)\in G^{d+2}\) (and all \(2^{d+1}\) resulting arguments of \(F\) lie in its domain).
On the other hand, a _Freiman multihomomorphism of order \(d\)_ is a map \(\Phi\colon A\to H\), where now \(A\subseteq G^{d}\), such that \(\Phi\) is a Freiman homomorphism in each principal direction, namely for each direction \(i\in[d]\), and each element \((x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{d})\) of \(G_{1}\times\cdots\times G_{i-1}\times G_{i+1}\times\cdots\times G_{d}\), the map that sends each \(y_{i}\) such that \((x_{1},\ldots,x_{i-1},y_{i},x_{i+1},\ldots,x_{d})\in A\) to \(\Phi(x_{1},\ldots,x_{i-1},y_{i},x_{i+1},\ldots,x_{d})\) respects all additive quadruples.
Manners [20] proved a structure theorem for approximate polynomials over cyclic groups and Gowers and the author [11] proved a structure theorem for Freiman multihomomorphisms in finite vector spaces. Furthermore, in the same paper, Gowers and the author obtained a structure theorem for approximate polynomials in finite vector spaces in the case of high characteristic.
Our goal in this paper is to take a step towards completing the picture and to study the quadratic analogue of Theorem 2. Table 1 summarizes the discussion of the various approximate algebraic structures of linear and higher order.
Returning to Theorem 2, we may phrase the assumption on the number of additive quadruples as follows. Let \(A\) be a subset of \(G\) and let \(x,a,b\in G\) be chosen uniformly and independently at random. Then
\[\mathbb{P}(x+a+b\in A|x+a,x+b,x\in A)=\frac{\mathbb{P}(x+a+b,x+a,x+b,x\in A)}{ \mathbb{P}(x+a,x+b,x\in A)}=\frac{Q}{|A|^{3}},\]
where \(Q\) is the number of additive quadruples in \(A\). Thus, \(A\) has at least \(c_{0}|A|^{3}\) additive quadruples if and only if the probability of 'completing an additive quadruple' \(\mathbb{P}(x+a+b\in A|x+a,x+b,x\in A)\) is at least \(c_{0}\). In other words, \(A\) is approximately closed under the relevant operation.
In order to find a quadratic generalization, it is natural to count the additive cubes. Observe that if \(V=\{x\in G\colon q(x)=0\}\) is a quadratic variety defined by a quadratic map \(q\colon G\to\mathbb{F}_{p}^{r}\), then \(V\) is closed under 'completing additive cubes' due to the following identity
\[q(x+a+b+c)=q(x+a+b)+q(x+a+c)+q(x+b+c)-q(x+a)-q(x+b)-q(x+c)+q(x).\]
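This identity is exactly the statement that the third discrete derivative of a quadratic vanishes; the following brute-force check (ours, for an arbitrary random quadratic over \(\mathbb{F}_{3}\); all names are hypothetical) verifies it on random samples.

```python
import numpy as np

# Any quadratic q : F_p^n -> F_p satisfies Delta_a Delta_b Delta_c q = 0,
# so the variety {q = 0} is closed under completing additive cubes.
p, n = 3, 4
rng = np.random.default_rng(2)
A = rng.integers(p, size=(n, n))           # random quadratic part
bvec = rng.integers(p, size=n)             # random linear part

def q(x):
    return int(x @ A @ x + bvec @ x) % p   # evaluate over the integers, reduce mod p

for _ in range(1000):
    x, a, b, c = (rng.integers(p, size=n) for _ in range(4))
    lhs = q(x + a + b + c)
    rhs = (q(x + a + b) + q(x + a + c) + q(x + b + c)
           - q(x + a) - q(x + b) - q(x + c) + q(x)) % p
    assert lhs == rhs
print("identity verified on 1000 random samples")
```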
However, this time being approximately closed under completing additive cubes is no longer sufficient on its own to guarantee that \(V\) has any quadratic structure at all. Consider the following example. Let \(G=U\oplus T\) be a direct sum and let \(\pi\colon G\to T\) be the projection onto \(T\) associated with this direct sum. Let \(S\) be a Sidon subset (a set without non-trivial additive quadruples) of \(T\) and set \(V=U+S\). Observe that if \(x+a+b,x+a,x+b,x\in V\) then we have \(\pi(x+a+b)+\pi(x)=\pi(x+a)+\pi(x+b)\), and, since \(S\) is Sidon, this means that \(\pi(x)\in\{\pi(x+a),\pi(x+b)\}\), so \(a\in U\) or \(b\in U\). Suppose now that the 7 points \(x,x+a,x+b,x+c,x+a+b,x+a+c,x+b+c\) all belong to \(V\). If \(a,b,c\in U\), then \(x+a+b+c\in V\). Otherwise, suppose without loss of generality that \(a\notin U\). The observation about additive quadruples in \(V\) then implies that \(b,c\in U\), so \(x+a+b+c\in x+a+U\subseteq V\); thus \(V\) is indeed closed under completing additive cubes.
The set \(V\) above has no quadratic structure, and has some, but not significant, linear structure. Since our aim is to obtain a purely quadratic structure theorem, we impose a further condition on \(V\) and assume that it has negligible linear structure. This leads us to the following definition, formulated by Gowers (personal communication), which also forbids the example above.
**Definition 6**.: Let \(V\subseteq G\). We say that \(V\) is a _\((c_{0},\delta,\varepsilon)\)-approximate quadratic variety_ if \(|V|=\delta|G|\), \(\|\mathbbm{1}_{V}-\delta\|_{\mathsf{U}^{2}}\leq\varepsilon\) and \(\mathbb{E}_{x,a,b,c}\,\boldsymbol{\Delta}_{\!a}\boldsymbol{\Delta}_{\!b}\boldsymbol{\Delta}_{\!c}\mathbbm{1}_{V}(x)=c_{0}\delta^{7}\).
We think of parameters \(c_{0},\delta\) and \(\varepsilon\) as satisfying the relationship \(c_{0}\gg\delta\gg\varepsilon\). As a further motivation for imposing the \(\mathsf{U}^{2}\) condition, we note the following connection with approximate quadratic polynomials.
**Proposition 7**.: _Let \(G\) and \(H\) be two finite-dimensional vector spaces over \(\mathbb{F}_{p}\) and let \(A\subseteq G\). Let \(F\colon A\to H\) satisfy \(\Delta_{a}\Delta_{b}\Delta_{c}F(x)=0\) for at least \(c_{0}|G|^{4}\) choices of \((x,a,b,c)\in G^{4}\) (and all 8 arguments
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Degree & Sets & Maps \(G\to H\) & Forms \(G\to\mathbb{F}_{p}\) \\ \hline Linear & Theorem 2 & Theorem 3 & Inverse theorem for \(\|\cdot\|_{\mathsf{U}^{2}}\) norm \\ \hline Higher & **??** & Structure theorem for approximate polynomials and Freiman multihomomorphisms & Inverse theorem for \(\|\cdot\|_{\mathsf{U}^{k}}\) norm \\ \hline \end{tabular}
\end{table}
Table 1: Various approximate algebraic structures.
of \(F\) belong to \(A\)) and let \(\xi\) be a positive quantity. Let \(d\in\mathbb{N}\) and let \(\varepsilon>0\) be given. Provided that \(\varepsilon\) is sufficiently small and \(\dim G\) is sufficiently large, both in terms of \(c_{0},d\) and \(\xi\), the following holds._
_Suppose that \(F\) respects at most \(\varepsilon|G|^{3}\) additive quadruples in \(A\). Then there exists a coset \(w_{0}+W\) of a subspace of codimension \(O_{\varepsilon}(1)\) such that, if we choose a subspace \(U\) of codimension \(d\) uniformly at random, then the probability that the set_
\[V=\{x\in A\colon x\in w_{0}+W,F(x)\in U\}\]
_is a \((c^{\prime},\delta,\xi)\)-approximate quadratic variety for some \(\delta\in[2^{-9}c_{0}^{8}p^{-d},2p^{-d}]\) and \(c^{\prime}\in[c_{0}^{8}2^{-9},2]\), is at least \(0.99\)._
We remark that the assumption of \(F\) respecting very few additive quadruples is quite natural, as otherwise it becomes an approximate homomorphism instead, which is much simpler and to which we may apply Theorem 3.
Gowers also conjectured that an approximate quadratic variety necessarily has a large intersection with an exact quadratic variety of comparable size. Notice the resemblance to Theorem 2.
**Conjecture 8**.: _Fix a prime \(p\geq 3\). Let \(V\subseteq G\) be a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety. Suppose that \(\varepsilon\) is sufficiently small in terms of \(\delta\) and \(c_{0}\). Then there exists a quadratic variety \(Q\) such that \(|Q|\leq O_{c_{0}}(|V|)\) and \(|Q\cap V|\geq\Omega_{c_{0}}(|V|)\)._
Having discussed approximate quadratic varieties and the motivation for the definition, we are now ready to state the main result of this paper, which is the resolution of Conjecture 8 with reasonable dependencies on \(c_{0}\) and \(\delta\).
**Theorem 9**.: _There exists an absolute constant \(D\geq 1\) such that the following holds. Assume that \(p\geq 3\). Let \(V\subseteq G\) be a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety. Suppose that_
\[\varepsilon\leq(2^{-1}\delta)^{\exp\big{(}\log^{D}(O_{p}(c_{0}^{-1}))\big{)}}.\]
_Then there exists a quadratic variety \(Q\) such that \(|Q|\leq\exp\Big{(}\exp(\log^{D}(O_{p}(c_{0}^{-1})))\Big{)}\cdot|V|\) and \(|Q\cap V|\geq\exp\Big{(}-\exp(\log^{D}(O_{p}(c_{0}^{-1})))\Big{)}\cdot|V|\)._
Let us remark that if we only had the assumptions \(|V|=\delta|G|\) and \(\mathbb{E}_{x,a,b,c}\,\mathbf{\Delta}_{a,b,c}\mathbb{1}_{V}(x)=c_{0}\delta^{7}\), the theorem above would imply that \(V\) is either closely related to a quadratic variety or that \(\|\mathbb{1}_{V}-\delta\|_{\mathsf{U}^{2}}\geq\Omega_{c_{0},\delta}(1)\), becoming \(\Omega_{\delta}(1)\) when \(c_{0}\geq\delta\), which would prove the existence of some weak linear structure. A strong linear structure would mean obtaining a non-trivial Fourier coefficient in the large spectrum of \(\mathbb{1}_{V}\), namely finding \(r\neq 0\) such that \(|\widehat{\mathbb{1}_{V}}(r)|\geq\Omega_{c_{0}}(\delta)\). However, the example based on Sidon sets shows that for some absolute constant \(\theta>0\) we may have \(|\widehat{\mathbb{1}_{V}}(r)|\leq\delta^{1+\theta}\) for all non-zero \(r\), and so one cannot hope for a stronger result in a qualitative sense.
Furthermore, the methods of this paper could likely prove the theorem above in the case of low characteristic (when \(p=2\)) as well, but that would require the use of non-classical polynomials [25] in the final stages of the proof and a suitable modification of the statement of the result.
Finally, let us also note the resemblance to nilspaces, introduced by Antolin Camarena and Szegedy [2], whose definition we now recall. A _nilspace_ is a set \(X\) together with a collection of sets \({\sf C}^{n}(X)\subseteq X^{\{0,1\}^{n}}\), for each non-negative integer \(n\), satisfying the following axioms:
* (Composition) For every morphism \(\phi\colon\{0,1\}^{m}\to\{0,1\}^{n}\) and every \(c\in{\sf C}^{n}(X)\), we have \(c\circ\phi\in{\sf C}^{m}(X)\).
* (Ergodicity) \({\sf C}^{1}(X)=X^{\{0,1\}}\).
* (Corner completion) Let \(c^{\prime}\colon\{0,1\}^{n}\setminus\{(1,\dots,1)\}\to X\) be such that every restriction of \(c^{\prime}\) to an \((n-1)\)-face containing \((0,\dots,0)\) is in \({\sf C}^{n-1}(X)\). Then there exists \(c\in{\sf C}^{n}(X)\) such that \(c(v)=c^{\prime}(v)\) for all \(v\neq(1,\dots,1)\).
If every \((k+1)\)-corner has a unique completion, then we say that \(X\) is a \(k\)_-step nilspace_. (In the theory of nilspaces, one also imposes topology on cubes, but here we are interested primarily in algebraic aspects, so we skip such details.) Such nilspaces occur naturally in the higher order Fourier analysis as they arise in the nilspace approach to the inverse theorems for uniformity norms.
The key assumption in Theorem 9 can be expressed as
\[{\mathbb{P}}(x+a+b+c\in V|x,x+a,x+b,x+c,x+a+b,x+a+c,x+b+c\in V)\geq c_{0}\]
so we may think of an approximate quadratic variety as a combinatorial counterpart of a 2-step nilspace (though in the case of nilspaces the cube collections are abstract and thus more general).
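To justify this interpretation quantitatively (a short calculation, using only the properties already stated): the event in the conditioning involves just seven of the eight cube points, and its probability is \(\delta^{7}+O(\varepsilon)\) by repeated application of the \(\mathsf{U}^{2}\) assumption (in the manner of Lemma 21 below), so

\[\mathbb{P}(x+a+b+c\in V\,|\,x,x+a,x+b,x+c,x+a+b,x+a+c,x+b+c\in V)=\frac{c_{0}\delta^{7}}{\delta^{7}+O(\varepsilon)}\geq c_{0}/2,\]

say, once \(\varepsilon\) is sufficiently small in terms of \(\delta\), which is the regime \(c_{0}\gg\delta\gg\varepsilon\) we always work in.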
### 1.1. Proof overview
Let us begin with some motivation. If \(V\) is a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety, then we expect that \(V\) is a \(c_{0}\)-dense subset of a quadratic variety \(Q=\{x\in G\colon\beta(x,x)=0\}\), where \(\beta\colon G\times G\to\mathbb{F}_{p}^{r}\) is a symmetric bilinear map, and we know that \(V\) is very \(\mathsf{U}^{2}\)-quasirandom. Furthermore, as we think of the parameters \(c_{0},\delta\) and \(\varepsilon\) as satisfying the relationship \(c_{0}\gg\delta\gg\varepsilon\), \(Q\) has to have density comparable to \(\delta\) and to inherit \(\mathsf{U}^{2}\)-quasirandomness from \(V\). Since \(Q\) is a quadratic variety, this in turn implies that \(\beta\) is of very high rank. Our goal now is to identify \(\beta\) using \(V\). To that end, consider the intersection \(V\cap(V-a)\). This is a subset of \(\{x\in G\colon\beta(x+a,x+a)=\beta(x,x)=0\}=\{x\in G\colon\beta(a,x)=-2^{-1}\beta(a,a),\,\beta(x,x)=0\}=Q\cap\{x\in G\colon\beta(a,x)=-2^{-1}\beta(a,a)\}\). Thus, we expect \(V\cap(V-a)\) to be a \(\mathsf{U}^{2}\)-quasirandom subset of the subspace coset \(\{x\in G\colon\beta(a,x)=-2^{-1}\beta(a,a)\}\). Given \(\mathsf{U}^{2}\)-quasirandomness, the set of large values of the convolution \(\mathbb{1}_{V\cap V-a}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a}\) should then be equal to the whole subspace \(\{x\in G\colon\beta(a,x)=0\}\). This motivates the study of the subspaces \(W_{a}\) that arise from the convolution of \(\mathbb{1}_{V\cap V-a}\) with itself via methods such as the Bogolyubov argument. We expect these subspaces \(W_{a}\) to be related to \(\{x\in G\colon\beta(a,x)=0\}\).
Hence, we now have a collection of subspaces \(W_{a}\), for \(a\in G\), of density comparable to \(\delta\). However, this is not an arbitrary family of subspaces. In fact, the algebraic structure of the indices plays an important role. Namely, if \(a_{1},a_{2},a_{3},a_{4}\) form an additive quadruple, that is, \(a_{1}+a_{2}=a_{3}+a_{4}\), we have that \(x\in W_{a_{1}}\cap W_{a_{2}}\cap W_{a_{3}}\) implies \(0=\beta(a_{1},x)+\beta(a_{2},x)-\beta(a_{3},x)=\beta(a_{1}+a_{2}-a_{3},x)=\beta(a_{4},x)\) and thus we expect that a relationship along the lines of
\[W_{a_{1}}\cap W_{a_{2}}\cap W_{a_{3}}\subseteq W_{a_{4}} \tag{1}\]
holds. The first major step of the proof of Theorem 9 achieves this and it appears as Theorem 20 later in the paper.
**Theorem 10** (**Step 1**).: _There exists an absolute constant \(D\geq 1\) such that the following holds. Let \(V\subseteq G\) be a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety and suppose that \(\varepsilon\leq\exp\Big{(}-\log^{D}(2c_{0}^{-1})\Big{)}\delta^{288}\). Then there exist a quantity \(c_{1}\), a set \(A\subseteq G\) and a collection of subspaces \(W_{a}\leq G\) indexed by elements \(a\in A\) such that_
1. \(\exp(-\log^{D}(2c_{0}^{-1}))\leq c_{1}\leq 1\)_,_
2. \(|A|\geq c_{1}|G|\)_,_
3. _for each_ \(a\in A\) _and_ \(b\in W_{a}\) _we have_ \(\,\overline{*}^{(8)}\mathbb{1}_{V\cap V-a}(b)\geq c_{1}\delta^{15}\)_,_
4. \(c_{1}\leq\frac{|W_{a}|}{\delta|G|}\leq c_{1}^{-1}\) _holds for all_ \(a\in A\)_,_
5. _for_ \(r\in[9]\)_, for all but at most_ \(D\varepsilon\delta^{-32r}|G|^{r}\) _choices of_ \((a_{1},\ldots,a_{r})\in A^{r}\) _we have_ \[\frac{|W_{a_{1}}\cap W_{a_{2}}\cap\ldots\cap W_{a_{r}}|}{\delta^{r}|G|}\leq c _{1}^{-1},\] _and,_
6. _for at least_ \(c_{1}|A|^{6}\) _6-tuples_ \((b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})\in A^{6}\) _we have that_ \(b_{1}+b_{2}-b_{3},x_{2}-b_{2}+b_{3},y_{3}+b_{1}-b_{3},b_{1}+b_{2}-z_{1}\in A\) _and_ \[c_{1}\delta^{6}|G|\leq|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{b_{1}+b_{ 2}-b_{3}}\cap W_{x_{2}}\cap W_{x_{2}-b_{2}+b_{3}}\cap W_{y_{3}}\cap W_{y_{3}+ b_{1}-b_{3}}\cap W_{z_{1}}\cap W_{b_{1}+b_{2}-z_{1}}|.\]
We remark that \(\,\overline{*}^{(8)}\mathbb{1}_{V\cap V-a}\) stands for an iterated convolution of the function \(\mathbb{1}_{V\cap V-a}\) with itself, see (4) for a precise definition. Notice that item **(vi)** is considerably stronger than what we mentioned in the motivation discussion, where an approximate version of \(W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\subseteq W_{b_{1}+b_{2}-b_{3}}\) would be \(|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{b_{1}+b_{2}-b_{3}}|\geq c_{1} \delta^{3}|G|\). We shall return to this discrepancy in the discussion of the second major step of the proof.
We shall refer to a collection of subspaces satisfying all properties but **(iii)** in the conclusion of the theorem above as an _approximate quasirandom linear system of subspaces_. To explain the terminology, linearity stands for property (1) for additive quadruples \((a_{1},a_{2},a_{3},a_{4})\), quasirandomness suggests that intersections of a vast majority of the subspaces behave as given by property **(v)** in the conclusion of the theorem, and all properties are of an approximate form rather than exact.
The second major step of the proof is to characterize approximate quasirandom linear systems of subspaces. Recall that we expect \(W_{a}\) to be related to \(\{x\in G\colon\beta(a,x)=0\}\). With this in mind, we expect that \((W_{a})_{a\in A}\) comes from some bilinear map. The second step is to obtain such a result, which appears as Theorem 33. The assumptions of the theorem are essentially the same as the conclusion of Theorem 10.
**Theorem 11** (**Step 2**).: _There exists an absolute constant \(D\geq 1\) such that the following holds. Let \(c>0\) and let \(d\) be a positive integer. Let \(A\subseteq G\) be a set of size \(|A|\geq c|G|\) and let \(W_{a}\leq G\) be a subspace of codimension \(d\) for each \(a\in A\). Suppose that_
\[|W_{a_{1}}\cap W_{a_{2}}\cap\ldots\cap W_{a_{r}}|\leq Kp^{-rd}|G|\]
_holds for all but at most \(\eta|G|^{r}\)\(r\)-tuples \((a_{1},a_{2},\ldots,a_{r})\in A^{r}\) for each \(r\in[9]\). Assume furthermore that for at least \(c|G|^{6}\) 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) we have_
* \[a=b_{1}+b_{2}-b_{3},\ x_{3}=x_{2}-b_{2}+b_{3},\ y_{1}=y_{3}+b_{1}-b_{3},\ z_{2}=b_{1}+b_{2}-z_{1},\] _and_
* _the subspace_ \[W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x_{2}}\cap W_{x_{3}}\cap W_{y_{1}}\cap W_{y_{3}}\cap W_{z_{1}}\cap W_{z_{2}}\] _has size at least_ \(K^{-1}p^{-6d}|G|\)_._
_Then, provided \(\eta\leq 2^{-31}c^{3}\), there exist parameters \(c^{\prime}\geq\exp\Big{(}-\exp\Big{(}(\log(2c^{-1})+\log_{p}K)^{D}\Big{)} \Big{)}\) and \(r\leq\exp\Big{(}(\log(2c^{-1})+\log_{p}K)^{D}\Big{)}\), set \(A^{\prime}\subseteq A\) and a map \(\Phi\colon G\times\mathbb{F}_{p}^{d}\to G\), affine in the first variable and linear in the second, such that \(|A^{\prime}|\geq c^{\prime}|G|\) and for each \(a\in A^{\prime}\) we have \(|\mathrm{Im}\,\Phi(a,\cdot)\cap W_{a}^{\perp}|\geq c^{\prime}p^{d}\). Moreover, there exists a subspace \(\Lambda\leq\mathbb{F}_{p}^{d}\) of dimension \(r\) such that whenever \(\lambda\notin\Lambda\) we have_
_the biaffine map \((x,y)\mapsto y\cdot\Phi(x,\lambda)\) is quasirandom, in the sense that its bias is small._
**Proposition 12** (**Step 3**).: _There is an absolute constant \(D\geq 1\) such that the following holds. Let \(c,\delta,\varepsilon>0\) and \(d\in\mathbb{N}\) be such that \(c\leq\delta p^{d}\leq c^{-1}\). Suppose that \(\varepsilon\leq(2^{-1}c\delta)^{D}\). Let \(V\subseteq G\) be a set of density \(\delta\) such that \(\|\mathbbm{1}_{V}-\delta\|_{\mathsf{U}^{2}}\leq\varepsilon\). Suppose that we are also given a subset \(A\subseteq G\) of size \(|A|\geq c|G|\), a subspace \(W_{a}\leq G\) for each \(a\in A\) and a bilinear map \(\beta\colon G\times G\to\mathbb{F}_{p}^{d}\) such that_
* _for each_ \(\lambda\in\mathbb{F}_{p}^{d}\setminus\{0\}\) _we have_ \(\operatorname{bias}\lambda\cdot\beta\leq\varepsilon\)_,_
* _for each_ \(a\in A\) _we have_ \(|W_{a}\cap\{b\in G\colon\beta(a,b)=0\}|\geq cp^{-d}|G|\)_,_
* _for each_ \(a\in A\) _and_ \(b\in W_{a}\) _we have_ \(\,\overline{*}^{(8)}\mathbbm{1}_{V\cap V-a}(b)\geq c\delta^{15}\)_._
_Then there exists a quadratic variety \(Q\subseteq G\) of size \(|Q|\leq(2c^{-1})^{D}\delta|G|\) such that \(|Q\cap V|\geq\exp\Big{(}-\log^{D}(2c^{-1})\Big{)}\delta|G|\). Moreover, \(Q\) is defined as \(\{x\in G\colon\gamma(x,x)-\psi(x)=\mu\}\) for a symmetric bilinear map \(\gamma\colon G\times G\to\mathbb{F}_{p}^{\tilde{d}}\), an affine map \(\psi\colon G\to\mathbb{F}_{p}^{\tilde{d}}\) and \(\mu\in\mathbb{F}_{p}^{\tilde{d}}\), where \(\operatorname{bias}\lambda\cdot\gamma\leq\varepsilon\) for all \(\lambda\neq 0\), for some \(d-O(\log_{p}(2c^{-1}))\leq\tilde{d}\leq d\)._
Let us now say something about the proof of each step.
When it comes to **Step 1**, we rely heavily on the \(\mathsf{U}^{2}\)-uniformity assumption on \(V\), which is itself sufficient to prove some of the properties in the conclusion of Theorem 10. The most basic consequence of uniformity is the standard fact that intersections of translates of \(V\) behave quasirandomly (see Lemma 22). We can control more involved expressions as well, for example
\[\mathbb{E}_{a,b,x,y}\,\mathbb{1}_{V}(x)\mathbb{1}_{V}(x+a)\mathbb{1}_{V}(x+b)\,\mathbb{1}_{V}(y)\mathbb{1}_{V}(y+a)\mathbb{1}_{V}(y+b)=\delta^{6}+O(\varepsilon).\]
However, in order to prove an approximate version of property (1), we need to use the assumption of having many additive cubes in \(V\) and, simplifying things greatly by assuming that \(W_{a}\) is the set of elements \(x\) where \(\mathbb{1}_{V\cap V-a}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a}(x)\) is about \(\delta^{3}\), we would need a bound of the form
\[\Omega_{c_{0}}(\delta^{15})\leq\mathbb{E}_{a_{1},a_{2},a_{3},x}\,\mathbb{1}_{V\cap V-a_{1}}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a_{1}}(x)\,\mathbb{1}_{V\cap V-a_{2}}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a_{2}}(x)\]
\[\mathbb{1}_{V\cap V-a_{3}}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a_{3}}(x)\,\mathbb{1}_{V\cap V-a_{1}-a_{2}+a_{3}}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a_{1}-a_{2}+a_{3}}(x).\]
While upper bounds can be proved even for expressions that are not directly controllable by the \(\mathsf{U}^{2}\) norm, it is surprisingly challenging to prove such a lower bound. To get the lower bound, we first observe the following _duality_ property of convolutions of indicator functions of intersections
\[\mathbb{1}_{V\cap V-a}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a}(b)=\mathbb{1}_{V\cap V-b}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-b}(a).\]
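To verify the duality, one can expand the left-hand side using the definition of \(\mathbin{\overline{*}}\) from §2:

\[\mathbb{1}_{V\cap V-a}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-a}(b)=\mathbb{E}_{y}\,\mathbb{1}_{V}(y)\mathbb{1}_{V}(y+a)\mathbb{1}_{V}(y+b)\mathbb{1}_{V}(y+a+b),\]

which counts the two-dimensional additive cubes with sides \(a\) and \(b\) rooted in \(V\), and is manifestly symmetric in \(a\) and \(b\).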
This observation allows us to turn the expression above into
\[\mathbb{E}_{a_{1},a_{2},a_{3},x}\,\mathbb{1}_{V\cap V-x}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-x}(a_{1})\,\mathbb{1}_{V\cap V-x}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-x}(a_{2})\]
\[\mathbb{1}_{V\cap V-x}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-x}(a_{3})\,\mathbb{1}_{V\cap V-x}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-x}(a_{1}+a_{2}-a_{3}). \tag{2}\]
Using the properties of \(V\), we show that for many \(x\), the set \(S_{x}\) of \(a\in G\) such that \(\mathbb{1}_{V\cap V-x}\mathbin{\overline{*}}\mathbb{1}_{V\cap V-x}(a)\geq\Omega_{c_{0}}(\delta^{3})\) is an \(\Omega_{c_{0}}(1)\)-dense subset of some subspace \(U_{x}\) of size \(|U_{x}|\geq\Omega_{c_{0}}(\delta|G|)\) (observe that this is related to other properties in the conclusion of Theorem 10). Then a lower bound on (2) follows from the fact that \(S_{x}\) has at least \(\Omega_{c_{0}}(\delta^{3}|G|^{3})\) additive quadruples. Interestingly, we needed to use results in the spirit of the Bogolyubov argument at this stage of the proof, as it was essential that we have the subspace structure.
Finally, as we actually need to prove property **(vi)** of Theorem 10 instead of a simpler variant such as (1), the proof is more involved than the sketch above, but the sketch indicates the key ideas.
In **Step 2**, the following observation, stated as Lemma 34, is the main tool of passing from the given family of subspaces to approximate bilinear structure. Namely, if four subspaces \(U_{1},\ldots,U_{4}\) of dimension \(d\) satisfy \(K^{-1}p^{3d}\leq|U_{i_{1}}+U_{i_{2}}+U_{i_{3}}|\) for any three distinct indices \(i_{1},i_{2}\) and \(i_{3}\) and \(|U_{1}+U_{2}+U_{3}+U_{4}|\leq Kp^{3d}\), then for any linear isomorphism \(\phi_{4}\colon\mathbb{F}_{p}^{d}\to U_{4}\) there exist linear isomorphisms \(\phi_{i}\colon\mathbb{F}_{p}^{d}\to U_{i}\) for \(i\in[3]\) such that
\[\operatorname{rank}(\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4})\leq O(\log_{p}K). \tag{3}\]
We remark that we do not only say that there are 4 isomorphisms satisfying (3), but rather that there exist suitable \(\phi_{1},\phi_{2}\) and \(\phi_{3}\) for _any given_ \(\phi_{4}\), which will be crucial in the proof. This fact also shows that we may think of subspaces satisfying (1) as additive quadruples of subspaces. Observe that these
properties are satisfied by the orthogonal complements of the subspaces in our family (we may easily ensure that all \(W_{a}^{\perp}\) are of the same size).
The approach to proving Theorem 11 is to find a suitable element \(a\in A\), fix an isomorphism \(\theta\colon\mathbb{F}_{p}^{d}\to W_{a}^{\perp}\) and apply the observation above to additive quadruples in \(A\) involving \(a\) to get isomorphisms to other \(W_{b}^{\perp}\). This will give us an approximate homomorphism between \(G\) and the space of linear maps \(\operatorname{Hom}(\mathbb{F}_{p}^{d},G)\), where the distance between elements is measured by rank, from which we shall obtain an exact homomorphism. We achieve this by using a result of Kazhdan and Ziegler [18] (see Theorem 19), which we reproved using the inverse theorem for Freiman bihomomorphisms [10, 19] to get quantitative bounds.
It turns out that in this step, the weaker assumption of having many triples \((b_{1},b_{2},b_{3})\) such that \(|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{b_{1}+b_{2}-b_{3}}|\geq c_{1} \delta^{3}|G|\) is not sufficiently strong for the proof to work and that we need more involved configurations of points. The reason is that in defining the linear isomorphisms \(\mathbb{F}_{p}^{d}\to W_{b}^{\perp}\) we need to ensure that they are well-defined and that these linear isomorphisms also respect many additive quadruples.
Finally, in **Step 3**, we first need to show that the given map \(\beta\) can be replaced by a symmetric bilinear map. To that end, we use the Green-Tao symmetry argument [14] to first show that \((x,y)\mapsto\lambda\cdot(\beta(x,y)-\beta(y,x))\) has small rank for many vectors \(\lambda\in\mathbb{F}_{p}^{d}\). To pass to an exactly symmetric map, we have to make use of the solution of the partition rank versus analytic rank problem for trilinear forms, first proved by Green and Tao [15] in the case of polynomials, with essentially optimal bounds obtained by Adiprasito, Kazhdan and Ziegler [1] and by Cohen and Moshkovitz [6]. Note that in the high characteristic case (when \(p\geq 3\)) we may usually replace a bilinear form \(\gamma(x,y)\) by the simple mean \(2^{-1}(\gamma(x,y)+\gamma(y,x))\). However, as we are concerned with \(\beta\) whose codomain is a subspace of somewhat large dimension, we have to use the partition rank versus analytic rank results at some point, even if we use the mean trick at each coordinate of \(\beta\) (see for example Lemma 2.8 in [21]). Once we may assume that \(\beta\) is symmetric, we expect that the approximate variety \(V\) comes from a variety defined by \(\beta(x,x)+\gamma(x)=\lambda\) for some linear map \(\gamma\), which we still have to identify. To that end, we consider the intersection of \(V\cap V-a\) with \(\{x\in G\colon\beta(a,x)=\lambda\}\) for various \(\lambda\). As it turns out, for many \(a\), there is a value \(\lambda(a)\) for which this intersection is almost the whole of \(V\cap V-a\). We then use additional graph-theoretic arguments to show that \(\lambda(a)\) is an approximate homomorphism, from which we may then pass to a linear map and conclude the proof.
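As a minimal illustration of the mean trick mentioned above (valid since \(p\geq 3\), so that \(2\) is invertible): for a single bilinear form \(\gamma\colon G\times G\to\mathbb{F}_{p}\), the symmetrization

\[\gamma^{\mathrm{sym}}(x,y)=2^{-1}\big(\gamma(x,y)+\gamma(y,x)\big)\]

is a symmetric bilinear form with the same diagonal, \(\gamma^{\mathrm{sym}}(x,x)=\gamma(x,x)\), so it defines the same quadratic form. It is only because \(\beta\) takes values in \(\mathbb{F}_{p}^{d}\) with \(d\) somewhat large that this step requires the deeper partition rank results quoted above.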
### Regularity lemmas are insufficient
A natural approach to proving Theorem 9 is to use the arithmetic regularity lemmas of Green and Tao [16]. In this short subsection, we briefly discuss why such an approach is not helpful for this problem. In this setting, we could in principle find a bilinear map \(\beta\colon G\times G\to\mathbb{F}_{p}^{r}\) and functions \(d\colon\mathbb{F}_{p}^{r}\to\mathbb{D}\), \(f_{\mathrm{err}},f_{\mathrm{unif}}\colon G\to\mathbb{D}\) such that
\[\mathbb{1}_{V}(x)=d(\beta(x,x))+f_{\mathrm{err}}(x)+f_{\mathrm{unif}}(x),\]
where \(f_{\rm err}\) has small \(L^{2}\) norm, \(f_{\rm unif}\) has extremely small \(\|\cdot\|_{\mathsf{U}^{3}}\) norm and
\[d(\lambda)=\frac{\mathbb{E}_{x}\,\mathbbm{1}_{V}(x)\,\mathbbm{1}(\beta(x,x)= \lambda)}{\mathbb{E}_{x}\,\mathbbm{1}(\beta(x,x)=\lambda)},\]
which comes from projection of \(\mathbb{1}_{V}\) onto the layers \(\{x\colon\beta(x,x)=\lambda\}\) defined by \(\beta\). For simplicity, we ignore the \(L^{2}\) error terms and assume that we have perfect approximation, and also that \(\beta\) itself is of high rank (we expect this to be the case from the expected structure of \(V\)). In particular, the function \(d\) simplifies to \(d(\lambda)=|G|^{-1}p^{r}|V\cap\{x\colon\beta(x,x)=\lambda\}|\). Observe also that the high rank of \(\beta\) implies that
\[(x,a,b,c)\mapsto\Big{(}\beta(x,x),\beta(x+a,x+a),\ldots,\beta(x+b+c,x+b+c) \Big{)}\]
is equidistributed in \((\mathbb{F}_{p}^{r})^{7}\) (here we listed all points of additive cube associated with \(x,a,b,c\) except \(x+a+b+c\)), while \(\beta\) being a bilinear form implies the identity
\[\beta(x+a+b+c,x+a+b+c)=\beta(x,x)-\beta(x+a,x+a)-\beta(x+b,x+b)- \beta(x+c,x+c)\] \[\qquad\qquad+\beta(x+a+b,x+a+b)+\beta(x+a+c,x+a+c)+\beta(x+b+c,x+ b+c).\]
Using these facts and given that the error terms are negligible, the count of 3-dimensional additive cubes in \(V\) essentially becomes
\[c_{0}\delta^{7}\leq\mathbb{E}_{\lambda_{1},\ldots,\lambda_{7}\in\mathbb{F}_{p}^{r}}d(\lambda_{1})\cdots d(\lambda_{7})\,d(\lambda_{1}+\lambda_{2}+\lambda_{3}+\lambda_{4}-\lambda_{5}-\lambda_{6}-\lambda_{7})=\sum_{\gamma\in\mathbb{F}_{p}^{r}}|\hat{d}(\gamma)|^{8}.\]
Note that \(d(\lambda)\leq 1\) for all \(\lambda\) and that
\[\sum_{\gamma\in\mathbb{F}_{p}^{r}}|\hat{d}(\gamma)|^{2}=\mathbb{E}_{\lambda\in\mathbb{F}_{p}^{r}}|d(\lambda)|^{2}\leq\mathbb{E}_{\lambda\in\mathbb{F}_{p}^{r}}d(\lambda)=\frac{1}{|G|}\sum_{\lambda\in\mathbb{F}_{p}^{r}}|V\cap\{x\colon\beta(x,x)=\lambda\}|=\frac{|V|}{|G|}=\delta.\]
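Combining the last two displays with the trivial bound \(\sum_{\gamma}|\hat{d}(\gamma)|^{8}\leq\big(\max_{\gamma}|\hat{d}(\gamma)|\big)^{6}\sum_{\gamma}|\hat{d}(\gamma)|^{2}\) gives

\[c_{0}\delta^{7}\leq\Big(\max_{\gamma\in\mathbb{F}_{p}^{r}}|\hat{d}(\gamma)|\Big)^{6}\,\delta,\]

so \(\max_{\gamma}|\hat{d}(\gamma)|\geq\sqrt[6]{c_{0}}\,\delta\).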
We thus get some \(\gamma\) such that \(|\hat{d}(\gamma)|\geq\sqrt[6]{c_{0}}\,\delta\). We may assume that \(\gamma\neq 0\) as the contribution from \(\gamma=0\) above is \(\delta^{8}\).2 Thus,
Footnote 2: In fact, we expect that \(V\) is the union of \(\{\beta=\lambda\}\) over \(\lambda\) in some subspace \(\Lambda\) such that \(|\Lambda|=\delta p^{r}\). If so, we would also have \(d(\lambda)=\mathbb{1}_{\Lambda}(\lambda)\) and thus \(\hat{d}(\gamma)=\frac{|\Lambda|}{p^{r}}\mathbb{1}_{\Lambda^{\perp}}(\gamma)=\delta\,\mathbb{1}_{\Lambda^{\perp}}(\gamma)\).
\[\sqrt[6]{c_{0}}\,\delta\leq\Big|\mathbb{E}_{\lambda\in\mathbb{F}_{p}^{r}}|G|^{-1}p^{r}|V\cap\{x\colon\beta(x,x)=\lambda\}|\,\omega^{-\lambda\cdot\gamma}\Big|=\Big|\sum_{\lambda\in\mathbb{F}_{p}^{r}}\frac{|V\cap\{x\colon\beta(x,x)=\lambda\}|}{|G|}\,\omega^{-\lambda\cdot\gamma}\Big|\]
\[=\Big|\sum_{\mu\in\mathbb{F}_{p}}\sum_{\begin{subarray}{c}\lambda\in\mathbb{F}_{p}^{r}\\ \lambda\cdot\gamma=\mu\end{subarray}}\frac{|V\cap\{x\colon\beta(x,x)=\lambda\}|}{|G|}\,\omega^{-\mu}\Big|=\Big|\sum_{\mu\in\mathbb{F}_{p}}\frac{|V\cap\{x\colon\gamma\cdot\beta(x,x)=\mu\}|}{|G|}\,\omega^{-\mu}\Big|\]
\[=\Big|\sum_{\mu\in\mathbb{F}_{p}}\Big(\frac{|V\cap\{x\colon\gamma\cdot\beta(x,x)=\mu\}|}{|G|}-\delta\Big)\,\omega^{-\mu}\Big|,\]
where the last equality uses \(\sum_{\mu\in\mathbb{F}_{p}}\omega^{-\mu}=0\).
Hence, for some \(\mu\in\mathbb{F}_{p}\) and a codimension \(1\) quadratic variety \(B=\{x\colon\gamma\cdot\beta(x,x)=\mu\}\) we conclude that \(V\) gets a density increment of \(p^{-1}\sqrt[6]{c_{0}}\,\delta\) on \(B\). In particular, after \(s\) steps, we may only guarantee density \(\delta\big(1+p^{-1}\sqrt[6]{c_{0}}\big)^{s}\) on a quadratic variety of codimension \(s\). Proceeding in this fashion, provided \(c_{0}\) is sufficiently small, we expect that we need \(s=K\log_{p}\delta^{-1}\) steps in order to get density \(\Omega_{c_{0}}(1)\) on some quadratic variety, where \(K\) is arbitrarily large. But such a variety would then be of codimension \(s\) and would have density \(p^{-s}\leq\delta^{K}\) as our bilinear map is quasirandom, so we cannot obtain the claimed result in this fashion.
**Acknowledgements.** I would like to thank Tim Gowers for introducing me to the problem of proving the structure theorem for approximate quadratic varieties. This work was supported by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia through the Mathematical Institute of the Serbian Academy of Sciences and Arts.
## §2 Preliminaries
**Notation.** The fixed prime \(p\) is assumed to be at least \(3\) throughout the paper and dependencies on \(p\) in bounds are suppressed.
In this paper, we shall frequently consider subsets of products of two vector spaces \(G\) and \(H\). Given \(X\subset G\times H\) and an element \(x\in G\), we write \(X_{x\bullet}=\{y\in H\colon(x,y)\in X\}\) for the _(vertical) slice_ of \(X\) in the column indexed by \(x\). Likewise, for an element \(y\in H\), we write \(X_{\bullet y}=\{x\in G\colon(x,y)\in X\}\) for the _(horizontal) slice_ of \(X\) in the row indexed by \(y\).
For two functions \(f,g\colon G\to\mathbb{R}\) we define _convolution_ as \(f\mathbin{\overline{*}}g(x)=\mathbb{E}_{y\in G}\,f(y+x)\overline{g(y)}\) (note that this is non-standard, as we average over pairs of elements whose difference is \(x\), rather than their sum). We also need _iterated convolution_ (of _order_ \(2k\)) of a function \(f\colon G\to\mathbb{R}\), defined as
\[\overline{*}^{(2k)}f(a)=\mathop{\mathbb{E}}_{x_{1},\ldots,x_{2k-1}\in G}f(x_{1})f(x_{2})\cdots f(x_{2k-1})f\Big(\sum_{\ell\in[2k-1]}(-1)^{\ell+1}x_{\ell}-a\Big). \tag{4}\]
Note that the order of iterated convolution \(2k\) stands for the number of terms, rather than the number of convolutions needed to give the expression above. Also note that there are no complex conjugates in the expression above. This is done to simplify the notation for iterated convolution, as it will only be used for real-valued functions in this paper.
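In the smallest case \(2k=2\), definition (4) recovers the convolution of a real-valued \(f\) with itself:

\[\overline{*}^{(2)}f(a)=\mathbb{E}_{x_{1}\in G}\,f(x_{1})f(x_{1}-a)=f\mathbin{\overline{*}}f(a),\]

as one sees by substituting \(x_{1}=y+a\) in the definition of \(f\mathbin{\overline{*}}f\). The case \(2k=8\), written \(\overline{*}^{(8)}\), is the one appearing in Theorems 10 and 20.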
The _discrete multiplicative derivative operator_\(\mathop{\boldsymbol{\Delta}}_{a}\) for shift \(a\in G\) is defined by \(\mathop{\boldsymbol{\Delta}}_{a}f(x)=f(x+a)\overline{f(x)}\) for functions \(f\colon G\to\mathbb{C}\).
To save writing in situations where we have many indices of variables appearing in predictable patterns, we use the following convention. Instead of denoting a sequence of length \(m\) by \((x_{1},\ldots,x_{m})\), we write \(x_{[m]}\), and for \(I\subset[m]\) we write \(x_{I}\) for the subsequence with indices in \(I\).
We need a few auxiliary results. The first one is a robust version of Bogolyubov-Ruzsa lemma, which is essentially due to Schoen and Sisask and builds upon the work of Sanders.
**Theorem 13**.: _Let \(A\subset G\) be a subset having at least \(\alpha|A|^{3}\) additive quadruples. Then there exists a subspace \(V\subseteq 2A-2A\) of size \(|V|\geq\exp\Big{(}-O\Big{(}\log^{O(1)}(2\alpha^{-1})\Big{)}\Big{)}|A|\) such that the following holds. Every \(y\in V\) can be expressed as \(y=a_{1}+a_{2}-a_{3}-a_{4}\) with \(a_{1},a_{2},a_{3},a_{4}\in A\) in at least \(\alpha^{O(1)}|A|^{3}\) many ways._
Proof.: Apply the Balog-Szemeredi-Gowers theorem to find a subset \(A^{\prime}\subseteq A\) such that \(|A^{\prime}|\geq\Omega(\alpha^{O(1)}|A|)\) and \(|A^{\prime}+A^{\prime}|\leq O(\alpha^{-O(1)}|A^{\prime}|)\). The theorem follows from results of Sanders (Theorem A.2 for arbitrary prime \(p\) instead of \(p=2\) in [23]) and Schoen and Sisask (Theorem 5.1 in [24]).
A closely related result is the inverse theorem for approximate homomorphisms, which we already stated in the introduction as Theorem 3. We make use of a more efficient version, which is proved using Balog-Szemeredi-Gowers theorem and Sanders's results on Bogolyubov-Ruzsa lemma [23].
**Theorem 14**.: _Let \(G\) and \(H\) be finite-dimensional vector spaces over \(\mathbb{F}_{p}\). Let \(A\subseteq G\) be a subset and let \(\phi\colon A\to H\) be a map which respects at least \(c|G|^{3}\) additive quadruples in \(A\). Then there exists an affine map \(\Phi\colon G\to H\) such that \(\phi=\Phi\) holds for at least \(\exp\Big{(}-\log^{O(1)}(2c^{-1})\Big{)}|G|\) points in \(A\)._
Recall that the _bias_ of a multilinear form \(\phi\colon G^{k}\to\mathbb{F}_{p}\) is defined as
\[\operatorname{bias}\phi=\mathbb{E}_{x_{1},\ldots,x_{k}}\,\omega^{\phi(x_{1},\ldots,x_{k})},\]
where \(\omega=\exp\Big{(}\frac{2\pi i}{p}\Big{)}\). This quantity is a measure of how far from being quasirandom the given form \(\phi\) is. We need the inverse theorem for biased trilinear forms. First results in this spirit were proved by Green and Tao [15] for the case of polynomials and a multilinear variant was proved by Bhowmick and Lovett [5]. The version below, which has essentially optimal bounds, is due to Adiprasito, Kazhdan and Ziegler [1] and due to Cohen and Moshkovitz [6].
**Theorem 15** (Inverse theorem for biased trilinear forms).: _Suppose that \(\phi\colon G\times G\times G\to\mathbb{F}_{p}\) is a trilinear form such that \(\operatorname{bias}\phi\geq c\). Then there exists a positive integer \(r\leq O(\log_{p}c^{-1})\), linear forms \(\alpha_{1},\ldots,\)\(\alpha_{r},\)\(\beta_{1},\ldots,\)\(\beta_{r},\)\(\gamma_{1},\ldots,\)\(\gamma_{r}\colon G\to\mathbb{F}_{p}\) and bilinear forms \(\alpha_{1}^{\prime},\ldots,\)\(\alpha_{r}^{\prime},\)\(\beta_{1}^{\prime},\ldots,\)\(\beta_{r}^{\prime},\)\(\gamma_{1}^{\prime},\ldots,\)\(\gamma_{r}^{\prime}\colon G\times G\to\mathbb{F}_{p}\) such that_
\[\phi(x,y,z)=\sum_{i\in[r]}\alpha_{i}(x)\alpha_{i}^{\prime}(y,z)+\beta_{i}(y) \beta_{i}^{\prime}(x,z)+\gamma_{i}(z)\gamma_{i}^{\prime}(x,y)\]
_holds for all \(x,y,z\in G\)._
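For comparison, and as a sanity check on the definition of bias: in the bilinear case the bias-rank relationship is exact and elementary. If \(\phi(x,y)=\sum_{i\in[r]}\alpha_{i}(x)\beta_{i}(y)\) is a bilinear form of rank \(r\), with \(\alpha_{1},\ldots,\alpha_{r}\) and \(\beta_{1},\ldots,\beta_{r}\) each linearly independent, then averaging over \(y\) first gives

\[\operatorname{bias}\phi=\mathbb{E}_{x}\,\mathbb{E}_{y}\,\omega^{\phi(x,y)}=\mathbb{E}_{x}\,\mathbb{1}\big(\alpha_{1}(x)=\cdots=\alpha_{r}(x)=0\big)=p^{-r}.\]

Theorem 15 is a (much deeper) statement of the same spirit one degree higher: a trilinear form with non-negligible bias must be expressible using boundedly many forms of lower degree.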
Next, we need a result on the number of certain arrangements of points (related to additive quadruples) inside dense subsets of vector spaces.
**Lemma 16**.: _Let \(G\) be a finite-dimensional vector space over \(\mathbb{F}_{p}\) and let \(A\subset G\) be a subset of density \(c\). Then_
\[\mathbb{E}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}\in G}\,\mathbb{1}_{A}(b_{1}+b_{2}-b_{3})\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(x_{2}-b_{2}+b_{3})\]
\[\mathbb{1}_{A}(y_{3})\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\mathbb{1}_{A}(z_{1})\mathbb{1}_{A}(b_{1}+b_{2}-z_{1})\geq c^{32}.\]
Proof.: Since the density of \(A\) is \(c\) we have3
Footnote 3: In the first step we prove the standard fact that the number of additive quadruples in a set in \(G\) of density \(c\) is at least \(c^{4}|G|^{3}\). We could have simply stated that fact, but since the rest of the proof uses identical method, we opted to start from density assumption.
\[c^{4}\leq\Big(\mathbb{E}_{x}\,\mathbb{1}_{A}(x)\Big)^{4}=\Big(\mathbb{E}_{x,y}\,\mathbb{1}_{A}(x)\mathbb{1}_{A}(y)\Big)^{2}=\Big(\mathbb{E}_{d}\Big(\mathbb{E}_{x}\,\mathbb{1}_{A}(x)\mathbb{1}_{A}(x+d)\Big)\Big)^{2}\qquad\text{(change of variables)}\]
\[\leq\mathbb{E}_{d}\Big(\mathbb{E}_{x}\,\mathbb{1}_{A}(x)\mathbb{1}_{A}(x+d)\Big)^{2}\qquad\text{(by Cauchy-Schwarz)}\]
\[=\mathbb{E}_{b_{1},b_{2},b_{3}}\,\mathbb{1}_{A}(b_{1}+b_{2}-b_{3})\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{3}).\]
Let us make a change of variables and use \(a=b_{1}+b_{2}-b_{3}\) instead of \(b_{3}\). Then
\[c^{8}\leq\Big(\mathbb{E}_{a,b_{1},b_{2}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\Big)^{2}=\Big(\mathbb{E}_{a,b_{1}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\Big(\mathbb{E}_{b_{2}}\,\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\Big)\Big)^{2}\]
\[\leq\mathbb{E}_{a,b_{1}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\Big(\mathbb{E}_{b_{2}}\,\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\Big)^{2}\qquad\text{(by Cauchy-Schwarz)}\]
\[=\mathbb{E}_{a,b_{1},b_{2},x_{2}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(b_{1}+x_{2}-a).\]
Apply Cauchy-Schwarz inequality another time to get
\[c^{16}\leq\Big(\mathbb{E}_{a,b_{2},x_{2}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(x_{2})\Big(\mathbb{E}_{b_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(b_{1}+x_{2}-a)\Big)\Big)^{2}\]
\[\leq\mathbb{E}_{a,b_{2},x_{2}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(x_{2})\Big(\mathbb{E}_{b_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(b_{1}+x_{2}-a)\Big)^{2}\qquad\text{(by Cauchy-Schwarz)}\]
\[=\mathbb{E}_{a,b_{2},x_{2}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(x_{2})\,\mathbb{E}_{b_{1},y_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(y_{1})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(y_{1}+b_{2}-a)\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{1}+x_{2}-a)\]
\[\leq\mathbb{E}_{a,b_{1},b_{2},x_{2},y_{1}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{1})\mathbb{1}_{A}(y_{1}+b_{2}-a),\]
where we omitted the term \(\mathbb{1}_{A}(y_{1}+x_{2}-a)\) in the last line, which is fine as all terms take values in the interval \([0,1]\).
Make another change of variables and use \(y_{3}=y_{1}+b_{2}-a\) instead of \(y_{1}\) so we get
\[c^{16}\leq\mathbb{E}_{a,b_{1},b_{2},x_{2},y_{3}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{1}+b_{2}-a)\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3})\mathbb{1}_{A}(y_{3}+a-b_{2}).\]
Make a further change of variables and use \(b_{3}=b_{1}+b_{2}-a\) instead of \(b_{2}\). Thus
\[c^{32}\leq\Big(\mathbb{E}_{a,b_{1},b_{3},x_{2},y_{3}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{3}+a-b_{1})\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3})\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\Big)^{2}\]
\[=\Big(\mathbb{E}_{a,b_{3},x_{2},y_{3}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(y_{3})\Big(\mathbb{E}_{b_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{3}+a-b_{1})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\Big)\Big)^{2}\]
\[\leq\mathbb{E}_{a,b_{3},x_{2},y_{3}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(y_{3})\Big(\mathbb{E}_{b_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{3}+a-b_{1})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\Big)^{2}\]
\[=\mathbb{E}_{a,b_{3},x_{2},y_{3}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(y_{3})\,\mathbb{E}_{b_{1},z_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{3}+a-b_{1})\mathbb{1}_{A}(b_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\]
\[\qquad\qquad\mathbb{1}_{A}(z_{1})\mathbb{1}_{A}(b_{3}+a-z_{1})\mathbb{1}_{A}(z_{1}+x_{2}-a)\mathbb{1}_{A}(y_{3}+z_{1}-b_{3})\]
\[\leq\mathbb{E}_{a,b_{1},b_{3},x_{2},y_{3},z_{1}}\,\mathbb{1}_{A}(a)\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{3}+a-b_{1})\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(b_{1}+x_{2}-a)\]
\[\qquad\qquad\mathbb{1}_{A}(y_{3})\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\mathbb{1}_{A}(z_{1})\mathbb{1}_{A}(b_{3}+a-z_{1}),\]
where we used Cauchy-Schwarz inequality in the first inequality above and omitted terms \({\mathbb{I}}_{A}(z_{1}+x_{2}-a)\) and \({\mathbb{I}}_{A}(y_{3}+z_{1}-b_{3})\) in the second. We make the final change of variables and use \(b_{2}=b_{3}+a-b_{1}\) instead of \(a\). Thus
\[c^{32}\leq\mathbb{E}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\,\mathbb{1}_{A}(b_{1})\mathbb{1}_{A}(b_{2})\mathbb{1}_{A}(b_{3})\mathbb{1}_{A}(b_{1}+b_{2}-b_{3})\mathbb{1}_{A}(x_{2})\mathbb{1}_{A}(x_{2}-b_{2}+b_{3})\]
\[\mathbb{1}_{A}(y_{3})\mathbb{1}_{A}(y_{3}+b_{1}-b_{3})\mathbb{1}_{A}(z_{1})\mathbb{1}_{A}(b_{1}+b_{2}-z_{1}).\qed\]
We also need the fact that linear maps of very high rank between spaces of the same dimension can be efficiently modified to yield an isomorphism. The proof is elementary.
**Lemma 17**.: _Let \(A\) and \(B\) be two vector spaces of dimension \(d\) and let \(\phi\colon A\to B\) be a linear map of rank \(d-\ell\). Then there exists an isomorphism \(\psi\colon A\to B\) such that \(\operatorname{rank}(\phi-\psi)\leq\ell\)._
Proof.: By the rank-nullity theorem, the kernel \(K\) of \(\phi\) has dimension \(\ell\). Let \(U\) be an arbitrary subspace such that \(A=K\oplus U\), and let \(\pi\colon A\to K\) be the resulting projection onto \(K\). Then \(I=\phi(U)\) is the image of the map \(\phi\). Let \(B=I\oplus V\) for some subspace \(V\). Then \(\dim V=d-\dim I=\ell=\dim K\), so there exists a linear isomorphism \(\theta\colon K\to V\). Define \(\psi=\phi+\theta\circ\pi\). We claim that \(\psi\) is an isomorphism. It suffices
to prove that \(\psi\) is injective. To that end, let \(x\in A\) be such that \(\psi(x)=0\). Then \(\phi(x)+\theta(\pi(x))=0\). However, \(\phi(x)\in I\) and \(\theta(\pi(x))\in V\), so as \(I\cap V=0\), it follows that \(\phi(x)=0\) and \(\theta(\pi(x))=0\). Hence, \(x\in K\) and \(\pi(x)=0\), so \(x=0\), as desired.
Let \(G\), \(H\) and \(K\) be finite-dimensional vector spaces over \(\mathbb{F}_{p}\) and let \(A\) be a subset of \(G\times H\). Let \(\phi\colon A\to K\) be a map. We say that \(\phi\)_respects_ all horizontal additive quadruples if for all \(y\in H\) and \(x_{1},x_{2},x_{3},x_{4}\in G\) such that \(x_{1}+x_{2}=x_{3}+x_{4}\) and \((x_{i},y)\in A\) for \(i\in[4]\) we have \(\phi(x_{1},y)+\phi(x_{2},y)=\phi(x_{3},y)+\phi(x_{4},y)\). Analogously, we say that \(\phi\)_respects_ all vertical additive quadruples if the same condition holds with the roles of \(G\) and \(H\) reversed. If \(\phi\) respects all horizontal and vertical additive quadruples we say that \(\phi\) is a _Freiman bihomomorphism_. It turns out that global biaffine maps are essentially the only sources of Freiman bihomomorphisms. This theorem was first proved in [10] by Gowers and the author and the bounds were later improved by Lovett (personal communication), and by Kim, Li and Tidor [19] by optimizing some steps in the original proof.
**Theorem 18**.: _Let \(A\) be a subset of \(G\times H\) of density \(c\). Suppose that \(\phi\colon A\to K\) is a Freiman bihomomorphism. Then there exists a biaffine map \(\Phi\colon G\times H\to K\) such that \(\Phi=\phi\) holds for at least \(c^{\prime}|G||H|\) points in \(A\), where \(c^{\prime}=\exp\Big{(}-\exp(O(\log^{O(1)}c^{-1}))\Big{)}\)._
In this paper, we shall use the following corollary.
**Corollary 19**.: _Let \(G,H,K\) be finite-dimensional vector spaces over \(\mathbb{F}_{p}\), let \(A\subseteq G\) be a subset of density \(c\), and suppose that for each \(a\in A\) we are given a linear map \(\phi_{a}\colon H\to K\). Suppose that for at least \(c|G|^{3}\) additive quadruples \((a_{1},a_{2},a_{3},a_{4})\in A^{4}\) we have_
\[\operatorname{rank}\Big{(}\phi_{a_{1}}+\phi_{a_{2}}-\phi_{a_{3}}-\phi_{a_{4}} \Big{)}\leq r. \tag{5}\]
_Then there exists a map \(\Phi\colon G\times H\to K\), which is affine in the first coordinate and linear in the second, such that \(\operatorname{rank}(\Phi(a,\cdot)-\phi_{a})\leq\exp\Big{(}(\log c^{-1}+r)^{O( 1)}\Big{)}\) for at least \(\exp\Big{(}-\exp\Big{(}(\log c^{-1}+r)^{O(1)}\Big{)}\Big{)}|G|\) of \(a\in A\), where \(\Phi(a,\cdot)\) stands for the map from \(H\) to \(K\) given by \(b\mapsto\Phi(a,b)\)._
Let us remark that a similar result was proved by Kazhdan and Ziegler in [18] by using the inverse theorem for \(\mathsf{U}^{4}\) norm. Unlike the result above, Kazhdan and Ziegler assume that bound (5) holds for all choices of additive quadruples in \(G\). It is very likely that their proof can be modified to work in this setting as well, but it turns out that property (5) almost implies that \(\phi\) is a Freiman bihomomorphism, so applying Theorem 18 gives a shorter deduction. This is no surprise as Theorem 18 was initially proved in order to give a quantitative inverse theorem for \(\mathsf{U}^{4}\) norm.
Proof.: Consider the map \(\psi\colon A\times H\to K\) given by \(\psi(a,x)=\phi_{a}(x)\). Since each \(\phi_{a}\) is linear, the map \(\psi\) respects all vertical additive quadruples. When it comes to horizontal additive quadruples, given any additive quadruple \((a_{1},a_{2},a_{3},a_{4})\in A^{4}\) such that
\[\operatorname{rank}\Big{(}\phi_{a_{1}}+\phi_{a_{2}}-\phi_{a_{3}}-\phi_{a_{4} }\Big{)}\leq r,\]
it follows that the codimension of the kernel of \(\phi_{a_{1}}+\phi_{a_{2}}-\phi_{a_{3}}-\phi_{a_{4}}\) is at most \(r\), so there are at least \(p^{-r}|H|\) elements \(y\in H\) for which \(\psi\) respects the horizontal additive quadruple \(\Big{(}(a_{1},y),(a_{2},y),(a_{3},y),(a_{4},y)\Big{)}\). By averaging and using Theorem 14 we may pass to a subset \(B\subseteq A\times H\), of size \(|B|\geq\exp\Big{(}-(\log c^{-1}+r)^{O(1)}\Big{)}|G||H|\), on which \(\psi\) is a Freiman bihomomorphism. By Theorem 18 there exist a further subset \(B^{\prime}\subseteq B\) and a biaffine map \(\Phi\colon G\times H\to K\) such that \(\Phi=\psi\) on \(B^{\prime}\), with \(|B^{\prime}|\geq c_{1}|G||H|\) where \(c_{1}\geq\exp\Big{(}-\exp\Big{(}(\log c^{-1}+r)^{O(1)}\Big{)}\Big{)}\). Define \(\tilde{\Phi}(x,y)=\Phi(x,y)-\Phi(x,0)\), which is linear in \(y\). Take now any \(a\in A\) for which \(|B^{\prime}_{a\bullet}|\geq\frac{c_{1}}{2}|H|\). For each \(y\in B^{\prime}_{a\bullet}\) we have \(\Phi(a,y)=\phi_{a}(y)\). By the Cauchy-Schwarz inequality, we have \(\tilde{\Phi}(a,y-y^{\prime})=\Phi(a,y)-\Phi(a,y^{\prime})=\phi_{a}(y)-\phi_{a}(y^{\prime})=\phi_{a}(y-y^{\prime})\) for at least \(\frac{c_{1}^{2}}{4}|H|^{2}\) choices of \((y,y^{\prime})\). Thus, \(\operatorname{rank}\Big{(}\tilde{\Phi}(a,\cdot)-\phi_{a}\Big{)}\leq\log_{p}(4c_{1}^{-2})\), for each such \(a\), of which there are at least \(\frac{c_{1}}{2}|G|\).
## §3 From approximate quadratic varieties to approximate quasirandom linear systems of subspaces
In this section, we begin the study of approximate quadratic varieties. Our goal is to obtain an approximate quasirandom linear system of subspaces that is closely related to the given approximate quadratic variety. The main result of this section is the following theorem.
**Theorem 20**.: _There exists an absolute constant \(D\geq 1\) such that the following holds. Let \(V\subseteq G\) be a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety and suppose that \(\varepsilon\leq\exp\Big{(}-\log^{D}(2c_{0}^{-1})\Big{)}\delta^{288}\). Then there exist a quantity \(c_{1}\), a set \(A\subseteq G\) and a collection of subspaces \(W_{a}\leq G\) indexed by elements \(a\in A\) such that_
* \(\exp(-\log^{D}(2c_{0}^{-1}))\leq c_{1}\leq 1\)_,_
* \(|A|\geq c_{1}|G|\)_,_
* _for each_ \(a\in A\) _and_ \(b\in W_{a}\) _we have_ \(\overline{*}^{(8)}\mathbbm{1}_{V\cap V-a}(b)\geq c_{1}\delta^{15}\)_,_
* \(c_{1}\leq\frac{|W_{a}|}{\delta|G|}\leq c_{1}^{-1}\) _holds for all_ \(a\in A\)_,_
* _for_ \(r\in[9]\)_, for all but at most_ \(D\varepsilon\delta^{-32r}|G|^{r}\) _choices of_ \((a_{1},\ldots,a_{r})\in A^{r}\) _we have_ \[\frac{|W_{a_{1}}\cap W_{a_{2}}\cap\ldots\cap W_{a_{r}}|}{\delta^{r}|G|}\leq c _{1}^{-1},\] _and,_
* _for at least_ \(c_{1}|A|^{6}\) _6-tuples_ \((b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})\in A^{6}\) _we have that_ \(b_{1}+b_{2}-b_{3},x_{2}-b_{2}+b_{3},y_{3}+b_{1}-b_{3},b_{1}+b_{2}-z_{1}\in A\) _and_ \[c_{1}\delta^{6}|G|\leq|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{b_{1}+b_{2 }-b_{3}}\cap W_{x_{2}}\cap W_{x_{2}-b_{2}+b_{3}}\cap W_{y_{3}}\cap W_{y_{3}+b_ {1}-b_{3}}\cap W_{z_{1}}\cap W_{b_{1}+b_{2}-z_{1}}|.\]
Throughout the section \(V,c_{0},\delta\) and \(\varepsilon\) will be fixed and \(V\) will be a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety.
### Elementary estimates
We frequently need to control expressions such as \(\mathbb{E}_{a,b,x}\,\mathbf{\Delta}_{a,b}\mathbb{1}_{V}(x)\) using the fact that the \(\mathsf{U}^{2}\) norm of the difference \(\mathbb{1}_{V}-\delta\) is small. In order to be efficient we record the following lemma, which says that if we can find two variables that appear together only in a single occurrence of \(\mathbb{1}_{V}\) in such an expression, then we may replace that term with \(\delta\).
**Lemma 21**.: _Let \(f_{1},\ldots,f_{r}\colon G\to\mathbb{D}\) be functions and let \(\lambda_{i,j}\in\mathbb{F}_{p}\) be coefficients for \(i\in[0,r]\), \(j\in[s]\). Suppose that for some distinct indices \(a,b\in[s]\) we have \(\lambda_{i,a}\lambda_{i,b}\) non-zero if and only if \(i=0\). Then_
\[\Big|\mathbb{E}_{x_{1},\ldots,x_{s}}\mathbb{1}_{V}\Big(\sum_{j\in[s]}\lambda_{0,j}x_{j}\Big)\prod_{i\in[r]}f_{i}\Big(\sum_{j\in[s]}\lambda_{i,j}x_{j}\Big)-\delta\,\mathbb{E}_{x_{1},\ldots,x_{s}}\prod_{i\in[r]}f_{i}\Big(\sum_{j\in[s]}\lambda_{i,j}x_{j}\Big)\Big|\leq\varepsilon.\]
Proof.: Write \(g(x)=\mathbbm{1}_{V}(x)-\delta\). Let \(I\) be the set of all indices \(i\in[r]\) such that \(\lambda_{i,a}\neq 0\). Standard applications of Cauchy-Schwarz inequality give
\[\Big|\mathbb{E}_{x_{1},\ldots,x_{s}}g\Big(\sum_{j\in[s]}\lambda_{0,j}x_{j}\Big)\prod_{i\in[r]}f_{i}\Big(\sum_{j\in[s]}\lambda_{i,j}x_{j}\Big)\Big|^{4}\leq\Big(\mathbb{E}_{x_{[s]\setminus\{a\}}}\Big|\mathbb{E}_{x_{a}}g\Big(\sum_{j\in[s]}\lambda_{0,j}x_{j}\Big)\prod_{i\in I}f_{i}\Big(\sum_{j\in[s]}\lambda_{i,j}x_{j}\Big)\Big|^{2}\Big)^{2}.\]

Expanding the inner square introduces a second copy \(x_{a}^{\prime}\) of the variable \(x_{a}\); a further application of the Cauchy-Schwarz inequality, this time in the variable \(x_{b}\), removes the remaining \(f_{i}\) terms in the same way and leaves an average of a product of four \(g\) terms. Since \(\lambda_{0,a}\) and \(\lambda_{0,b}\) are non-zero, an invertible change of variables identifies this average with \(\|g\|_{\mathsf{U}^{2}}^{4}=\|\mathbb{1}_{V}-\delta\|_{\mathsf{U}^{2}}^{4}\leq\varepsilon^{4}\), and the lemma follows.

As a first consequence, we record the standard fact that intersections of translates of \(V\) behave quasirandomly.

**Lemma 22**.: _For all but at most \(8\sqrt{\varepsilon}|G|\) elements \(a\in G\) we have \(\big||V\cap(V-a)|-\delta^{2}|G|\big|\leq\sqrt[4]{\varepsilon}|G|\)._

Proof.: By Lemma 21, \(\mathbb{E}_{a}\big(\mathbb{E}_{x}\,\mathbb{1}_{V}(x)\mathbb{1}_{V}(x+a)-\delta^{2}\big)^{2}\leq 8\varepsilon\), and the lemma follows from Chebyshev's inequality.

**Claim 23**.: _For all but at most \(12\sqrt{\varepsilon}|G|^{2}\) pairs \((a,b)\in G^{2}\) we have_
\[\big||V\cap(V-a)\cap(V-b)|-\delta^{3}|G|\big|\leq\sqrt[4]{\varepsilon}|G|\quad\text{and}\quad|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\leq\delta^{3}|G|+\sqrt[4]{\varepsilon}|G|.\]

Proof.: We first show that
\[\mathbb{E}_{a,b}\Big|\mathbb{E}_{x}\,\mathbb{1}_{V}(x)\mathbb{1}_{V}(x+a)\mathbb{1}_{V}(x+b)-\delta^{3}\Big|^{2}\leq 12\varepsilon.\]
Applying Lemma 21 6 times for terms \(\mathbb{1}_{V}(x+a)\) using \(x\) and \(a\), \(\mathbb{1}_{V}(x+b)\) using \(x\) and \(b\), \(\mathbb{1}_{V}(x)\) using \(x\), \(\mathbb{1}_{V}(y+a)\) using \(y\) and \(a\), \(\mathbb{1}_{V}(y+b)\) using \(y\) and \(b\) and \(\mathbb{1}_{V}(y)\) using \(y\) in that order, we see that
\[\Big|\mathop{\mathbb{E}}_{\substack{a,b\\ x,y}}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+b)\mathbbm{1}_{V}(y)\mathbbm{1}_{V}(y+a)\mathbbm{1}_{V}(y+b)-\delta^{6}\Big|\leq 6\varepsilon.\]
A similar argument for \(\mathbb{E}_{x,a,b}\,\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+b)\) shows that the expression above is at most \(12\varepsilon\). The first part of the claim now follows.
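In more detail, the passage from these two estimates to the \(12\varepsilon\) bound is the standard variance expansion, which we record explicitly (using only \(\delta\leq 1\)):
\[\mathop{\mathbb{E}}_{a,b}\Big(\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+b)-\delta^{3}\Big)^{2}\leq(\delta^{6}+6\varepsilon)-2\delta^{3}(\delta^{3}-3\varepsilon)+\delta^{6}=6\varepsilon+6\delta^{3}\varepsilon\leq 12\varepsilon.\]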
For the second part of the claim simply note that \(|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\leq|V\cap(V-a)\cap(V-b)|\).
We need one more claim of this form. Notice that the expression below contains \(8\) factors of \(\mathbbm{1}_{V}\), yet the upper bound is about \(\delta^{5}\); this is due to the algebraic dependencies between the arguments of \(\mathbbm{1}_{V}\).
**Claim 24**.: _For all but at most \(20\sqrt{\varepsilon}|G|^{4}\) of \((a,d_{1},d_{2},d_{3})\in G^{4}\) we have_
\[\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}+a)\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{1}-d_{2}+a)\mathbbm{1}_{V}(x+d_{3})\mathbbm{1}_{V}(x+d_{3}+a)\leq\delta^{5}+\sqrt[4]{\varepsilon}. \tag{7}\]
Having the terms \(\mathbbm{1}_{V}(x+d_{2})\mathbbm{1}_{V}(x+d_{2}+a)\) instead of \(\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{1}-d_{2}+a)\) would be more natural, but we opted for this formulation since this is the expression that arises when the claim is applied.
Proof.: Notice that the expression in question is at most
\[\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{3}).\]
We now show that
\[\mathop{\mathbb{E}}_{a,d_{1},d_{2},d_{3}}\Big|\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{3})-\delta^{5}\Big|^{2}\leq 20\varepsilon.\]
By the (by now) usual argument based on Lemma 21 we get that
\[\mathop{\mathbb{E}}_{a,d_{1},d_{2},d_{3}}\Big(\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{3})\Big)^{2}\]
\[=\mathop{\mathbb{E}}_{\substack{a,d_{1},d_{2},d_{3}\\ x,y}}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{3})\,\mathbbm{1}_{V}(y)\mathbbm{1}_{V}(y+a)\mathbbm{1}_{V}(y+d_{1})\mathbbm{1}_{V}(y+d_{1}-d_{2})\mathbbm{1}_{V}(y+d_{3})\]
differs from \(\delta^{10}\) by at most \(10\varepsilon\). Namely, we apply Lemma 21 for term \(\mathbb{1}_{V}(x+d_{3})\) using \(x\) and \(d_{3}\), for term \(\mathbb{1}_{V}(x+d_{1}-d_{2})\) using \(x\) and \(d_{2}\), for term \(\mathbb{1}_{V}(x+d_{1})\) using \(x\) and \(d_{1}\), for term \(\mathbb{1}_{V}(x+a)\) using \(x\) and \(a\), for term \(\mathbb{1}_{V}(x)\) using \(x\), in that order, and similarly for terms involving \(y\). Similarly,
\[\mathop{\mathbb{E}}_{a,d_{1},d_{2},d_{3},x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{3})\]
differs from \(\delta^{5}\) by at most \(5\varepsilon\). The claim follows after averaging.
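For concreteness, the averaging at the end is a Chebyshev-type estimate: the last two bounds combine to give \(\mathbb{E}_{a,d_{1},d_{2},d_{3}}\big|\mathbb{E}_{x}(\cdots)-\delta^{5}\big|^{2}\leq 20\varepsilon\), whence
\[\#\Big\{(a,d_{1},d_{2},d_{3})\in G^{4}\colon\mathop{\mathbb{E}}_{x}(\cdots)\geq\delta^{5}+\sqrt[4]{\varepsilon}\Big\}\leq\frac{20\varepsilon}{\sqrt{\varepsilon}}\,|G|^{4}=20\sqrt{\varepsilon}\,|G|^{4}.\]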
### Additive structure
In this subsection we show that the set of large values of iterated convolutions of \(\mathbbm{1}_{V\cap(V-a)}\) has significant additive structure for many \(a\in G\). We begin with an important technical definition.
Let \(\eta>0\) be a parameter. We say that an element \(a\in G\) is \(\eta\)-_regular_ if it satisfies
**(i)**: \(\frac{1}{2}\delta^{2}|G|\leq|V\cap V-a|\leq 2\delta^{2}|G|\),
**(ii)**: for all but at most \(\eta|G|\) of \(b\in G\) we have
\[|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\leq 2\delta^{3}|G|,\]
and
**(iii)**: for all but at most \(\eta|G|^{3}\) of triples \((d_{1},d_{2},d_{3})\in G^{3}\) we have
\[\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V}(x)\mathbbm{1}_{V}(x+a)\mathbbm{1}_{V}(x+d_{1})\mathbbm{1}_{V}(x+d_{1}+a)\mathbbm{1}_{V}(x+d_{1}-d_{2})\mathbbm{1}_{V}(x+d_{1}-d_{2}+a)\mathbbm{1}_{V}(x+d_{3})\mathbbm{1}_{V}(x+d_{3}+a)\leq 2\delta^{5}.\]
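As a heuristic (not needed in the proofs), the thresholds above are calibrated against the counts a random set of density \(\delta\) would produce: one expects
\[|V\cap(V-a)|\approx\delta^{2}|G|,\qquad|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\approx\delta^{4}|G|,\]
so **(i)** demands the random-like count up to a factor of \(2\), while **(ii)** and **(iii)** only demand the weaker exponents \(\delta^{3}\) and \(\delta^{5}\) furnished by Claims 23 and 24, in line with the algebraic dependencies noted before Claim 24.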
Let us show that regular elements are abundant.
**Proposition 25**.: _Suppose that \(\varepsilon\leq 2^{-4}\delta^{20}\). Then all but at most \(40\sqrt[4]{\varepsilon}|G|\) elements are \(\varepsilon^{1/4}\)-regular._
Proof.: By Claim 22, provided that \(\varepsilon\leq 2^{-4}\delta^{8}\), we have
\[\Bigl{|}|V\cap(V-a)|-\delta^{2}|G|\Bigr{|}\leq\frac{1}{2}\delta^{2}|G|\]
for all but at most \(8\varepsilon^{1/2}|G|\) elements \(a\in G\). Next, provided that \(\varepsilon\leq\delta^{12}\), it follows from Claim 23 that
\[|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\leq 2\delta^{3}|G| \tag{8}\]
holds for all but at most \(12\varepsilon^{1/2}|G|^{2}\) pairs of elements \((a,b)\in G^{2}\). Hence, the number of \(a\) such that (8) fails for at least \(\varepsilon^{1/4}|G|\) elements \(b\in G\) is at most \(12\varepsilon^{1/4}|G|\). Finally, using Claim 24 and provided \(\varepsilon\leq\delta^{20}\), we conclude that the number of elements \(a\) such that (7) fails for at least \(\sqrt[4]{\varepsilon}|G|^{3}\) triples \((d_{1},d_{2},d_{3})\in G^{3}\) is at most \(20\varepsilon^{1/4}|G|\). The proposition now follows.
The following proposition finds the desired additive structure if we are given a regular element.
**Proposition 26**.: _Let \(\eta>0\) and let \(a\in G\) be an \(\eta\)-regular element. Let \(P\subseteq G\) be a set and let \(c^{\prime}>0\) be such that_
\[\mathop{\mathbb{E}}_{b}\mathbbm{1}_{P}(b)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq c^{\prime}\delta^{7}. \tag{9}\]
_Suppose that \(\eta\leq 2^{-41}c^{\prime 8}\delta^{8}\). Then there exists a subset \(\tilde{P}\subseteq P\) which contains at least \(2^{-42}c^{\prime 8}\delta^{3}|G|^{3}\) additive quadruples, has size \(\frac{c^{\prime}\delta}{16}|G|\leq|\tilde{P}|\leq 2^{9}c^{\prime-2}\delta|G|\) and every element \(b\in\tilde{P}\) satisfies_
\[\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\geq\frac{1}{8}c^{ \prime}\delta^{3}.\]
Proof.: By assumption **(ii)** of \(\eta\)-regularity, for all but at most \(\eta|G|\) of \(b\in G\) we have
\[\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)=\mathop{\mathbb{E}}_{x}\mathbbm{1}_{V\cap(V-a)}(x+b)\mathbbm{1}_{V\cap(V-a)}(x)=\frac{1}{|G|}|V\cap(V-a)\cap(V-b)\cap(V-a-b)|\leq 2\delta^{3}. \tag{10}\]
By removing those elements \(b\in G\) for which this inequality fails, we may find a subset \(P^{\prime}\subseteq P\) of size \(|P^{\prime}|\geq|P|-\eta|G|\) such that (10) holds for all \(b\in P^{\prime}\). From (9) we deduce that
\[\mathop{\mathbb{E}}_{b}\mathbbm{1}_{P^{\prime}}(b)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq c^{\prime}\delta^{7}-\eta.\]
Let \(\tilde{P}\subseteq P^{\prime}\) be the set of all elements \(b\in P^{\prime}\) such that \(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\geq\frac{1}{8}c^{ \prime}\delta^{3}\). Then
\begin{align*}
\mathop{\mathbb{E}}_{b}\mathbbm{1}_{P^{\prime}\setminus\tilde{P}}(b)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}&\leq\frac{1}{8}c^{\prime}\delta^{3}\,\mathop{\mathbb{E}}_{b}\mathbbm{1}_{P^{\prime}\setminus\tilde{P}}(b)\,\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\\
&\leq\frac{1}{8}c^{\prime}\delta^{3}\,\mathop{\mathbb{E}}_{b}\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\\
&\leq\frac{1}{8}c^{\prime}\delta^{3}\Big(\frac{|V\cap V-a|}{|G|}\Big)^{2}\leq\frac{1}{2}c^{\prime}\delta^{7},
\end{align*}
where we used assumption **(i)** of \(\eta\)-regularity in the last step.
Hence
\[\mathop{\mathbb{E}}_{b}\mathbbm{1}_{\tilde{P}}(b)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq\frac{1}{2}c^{\prime}\delta^{7}-\eta,\]
which, since \(\tilde{P}\subseteq P^{\prime}\) and each summand is at most \((2\delta^{3})^{2}=4\delta^{6}\) by (10), also gives \(|\tilde{P}|\geq\Big(\frac{c^{\prime}\delta}{8}-\eta\delta^{-6}\Big)|G|\geq\frac{c^{\prime}\delta}{16}|G|\), using that \(\eta\leq\frac{c^{\prime}\delta^{7}}{16}\). The same argument and the fact that \(\tilde{P}\subseteq P^{\prime}\) imply
\[\frac{|\tilde{P}|}{|G|}\Big(\frac{1}{8}c^{\prime}\delta^{3}\Big)^{2}\leq\mathop{\mathbb{E}}_{b}\mathbbm{1}_{\tilde{P}}(b)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\leq 2\delta^{3}\,\mathop{\mathbb{E}}_{b}\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\leq 8\delta^{7}\]
so \(|\tilde{P}|\leq 2^{9}c^{\prime-2}\delta|G|\).
We say that \(b\) is a _popular difference_ if \(b\in\tilde{P}\). Consider the bipartite graph \(\Gamma\) with both vertex classes \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) being a copy of \(V\cap(V-a)\) and edges between \(x\in\mathcal{C}_{1}\) and \(y\in\mathcal{C}_{2}\) if \(x-y\) is a popular difference. By a _cycle of length 4_ we mean an ordered quadruple of vertices \(v_{1},v_{2},v_{3},v_{4}\) such that \(v_{1}v_{2},\ldots,v_{4}v_{1}\) are edges.5 This is a graph of density
Footnote 5: It would be more precise to call these quadruples homomorphisms from \(C_{4}\) to the given graph.
\[\frac{\sum_{b\in\tilde{P}}\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\,|G|}{|V\cap(V-a)|^{2}}\geq\frac{\frac{c^{\prime}\delta}{16}|G|\cdot\frac{1}{8}c^{\prime}\delta^{3}|G|}{4\delta^{4}|G|^{2}}\geq 2^{-9}{c^{\prime}}^{2}\]
so it has \(2^{-36}c^{\prime 8}|V\cap(V-a)|^{4}\) cycles of length 4. Notice that each such cycle gives an additive quadruple in \(\tilde{P}\): namely if \(x_{1},x_{2}\) are vertices in class \(\mathcal{C}_{1}\) and \(y_{1}\) and \(y_{2}\) are vertices in class \(\mathcal{C}_{2}\), then \(p_{i,j}=x_{i}-y_{j}\)
is a popular difference and \(p_{1,1}+p_{2,2}=p_{1,2}+p_{2,1}\). On the other hand, if we fix an additive quadruple \((d_{1},d_{2},d_{3},d_{2}+d_{3}-d_{1})\) of popular differences it gives
\[\sum_{y}\mathbbm{1}_{V\cap(V-a)}(y)\mathbbm{1}_{V\cap(V-a)}(y+d_{1})\mathbbm{1 }_{V\cap(V-a)}(y+d_{1}-d_{2})\mathbbm{1}_{V\cap(V-a)}(y+d_{3})\]
cycles of length \(4\). This quantity is bounded above by \(2\delta^{5}|G|\) for all but at most \(\eta|G|^{3}\) of triples \((d_{1},d_{2},d_{3})\in G^{3}\) by assumption **(iii)** of \(\eta\)-regularity, while it is always at most \(|G|\). Let \(Q\) be the number of additive quadruples in \(\tilde{P}\). By double-counting we get
\[2^{-40}{c^{\prime}}^{8}\delta^{8}|G|^{4}\leq 2^{-36}{c^{\prime}}^{8}|V\cap(V-a)|^{4}\leq Q\cdot 2\delta^{5}|G|+\eta|G|^{3}\cdot|G|.\]
Since \(\eta\leq 2^{-41}{c^{\prime}}^{8}\delta^{8}\), we conclude that \(Q\geq 2^{-42}{c^{\prime}}^{8}\delta^{3}|G|^{3}\), as desired.
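Spelling out the last step: the double-counting inequality rearranges to
\[Q\geq\frac{2^{-40}{c^{\prime}}^{8}\delta^{8}|G|^{4}-\eta|G|^{4}}{2\delta^{5}|G|}\geq\frac{2^{-41}{c^{\prime}}^{8}\delta^{8}|G|^{4}}{2\delta^{5}|G|}=2^{-42}{c^{\prime}}^{8}\delta^{3}|G|^{3},\]
where the middle inequality uses the assumption \(\eta\leq 2^{-41}{c^{\prime}}^{8}\delta^{8}\).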
Let \(\xi>0\) be a parameter to be chosen later (it will be a function of \(c_{0}\)) and let \(A^{\xi}\subseteq G\) be the set of all elements \(a\in G\) which are \(\varepsilon^{1/4}\)-regular and satisfy
\[\mathop{\mathbb{E}}_{b}\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq\xi\delta^{7}.\]
We first show that \(A^{\xi}\) is large.
**Claim 27**.: _Suppose that \(\xi\leq c_{0}/2\) and \(\varepsilon\leq 2^{-32}c_{0}^{4}\delta^{28}\). We have \(|A^{\xi}|{\geq\frac{c_{0}}{40}}|G|\) and_
\[\mathop{\mathbb{E}}_{a,b}\mathbbm{1}_{G\setminus A^{\xi}}(a)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\leq\xi\delta^{7}+40\sqrt[4]{\varepsilon}. \tag{11}\]
Proof.: Write \(A=A^{\xi}\) to simplify the notation. The fact that \(V\) is a \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety implies that
\[\mathop{\mathbb{E}}_{a,b}\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq c_{0}\delta^{7}. \tag{12}\]
Let \(I\) be the set of elements \(a\in G\) which are not \(\varepsilon^{1/4}\)-regular. Proposition 25 implies \(|I|{\leq 40\sqrt[4]{\varepsilon}|G|}\). Their contribution to (12) is
\[\mathop{\mathbb{E}}_{a,b}\mathbbm{1}_{I}(a)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\leq\mathop{\mathbb{E}}_{a}\mathbbm{1}_{I}(a)\leq 40\sqrt[4]{\varepsilon}.\]
Furthermore, the contribution from \(G\setminus(I\cup A)\) is
\[\mathop{\mathbb{E}}_{a,b}\mathbbm{1}_{I^{c}\cap A^{c}}(a)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}=\mathop{\mathbb{E}}_{a}\mathbbm{1}_{I^{c}\cap A^{c}}(a)\bigg(\mathop{\mathbb{E}}_{b}\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\bigg)\leq\xi\delta^{7},\]
from which we obtain (11).
To prove the bounds on \(|A|\), notice first that if \(a\) is \(\varepsilon^{1/4}\)-regular then
\[\mathop{\mathbb{E}}_{b}\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}=\mathop{\mathbb{E}}_{b}\frac{|V\cap(V-a)\cap(V-b)\cap(V-a-b)|^{2}}{|G|^{2}}\]
\[\leq\sqrt[4]{\varepsilon}+2\delta^{3}\,\mathop{\mathbb{E}}_{b}\frac{|V\cap(V-a)\cap(V-b)\cap(V-a-b)|}{|G|}\leq\sqrt[4]{\varepsilon}+2\delta^{3}\frac{|V\cap V-a|^{2}}{|G|^{2}}\leq\sqrt[4]{\varepsilon}+8\delta^{7}\leq 10\delta^{7},\]
provided \(\varepsilon\leq\delta^{28}\), where we used assumption **(ii)** of \(\sqrt[4]{\varepsilon}\)-regularity in the second line and assumption **(i)** of \(\sqrt[4]{\varepsilon}\)-regularity in the third line. Combining (11) and (12) we get
\[10\delta^{7}\frac{|A|}{|G|}\geq\mathop{\mathbb{E}}_{a}\mathbbm{1}_{A}(a)\mathop{\mathbb{E}}_{b}\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\geq(c_{0}-\xi)\delta^{7}-40\sqrt[4]{\varepsilon}\geq\frac{c_{0}\delta^{7}}{4},\]
using the assumptions on \(\xi\) and \(\varepsilon\) in the last inequality.
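The final inequality is elementary arithmetic, which we verify: \(\xi\leq c_{0}/2\) gives \((c_{0}-\xi)\delta^{7}\geq\frac{1}{2}c_{0}\delta^{7}\), and \(\varepsilon\leq 2^{-32}c_{0}^{4}\delta^{28}\) gives \(40\sqrt[4]{\varepsilon}\leq 40\cdot 2^{-8}c_{0}\delta^{7}=\frac{5}{32}c_{0}\delta^{7}\), so
\[(c_{0}-\xi)\delta^{7}-40\sqrt[4]{\varepsilon}\geq\Big(\frac{1}{2}-\frac{5}{32}\Big)c_{0}\delta^{7}=\frac{11}{32}c_{0}\delta^{7}\geq\frac{c_{0}\delta^{7}}{4}.\]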
As a corollary, we get lower bounds on the number of occurrences of another configuration in \(V\).
**Corollary 28**.: _Provided \(\varepsilon\leq 2^{-200}c_{0}^{8}\delta^{28}\), we have_
\begin{align*}
\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\mathop{\mathbb{E}}_{w}\ &\mathbbm{1}_{V\cap V-(b_{1}+b_{2}-b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(b_{1}+b_{2}-b_{3})}(w)\ \,\mathbbm{1}_{V\cap V-b_{1}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{1}}(w)\ \,\mathbbm{1}_{V\cap V-b_{2}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{2}}(w)\\
&\mathbbm{1}_{V\cap V-b_{3}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{3}}(w)\ \,\mathbbm{1}_{V\cap V-x_{2}}\ \overline{*}\ \mathbbm{1}_{V\cap V-x_{2}}(w)\ \,\mathbbm{1}_{V\cap V-(x_{2}-b_{2}+b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(x_{2}-b_{2}+b_{3})}(w)\\
&\mathbbm{1}_{V\cap V-y_{3}}\ \overline{*}\ \mathbbm{1}_{V\cap V-y_{3}}(w)\ \,\mathbbm{1}_{V\cap V-(y_{3}+b_{1}-b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(y_{3}+b_{1}-b_{3})}(w)\\
&\mathbbm{1}_{V\cap V-z_{1}}\ \overline{*}\ \mathbbm{1}_{V\cap V-z_{1}}(w)\ \,\mathbbm{1}_{V\cap V-(b_{1}+b_{2}-z_{1})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(b_{1}+b_{2}-z_{1})}(w)\geq\exp\Big(-O\Big(\log^{O(1)}(2c_{0}^{-1})\Big)\Big)\delta^{36}.
\end{align*}
Proof.: In this proof we temporarily set \(A=A^{c_{0}/2}\). For each \(w\in A\) we have
\[\mathop{\mathbb{E}}_{a}\Big(\mathbbm{1}_{V\cap V-w}\ \overline{*}\ \mathbbm{1}_{V\cap V-w}(a)\Big)^{2}\geq\frac{c_{0}}{2}\delta^{7}.\]
By Claim 27 we have \(|A|\geq\frac{c_{0}}{40}|G|\). For each \(w\in A\), apply Proposition 26 with the set \(P=G\) and \(c^{\prime}=c_{0}/2\) to obtain \(\tilde{P}_{w}\subseteq G\) which contains at least \(2^{-50}c_{0}^{8}\delta^{3}|G|^{3}\) additive quadruples, has size \(|\tilde{P}_{w}|\leq 2^{11}c_{0}^{-2}\delta|G|\) and every element \(a\in\tilde{P}_{w}\) satisfies \(\mathbbm{1}_{V\cap V-w}\ \overline{*}\ \mathbbm{1}_{V\cap V-w}(a)\geq\frac{1}{16}c_{0}\delta^{3}\). In particular, the left-hand side of the corollary is at least
\[\Big(\frac{c_{0}\delta^{3}}{16}\Big)^{10}\mathop{\mathbb{E}}_{w}\mathbbm{1}_{A}(w)\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\mathbbm{1}_{\tilde{P}_{w}}(b_{1})\mathbbm{1}_{\tilde{P}_{w}}(b_{2})\mathbbm{1}_{\tilde{P}_{w}}(b_{3})\mathbbm{1}_{\tilde{P}_{w}}(b_{1}+b_{2}-b_{3})\mathbbm{1}_{\tilde{P}_{w}}(x_{2})\mathbbm{1}_{\tilde{P}_{w}}(x_{2}-b_{2}+b_{3})\]
\[\mathbbm{1}_{\tilde{P}_{w}}(y_{3})\mathbbm{1}_{\tilde{P}_{w}}(y_{3}+b_{1}-b_{3})\mathbbm{1}_{\tilde{P}_{w}}(z_{1})\mathbbm{1}_{\tilde{P}_{w}}(b_{1}+b_{2}-z_{1}),\]
and the inner average is at least \(\exp\Big(-O\Big(\log^{O(1)}(2c_{0}^{-1})\Big)\Big)\delta^{6}\), which gives the stated bound.
1. _for each_ \(i\in[m]\) _the bounds_ \(K^{-1}\delta|G|\leq|U_{a,i}|\leq|U^{\prime}_{a,i}|\leq K\delta|G|\) _hold,_
2. _for each_ \(i\in[m]\) _we have_ \(|U_{a,i}|=p^{-1}|U^{\prime}_{a,i}|\)_,_
3. _writing_ \(\tilde{U}_{a}=\cup_{i\in[m]}U^{\prime}_{a,i}\)_, we have_ \[\mathop{\hbox{\vrule width 0.0pt height 5.0pt depth -0.0pt\vrule width 0.0pt height 5.0pt depth -0.0pt} \hbox{\vrule width 0.0pt height 5.0pt depth -0.
the conclusion follows. (Note that this condition also implies \(|U_{a,i}|\leq\exp\Big(-\log^{O(1)}(2\xi^{-1})\Big)\delta|G|\).) In particular, by averaging over \(b_{2},b_{3},b_{4}\), we obtain an element \(t\in G\) such that \(|(t+U_{a,i})\cap P|\geq\exp\Big(-\log^{O(1)}(2\xi^{-1})\Big)\delta|G|\). We now set \(U^{\prime}_{a,i}=U_{a,i}+\langle t\rangle\).
By regularity of \(a\), we have \(\mathbb{E}_{b}\,\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\leq 4\delta^{4}\), so the total number of \(b\in G\) such that \(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\geq\frac{\xi}{1000}\delta^{3}\) is at most \((2\xi^{-1})^{O(1)}\delta|G|\). The procedure terminates after at most \(\exp\Big(\log^{O(1)}(2\xi^{-1})\Big)\) steps, since at each step we cover at least \(\exp\Big(-\log^{O(1)}(2\xi^{-1})\Big)\delta|G|\) of such elements \(b\) by \(U^{\prime}_{a,i}\) that were not covered by \(\tilde{U}_{a}\) previously.
By the way we defined the set \(A^{\xi}\) and unions of subspaces \(\tilde{U}_{a}\), we deduce the following approximation property.
**Claim 30**.: _Provided \(\xi\leq c_{0}/2\) and \(\varepsilon\leq 2^{-500}\xi^{32}\delta^{32}\), we have_
\[\mathop{\mathbb{E}}_{a,b}\Big(1-\mathbbm{1}_{A^{\xi}}(a)\mathbbm{1}_{\tilde{U}_{a}}(b)\Big)\Big(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\Big)^{2}\leq 2\xi\delta^{7}+40\sqrt[4]{\varepsilon}.\]
Proof.: Let us begin by applying Corollary 28. Thus
\[\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\mathop{\mathbb{E}}_{w}\prod_{i\in[10]}\mathbbm{1}_{V\cap V-t_{i}}\ \overline{*}\ \mathbbm{1}_{V\cap V-t_{i}}(w)\geq c_{1}\delta^{36},\]
where \(t_{1},\ldots,t_{10}\) denote the ten linear forms \(b_{1},b_{2},b_{3},b_{1}+b_{2}-b_{3},x_{2},x_{2}-b_{2}+b_{3},y_{3},y_{3}+b_{1}-b_{3},z_{1},b_{1}+b_{2}-z_{1}\) appearing in Corollary 28 and we write \(c_{1}=\exp\Big(-O\Big(\log^{O(1)}(2c_{0}^{-1})\Big)\Big)\) for the constant appearing there.
By the Cauchy–Schwarz inequality, Claim 30 implies
\[\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}\Big(1-\mathbbm{1}_{A}(b_{1})\mathbbm{1}_{\tilde{U}_{b_{1}}}(w)\Big)\prod_{i\in[10]}\mathbbm{1}_{V\cap V-t_{i}}\ \overline{*}\ \mathbbm{1}_{V\cap V-t_{i}}(w)\]
\[\leq\Big(2\xi\delta^{7}+40\sqrt[4]{\varepsilon}\Big)^{1/2}\bigg(\mathop{\mathbb{E}}_{b_{1},w}\Big(\mathop{\mathbb{E}}_{b_{2},b_{3},x_{2},y_{3},z_{1}}\prod_{i\in[2,10]}\mathbbm{1}_{V\cap V-t_{i}}\ \overline{*}\ \mathbbm{1}_{V\cap V-t_{i}}(w)\Big)^{2}\bigg)^{1/2}. \tag{18}\]
We may use Lemma 21 \(65\) times to get the bound \(\delta^{65}+65\varepsilon\), as follows. First, use \(w\) with each of the 11 variables \(b_{1},b_{2},b_{2}^{\prime},\ldots,z_{1},z_{1}^{\prime}\) to remove the remaining 11 terms that involve \(w\) and one of the mentioned variables. Then use \(w\) and a variable among \(v_{1},\ldots,v_{9}^{\prime}\) to remove a further 18 terms. The remaining 36 terms have obvious choices of variables.
Hence, the second long sum in (18) is at most \(2\delta^{65}\), since \(\varepsilon\leq\frac{1}{65}\delta^{65}\).
Formally, let us define \(t\colon G^{6}\to G^{10}\) by
\[t(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})=\Big{(}b_{1},b_{2},b_{3},b_{1}+b_{2}-b_ {3},x_{2},x_{2}-b_{2}+b_{3},y_{3},y_{3}+b_{1}-b_{3},z_{1},b_{1}+b_{2}-z_{1} \Big{)}\]
which is the 10-tuple of points appearing in the expressions above. Furthermore, write \(F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\) for the product \(\prod_{i\in[10]}\mathbbm{1}_{V\cap V-t_{i}(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})}\ \overline{*}\ \mathbbm{1}_{V\cap V-t_{i}(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})}(w)\). The upper bound on (17) then becomes
\[\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}\Big(1-\mathbbm{1}_{A}(b_{1})\mathbbm{1}_{\tilde{U}_{b_{1}}}(w)\Big)F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\leq 2\sqrt{\xi}\delta^{36}+10\sqrt[8]{\varepsilon}.\]
Note that the actual signs in the linear combinations such as \(b_{1}+b_{2}-b_{3}\) do not play a role in the argument above, so we could have had any \(\pm b_{1}\pm b_{2}\pm b_{3}\) instead. The argument can be used to prove the same inequality but with \(\Big{(}1-\mathbbm{1}_{A}(s)\mathbbm{1}_{\tilde{U}_{s}}(w)\Big{)}\) instead of \(\Big{(}1-\mathbbm{1}_{A}(b_{1})\mathbbm{1}_{\tilde{U}_{b_{1}}}(w)\Big{)}\), where \(s\) is any of the remaining 9 possibilities. Namely, for \(s\) among \(b_{2},b_{3},x_{2},y_{3},z_{1}\) we apply almost the same arguments with \(s\) in place of \(b_{1}\), with the slight difference in the terms that are neglected. On the other hand, for other possibilities, we need to change variables. When \(s=b_{1}+b_{2}-b_{3}\) we replace \(b_{3}\) by \(b_{1}+b_{2}-b_{3}\) and \(x_{2}\) by \(y_{3}\), thus reducing that case to \(s=b_{3}\) (with a slight difference in signs in linear combinations, which is not an issue). When \(s=x_{2}-b_{2}+b_{3}\) we replace \(x_{2}\) by \(x_{2}+b_{2}-b_{3}\) to reduce to the case when \(s=x_{2}\), with similar arguments for \(s\in\{y_{3}+b_{1}-b_{3},b_{1}+b_{2}-z_{1}\}\).
Let \(I(s;w)=\Big{(}1-\mathbbm{1}_{A}(s)\mathbbm{1}_{\tilde{U}_{s}}(w)\Big{)}\), which takes values 0 and 1. In particular
\begin{align*}
&\bigg|\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}\Big(\prod_{i\in[10]}\big(1-I(t_{i}(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1});w)\big)\Big)F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\\
&\hspace{7cm}-\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\bigg|\\
&\qquad=\bigg|\sum_{j\in[10]}\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}\bigg(\prod_{i\in[j,10]}\big(1-I(t_{i};w)\big)-\prod_{i\in[j+1,10]}\big(1-I(t_{i};w)\big)\bigg)F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\bigg|\\
&\qquad\leq\sum_{j\in[10]}\bigg|\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}I(t_{j};w)\Big(\prod_{i\in[j+1,10]}\big(1-I(t_{i};w)\big)\Big)F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w)\bigg|\\
&\qquad\leq\sum_{j\in[10]}\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1},w}I(t_{j}(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1});w)\,F(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1};w),
\end{align*}
where we abbreviated \(t_{i}=t_{i}(b_{1},b_{2},b_{3},x_{2},y_{3},z_{1})\) in the middle lines and used the fact that all terms take values in \([0,1]\) to neglect some of them in the last line.
Returning to our original notation, we conclude that
\begin{align*}
\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\mathop{\mathbb{E}}_{w}\ &\mathbbm{1}_{A}(b_{1})\mathbbm{1}_{\tilde{U}_{b_{1}}}(w)\,\mathbbm{1}_{V\cap V-b_{1}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{1}}(w)\ \,\mathbbm{1}_{A}(b_{2})\mathbbm{1}_{\tilde{U}_{b_{2}}}(w)\,\mathbbm{1}_{V\cap V-b_{2}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{2}}(w)\\
&\mathbbm{1}_{A}(b_{3})\mathbbm{1}_{\tilde{U}_{b_{3}}}(w)\,\mathbbm{1}_{V\cap V-b_{3}}\ \overline{*}\ \mathbbm{1}_{V\cap V-b_{3}}(w)\ \,\mathbbm{1}_{A}(b_{1}+b_{2}-b_{3})\mathbbm{1}_{\tilde{U}_{b_{1}+b_{2}-b_{3}}}(w)\,\mathbbm{1}_{V\cap V-(b_{1}+b_{2}-b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(b_{1}+b_{2}-b_{3})}(w)\\
&\mathbbm{1}_{A}(x_{2})\mathbbm{1}_{\tilde{U}_{x_{2}}}(w)\,\mathbbm{1}_{V\cap V-x_{2}}\ \overline{*}\ \mathbbm{1}_{V\cap V-x_{2}}(w)\ \,\mathbbm{1}_{A}(x_{2}-b_{2}+b_{3})\mathbbm{1}_{\tilde{U}_{x_{2}-b_{2}+b_{3}}}(w)\,\mathbbm{1}_{V\cap V-(x_{2}-b_{2}+b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(x_{2}-b_{2}+b_{3})}(w)\\
&\mathbbm{1}_{A}(y_{3})\mathbbm{1}_{\tilde{U}_{y_{3}}}(w)\,\mathbbm{1}_{V\cap V-y_{3}}\ \overline{*}\ \mathbbm{1}_{V\cap V-y_{3}}(w)\ \,\mathbbm{1}_{A}(y_{3}+b_{1}-b_{3})\mathbbm{1}_{\tilde{U}_{y_{3}+b_{1}-b_{3}}}(w)\,\mathbbm{1}_{V\cap V-(y_{3}+b_{1}-b_{3})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(y_{3}+b_{1}-b_{3})}(w)\\
&\mathbbm{1}_{A}(z_{1})\mathbbm{1}_{\tilde{U}_{z_{1}}}(w)\,\mathbbm{1}_{V\cap V-z_{1}}\ \overline{*}\ \mathbbm{1}_{V\cap V-z_{1}}(w)\ \,\mathbbm{1}_{A}(b_{1}+b_{2}-z_{1})\mathbbm{1}_{\tilde{U}_{b_{1}+b_{2}-z_{1}}}(w)\,\mathbbm{1}_{V\cap V-(b_{1}+b_{2}-z_{1})}\ \overline{*}\ \mathbbm{1}_{V\cap V-(b_{1}+b_{2}-z_{1})}(w)\\
&\geq\frac{c_{1}}{2}\delta^{36},
\end{align*}
provided \(\xi\leq 2^{-20}c_{1}^{2}\) and \(\varepsilon\leq 2^{-100}c_{1}^{8}\delta^{288}\).
Since every element \(a\in A\) is \(\sqrt[4]{\varepsilon}\)-regular, by property **(ii)** we have that \(\mathbbm{1}_{V\cap V-a}\ \overline{*}\ \mathbbm{1}_{V\cap V-a}(b)\leq 2\delta^{3}\) holds for all but at most \(\sqrt[4]{\varepsilon}|G|\) of \(b\in G\). Combining this fact with the inequality above, we see that
\begin{align*}
\mathop{\mathbb{E}}_{b_{1},b_{2},b_{3},x_{2},y_{3},z_{1}}\mathop{\mathbb{E}}_{w}\ &\mathbbm{1}_{A}(b_{1})\mathbbm{1}_{\tilde{U}_{b_{1}}}(w)\,\mathbbm{1}_{A}(b_{2})\mathbbm{1}_{\tilde{U}_{b_{2}}}(w)\,\mathbbm{1}_{A}(b_{3})\mathbbm{1}_{\tilde{U}_{b_{3}}}(w)\,\mathbbm{1}_{A}(b_{1}+b_{2}-b_{3})\mathbbm{1}_{\tilde{U}_{b_{1}+b_{2}-b_{3}}}(w)\\
&\mathbbm{1}_{A}(x_{2})\mathbbm{1}_{\tilde{U}_{x_{2}}}(w)\,\mathbbm{1}_{A}(x_{2}-b_{2}+b_{3})\mathbbm{1}_{\tilde{U}_{x_{2}-b_{2}+b_{3}}}(w)\,\mathbbm{1}_{A}(y_{3})\mathbbm{1}_{\tilde{U}_{y_{3}}}(w)\,\mathbbm{1}_{A}(y_{3}+b_{1}-b_{3})\mathbbm{1}_{\tilde{U}_{y_{3}+b_{1}-b_{3}}}(w)\\
&\mathbbm{1}_{A}(z_{1})\mathbbm{1}_{\tilde{U}_{z_{1}}}(w)\,\mathbbm{1}_{A}(b_{1}+b_{2}-z_{1})\mathbbm{1}_{\tilde{U}_{b_{1}+b_{2}-z_{1}}}(w)\geq 2^{-12}c_{1}\delta^{6}.
\end{align*}
Provided \(\varepsilon\leq 2^{-40}c_{1}^{4}\delta^{144}\), the claim follows.
We now set \(\framebox{$\xi=\exp(-D\log^{D}(2c_{0}^{-1}))$}\), where \(D\) is the constant from Proposition 31 and write \(A=A^{\xi}\).
Let us also show that the subspaces \(U_{a,i}\) cannot have a large intersection for different choices of \(a\).
**Claim 32**.: _Let \(r\in\mathbb{N}\) and let \(\varepsilon\leq 2^{-500}\xi^{32}\delta^{32}\). There exists \(K\leq\exp\Big{(}r\log^{O(1)}(2\xi^{-1})\Big{)}\) such that for all but at most \(64r\delta^{-32r}\varepsilon|G|^{r}\) choices of \((a_{1},\ldots,a_{r})\in A^{r}\) we have_
\[|U_{a_{1},i_{1}}\cap U_{a_{2},i_{2}}\cap\ldots\cap U_{a_{r},i_{r}}|\leq K \delta^{r}|G|\]
_for all indices \(i_{1}\in[m_{a_{1}}],\ldots,i_{r}\in[m_{a_{r}}]\)._
Proof.: Recall that Proposition 29, which constructs the subspaces \(U_{a,i}\), provides parameters \(M\leq\exp\Big(\log^{O(1)}(2\xi^{-1})\Big)\) and \(\alpha\geq(\xi/2)^{O(1)}\delta^{15}\) such that \(m_{a}\leq M\) for all \(a\in A\) and that \(\overline{*}^{(8)}\mathbbm{1}_{V\cap V-a}(b)\geq\alpha\) for all \(b\in U_{a,i}\). Fix \(a_{1},\ldots,a_{r}\in A\). Note that
\[\alpha^{r}M^{-r}\sum_{i_{1}\in[m_{a_{1}}],i_{2}\in[m_{a_{2}}],\ldots,i_{r}\in[m_{a_{r}}]}|U_{a_{1},i_{1}}\cap U_{a_{2},i_{2}}\cap\ldots\cap U_{a_{r},i_{r}}|\leq|G|\mathop{\mathbb{E}}_{b}\prod_{j\in[r]}\overline{*}^{(8)}\mathbbm{1}_{V\cap V-a_{j}}(b)=:|G|\,F(a_{1},\ldots,a_{r}).\]
Expanding each eight-fold convolution and applying Lemma 21 repeatedly, we get \(\Big|\mathop{\mathbb{E}}_{a_{[r]}}F(a_{1},\ldots,a_{r})^{2}-\delta^{32r}\Big|\leq 32r\varepsilon\); here we use \(b\) with the convolution variables for the terms involving \(b\),
and \((v,y_{i,1})\) for \(\mathbb{1}_{V}\Big{(}\sum_{\ell\in[7]}(-1)^{\ell+1}y_{i,\ell}-v\Big{)}\) and obvious variables for the remaining terms. Similarly, we get \(\Big{|}\operatorname{\mathbb{E}}_{a_{[r]}}F(a_{1},\dots,a_{r})-\delta^{16r} \Big{|}\leq 16r\varepsilon\). We conclude
\[\mathop{\mathbb{E}}_{a_{1},\ldots,a_{r}}\Big(F(a_{1},\ldots,a_{r})-\delta^{16r}\Big)^{2}\leq 64r\varepsilon,\]
so for all but at most \(64r\delta^{-32r}\varepsilon|G|^{r}\) of the \(r\)-tuples \((a_{1},\ldots,a_{r})\in A^{r}\) we have \(F(a_{1},\ldots,a_{r})\leq 2\delta^{16r}\). For such tuples the first displayed inequality yields
\[|U_{a_{1},i_{1}}\cap U_{a_{2},i_{2}}\cap\ldots\cap U_{a_{r},i_{r}}|\leq 2\alpha^{-r}M^{r}\delta^{16r}|G|\leq K\delta^{r}|G|\]
for all indices, where \(K=2\alpha^{-r}M^{r}\delta^{15r}\leq\exp\Big(r\log^{O(1)}(2\xi^{-1})\Big)\), as desired.
**Theorem 33**.: _There exists an absolute constant \(D\geq 1\) such that the following holds. Let \(c>0\) and let \(d\) be a positive integer. Let \(A\subseteq G\) be a set of size \(|A|\geq c|G|\) and let \(W_{a}\leq G\) be a subspace of codimension \(d\) for each \(a\in A\). Suppose that_
\[|W_{a_{1}}\cap W_{a_{2}}\cap\ldots\cap W_{a_{r}}|\leq Kp^{-rd}|G|\]
_holds for all but at most \(\eta|G|^{r}\) \(r\)-tuples \((a_{1},a_{2},\ldots,a_{r})\in A^{r}\) for each \(r\in[9]\). Assume furthermore that for at least \(c|G|^{6}\) 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) we have_
* \[a=b_{1}+b_{2}-b_{3},\ x_{3}=x_{2}-b_{2}+b_{3},\ y_{1}=y_{3}+b_{1}-b_{3},\ z_{2}=b_{1}+b_{2}-z_{1},\] (21) _and_
* _the subspace_ \[W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x_{2}}\cap W_{x_{3}}\cap W_{y_{1}}\cap W_{y_{3}}\cap W_{z_{1}}\cap W_{z_{2}}\] (22) _has size at least_ \(K^{-1}p^{-6d}|G|\)_._
_Then, provided \(\eta\leq 2^{-31}c^{3}\), there exist parameters \(c^{\prime}\geq\exp\Big{(}-\exp\Big{(}(\log(2c^{-1})+\log_{p}K)^{D}\Big{)}\Big{)}\) and \(r\leq\exp\Big{(}(\log(2c^{-1})+\log_{p}K)^{D}\Big{)}\), set \(A^{\prime}\subseteq A\) and a map \(\Phi\colon G\times\mathbb{F}_{p}^{d}\to G\), affine in the first variable and linear in the second, such that \(|A^{\prime}|\geq c^{\prime}|G|\) and for each \(a\in A^{\prime}\) we have \(|\mathrm{Im}\,\Phi(a,\cdot)\cap W_{a}^{\perp}|\geq c^{\prime}p^{d}\). Moreover, there exists a subspace \(\Lambda\leq\mathbb{F}_{p}^{d}\) of dimension \(r\) such that whenever \(\lambda\notin\Lambda\) we have_
\[\mathop{\mathbb{E}}_{x,y}\omega\Big(\Phi(x,\lambda)\cdot y\Big)\leq\Big(\eta c^{\prime-2}\Big)^{1/2r}.\]
The key observation that allows us to pass from subspaces to mutually related linear maps is the next lemma.
**Lemma 34**.: _Let \(U_{1},U_{2},U_{3},U_{4}\leq G\) be subspaces of dimension \(d\). Let \(K\geq 1\) be a parameter such that_
\[K^{-1}p^{3d}\leq|U_{i_{1}}+U_{i_{2}}+U_{i_{3}}|\leq|U_{1}+U_{2}+U_{3}+U_{4}| \leq Kp^{3d}\]
_for any three distinct elements \(i_{1},i_{2},i_{3}\in[4]\). Let \(\phi_{4}\colon\mathbb{F}_{p}^{d}\to U_{4}\) be a linear isomorphism. Then there exist linear isomorphisms \(\phi_{i}\colon\mathbb{F}_{p}^{d}\to U_{i}\) for \(i\in[3]\) such that \(\mathrm{rank}(\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4})\leq 20\log_{p}K\)._
Proof.: By assumptions we have
\[|U_{1}\cap(U_{2}+U_{3})|=\frac{|U_{2}+U_{3}||U_{1}|}{|U_{1}+U_{2}+U_{3}|}\leq \frac{|U_{2}||U_{3}||U_{1}|}{|U_{1}+U_{2}+U_{3}|}\leq K.\]
Let \(V_{1}\leq U_{1}\) be an arbitrary subspace such that \(U_{1}=\Big{(}U_{1}\cap(U_{2}+U_{3})\Big{)}\oplus V_{1}\). Thus \(|V_{1}|\geq|U_{1}|/K\). Similarly, we may find subspaces \(V_{2}\leq U_{2}\) and \(V_{3}\leq U_{3}\) such that \(U_{2}=\Big{(}U_{2}\cap(U_{3}+U_{1})\Big{)}\oplus V_{2}\) and \(U_{3}=\Big{(}U_{3}\cap(U_{1}+U_{2})\Big{)}\oplus V_{3}\) which also satisfy \(|V_{2}|\geq|U_{2}|/K\) and \(|V_{3}|\geq|U_{3}|/K\). We claim
that \(V_{1}+V_{2}+V_{3}\) is actually a direct sum. To see this, let \(x\in V_{1}\cap(V_{2}+V_{3})\) be arbitrary. Since \(V_{1}\cap(V_{2}+V_{3})\subseteq U_{1}\cap(U_{2}+U_{3})\), we have that \(x\in V_{1}\cap(U_{1}\cap(U_{2}+U_{3}))\). But this intersection is \(0\), proving that \(x=0\).
Write \(S=V_{1}+V_{2}+V_{3}\). Since this is a direct sum, there exist linear maps \(\pi_{i}\colon S\to V_{i}\) for \(i\in[3]\) such that \(s=\pi_{1}(s)+\pi_{2}(s)-\pi_{3}(s)\) for all \(s\in S\).
Going back to assumptions, we see that
\[|U_{4}\cap S|=\frac{|U_{4}||S|}{|U_{4}+S|}\geq\frac{K^{-3}p^{4d}}{|U_{4}+U_{1} +U_{2}+U_{3}|}\geq K^{-4}p^{d}.\]
Recall that we are given a linear isomorphism \(\phi_{4}\colon\mathbb{F}_{p}^{d}\to U_{4}\). For \(i\in[3]\), let \(\phi_{i}^{\prime}\colon\mathbb{F}_{p}^{d}\to U_{i}\) be the linear map defined as follows. We first define \(\phi_{i}^{\prime}(x)=\pi_{i}(\phi_{4}(x))\) for all \(x\in\phi_{4}^{-1}(U_{4}\cap S)\), and then extend it to the whole of \(\mathbb{F}_{p}^{d}\) arbitrarily. We claim that \(\operatorname{rank}\phi_{i}^{\prime}\geq d-5\log_{p}K\). To see this, we need to estimate \(|\ker\pi_{i}\cap(U_{4}\cap S)|\). By definition of \(\pi_{i}\), we have \(\ker\pi_{i}=V_{j}+V_{k}\) for \(\{j,k\}=[3]\setminus\{i\}\). Thus
\[|(V_{j}+V_{k})\cap(U_{4}\cap S)|\leq|(U_{j}+U_{k})\cap U_{4}|=\frac{|U_{j}+U_ {k}||U_{4}|}{|U_{j}+U_{k}+U_{4}|}\leq\frac{|U_{j}||U_{k}||U_{4}|}{|U_{j}+U_{k} +U_{4}|}\leq K,\]
from which we deduce
\[\operatorname{rank}\phi_{i}^{\prime}\geq\operatorname{rank}\phi_ {i}^{\prime}|_{\phi_{4}^{-1}(U_{4}\cap S)} \geq\dim(U_{4}\cap S)-\dim\ker\phi_{i}^{\prime}|_{\phi_{4}^{-1}(U _{4}\cap S)}\] \[\geq d-4\log_{p}K-\dim(\ker\pi_{i}\cap(U_{4}\cap S))\geq d-5\log _{p}K.\]
Using Lemma 17, we may find a linear isomorphism \(\phi_{i}\colon\mathbb{F}_{p}^{d}\to U_{i}\) such that \(\operatorname{rank}(\phi_{i}-\phi_{i}^{\prime})\leq 5\log_{p}K\). From the definition, we see that \(\phi_{1}^{\prime}+\phi_{2}^{\prime}-\phi_{3}^{\prime}-\phi_{4}\) vanishes on \(\phi_{4}^{-1}(U_{4}\cap S)\). Thus \(\operatorname{rank}(\phi_{1}^{\prime}+\phi_{2}^{\prime}-\phi_{3}^{\prime}- \phi_{4})\leq 4\log_{p}K\), and the claim follows.
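Explicitly, the stated bound then follows by subadditivity of rank:
\[\operatorname{rank}(\phi_{1}+\phi_{2}-\phi_{3}-\phi_{4})\leq\operatorname{rank}(\phi_{1}^{\prime}+\phi_{2}^{\prime}-\phi_{3}^{\prime}-\phi_{4})+\sum_{i\in[3]}\operatorname{rank}(\phi_{i}-\phi_{i}^{\prime})\leq 4\log_{p}K+3\cdot 5\log_{p}K\leq 20\log_{p}K.\]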
We also need a related uniqueness result.
**Lemma 35**.: _Let \(W,U_{1},U_{2},V_{1},V_{2}\leq G\) be subspaces of dimension \(d\). Let \(K\geq 1\) be a parameter such that_
\[|W\cap(U_{1}+U_{2}+V_{1}+V_{2})|\leq K.\]
_Suppose that \(\phi_{i}\colon\mathbb{F}_{p}^{d}\to U_{i}\), \(i\in[2]\), \(\psi_{i}\colon\mathbb{F}_{p}^{d}\to V_{i}\), \(i\in[2]\) and \(\theta\colon\mathbb{F}_{p}^{d}\to W\) are linear maps such that_
\[\operatorname{rank}\Big{(}\phi_{1}+\phi_{2}+\psi_{1}+\psi_{2}+\theta\Big{)} \leq r.\]
_Then \(\operatorname{rank}\theta\leq r+\log_{p}K\)._
Proof.: Let \(I=\operatorname{im}\Big{(}\phi_{1}+\phi_{2}+\psi_{1}+\psi_{2}+\theta\Big{)}\), which is a vector space of dimension at most \(r\). Let \(J=\operatorname{im}\theta\). Then we have \(J\subseteq(I+U_{1}+U_{2}+V_{1}+V_{2})\cap W\). But
\[|(I+U_{1}+U_{2}+V_{1}+V_{2})\cap W|\leq|I|\,|(U_{1}+U_{2}+V_{1}+V_{2})\cap W|\leq p^{r}K,\]
from which the claim follows.
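To make the final step explicit: \(J=\operatorname{im}\theta\) is a subspace of \((I+U_{1}+U_{2}+V_{1}+V_{2})\cap W\), so
\[\operatorname{rank}\theta=\log_{p}|J|\leq\log_{p}\big(p^{r}K\big)=r+\log_{p}K.\]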
We are now ready to prove Theorem 33.
Proof of Theorem 33.: Let us begin the proof by showing the following claim.
**Claim 36**.: _Suppose \(\eta\leq 2^{-31}c^{3}\). For at least \(\frac{c}{2}|G|^{6}\) of 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) we have_
* \(a=b_{1}+b_{2}-b_{3},\ x_{3}=x_{2}-b_{2}+b_{3},\ y_{1}=y_{3}+b_{1}-b_{3},\ z_{2}=b_{1}+b_{2}-z_{1}\)_, and_
* _each of 7 subspaces_ \[W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{a},\ W_{b_{1}}\cap W_{x_{2}}\cap W _{x_{3}}\cap W_{a},\ W_{b_{2}}\cap W_{x_{3}}\cap W_{x_{2}}\cap W_{b_{3}},\ W_{y_{1}} \cap W_{b_{2}}\cap W_{y_{3}}\cap W_{a}\] \[W_{b_{1}}\cap W_{y_{3}}\cap W_{y_{1}}\cap W_{b_{3}},\ W_{z_{1}}\cap W _{z_{2}}\cap W_{b_{3}}\cap W_{a},\ W_{b_{1}}\cap W_{b_{2}}\cap W_{z_{1}}\cap W _{z_{2}}\] _has size at least_ \(K^{-3}p^{-3d}|G|\)_._
Proof.: Let \(\mathcal{T}\) be the set of all 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) that satisfy (21) and (22). Thus, \(|\mathcal{T}|\geq c|G|^{6}\). We now show that at most \(\frac{c}{40}|G|^{6}\) 10-tuples in \(\mathcal{T}\) have \(|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{a}|<K^{-3}p^{-3d}|G|\). The same argument applies to the other 6 subspaces, and the claim will follow. (Note that the indices of each of the 7 subspaces in the claim form an additive quadruple, with any three indices behaving independently; these are the only properties that we shall use in the proof.)
Let \(\mathcal{B}\) be the set of all \((b_{1},b_{2},b_{3})\in A^{3}\) such that the number of 10-tuples in \(\mathcal{T}\) containing these 3 elements is at least \(\frac{c}{1000}|G|^{3}\). The number of 10-tuples in \(\mathcal{T}\) whose \((b_{1},b_{2},b_{3})\) belongs to \(\mathcal{B}\) is therefore at least \(\frac{999c}{1000}|G|^{6}\). Let \(\mathcal{B}^{\prime}\) be the subset of \(\mathcal{B}\) consisting of those triples \((b_{1},b_{2},b_{3})\) for which the number of 6-tuples \((x,y,z,x^{\prime},y^{\prime},z^{\prime})\in A^{6}\) with
\[|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}\cap W_{x^{\prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}|>Kp^{-9d}|G|\]
is at most \(\frac{c^{2}}{2\cdot 10^{6}}|G|^{6}\). The number of 10-tuples in \(\mathcal{T}\) whose \((b_{1},b_{2},b_{3})\) belongs to \(\mathcal{B}^{\prime}\) is therefore at least \(\frac{999c}{1000}|G|^{6}-2\cdot 10^{6}\eta c^{-2}|G|^{6}\geq\frac{998c}{1000}|G|^{6}\), provided \(\eta\leq c^{3}/(2\cdot 10^{9})\).
Pick any 10-tuple \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\) such that \((b_{1},b_{2},b_{3})\in\mathcal{B}^{\prime}\). Then \(a=b_{1}+b_{2}-b_{3}\) and we have at least \(\frac{c^{2}}{10^{6}}|G|^{6}\) of 6-tuples \((x,y,z,x^{\prime},y^{\prime},z^{\prime})\) such that
\[|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z} |,|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x^{\prime}}\cap W_{y ^{\prime}}\cap W_{z^{\prime}}|\geq K^{-1}p^{-6d}|G|.\]
Standard properties of subspaces then imply
\begin{align*}
K^{-2}p^{-12d}|G|^{2}&\leq|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}|\cdot|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x^{\prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}|\\
&=\Big|\Big(W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}\Big)+\Big(W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x^{\prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}\Big)\Big|\\
&\qquad\cdot|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}\cap W_{x^{\prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}|\\
&\leq|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}|\cdot|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}\cap W_{x^{\prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}|.
\end{align*}
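The "standard property" invoked in this chain is the following identity for subspaces, which we record for clarity: for any subspaces \(X,Y\leq G\),
\[|X|\,|Y|=|X+Y|\,|X\cap Y|;\]
it is applied with \(X\) and \(Y\) the two 7-fold intersections, after which \(X+Y\) and \(X\cap Y\) are enlarged to the subspaces appearing in the last line.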
But there exists a choice of \((x,y,z,x^{\prime},y^{\prime},z^{\prime})\) for which
\[|W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}\cap W_{x}\cap W_{y}\cap W_{z}\cap W_{x^{ \prime}}\cap W_{y^{\prime}}\cap W_{z^{\prime}}|\leq Kp^{-9d}|G|\]
so we get \(|W_{a}\cap W_{b_{1}}\cap W_{b_{2}}\cap W_{b_{3}}|\geq K^{-3}p^{-3d}|G|\), as required.
Write \(U_{a}=W_{a}^{\perp}\) for each \(a\in A\). The assumptions on \(W_{a}\) and the claim above imply that
\[|U_{a_{1}}+U_{a_{2}}+\cdots+U_{a_{r}}|\geq K^{-1}p^{rd} \tag{23}\]
holds for all but at most \(\eta|G|^{r}\)\(r\)-tuples \((a_{1},a_{2},\ldots,a_{r})\in A^{r}\) for \(r\in[9]\), and that for at least \(\frac{c}{2}|G|^{6}\) 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) we have
* \(a=b_{1}+b_{2}-b_{3},\ x_{3}=x_{2}-b_{2}+b_{3},\ y_{1}=y_{3}+b_{1}-b_{3},\ z_{2}=b_{1}+b_{2}-z_{1}\), and
* each of 7 subspaces \[U_{b_{1}}+U_{b_{2}}+ U_{b_{3}}+U_{a},\ U_{b_{1}}+U_{x_{2}}+U_{x_{3}}+U_{a},\ U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}},\ U_{y_{1}}+U_{b_{2}}+U_{y_{3}}+U_{a}\] \[U_{b_{1}}+U_{y_{3}}+U_{y_{1}}+U_{b_{3}},\ U_{z_{1}}+U_{z_{2}}+U_{ b_{3}}+U_{a},\ U_{b_{1}}+U_{b_{2}}+U_{z_{1}}+U_{z_{2}}\] (24) has size at most \(K^{3}p^{3d}\).
Our aim is to use Lemma 34 to define linear isomorphisms between \(\mathbb{F}_{p}^{d}\) and \(U_{a}\). To that end, we say that an additive quadruple \((x_{1},x_{2},x_{3},x_{4})\) (where \(x_{1}+x_{2}=x_{3}+x_{4}\)) is _good_ if we have \(K^{-1}p^{3d}\leq|U_{x_{j_{1}}}+U_{x_{j_{2}}}+U_{x_{j_{3}}}|\) for any distinct indices \(j_{1},j_{2},j_{3}\in[4]\) and \(|U_{x_{1}}+U_{x_{2}}+U_{x_{3}}+U_{x_{4}}|\leq K^{3}p^{3d}\). Notice that the associated subspaces to elements of any good additive quadruple satisfy conditions of Lemma 34.
Our next claim shows that we may find many 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) which satisfy stronger conditions than guaranteed by (24), and these stronger conditions will allow us to apply Lemma 35.
**Claim 37**.: _Suppose that \(\eta\leq\frac{c}{100}\). For at least \(\frac{c}{20}|G|^{6}\) 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\) we have_
* \(a=b_{1}+b_{2}-b_{3},\ x_{3}=x_{2}-b_{2}+b_{3},\ y_{1}=y_{3}+b_{1}-b_{3},\ z_{2}=b_{1}+b_{2}-z_{1}\)_,_
* _each of 7 subspaces_ \[U_{b_{1}}+U_{b_{2}}+ U_{b_{3}}+U_{a},\ U_{b_{1}}+U_{x_{2}}+U_{x_{3}}+U_{a},\ U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}},\ U_{y_{1}}+U_{b_{2}}+U_{y_{3}}+U_{a}\] \[U_{b_{1}}+U_{y_{3}}+U_{y_{1}}+U_{b_{3}},\ U_{z_{1}}+U_{z_{2}}+U_{ b_{3}}+U_{a},\ U_{b_{1}}+U_{b_{2}}+U_{z_{1}}+U_{z_{2}}\] _has size at most_ \(K^{3}p^{3d}\)_,_
* _each of 16 subspaces obtained by taking sum of 3 subspaces inside any of the 4 quadruples_ \(\{U_{b_{1}},U_{b_{2}},U_{b_{3}},U_{a}\},\{U_{b_{1}},U_{x_{2}},U_{x_{3}},U_{a}\}, \{U_{y_{1}},U_{b_{2}},U_{y_{3}},U_{a}\}\) _and_ \(\{U_{z_{1}},U_{z_{2}},U_{b_{3}},U_{a}\}\) _has size at least_ \(K^{-1}p^{3d}\)_,_
* _each of 3 subspaces_ \[U_{b_{1}}\cap(U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}}),\ U_{b_{2}}\cap(U_{b_{1}}+U_{ y_{3}}+U_{y_{1}}+U_{b_{3}}),\ U_{b_{3}}\cap(U_{b_{1}}+U_{b_{2}}+U_{z_{1}}+U_{z_{2}})\] _has size at most_ \(K^{4}\)_._
Proof.: Combining conditions (23) for \(r\in\{3,4\}\) and (24) we get at least \((2^{-1}c-19\eta)|G|^{6}\) 10-tuples \((a,b_{1},b_{2},b_{3},x_{2},x_{3},\)\(y_{1},\)\(y_{3},\)\(z_{1},z_{2})\in A^{10}\) such that conditions **(i)** and **(ii)** in the conclusion of the claim hold and additionally we have each of 16 subspaces obtained by taking sum of 3 subspaces inside any of the 4 quadruples \(\{U_{b_{1}},U_{b_{2}},U_{b_{3}},U_{a}\}\), \(\{U_{b_{1}},U_{x_{2}},U_{x_{3}},U_{a}\}\), \(\{U_{y_{1}},U_{b_{2}},U_{y_{3}},U_{a}\}\) and \(\{U_{z_{1}},U_{z_{2}},U_{b_{3}},U_{a}\}\) has size at least \(K^{-1}p^{3d}\) and each of 3 subspaces
\[U_{b_{1}}+U_{b_{2}}+U_{b_{3}}+U_{x_{2}},\ U_{b_{1}}+U_{b_{2}}+U_{b_{3}}+U_{y_{ 3}},\ U_{b_{1}}+U_{b_{2}}+U_{b_{3}}+U_{z_{1}}\]
has size at least \(K^{-1}p^{4d}\). We now show that each such 10-tuple has the properties described in the claim.
Take any such a 10-tuple \((a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{10}\). By the way we choose 10-tuples we see that property **(iii)** holds. It remains to prove property **(iv)**. We have
\[|U_{b_{1}}\cap(U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}})|=\frac{|U_{b_{1}}||U_{ b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}}|}{|U_{b_{1}}+U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{ b_{3}}|}\leq\frac{p^{d}\cdot K^{3}p^{3d}}{|U_{b_{1}}+U_{b_{2}}+U_{x_{2}}+U_{b_{3}}|} \leq K^{4}.\]
Similar arguments prove the other two bounds.
Let \(a\in A\) be an element that we shall specify later and let \(\theta\colon\mathbb{F}_{p}^{d}\to U_{a}\) be a linear isomorphism. For each index \(i\in[3]\), let \(A_{i}\subseteq A\) be the set of all elements \(x\in A\) that appear at \(i^{\text{th}}\) position in a good additive quadruple in which \(a\) appears as the last element. For each such element \(x\in A_{i}\), we pick a random good additive quadruple \((y_{1},y_{2},y_{3},a)\) such that \(y_{i}=x\), uniformly among all such quadruples. We do this independently for all \(x\). Once we have chosen such quadruple \((y_{1},y_{2},y_{3},a)\) we apply Lemma 34 with chosen linear isomorphism \(\theta\colon\mathbb{F}_{p}^{d}\to U_{a}\) to obtain linear isomorphisms \(\psi_{j}\colon\mathbb{F}_{p}^{d}\to U_{y_{j}}\) such that \(\operatorname{rank}\left(\psi_{1}+\psi_{2}-\psi_{3}-\theta\right)\leq 60 \log_{p}K\). Finally, define \(\phi_{x}^{i}=\psi_{i}\).
The crucial claim is that, if we choose \(a\) suitably, the system of maps \((\phi_{x}^{1})_{x\in A_{1}}\) exhibits a considerable amount of additive structure.
**Claim 38**.: _There exists an element \(a\in A\) such that in the above procedure we get at least \(2^{-28}c^{4}|G|^{2}\) additive quadruples \((b_{1},b_{2},b_{3},a)\in A_{1}\times A_{2}\times A_{3}\times A\) such that \(\operatorname{rank}\left(\phi_{b_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}-\theta\right)\leq 500\log_{p}K\)._
Proof.: Let \(\mathcal{T}\) be the set of all 10-tuples satisfying properties in Claim 37. Let \(a\in A\) be such that
\[\mathcal{T}^{\prime}=\Big\{(b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{9}\colon(a,b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in\mathcal{T}\Big\}\]
has size at least \(\frac{c}{40}|G|^{5}\). Now take any \((b_{1},b_{2},b_{3})\in A^{3}\) such that \(b_{1}+b_{2}=b_{3}+a\) and such that the number of 6-tuples \((x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in A^{6}\) with \((b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in\mathcal{T}^{\prime}\) is at least \(\frac{c}{80}|G|^{3}\). By averaging, the set \(\mathcal{B}\) of such triples has size at least \(\frac{c}{80}|G|^{2}\). Let \(\mathcal{S}_{b_{1},b_{2},b_{3}}\) be the set of the 6-tuples we considered above for \((b_{1},b_{2},b_{3})\). Let us observe for any \((b_{1},b_{2},b_{3},x_{2},x_{3},y_{1},y_{3},z_{1},z_{2})\in\mathcal{T}^{\prime}\) that properties **(ii)** and **(iii)** of Claim 37 imply that the quadruples \((b_{1},b_{2},b_{3},a)\), \((b_{1},x_{2},x_{3},a)\), \((y_{1},b_{2},y_{3},a)\) and \((z_{1},z_{2},b_{3},a)\) are good and thus \(b_{1},y_{1},z_{1}\in A_{1}\), \(b_{2},x_{2},z_{2}\in A_{2}\) and \(b_{3},x_{3},y_{3}\in A_{3}\). We now show that for every \((b_{1},b_{2},b_{3})\in\mathcal{B}\) the event \(\operatorname{rank}\left(\phi_{b_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}-\theta\right)\leq 432\log_{p}K\) occurs with probability at least \(2^{-21}c^{3}\).
To see that, fix \((b_{1},b_{2},b_{3})\in\mathcal{B}\). Apply Lemma 34 to the quadruple of subspaces \((U_{b_{1}},U_{b_{2}},U_{b_{3}},U_{a})\) and the isomorphism \(\theta\colon\mathbb{F}_{p}^{d}\to U_{a}\). We thus get linear isomorphisms \(\theta_{i}\colon\mathbb{F}_{p}^{d}\to U_{b_{i}}\) for \(i\in[3]\) such that \(\operatorname{rank}\left(\theta_{1}+\theta_{2}-\theta_{3}-\theta\right)\leq 60\log_{p}K\). Since \(|\mathcal{S}_{b_{1},b_{2},b_{3}}|\geq\frac{c}{80}|G|^{3}\), we see in particular that there are at least \(\frac{c}{80}|G|\) choices of \(x_{2}\in A_{2}\) such that for \(x_{3}=b_{1}+x_{2}-a\) the additive quadruple \((b_{1},x_{2},x_{3},a)\) is good and \(|U_{b_{1}}\cap(U_{b_{2}}+U_{x_{3}}+U_{x_{2}}+U_{b_{3}})|\leq K^{4}\) (using property **(iv)** in the conclusion of Claim 37). If it happens that \((b_{1},x_{2},x_{3},a)\) is used to define \(\phi_{b_{1}}^{1}\), then we also get linear isomorphisms \(\rho_{2}\colon\mathbb{F}_{p}^{d}\to U_{x_{2}}\) and \(\rho_{3}\colon\mathbb{F}_{p}^{d}\to U_{x_{3}}\) such that \(\operatorname{rank}\left(\phi_{b_{1}}^{1}+\rho_{2}-\rho_{3}-\theta\right)\leq 60\log_{p}K\). But then
\[\operatorname{rank}\left(\left(\phi_{b_{1}}^{1}-\theta_{1}\right)+\rho_{2}- \rho_{3}-\theta_{2}+\theta_{3}\right)\leq 120\log_{p}K\]
so by Lemma 35 we get \(\operatorname{rank}(\phi_{b_{1}}^{1}-\theta_{1})\leq 124\log_{p}K\). In particular, this happens with probability at least \(\frac{c}{80}\).
Similarly, events \(\operatorname{rank}(\phi_{b_{2}}^{2}-\theta_{2})\leq 124\log_{p}K\) and \(\operatorname{rank}(\phi_{b_{3}}^{3}-\theta_{3})\leq 124\log_{p}K\) happen with probability at least \(\frac{c}{80}\) each, and all three events are independent, so we get
\[\operatorname{rank}\left(\phi_{b_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}- \theta\right)\leq\operatorname{rank}\left(\theta_{1}+\theta_{2}-\theta_{3}- \theta\right)+\operatorname{rank}(\phi_{b_{1}}^{1}-\theta_{1})+\operatorname{ rank}(\phi_{b_{2}}^{2}-\theta_{2})+\operatorname{rank}(\phi_{b_{3}}^{3}- \theta_{3})\leq 432\log_{p}K\]
with probability at least \(2^{-21}c^{3}\).
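The numeric step here is worth recording: the three events are independent and each has probability at least \(c/80\), so the probability that all three occur is at least
\[\Big(\frac{c}{80}\Big)^{3}=\frac{c^{3}}{512000}\geq 2^{-21}c^{3},\qquad\text{since }80^{3}=512000\leq 2^{21}=2097152.\]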
Thus, the expected number of triples \((b_{1},b_{2},b_{3})\in\mathcal{B}\) for which \(\operatorname{rank}\left(\phi_{b_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}-\theta\right)\leq 500\log_{p}K\) holds is at least \(2^{-28}c^{4}|G|^{2}\), proving the claim.
We say that an additive quadruple \((b_{1},b_{2},b_{3},a)\in A_{1}\times A_{2}\times A_{3}\times A\) is _very good_ if \(\operatorname{rank}\left(\phi_{b_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}-\theta\right)\leq 500\log_{p}K\). By the claim above we have at least \(c^{\prime}|G|^{2}\) very good additive quadruples, where \(c^{\prime}=2^{-28}c^{4}\). Consider the bipartite graph whose vertex classes \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are copies of \(G\) and where we put an edge between \(b_{2}\in\mathcal{C}_{1}\) and \(b_{3}\in\mathcal{C}_{2}\) if \((b_{3}+a-b_{2},b_{2},b_{3},a)\) is very good. This is a graph of density at least \(c^{\prime}\) and hence it contains at least \({c^{\prime}}^{4}|G|^{4}\) ordered 4-cycles.
Let \((b_{2},b_{3},b_{2}^{\prime},b_{3}^{\prime})\) be an ordered 4-cycle in the above graph. Let \(x_{1}=b_{3}+a-b_{2}\), \(x_{2}=b_{3}+a-b_{2}^{\prime}\), \(x_{3}=b_{3}^{\prime}+a-b_{2}^{\prime}\) and \(x_{4}=b_{3}^{\prime}+a-b_{2}\), so additive quadruples
\[(x_{1},b_{2},b_{3},a),\ (x_{2},b_{2}^{\prime},b_{3},a),\ (x_{3},b_{2}^{\prime},b_{3}^{ \prime},a),\ (x_{4},b_{2},b_{3}^{\prime},a)\]
are very good. Observe also that \(x_{1}+x_{3}=x_{2}+x_{4}\) and that
\[\phi_{x_{1}}^{1}+\phi_{x_{3}}^{1}-\phi_{x_{2}}^{1}-\phi_{x_{4}}^{1}=\left(\phi_{x_{1}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}}^{3}-\theta\right)+\left(\phi_{x_{3}}^{1}+\phi_{b_{2}^{\prime}}^{2}-\phi_{b_{3}^{\prime}}^{3}-\theta\right)-\left(\phi_{x_{2}}^{1}+\phi_{b_{2}^{\prime}}^{2}-\phi_{b_{3}}^{3}-\theta\right)-\left(\phi_{x_{4}}^{1}+\phi_{b_{2}}^{2}-\phi_{b_{3}^{\prime}}^{3}-\theta\right),\]
so \(\operatorname{rank}\left(\phi_{x_{1}}^{1}+\phi_{x_{3}}^{1}-\phi_{x_{2}}^{1}-\phi_{x_{4}}^{1}\right)\leq 2000\log_{p}K\). Thus, by double-counting, there are at least \({c^{\prime}}^{4}|G|^{3}\) additive quadruples \((x_{1},x_{3},x_{2},x_{4})\in A_{1}^{4}\) with \(\operatorname{rank}\left(\phi_{x_{1}}^{1}+\phi_{x_{3}}^{1}-\phi_{x_{2}}^{1}-\phi_{x_{4}}^{1}\right)\leq 2000\log_{p}K\), as each such additive quadruple can arise from at most \(|G|\) of the 4-cycles considered above.
By Corollary 19 there exists a map \(\Phi\colon G\times\mathbb{F}_{p}^{d}\to G\), which is affine in the first coordinate and
linear in the second, such that \(\operatorname{rank}(\Phi(a,\cdot)-\phi_{a}^{1})\leq R\) for at least \(c_{1}|G|\) elements \(a\in A_{1}\), where \(c_{1}\geq\exp\Big(-\exp\Big((\log{c^{\prime}}^{-1}+\log_{p}K)^{O(1)}\Big)\Big)\) and \(R\leq\exp\Big((\log{c^{\prime}}^{-1}+\log_{p}K)^{O(1)}\Big)\). Let \(B\) be the set of such \(a\).
Finally, we prove that \(\Phi\) is highly quasirandom. Let \(r=\lceil 2R+\log_{p}K\rceil\) and \(\alpha=(2\eta c_{1}^{-2})^{1/2r}\). We claim there are no \(r\) linearly independent elements \(\lambda_{1},\ldots,\lambda_{r}\in\mathbb{F}_{p}^{d}\) such that for each \(i\in[r]\) the bilinear form \((x,y)\in G^{2}\to\Phi(x,\lambda_{i})\cdot y\) has bias at least \(\alpha\). Suppose the contrary. Since the bias of this form equals \(\mathbb{E}_{x}\mathbb{1}(\Phi(x,\lambda_{i})=0)\) (average over \(y\) first), this implies that for each \(i\in[r]\) there are at least \(\alpha|G|\) elements \(x\) for which \(\Phi(x,\lambda_{i})=0\). Intersecting all these sets, we get a subspace \(S\leq G\) of size \(|S|\geq\alpha^{r}|G|\) such that for each \(x\in S\) we have \(\Phi(x,\lambda_{i})=0\) for all \(i\in[r]\). Averaging over cosets of \(S\), we may find \(t\) such that \(|B\cap(t+S)|\geq c_{1}|S|\).
Whenever \(a\in B\cap(t+S)\), the condition \(\operatorname{rank}(\Phi(a,\cdot)-\phi_{a}^{1})\leq R\) implies that \(\Phi(a,\lambda)=\phi_{a}^{1}(\lambda)\) holds for at least \(p^{d-R}\) elements \(\lambda\in\mathbb{F}_{p}^{d}\). Hence, \(|\operatorname{Im}\Phi(a,\cdot)\cap\operatorname{Im}\phi_{a}^{1}|\!\geq p^{ d-R}\), so recalling that \(U_{a}=\operatorname{Im}\phi_{a}^{1}\), we may find a subspace \(T_{a}\) of dimension at most \(R\) such that \(U_{a}\subseteq T_{a}+\operatorname{Im}\Phi(a,\cdot)\). In particular, for any two elements \(a_{1},a_{2}\in B\cap(t+S)\) we have
\[|U_{a_{1}}+U_{a_{2}}| \leq |(T_{a_{1}}+\operatorname{Im}\Phi(a_{1},\cdot))+(T_{a_{2}}+ \operatorname{Im}\Phi(a_{2},\cdot))|\] \[\leq |T_{a_{1}}||T_{a_{2}}||\operatorname{Im}\Phi(a_{1},\cdot)+ \operatorname{Im}\Phi(a_{2},\cdot)|\!\leq p^{2R}|\operatorname{Im}\Phi(a_{1}, \cdot)+\operatorname{Im}\Phi(a_{2},\cdot)|.\]
However, since \(a_{1},a_{2}\in B\cap(t+S)\), whenever \(i\in[r]\), we have \(\Phi(a_{1},\lambda_{i})=\Phi(a_{1}-t,\lambda_{i})+\Phi(t,\lambda_{i})=\Phi(t, \lambda_{i})=\Phi(a_{2},\lambda_{i})\). Thus \(|\operatorname{Im}\Phi(a_{1},\cdot)+\operatorname{Im}\Phi(a_{2},\cdot)|\!\leq p ^{2d-r}\) and we get
\[|U_{a_{1}}+U_{a_{2}}|\!\leq p^{2R+2d-r}<K^{-1}p^{2d}\]
by the choice of \(r\). However, this is in contradiction with (23), since \(\eta<c_{1}^{2}\alpha^{2r}\) by the choice of \(\alpha\).
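For completeness, we spell out the bound \(|\operatorname{Im}\Phi(a_{1},\cdot)+\operatorname{Im}\Phi(a_{2},\cdot)|\leq p^{2d-r}\) used above. The linear map \((\lambda,\lambda^{\prime})\mapsto\Phi(a_{1},\lambda)+\Phi(a_{2},\lambda^{\prime})\) maps \(\mathbb{F}_{p}^{d}\times\mathbb{F}_{p}^{d}\) onto \(\operatorname{Im}\Phi(a_{1},\cdot)+\operatorname{Im}\Phi(a_{2},\cdot)\), and its kernel contains the \(r\)-dimensional subspace \(\{(\lambda,-\lambda)\colon\lambda\in\langle\lambda_{1},\ldots,\lambda_{r}\rangle\}\), since \(\Phi(a_{1},\lambda_{i})=\Phi(a_{2},\lambda_{i})\) for each \(i\in[r]\). Hence
\[|\operatorname{Im}\Phi(a_{1},\cdot)+\operatorname{Im}\Phi(a_{2},\cdot)|\leq p^{2d}/p^{r}=p^{2d-r}.\]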
## §5 Structure of approximate quadratic varieties
In the final part of the proof, we use the map \(\Phi\) to deduce the structure of the set \(V\). The last step is articulated as the following proposition.
**Proposition 39**.: _There is an absolute constant \(D\geq 1\) such that the following holds. Let \(c,\delta,\varepsilon>0\) and \(d\in\mathbb{N}\) be such that \(c\leq\delta p^{d}\leq c^{-1}\). Suppose that \(\varepsilon\leq(2^{-1}c\delta)^{D}\). Let \(V\subseteq G\) be a set of density \(\delta\) such that \(\|1_{V}-\delta\|_{\mathsf{U}^{2}}\!\leq\varepsilon\). Suppose that we are also given a subset \(A\subseteq G\) of size \(|A|\!\geq c|G|\), a subspace \(W_{a}\leq G\) for each \(a\in A\) and a bilinear map \(\beta\colon G\times G\to\mathbb{F}_{p}^{d}\) such that_
* _for each_ \(\lambda\in\mathbb{F}_{p}^{d}\setminus\{0\}\) _we have_ \(\operatorname{bias}\lambda\cdot\beta\leq\varepsilon\)_,_
* _for each_ \(a\in A\) _we have_ \(|W_{a}\cap\{b\in G\colon\beta(a,b)=0\}|\!\geq cp^{-d}|G|\)_,_
* _for each_ \(a\in A\) _and_ \(b\in W_{a}\) _we have_ \(\,\overline{*}^{(8)}\!\mathbbm{1}_{V\cap V-a}(b)\geq c\delta^{15}\)_._
_Then there exists a quadratic variety \(Q\subseteq G\) of size \(|Q|\!\leq(2c^{-1})^{D}\delta|G|\) such that \(|Q\cap V|\!\geq\exp\Big{(}-\log^{D}(2c^{-1})\Big{)}\delta|G|\). Moreover, \(Q\) is defined as \(\{x\in G\colon\gamma(x,x)-\psi(x)=\mu\}\) for a symmetric bilinear map \(\gamma\colon G\times G\to\mathbb{F}_{p}^{\tilde{d}}\), an affine map \(\psi\colon G\to\mathbb{F}_{p}^{\tilde{d}}\) and \(\mu\in\mathbb{F}_{p}^{\tilde{d}}\), where \(\operatorname{bias}\lambda\cdot\gamma\leq\varepsilon\) for all \(\lambda\neq 0\), for some \(d-O(\log_{p}(2c^{-1}))\leq\tilde{d}\leq d\)._
Proof.: The proof consists of three steps. In the first one we show that \(\beta\) is approximately symmetric, in the second we pass to an exactly symmetric bilinear map and in the final step we use the symmetry to find the desired quadratic variety.
**Approximate symmetry.** This step of the proof is based on the symmetry argument of Green and Tao [14]. We may combine assumptions **(ii)** and **(iii)** to deduce that
\[\mathbb{E}_{a,b}\,\mathbb{1}(\beta(a,b)=0)\,\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a}(b)\geq c^{3}p^{-d}\delta^{15}.\]
Expanding the indicator as \(\mathbb{1}(\beta(a,b)=0)=\mathbb{E}_{\lambda\in\mathbb{F}_{p}^{d}}\,\omega^{\lambda\cdot\beta(a,b)}\) and averaging, we may find a set \(\Lambda\subseteq\mathbb{F}_{p}^{d}\) of size \(|\Lambda|\geq 2^{-6}c^{3}p^{d}\) such that for each \(\lambda\in\Lambda\)
\[\mathbb{E}_{a,b}\,\omega^{\lambda\cdot\beta(a,b)}\,\mathbb{E}_{x_{[7]}}1_{V}(x_{1})1_{V}(x_{1}+a)\ldots 1_{V}(x_{7})1_{V}(x_{7}+a)\]
\[1_{V}(x_{1}-x_{2}+\cdots-x_{6}+x_{7}-b)1_{V}(x_{1}-x_{2}+\cdots-x_{6}+x_{7}-b+a)\geq 2^{-1}c^{3}\delta^{16}.\]
We now show that for such a \(\lambda\) we have that \(\lambda\cdot(\beta-\beta\circ(1\ 2))\) is of low rank.
Introduce an auxiliary variable \(t\), and make a change of variables \(w_{i}=x_{i}-t\). Then
\[\mathbb{E}_{a,b}\,\omega^{\lambda\cdot\beta(a,b)}\,\mathbb{E}_{t,w_{[7]}}1_{V}(w_{1}+t)1_{V}(w_{1}+t+a)\ldots 1_{V}(w_{7}+t)1_{V}(w_{7}+t+a)\]
\[1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+t-b)1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+t-b+a)\geq 2^{-1}c^{3}\delta^{16}.\]
Make a further change of variables \(x=t,y=t+a,z=2t+a-b\) instead of \(t,a,b\). Then
\[2^{-1}c^{3}\delta^{16}\leq\mathbb{E}_{x,y,z}\,\omega^{\lambda\cdot\beta(y-x,x+y-z)}\,\mathbb{E}_{w_{[7]}}1_{V}(w_{1}+x)1_{V}(w_{1}+y)\ldots 1_{V}(w_{7}+x)1_{V}(w_{7}+y)\]
\[1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+z-y)1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+z-x)\]
\[=\mathbb{E}_{x,y,z}\,\omega^{\lambda\cdot\beta(y,x)-\lambda\cdot\beta(x,y)}\,\omega^{\lambda\cdot\beta(x-y,z)-\lambda\cdot\beta(x,x)+\lambda\cdot\beta(y,y)}\,\mathbb{E}_{w_{[7]}}1_{V}(w_{1}+x)1_{V}(w_{1}+y)\ldots 1_{V}(w_{7}+x)1_{V}(w_{7}+y)\]
\[1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+z-y)1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+z-x).\]
Write \(\rho(x,y)=\lambda\cdot\beta(y,x)-\lambda\cdot\beta(x,y)\) and for \(\boldsymbol{w}=w_{[7]}\) let
\[f_{\boldsymbol{w}}(t)=\prod_{i\in[7]}1_{V}(w_{i}+t)\]
and
\[g_{\boldsymbol{w}}(t)=1_{V}(w_{1}-w_{2}+\cdots-w_{6}+w_{7}+t).\]
With the new notation we have
\[2^{-1}c^{3}\delta^{16}\leq\mathbb{E}_{x,y,z,\boldsymbol{w}}\,\omega^{\rho(x,y)}\Big{(}\omega^{\lambda\cdot\beta(x,z)-\lambda\cdot\beta(x,x)}f_{\boldsymbol{w}}(x)g_{\boldsymbol{w}}(z-x)\Big{)}\Big{(}\omega^{-\lambda\cdot\beta(y,z)+\lambda\cdot\beta(y,y)}f_{\boldsymbol{w}}(y)g_{\boldsymbol{w}}(z-y)\Big{)}.\]
Note that \(f_{\boldsymbol{w}}(y)g_{\boldsymbol{w}}(z-y)\) takes values \(0\) and \(1\), so \(f_{\boldsymbol{w}}(y)g_{\boldsymbol{w}}(z-y)=f_{\boldsymbol{w}}(y)^{2}g_{\boldsymbol{w}}(z-y)^{2}\). Applying the Cauchy-Schwarz inequality we get
\[2^{-2}c^{6}\delta^{32}\leq\mathbb{E}_{\boldsymbol{w},y,z}\,f_{\boldsymbol{w}}(y)g_{\boldsymbol{w}}(z-y)\Big{|}\mathbb{E}_{x}\,\omega^{\rho(x,y)}\omega^{\lambda\cdot\beta(x,z)-\lambda\cdot\beta(x,x)}f_{\boldsymbol{w}}(x)g_{\boldsymbol{w}}(z-x)\Big{|}^{2}\]
\[=\mathbb{E}_{\boldsymbol{w},y,z,x,x^{\prime}}\,f_{\boldsymbol{w}}(y)g_{\boldsymbol{w}}(z-y)\,\omega^{\rho(x-x^{\prime},y)}\,\omega^{\lambda\cdot\beta(x-x^{\prime},z)-\lambda\cdot\beta(x,x)+\lambda\cdot\beta(x^{\prime},x^{\prime})}f_{\boldsymbol{w}}(x)g_{\boldsymbol{w}}(z-x)f_{\boldsymbol{w}}(x^{\prime})g_{\boldsymbol{w}}(z-x^{\prime}),\]
from which we see that \(\operatorname{bias}\rho\geq 2^{-6}c^{12}\), provided \(\varepsilon\leq 2^{-12}c^{12}\delta^{32}\).
**Obtaining exact symmetry.** Let \(R\colon\mathbb{F}_{p}^{d}\times G\times G\to\mathbb{F}_{p}\) be the trilinear form defined by \(R(\lambda,x,y)=\lambda\cdot(\beta(x,y)-\beta(y,x))\). From the work above,
\[\operatorname{bias}R\geq\frac{|\Lambda|}{p^{d}}\cdot 2^{-6}c^{12}\geq 2^{-12}c^{15}.\]
By Theorem 15 we obtain an integer \(s\leq O(\log_{p}(2c^{-1}))\), linear forms \(\theta_{1},\ldots,\theta_{s},\theta_{1}^{\prime},\ldots,\theta_{s}^{\prime},\theta_{1}^{\prime\prime},\ldots,\theta_{s}^{\prime\prime}\) and bilinear forms \(\rho_{1},\ldots,\rho_{s},\rho_{1}^{\prime},\ldots,\rho_{s}^{\prime},\rho_{1}^{\prime\prime},\ldots,\rho_{s}^{\prime\prime}\) such that
\[R(\lambda,x,y)=\sum_{i\in[s]}\theta_{i}(\lambda)\rho_{i}(x,y)+\sum_{i\in[s]} \theta_{i}^{\prime}(x)\rho_{i}^{\prime}(\lambda,y)+\sum_{i\in[s]}\theta_{i}^{ \prime\prime}(y)\rho_{i}^{\prime\prime}(\lambda,x).\]
Define the subspace \(U=\{u\in G\colon\theta^{\prime}(u)=0,\theta^{\prime\prime}(u)=0\}\), which has codimension at most \(2s\) in \(G\). Let \(e_{1},\ldots,e_{\tilde{d}}\) be a basis of the subspace \(\{\lambda\in\mathbb{F}_{p}^{d}\colon\theta(\lambda)=0\}\), where \(\tilde{d}\geq d-s\). Let \(\tilde{\beta}\colon G\times G\to\mathbb{F}_{p}^{\tilde{d}}\) be the bilinear map given by \(\tilde{\beta}_{i}(x,y)=e_{i}\cdot\beta(x,y)\). Then we have \(\tilde{\beta}(u,v)=\tilde{\beta}(v,u)\) for all \(u,v\in U\). We now want to replace \(\beta\) by a map which is symmetric on the whole space \(G\).
Going back to the assumptions of the proposition, recall that we have a set \(A\) of size \(|A|\geq c|G|\) and a subspace \(W_{a}\) for each \(a\in A\) such that
\[|W_{a}\cap\{b\in G\colon\beta(a,b)=0\}|\geq cp^{-d}|G|\]
and for each \(b\in W_{a}\) we have
\[\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a}(b)\geq c\delta^{15}.\]
Since \(W_{a}\) is a subspace for each \(a\in A\), we conclude that (observe that \(b\) ranges over \(U\) in the expectation below)
\[\mathbb{E}_{a\in G,b\in U}\mathbb{1}(\tilde{\beta}(a,b)=0)\,\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a}(b)\geq\mathbb{E}_{a\in G}\mathbb{1}_{A}(a)\mathbb{E}_{b\in G}\mathbb{1}(\beta(a,b)=0)\mathbb{1}_{U}(b)\,\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a}(b)\]
\[\geq c\delta^{15}\,\mathbb{E}_{a\in G}\mathbb{1}_{A}(a)\mathbb{E}_{b\in G}\mathbb{1}(\beta(a,b)=0)\mathbb{1}_{U}(b)\mathbb{1}_{W_{a}}(b)\]
\[\geq c\delta^{15}\,\mathbb{E}_{a\in G}\mathbb{1}_{A}(a)\frac{|\{b\in G\colon\beta(a,b)=0\}\cap W_{a}\cap U|}{|G|}\]
\[\geq cp^{-2s}\delta^{15}\,\mathbb{E}_{a\in G}\mathbb{1}_{A}(a)\frac{|\{b\in G\colon\beta(a,b)=0\}\cap W_{a}|}{|G|}\]
\[\geq c^{3}p^{-2s-d}\delta^{15}\geq c^{4}p^{-2s}\delta^{16}.\]
Let \(T\leq G\) be an arbitrary subspace with the property that \(G=U\oplus T\) and let \(c_{1}=c^{4}p^{-2s}\). Thus,
\[c_{1}\delta^{16}\leq\mathbb{E}_{t\in T,\,a,b\in U}\,\mathbb{1}(\tilde{\beta}(a+t,b)=0)\,\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a-t}(b).\]
By averaging, there exists \(t_{0}\in T\) such that
\[c_{1}\delta^{16}\leq\mathbb{E}_{a,b\in U}\,\mathbb{1}(\tilde{\beta}(a+t_{0},b)=0)\,\overline{\ast}^{(8)}\mathbb{1}_{V\cap V-a-t_{0}}(b)\]
\[=\mathbb{E}_{a,b\in U}\,\mathbb{E}_{x_{1},\ldots,x_{7}\in G}\,\mathbb{1}(\tilde{\beta}(a+t_{0},b)=0)\mathbb{1}_{V\cap V-a-t_{0}}(x_{1})\cdots\mathbb{1}_{V\cap V-a-t_{0}}(x_{7})\mathbb{1}_{V\cap V-a-t_{0}}(x_{1}-x_{2}+\cdots+x_{7}-b).\]
An application of the Cauchy-Schwarz inequality in the variable \(a\), combined with the quasirandomness of \(V\), then gives
\[2^{-1}c_{1}^{2}\delta^{24}\leq c_{1}^{2}\delta^{24}-8\varepsilon\leq\delta^{8}\,\mathbb{E}_{\substack{b,u\in U,\lambda\in\mathbb{F}_{p}^{\tilde{d}}\\ x_{1},\ldots,x_{7}\in G}}\omega^{\lambda\cdot\tilde{\beta}(u,b)}\mathbb{1}_{V}(x_{1}+t_{0})\ldots\mathbb{1}_{V}(x_{7}+t_{0})\mathbb{1}_{V}(x_{1}-x_{2}+\cdots+x_{7}-b+t_{0})\]
\[\mathbb{1}_{V}(x_{1}+t_{0}+u)\ldots\mathbb{1}_{V}(x_{7}+t_{0}+u)\mathbb{1}_{V}(x_{1}-x_{2}+\cdots+x_{7}-b+t_{0}+u)\]
\[=\delta^{8}\,\mathbb{E}_{\substack{b,u\in U\\ x_{1},\ldots,x_{7}\in G}}\mathbb{1}(\tilde{\beta}(u,b)=0)\mathbb{1}_{V-t_{0}}(x_{1})\ldots\mathbb{1}_{V-t_{0}}(x_{7})\mathbb{1}_{V-t_{0}}(x_{1}-x_{2}+\cdots+x_{7}-b)\]
\[\mathbb{1}_{V-t_{0}}(x_{1}+u)\ldots\mathbb{1}_{V-t_{0}}(x_{7}+u)\mathbb{1}_{V-t_{0}}(x_{1}-x_{2}+\cdots+x_{7}-b+u).\]
Let us misuse the notation and write \(V\) instead of \(V-t_{0}\), which is fine as the quadratic variety that we obtain for \(V-t_{0}\) can then be shifted by \(t_{0}\) to get the desired conclusion. Recall that the subspace \(U\) has codimension at most \(2s\) in \(G\). Extend \(\tilde{\beta}|_{U\times U}\) arbitrarily to a symmetric bilinear map \(\gamma\colon G\times G\to\mathbb{F}_{p}^{\tilde{d}}\). Then \(\operatorname{rank}\lambda\cdot\gamma\geq\operatorname{rank}\lambda\cdot \tilde{\beta}\) for all \(\lambda\neq 0\) and hence
\[\operatorname{bias}\lambda\cdot\gamma=p^{-\operatorname{rank}\lambda\cdot \gamma}\leq p^{-\operatorname{rank}\lambda\cdot\tilde{\beta}}=\operatorname{ bias}\lambda\cdot\tilde{\beta}\leq\varepsilon.\]
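(Here we used the standard fact that the bias of a bilinear form \(B\colon G\times G\to\mathbb{F}_{p}\) equals \(p^{-\operatorname{rank}B}\); indeed, averaging over the second variable first,
\[\mathbb{E}_{x,y}\,\omega^{B(x,y)}=\mathbb{E}_{x}\,\mathbb{1}\big{(}B(x,\cdot)\equiv 0\big{)}=p^{-\operatorname{rank}B},\]
since \(x\mapsto B(x,\cdot)\) is a linear map whose kernel has codimension \(\operatorname{rank}B\).)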
Furthermore,
\[2^{-1}p^{-4s}c_{1}^{2}\delta^{16}\leq\mathbb{E}_{\substack{a,b\in G\\ x_{1},\ldots,x_{7}\in G}}\mathbb{1}(\gamma(a,b)=0)\mathbb{1}_{V}(x_{1})\ldots\mathbb{1}_{V}(x_{7})\mathbb{1}_{V}(x_{1}-x_{2}+\cdots+x_{7}-b)\]
\[\mathbb{1}_{V}(x_{1}+a)\ldots\mathbb{1}_{V}(x_{7}+a)\mathbb{1}_{V}(x_{1}-x_{2}+\cdots+x_{7}-b+a).\]
**Finding the quadratic variety.** Let \(q(x)=\frac{1}{2}\gamma(x,x)\). Observe that for each \(a,t\in G\) we have
\[q(t+a)-q(t)=\frac{1}{2}\gamma(t+a,t+a)-\frac{1}{2}\gamma(t,t)=\gamma(a,t)+ \frac{1}{2}\gamma(a,a)=\gamma(a,t)+q(a).\]
For \(a\in G\) and \(\mu\in\mathbb{F}_{p}^{\tilde{d}}\), let us define \(\rho_{a}(\mu)=\mathbb{E}_{x}\,\mathbbm{1}_{V\cap V-a}(x)\mathbbm{1}(\gamma(a, x)=\mu)\). Let \(M_{a}=\max_{\mu\in\mathbb{F}_{p}^{\tilde{d}}}\rho_{a}(\mu)\). Writing \(c_{2}=2^{-1}p^{-4s}c_{1}^{2}\), we have
\[c_{2}\delta^{16}\leq\mathbb{E}_{\substack{a,b\in G\\ x_{1},\ldots,x_{7}\in G}}\mathbb{1}(\gamma(a,b)=0)\mathbb{1}_{V\cap V-a}(x_{1})\ldots\mathbb{1}_{V\cap V-a}(x_{7})\mathbb{1}_{V\cap V-a}(x_{1}-x_{2}+\cdots+x_{7}-b)\]
\[=\mathbb{E}_{\substack{a\in G\\ x_{1},\ldots,x_{8}\in G}}\mathbb{1}(\gamma(a,x_{1}-x_{2}+\cdots-x_{8})=0)\mathbb{1}_{V\cap V-a}(x_{1})\ldots\mathbb{1}_{V\cap V-a}(x_{7})\mathbb{1}_{V\cap V-a}(x_{8})\]
\[=\sum_{\mu_{1},\ldots,\mu_{8}\in\mathbb{F}_{p}^{\tilde{d}}}\mathbb{E}_{\substack{a\in G\\ x_{1},\ldots,x_{8}\in G}}\mathbb{1}(\mu_{1}-\mu_{2}+\cdots-\mu_{8}=0)\prod_{i\in[8]}\Big{(}\mathbb{1}_{V\cap V-a}(x_{i})\mathbb{1}(\gamma(a,x_{i})=\mu_{i})\Big{)}\]
\[=\sum_{\mu_{1},\ldots,\mu_{7}\in\mathbb{F}_{p}^{\tilde{d}}}\mathbb{E}_{a}\,\rho_{a}(\mu_{1})\ldots\rho_{a}(\mu_{7})\rho_{a}(\mu_{1}-\mu_{2}+\cdots+\mu_{7})\]
\[\leq\mathbb{E}_{a}\sum_{\mu_{1},\ldots,\mu_{7}\in\mathbb{F}_{p}^{\tilde{d}}}\rho_{a}(\mu_{1})\ldots\rho_{a}(\mu_{7})M_{a}. \tag{25}\]
Note that
\[\sum_{\mu\in\mathbb{F}_{p}^{\tilde{d}}}\rho_{a}(\mu)=\sum_{\mu\in\mathbb{F}_{p}^{\tilde{d}}}\mathbb{E}_{x}\,\mathbb{1}_{V\cap V-a}(x)\mathbb{1}(\gamma(a,x)=\mu)=\mathbb{E}_{x}\,\mathbb{1}_{V\cap V-a}(x)=\frac{|V\cap V-a|}{|G|}.\]
By Claim 22, we see that for all but at most \(8\sqrt{\varepsilon}|G|\) elements \(a\in G\) the sum above is at most \(2\delta^{2}\). Let \(P\subseteq G\) be the set of all \(a\in G\) such that \(M_{a}\geq 2^{-10}c_{2}\delta^{2}\) and \(|V\cap V-a|\leq 2\delta^{2}|G|\). Inequality (25) gives
\[c_{2}\delta^{16}\leq 8\sqrt{\varepsilon}+2^{7}\delta^{14}\Big{(}2^{-10}c_{2}\delta^{2}+\mathbb{E}_{a}\,\mathbb{1}_{P}(a)M_{a}\Big{)},\]
from which we obtain \(\mathbb{E}_{a}\,\mathbbm{1}_{P}(a)M_{a}\geq 2^{-10}c_{2}\delta^{2}\), provided \(\varepsilon\leq 2^{-8}c_{2}^{2}\delta^{32}\).
For each \(a\in P\), let \(\mu(a)\in\mathbb{F}_{p}^{\tilde{d}}\) be an argument for which \(M_{a}\) is attained. Write \(\tilde{\mu}(t)=\mu(t)+q(t)\). Then
\[2^{-10}c_{2}\delta^{2}\leq\mathbb{E}_{a}\,\mathbb{1}_{P}(a)M_{a}=\mathbb{E}_{a,x}\,\mathbb{1}_{P}(a)\mathbb{1}_{V\cap V-a}(x)\mathbb{1}(\gamma(a,x)=\mu(a))\]
\[=\mathbb{E}_{a,x}\,\mathbb{1}_{P}(a)\mathbb{1}_{V}(x)\mathbb{1}_{V}(x+a)\mathbb{1}(\gamma(a,x)=\mu(a))\]
\[=\mathbb{E}_{a,x}\,\mathbb{1}_{P}(a)\mathbb{1}_{V}(x)\mathbb{1}_{V}(x+a)\mathbb{1}(q(a+x)-q(x)=\mu(a)+q(a))\]
\[=\mathbb{E}_{x,y}\,\mathbb{1}_{P}(y-x)\mathbb{1}_{V}(x)\mathbb{1}_{V}(y)\mathbb{1}(q(y)-q(x)=\tilde{\mu}(y-x)).\]
Let \(\Gamma\) be the bipartite graph whose vertex classes \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are two copies of \(V\), with \((x,y)\in\mathcal{C}_{1}\times\mathcal{C}_{2}\) being an edge if \(y-x\in P\) and \(q(y)-q(x)=\tilde{\mu}(y-x)\). The inequality above shows that the density of \(\Gamma\) is at least \(2^{-10}c_{2}\). This means that \(\Gamma\) has at least \(2^{-40}c_{2}^{4}|V|^{4}\) ordered 4-cycles. Each ordered 4-cycle \((x_{1},y_{1},x_{2},y_{2})\) gives rise to an additive quadruple in \(P\) respected by \(\tilde{\mu}\) as \((y_{1}-x_{1})+(y_{2}-x_{2})=(y_{1}-x_{2})+(y_{2}-x_{1})\) and
\[\tilde{\mu}(y_{1}-x_{1})+\tilde{\mu}(y_{2}-x_{2})-\tilde{\mu}(y_ {1}-x_{2})-\tilde{\mu}(y_{2}-x_{1})\] \[=\Big{(}q(y_{1})-q(x_{1})\Big{)}+\Big{(}q(y_{2})-q(x_{2})\Big{)}- \Big{(}q(y_{1})-q(x_{2})\Big{)}-\Big{(}q(y_{2})-q(x_{1})\Big{)}=0.\]
By Claim 22 and provided \(\varepsilon\leq\delta^{16}\), all but at most \(16\sqrt{\varepsilon}|G|^{3}\) additive quadruples \((a,a^{\prime},b,b^{\prime})\) in \(P\) can arise from at most \(2\delta^{4}|G|\) ordered 4-cycles, since this corresponds to finding \(x\) such that \(x,x+a,x+a-b,x+a-b+a^{\prime}\in V\), i.e. \(x\in V\cap(V-a)\cap(V-a+b)\cap(V-a+b-a^{\prime})\). Hence, as long as \(\varepsilon\leq 2^{-100}c_{2}^{8}\delta^{8}\), \(\tilde{\mu}\) respects at least \(2^{-42}c_{2}^{4}|G|^{3}\) additive quadruples in \(P\).
We may apply Theorem 14 to find an affine map \(\psi\colon G\to\mathbb{F}_{p}^{\bar{d}}\) and a parameter \(c_{3}\geq\exp\Big{(}-\log^{O(1)}(2c_{2}^{-1})\Big{)}\) such that \(\psi(a)=\tilde{\mu}(a)\) holds for at least \(c_{3}|G|\) elements \(a\in P\). Let \(P^{\prime}\) be the set of such elements. Hence
\[2^{-10}c_{2}c_{3}\delta^{2}\leq\mathbb{E}_{x,y}\,\mathbb{1}_{P^{\prime}}(y-x)\mathbb{1}_{V}(x)\mathbb{1}_{V}(y)\mathbb{1}(q(y)-q(x)=\psi(y-x))\]
\[\leq\sum_{\mu^{\prime}\in\mathbb{F}_{p}^{\tilde{d}}}\Big{(}\mathbb{E}_{x}\,\mathbb{1}_{V}(x)\mathbb{1}(q(x)-\psi(x)=\mu^{\prime})\Big{)}\Big{(}\mathbb{E}_{y}\,\mathbb{1}_{V}(y)\mathbb{1}(q(y)-\psi(y)=\mu^{\prime}+\psi(0))\Big{)},\]
using the fact that \(\psi\) is affine, so that \(\psi(y-x)=\psi(y)-\psi(x)+\psi(0)\). Since \(\sum_{\mu^{\prime}}\mathbb{E}_{x}\,\mathbb{1}_{V}(x)\mathbb{1}(q(x)-\psi(x)=\mu^{\prime})=\delta\), there exists \(\mu\in\mathbb{F}_{p}^{\tilde{d}}\) such that the quadratic variety \(Q=\{x\in G\colon q(x)-\psi(x)=\mu\}\) satisfies \(|Q\cap V|\geq 2^{-10}c_{2}c_{3}\delta|G|\). To bound \(|Q|\),
notice that
\[\Big{|}|G|^{-1}|Q|-p^{-\tilde{d}}\Big{|}=\Big{|}\mathbb{E}_{x\in G}\,\mathbb{1}(q(x)-\psi(x)=\mu)-p^{-\tilde{d}}\Big{|}=\Big{|}\mathbb{E}_{x\in G}\Big{(}p^{-\tilde{d}}\sum_{\lambda\in\mathbb{F}_{p}^{\tilde{d}}}\omega^{\lambda\cdot(q(x)-\psi(x)-\mu)}\Big{)}-p^{-\tilde{d}}\Big{|}\]
\[\leq p^{-\tilde{d}}\sum_{\lambda\in\mathbb{F}_{p}^{\tilde{d}}\setminus\{0\}}\Big{|}\mathbb{E}_{x\in G}\,\omega^{\lambda\cdot(q(x)-\psi(x)-\mu)}\Big{|}.\]
It remains to give a bound on \(|\mathbb{E}_{x\in G}\,\omega^{\lambda\cdot(q(x)-\psi(x)-\mu)}|\) for non-zero \(\lambda\). Note that
\[\Big{|}\mathbb{E}_{x}\,\omega^{\lambda\cdot(q(x)-\psi(x)-\mu)}\Big{|}^{4}=\Big{|}\mathbb{E}_{x,y}\,\omega^{\lambda\cdot(q(x)-\psi(x)-\mu)-\lambda\cdot(q(y)-\psi(y)-\mu)}\Big{|}^{2}=\Big{|}\mathbb{E}_{x,a}\,\omega^{\lambda\cdot(q(x+a)-q(x)-\psi(a)+\psi(0))}\Big{|}^{2}\]
\[=\Big{|}\mathbb{E}_{a}\Big{(}\mathbb{E}_{x}\,\omega^{\lambda\cdot(q(x+a)-q(x)-\psi(a)+\psi(0))}\Big{)}\Big{|}^{2}\leq\mathbb{E}_{a}\Big{|}\mathbb{E}_{x}\,\omega^{\lambda\cdot(q(x+a)-q(x)-\psi(a)+\psi(0))}\Big{|}^{2}\qquad\text{(by Cauchy-Schwarz)}\]
\[=\mathbb{E}_{x,a,b}\,\omega^{\lambda\cdot(q(x+a+b)-q(x+a)-q(x+b)+q(x))}=\mathbb{E}_{a,b}\,\omega^{\lambda\cdot\gamma(a,b)}=\operatorname{bias}\lambda\cdot\gamma\leq\varepsilon.\]
Thus \(|Q|\leq\Big{(}p^{-\tilde{d}}+\sqrt[4]{\varepsilon}\Big{)}|G|\leq\Big{(}p^{s-d} +\sqrt[4]{\varepsilon}\Big{)}|G|\), as desired.
Finally, we may prove Theorem 9.
Proof of Theorem 9.: Let \(D\) be the maximum of the three absolute constants appearing in Theorems 20 and 33 and Proposition 39. Let \(V\subseteq G\) be the given \((c_{0},\delta,\varepsilon)\)-approximate quadratic variety. Apply Theorem 20 to obtain a positive quantity \(c_{1}\), a set \(A\subseteq G\) and a collection of subspaces \(W_{a}\leq G\) indexed by \(a\in A\) which satisfy properties **(i)-(vi)** from the conclusion of that theorem. Let \(d=\lceil\log_{p}(c_{1}^{-1}\delta^{-1})\rceil\). By property **(iv)**, the codimension of each \(W_{a}\) is at most \(d\), so we may misuse the notation and write \(W_{a}\) for an arbitrary subspace inside it of codimension exactly \(d\). We need to replace \(c_{1}\) by \(c_{2}=c_{1}^{11}\) so that the conditions **(i)-(vi)** still hold.
Apply Theorem 33 with parameters \(c=c_{2},K=c_{2}^{-1}\) and \(\eta=D\varepsilon\delta^{-288}\). We thus obtain parameters \(c^{\prime}\geq\exp\Big{(}-\exp\Big{(}\log^{D}(2c_{2}^{-1})\Big{)}\Big{)}\) and \(r\leq\exp\Big{(}\log^{D}(2c_{2}^{-1})\Big{)}\), a set \(A^{\prime}\subseteq A\) and a map \(\Phi\colon G\times\mathbb{F}_{p}^{d}\to G\), affine in the first variable and linear in the second, such that \(|A^{\prime}|\geq c^{\prime}|G|\) and for each \(a\in A^{\prime}\) we have \(|\operatorname{Im}\Phi(a,\cdot)\cap W_{a}^{\perp}|\geq c^{\prime}p^{d}\). Moreover, there exists a subspace \(\Lambda\leq\mathbb{F}_{p}^{d}\) of dimension \(r\) such that whenever \(\lambda\notin\Lambda\) we have
\[\mathbb{E}_{x,y}\,\omega^{\Phi(x,\lambda)\cdot y}\leq\varepsilon^{\prime},\]
where \(\varepsilon^{\prime}=\Big{(}\eta c^{\prime-2}\Big{)}^{1/2r}\).
Let \(M\) be a subspace of \(\mathbb{F}_{p}^{d}\) of codimension \(r\) such that \(\mathbb{F}_{p}^{d}=\Lambda+M\) and let \(e_{1},\ldots,e_{d-r}\) be a basis
of \(M\). Let us define the bilinear map \(\beta\colon G\times G\to\mathbb{F}_{p}^{d-r}\) by \(\beta_{i}(x,y)=\Phi(x,e_{i})\cdot y\) for \(i\in[d-r]\). Given any \(\lambda\in\mathbb{F}_{p}^{d-r}\setminus\{0\}\), we have
\[\operatorname{bias}\lambda\cdot\beta=\mathbb{E}_{x,y}\,\omega^{\sum_{i\in[d-r]}\lambda_{i}\beta_{i}(x,y)}=\mathbb{E}_{x,y}\,\omega^{\sum_{i\in[d-r]}\lambda_{i}\Phi(x,e_{i})\cdot y}=\mathbb{E}_{x,y}\,\omega^{\Phi\left(x,\sum_{i\in[d-r]}\lambda_{i}e_{i}\right)\cdot y}\leq\varepsilon^{\prime},\]
since \(\sum_{i\in[d-r]}\lambda_{i}e_{i}\notin\Lambda\).
Next, we have \(|\operatorname{Im}\Phi(a,\cdot)\cap W_{a}^{\perp}|\geq c^{\prime}p^{d}\) for all \(a\in A^{\prime}\). Thus,
\[c^{\prime-1}p^{-d}|G|\geq|\operatorname{Im}\Phi(a,\cdot)^{\perp}+W_{a}|= \frac{|\operatorname{Im}\Phi(a,\cdot)^{\perp}||W_{a}|}{|\operatorname{Im} \Phi(a,\cdot)^{\perp}\cap W_{a}|}\]
so
\[|\operatorname{Im}\Phi(a,\cdot)^{\perp}\cap W_{a}|\geq\frac{c^{\prime}p^{d}| \operatorname{Im}\Phi(a,\cdot)^{\perp}||W_{a}|}{|G|}\geq c^{\prime}| \operatorname{Im}\Phi(a,\cdot)^{\perp}|\geq c^{\prime}p^{-d}|G|.\]
However, when \(b\in\operatorname{Im}\Phi(a,\cdot)^{\perp}\) we have \(0=\Phi(a,e_{i})\cdot b=\beta_{i}(a,b)\) for all \(i\in[d-r]\), and hence \(\beta(a,b)=0\). Thus
\[|\{b\in G\colon\beta(a,b)=0\}\cap W_{a}|\geq c^{\prime}p^{-d}|G|.\]
All conditions of Proposition 39 are satisfied, as long as \(\varepsilon^{\prime}\leq(2^{-1}c^{\prime}\delta)^{D}\), hence there exists a quadratic variety \(Q\subseteq G\) of size \(|Q|\leq(2{c^{\prime}}^{-1})^{D}\delta|G|\) such that \(|Q\cap V|\geq\exp\Big{(}-\log^{D}(2{c^{\prime}}^{-1})\Big{)}\delta|G|\).
## §6 Approximate polynomials and approximate varieties
In this appendix we show the connection between approximate quadratic polynomials and approximate quadratic varieties. As in the rest of the paper, \(G\) and \(H\) are finite-dimensional vector spaces over \(\mathbb{F}_{p}\). We need some preliminary lemmas. The first one estimates the probability that a given tuple of vectors belongs to a random subspace of fixed codimension.
**Lemma 40**.: _Let \(v_{1},\ldots,v_{r}\in G\) and let \(n=\dim G\). Let \(U\leq G\) be a subspace of codimension \(d\), chosen uniformly at random among all such subspaces. Then, if a maximal linearly independent subset of \(v_{1},v_{2},\ldots,v_{r}\) has size \(m\), we have_
\[\mathbb{P}(v_{1},\ldots,v_{r}\in U)=\frac{(p^{n-d}-1)(p^{n-d}-p)\cdots(p^{n-d} -p^{m-1})}{(p^{n}-1)(p^{n}-p)\cdots(p^{n}-p^{m-1})}. \tag{26}\]
_In particular, provided \(m+d<n-2\),_
\[p^{-md}-4p^{m+d}p^{-n}\leq\mathbb{P}(v_{1},\ldots,v_{r}\in U)\leq p^{-md}. \tag{27}\]
Proof.: We may view the random model in this lemma as follows. Let \(\mathcal{L}\in\operatorname{GL}(G)\) be a linear automorphism chosen uniformly at random. Fix an arbitrary subspace \(U_{0}\) of codimension \(d\) and set \(U=\mathcal{L}(U_{0})\). Let \(u_{1},\ldots,u_{m}\) be a maximal independent subset of \(v_{1},v_{2},\ldots,v_{r}\). Thus
\[\mathbb{P}\Big{(}v_{1},\ldots,v_{r}\in U\Big{)}=\mathbb{P}\Big{(}u_{1},u_{2}, \ldots,u_{m}\in U\Big{)}.\]
However, recalling that \(U=\mathcal{L}(U_{0})\), we have that \(u_{1},u_{2},\ldots,u_{m}\in U\) if and only if \(\mathcal{L}^{-1}(u_{1}),\ldots,\mathcal{L}^{-1}(u_{m})\in U_{0}\). Since \(\mathcal{L}\) is chosen uniformly at random and \(u_{1},\ldots,u_{m}\) are independent, the \(m\)-tuple \(\Big{(}\mathcal{L}^{-1}(u_{1}),\ldots,\mathcal{L}^{-1}(u_{m})\Big{)}\)
is uniformly distributed over all \(m\)-tuples of independent vectors. Thus, \(\mathbb{P}(u_{1},u_{2},\ldots,u_{m}\in U)=N_{U}/N_{G}\), where \(N_{U}\) is the number of independent ordered \(m\)-tuples in \(U\), and \(N_{G}\) is the corresponding quantity in \(G\). Equality (26) follows from a direct counting argument.
To conclude (27), note first that \(\frac{p^{n-d}-p^{k}}{p^{n}-p^{k}}\leq p^{-d}\), giving the upper bound. For the lower bound, we use the elementary inequalities \(1-2x\leq\exp(-2x)\leq 1-x\), which hold for \(x\in[0,1/2]\). We have
\[\frac{p^{n-d}-p^{k}}{p^{n}-p^{k}}=p^{-d}\frac{p^{n-d}-p^{k}}{p^{n-d}-p^{k-d}}= p^{-d}\Big{(}1-\frac{p^{k}-p^{k-d}}{p^{n-d}-p^{k-d}}\Big{)}\geq p^{-d}\Big{(}1-p^ {k+d-n}\Big{)}\geq p^{-d}\exp(-2p^{k+d-n}),\]
as long as \(k+d+2<n\). Thus
\[\mathbb{P}\Big{(}v_{1},\ldots,v_{r}\in U\Big{)}\geq\prod_{k=0}^{m-1}\Big{(}p^{-d}\exp(-2p^{k+d-n})\Big{)}=p^{-md}\exp\Big{(}-2\sum_{k=0}^{m-1}p^{k+d-n}\Big{)}\geq p^{-md}\exp(-4p^{m+d-n})\]
\[\geq p^{-md}-4p^{m+d-n},\]
as long as \(m+d+2<n\).
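As a quick sanity check of (26) and (27), take \(m=d=1\): then \(\mathbb{P}(v_{1}\in U)=\frac{p^{n-1}-1}{p^{n}-1}\), which indeed lies between \(p^{-1}-4p^{2-n}\) and \(p^{-1}\).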
The second lemma is a version of Lemma 21 in the context of maps between vector spaces, rather than \(\mathbb{C}\)-valued functions on a vector space.
**Lemma 41**.: _Suppose that \(A\subset G\) and that \(F\colon A\to H\) is a map. Let \(r,s\in\mathbb{N}\) and let \(\lambda_{i\,j}\in\mathbb{F}_{p}\) for \(i\in[r],j\in[s]\) and \(\mu_{i}\in\mathbb{F}_{p}\) for \(i\in[r]\) be scalars. Assume that_
\[\sum_{i\in[r]}\mu_{i}F\Big{(}\sum_{j\in[s]}\lambda_{i\,j}u_{j}\Big{)}=0 \tag{28}\]
_holds for at least \(\alpha|G|^{s}\) of the \(s\)-tuples \((u_{1},\ldots,u_{s})\in G^{s}\). Suppose that there are distinct indices \(a\) and \(b\) in \([s]\) and an index \(i_{0}\in[r]\) such that \(\mu_{i_{0}}\neq 0\) and \(\lambda_{i\,a}\lambda_{i\,b}\neq 0\) holds if and only if \(i=i_{0}\). Then \(F\) respects at least \(\alpha^{4}|G|^{3}\) additive quadruples in \(A\)._
As in the case of Lemma 21, the lemma still applies if the situation is simpler and there is a variable \(u_{i}\) that appears in a single copy of \(F\) in the expression above.
Proof.: Let \(I\) be the set of indices \(i\in[r]\) such that \(\lambda_{i\,a}\neq 0\). For given \(u_{[s]\setminus\{a\}}\), let \(N(u_{[s]\setminus\{a\}})\) be the number of \(u_{a}\) such that (28) holds. By the Cauchy-Schwarz inequality, we have
\[\sum_{u_{[s]\setminus\{a\}}}N(u_{[s]\setminus\{a\}})^{2}\geq|G|^{1-s}\Big{(} \sum_{u_{[s]\setminus\{a\}}}N(u_{[s]\setminus\{a\}})\Big{)}^{2}\geq\alpha^{2}|G |^{s+1}.\]
In particular, we have at least \(\alpha^{2}|G|^{s+1}\)\((s+1)\)-tuples \((u_{[s]\setminus\{a\}},v_{a},v_{a}^{\prime})\) such that
\[0= \bigg{(}\sum_{i\in[r]}\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a\}} \lambda_{i\,j}u_{j}+\lambda_{i\,a}v_{a}\Big{)}\bigg{)}-\bigg{(}\sum_{i\in[r]} \mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a\}}\lambda_{i\,j}u_{j}+\lambda_{i\,a}v _{a}^{\prime}\Big{)}\bigg{)}\] \[= \sum_{i\in I}\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a\}}\lambda_{ i\,j}u_{j}+\lambda_{i\,a}v_{a}\Big{)}-\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a\}} \lambda_{i\,j}u_{j}+\lambda_{i\,a}v_{a}^{\prime}\Big{)}. \tag{29}\]
Let \(M(u_{[s]\setminus\{a,b\}},v_{a},v^{\prime}_{a})\) be the number of \(u_{b}\) such that (29) holds for the \((s+1)\)-tuple \((u_{[s]\setminus\{a\}},v_{a},v^{\prime}_{a})\). By the Cauchy-Schwarz inequality we get
\[\sum_{u_{[s]\setminus\{a,b\}},v_{a},v^{\prime}_{a}}M(u_{[s]\setminus\{a,b\}},v _{a},v^{\prime}_{a})^{2}\geq|G|^{-s}\Big{(}\sum_{u_{[s]\setminus\{a,b\}},v_{a },v^{\prime}_{a}}M(u_{[s]\setminus\{a,b\}},v_{a},v^{\prime}_{a})\Big{)}^{2} \geq\alpha^{4}|G|^{s+2}.\]
Hence, we get at least \(\alpha^{4}|G|^{s+2}\) of \((s+2)\)-tuples \((u_{[s]\setminus\{a,b\}},v_{a},v^{\prime}_{a},w_{b},w^{\prime}_{b})\) such that
\[0=\bigg{(}\sum_{i\in I}\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a,b\}}\lambda_{i\,j}u_{j}+\lambda_{i\,a}v_{a}+\lambda_{i\,b}w_{b}\Big{)}-\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a,b\}}\lambda_{i\,j}u_{j}+\lambda_{i\,a}v^{\prime}_{a}+\lambda_{i\,b}w_{b}\Big{)}\bigg{)}\]
\[-\bigg{(}\sum_{i\in I}\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a,b\}}\lambda_{i\,j}u_{j}+\lambda_{i\,a}v_{a}+\lambda_{i\,b}w^{\prime}_{b}\Big{)}-\mu_{i}F\Big{(}\sum_{j\in[s]\setminus\{a,b\}}\lambda_{i\,j}u_{j}+\lambda_{i\,a}v^{\prime}_{a}+\lambda_{i\,b}w^{\prime}_{b}\Big{)}\bigg{)}\]
\[=\mu_{i_{0}}\Big{(}F(x+\lambda_{i_{0}\,a}v_{a}+\lambda_{i_{0}\,b}w_{b})-F(x+\lambda_{i_{0}\,a}v^{\prime}_{a}+\lambda_{i_{0}\,b}w_{b})-F(x+\lambda_{i_{0}\,a}v_{a}+\lambda_{i_{0}\,b}w^{\prime}_{b})+F(x+\lambda_{i_{0}\,a}v^{\prime}_{a}+\lambda_{i_{0}\,b}w^{\prime}_{b})\Big{)},\]
where \(x=\sum_{j\in[s]\setminus\{a,b\}}\lambda_{i_{0}\,j}u_{j}\); for \(i\in I\setminus\{i_{0}\}\) we have \(\lambda_{i\,b}=0\), so the corresponding terms cancel.
Thus, averaging over \(u_{[s]\setminus\{a,b\}}\) we see that \(F\) respects at least \(\alpha^{4}|G|^{3}\) additive quadruples.
Finally, we make use of Green's \(\mathsf{U}^{2}\)-arithmetic regularity lemma.
**Lemma 42** (Theorem 2.1 [13]).: _Given a set \(A\subseteq G\) and a parameter \(\varepsilon>0\), there exists a decomposition \(G=K\oplus T\), where \(\dim T\leq W(O(\varepsilon^{-3}))\) and \(W(t)\) denotes a tower of twos of height \(t\), such that for all but at most \(\varepsilon|T|\) of \(t\in T\), we have_
\[\max_{k\notin K^{\perp}}\Big{|}\mathop{\hbox{\vrule width 0.0pt height 6.0pt depth 0.0pt\vrule width 0.0pt height 6.0pt depth 0.0pt}}_{x\in K}\mathbbm{1}_{A}(x+t)\omega^{-k \cdot x}\Big{|}\leq\varepsilon. \tag{30}\]
If condition (30) holds, we say that \(t\) is \(\varepsilon\)_-regular_.
We are now ready to prove Proposition 7.
Proof of Proposition 7.: By assumption, we have a set \(\mathcal{G}\) of quadruples \((x,a,b,c)\in G^{4}\) of size at least \(c_{0}|G|^{4}\) such that \(\Delta_{a}\Delta_{b}\Delta_{c}F(x)=0\) and all \(8\) arguments of \(F\) belong to \(A\).
In the first step of the proof, we need to pass to a sufficiently regular subset of \(A\). Let \(\eta>0\) be a parameter to be specified later. Apply Lemma 42 to find a decomposition \(G=K\oplus T\), with \(\dim T\leq W(O(\eta^{-3}))\), such that \(t\) is \(\eta\)-regular for all but at most \(\eta|T|\) of the elements \(t\in T\). Let \(T_{\rm reg}\) be the set of \(\eta\)-regular elements \(t\in T\). Then, using the decomposition \(G=K\oplus T\), we have
\[c_{0}\leq\mathbb{E}_{x,a,b,c\in G}\,\mathbb{1}_{A}(x)\mathbb{1}_{A}(x+a)\ldots\mathbb{1}_{A}(x+a+b+c)\mathbb{1}(\Delta_{a,b,c}F(x)=0)\]
\[=\mathbb{E}_{\substack{x^{\prime},a^{\prime},b^{\prime},c^{\prime}\in K\\ x^{\prime\prime},a^{\prime\prime},b^{\prime\prime},c^{\prime\prime}\in T}}\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime})\ldots\]
\[\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime}+b^{\prime}+b^{\prime\prime}+c^{\prime}+c^{\prime\prime})\mathbb{1}(\Delta_{a^{\prime}+a^{\prime\prime},b^{\prime}+b^{\prime\prime},c^{\prime}+c^{\prime\prime}}F(x^{\prime}+x^{\prime\prime})=0).\]
On the other hand, the fact that \(|T\setminus T_{\mathrm{reg}}|\leq\eta|T|\) implies
\[\mathbb{E}_{\substack{x^{\prime},a^{\prime},b^{\prime},c^{\prime}\in K\\ x^{\prime\prime},a^{\prime\prime},b^{\prime\prime},c^{\prime\prime}\in T}}\mathbb{1}_{T\setminus T_{\mathrm{reg}}}(x^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime})\cdots\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime}+b^{\prime}+b^{\prime\prime}+c^{\prime}+c^{\prime\prime})\leq\eta,\]
as well as similar inequalities with any element among \(x^{\prime\prime}+a^{\prime\prime},x^{\prime\prime}+b^{\prime\prime},\ldots, x^{\prime\prime}+a^{\prime\prime}+b^{\prime\prime}+c^{\prime\prime}\) instead of \(x^{\prime\prime}\) as the argument of \(\mathbbm{1}_{T\setminus T_{\mathrm{reg}}}\). It follows that, if we choose \(\eta\leq c_{0}/16\),
\[c_{0}/2\leq\mathbb{E}_{\substack{x^{\prime},a^{\prime},b^{\prime},c^{\prime}\in K\\ x^{\prime\prime},a^{\prime\prime},b^{\prime\prime},c^{\prime\prime}\in T}}\mathbb{1}_{T_{\mathrm{reg}}}(x^{\prime\prime})\ldots\mathbb{1}_{T_{\mathrm{reg}}}(x^{\prime\prime}+a^{\prime\prime}+b^{\prime\prime}+c^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime})\ldots\]
\[\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime}+b^{\prime}+b^{\prime\prime}+c^{\prime}+c^{\prime\prime})\mathbb{1}(\Delta_{a^{\prime}+a^{\prime\prime},b^{\prime}+b^{\prime\prime},c^{\prime}+c^{\prime\prime}}F(x^{\prime}+x^{\prime\prime})=0).\]
Thus, by averaging, we may find \(x^{\prime\prime},a^{\prime\prime},b^{\prime\prime},c^{\prime\prime}\in T\) such that all 8 elements \(x^{\prime\prime}\), \(x^{\prime\prime}+a^{\prime\prime}\), \(x^{\prime\prime}+b^{\prime\prime},\ldots\), \(x^{\prime\prime}+a^{\prime\prime}+b^{\prime\prime}+c^{\prime\prime}\) are \(\eta\)-regular and
\[c_{0}/2\leq\mathbb{E}_{x^{\prime},a^{\prime},b^{\prime},c^{\prime}\in K}\,\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime})\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime})\ldots\]
\[\mathbb{1}_{A}(x^{\prime}+x^{\prime\prime}+a^{\prime}+a^{\prime\prime}+b^{\prime}+b^{\prime\prime}+c^{\prime}+c^{\prime\prime})\mathbb{1}(\Delta_{a^{\prime}+a^{\prime\prime},b^{\prime}+b^{\prime\prime},c^{\prime}+c^{\prime\prime}}F(x^{\prime}+x^{\prime\prime})=0).\]
An argument based on the Cauchy-Schwarz inequality, like that in Lemma 41, allows us to pass to a subset of a single coset \(A\cap(x^{\prime\prime}+K)\) on which \(F\) respects at least \(\frac{c_{0}^{8}}{2^{8}}|K|^{4}\) additive cubes, with \(x^{\prime\prime}\in T_{\mathrm{reg}}\). Thus, writing \(c_{1}=\frac{c_{0}^{8}}{2^{8}}\) and misusing the notation by writing \(A\) instead of \((A-x^{\prime\prime})\cap K\), and \(G\) instead of \(K\), we may assume that \(|\widehat{\mathbb{1}_{A}}(r)|\leq\eta\) for all \(r\in G\setminus\{0\}\). Let \(c_{2}\) be the density of \(A\), hence \(c_{2}\geq c_{1}\). We also have that \(F\) respects at most \(\varepsilon^{\prime}|G|^{3}\) additive quadruples in \(A\), where \(\varepsilon^{\prime}=\varepsilon p^{3\dim T}\). In particular, the number of additive quadruples in \(A\) is then
\[\sum_{x,a,b\in G}\mathbb{1}_{A}(x)\mathbb{1}_{A}(x+a)\mathbb{1}_{A}(x+b)\mathbb{1}_{A}(x+a+b)=|G|^{3}\sum_{r\in G}|\widehat{\mathbb{1}_{A}}(r)|^{4}\leq(c_{2}^{4}+\eta^{2})|G|^{3}.\]
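The first identity here is the standard Fourier computation: with \(\widehat{\mathbb{1}_{A}}(r)=\mathbb{E}_{x}\mathbb{1}_{A}(x)\omega^{-r\cdot x}\),
\[\sum_{x,a,b\in G}\mathbb{1}_{A}(x)\mathbb{1}_{A}(x+a)\mathbb{1}_{A}(x+b)\mathbb{1}_{A}(x+a+b)=\#\{(x,y,z,w)\in A^{4}\colon x+w=y+z\}=|G|^{3}\sum_{r\in G}|\widehat{\mathbb{1}_{A}}(r)|^{4},\]
while for the final bound the \(r=0\) term contributes \(c_{2}^{4}\) and, by Parseval, \(\sum_{r\neq 0}|\widehat{\mathbb{1}_{A}}(r)|^{4}\leq\eta^{2}\sum_{r}|\widehat{\mathbb{1}_{A}}(r)|^{2}=\eta^{2}c_{2}\leq\eta^{2}\).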
Let \(U\) be a random subspace of \(H\) of codimension \(d\). Let \(V=\{x\in A\colon F(x)\in U\}\). Let \(X\) be the number of additive quadruples in \(V\), and \(Y\) the number of quadruples \((x,a,b,c)\in\mathcal{G}\) such that 8 points \(F(x),F(x+a),\ldots,F(x+a+b+c)\) belong to \(U\).
We use the second moment method to prove the proposition. We use Lemma 40 frequently below, without additional comments. Firstly, note that
\[\mathbb{E}|V|=\sum_{x\in A}\mathbb{P}(F(x)\in U)\geq(p^{-d}-4p^{1+d-n})|A|,\]
and an analogous but longer second moment computation, which combines Lemma 40 with the additive quadruple count above (again using Lemma 41 in the last line), controls the variances of \(|V|\) and \(X\) and yields the corresponding concentration estimate (31) for \(|V|\). By Chebyshev's inequality we have
\[\mathbb{P}\Big{(}\Big{|}X-p^{-4d}c_{2}^{4}|G|^{3}\Big{|}\leq\sqrt{\eta}|G|^{3}\Big{)}\geq 1-\Big{(}3\eta+p^{8}\sqrt[4]{\varepsilon^{\prime}}+p^{4+d-n}\Big{)}, \tag{32}\]
which is at least \(0.999\), again provided \(\varepsilon^{\prime}\) is sufficiently small in terms of other parameters.
Finally, we consider the random variable \(Y\). Recall that if \((x,a,b,c)\in\mathcal{G}\), then \(\Delta_{a,b,c}F(x)=0\) (and all \(8\) points that are arguments of \(F\) belong to \(A\)). Then the \(8\) values \(F(x),\ldots,F(x+a+b+c)\) belong to \(U\) if and only if any \(7\) of them belong to \(U\). Hence
2306.04597 | Language Models Get a Gender Makeover: Mitigating Gender Bias with
Few-Shot Data Interventions | Societal biases present in pre-trained large language models are a critical
issue as these models have been shown to propagate biases in countless
downstream applications, rendering them unfair towards specific groups of
people. Since large-scale retraining of these models from scratch is both time
and compute-expensive, a variety of approaches have been previously proposed
that de-bias a pre-trained model. While the majority of current
state-of-the-art debiasing methods focus on changes to the training regime, in
this paper, we propose data intervention strategies as a powerful yet simple
technique to reduce gender bias in pre-trained models. Specifically, we
empirically show that by fine-tuning a pre-trained model on only 10 de-biased
(intervened) training examples, the tendency to favor any gender is
significantly reduced. Since our proposed method only needs a few training
examples, our few-shot debiasing approach is highly feasible and practical.
Through extensive experimentation, we show that our debiasing technique
performs better than competitive state-of-the-art baselines with minimal loss
in language modeling ability. | Himanshu Thakur, Atishay Jain, Praneetha Vaddamanu, Paul Pu Liang, Louis-Philippe Morency | 2023-06-07T16:50:03Z | http://arxiv.org/abs/2306.04597v1 | # Language Models Get a Gender Makeover:
###### Abstract
Caution: this paper contains potentially offensive or upsetting model outputs.
Societal biases present in pre-trained large language models are a critical issue as these models have been shown to propagate biases in countless downstream applications, rendering them unfair towards specific groups of people. Since large-scale retraining of these models from scratch is both time and compute-expensive, a variety of approaches have been previously proposed that de-bias a pre-trained model. While the majority of current state-of-the-art debiasing methods focus on changes to the training regime, in this paper, we propose data intervention strategies as a powerful yet simple technique to reduce gender bias in pre-trained models. Specifically, we empirically show that by fine-tuning a pre-trained model on only 10 de-biased (intervened) training examples, the tendency to favor any gender is significantly reduced. Since our proposed method only needs a few training examples, our few-shot debiasing approach is highly feasible and practical. Through extensive experimentation, we show that our debiasing technique performs better than competitive state-of-the-art baselines with minimal loss in language modeling ability.
## 1 Introduction
Recently, there has been a surge of interest in pre-trained large language models (LLMs) in natural language processing (NLP). It has been shown that pre-training followed by fine-tuning drastically improves a model's performance on downstream tasks, as the knowledge captured during pre-training on a large corpus is transferred to the downstream application when fine-tuning the model. However, this also means that societal biases, like gender bias, implicitly learned by the pre-trained models are transferred to crucial downstream applications like job recommendation engines (Zhao et al., 2019; Barocas et al., 2017; Kurita et al., 2019). Analyzing and mitigating bias without requiring significant re-training or compute resources is crucial to the widespread adoption of LLMs in downstream applications.
Previous work has attempted to quantify bias (Nadeem et al., 2021; Nangia et al., 2020; Cer et al., 2018), and other studies such as Ravfogel et al. (2020) and Liang et al. (2021) have attempted to remove it algorithmically from the models. Closer to our work are data-manipulative techniques such as Zmigrod et al. (2019) and Maudslay et al. (2019) that modify the dataset and further fine-tune the model. In this paper, we propose simple data intervention strategies and show that they can mitigate gender bias in pre-trained models with the help of few-shot fine-tuning. Moreover, taking inspiration from Schick et al. (2021), we find that by utilizing a biased pre-trained LLM to mine for the most gender-biased samples in a dataset, our methods can mitigate gender bias with very few training samples. Finally, we perform an extensive evaluation of our debiasing technique on two recent bias benchmarks (Nadeem et al., 2021) and show that our method outperforms three existing state-of-the-art techniques and performs comparably to the other two. Our main contributions are the following:
* We propose simple data intervention techniques that can be used to reduce gender bias in a pre-trained LLM with few training examples (few-shot), thus making human-in-the-loop bias mitigation strategies feasible.
* We introduce a novel data sampling technique that utilizes LLMs to mine for the most biased samples from a dataset and can benefit existing state-of-the-art debiasing methods. When used for debiasing a model, these few samples serve as exemplars and induce large reductions in gender bias.
## 2 Related Work
In recent years, there has been growing concern about the bias/stereotypical discriminatory behavior by NLP models, particularly concerning gender. Several studies have investigated the presence of gender bias in various NLP tasks and proposed methods for mitigating it.
One line of research has focused on analyzing the extent of gender bias in pre-trained language models such as BERT and GPT-2. These studies have found that such models exhibit a significant amount of gender bias in their word embeddings, both for BERT (Jentzsch and Turan, 2022) and for GPT-2 (Kirk et al., 2021), and are prone to making stereotypical gender-based predictions (e.g., assuming that a doctor is male and a nurse is female). Standard evaluation tools in this line of research are stereotype benchmarks such as StereoSet (Nadeem et al., 2021), which evaluates the model's ability to predict gender stereotypes, and CrowS-Pairs (Nangia et al., 2020), which measures whether a model generally prefers more stereotypical sentences. A similar line of work is the gender bias tests proposed in BIG-bench (Srivastava et al., 2022). The tests assess the language model's gender biases, stereotypes, and ability to infer gender information. It evaluates gender bias and stereotype between male and female, and gender minority bias and stereotype between majority and minority. It also examines the model's language modeling performance, which can be affected during de-biasing.
Another line of research has proposed methods for debiasing these models. These methods can be broadly categorized into two groups: **data-based** and **algorithm-based**. Data-based methods aim to reduce bias by removing or altering biased words from the training set, whereas algorithm-based methods aim to modify the model's architecture or training procedure to reduce bias. One popular data-based method is "uncertainty sampling" (Lewis and Gale, 1994), where the model is trained on the instances that it is most uncertain about, which can help to reduce bias by forcing the model to learn from a diverse set of examples. A popular algorithm-based method is "Adversarial Debiasing", proposed by Zhang et al. (2018), which fine-tunes the model using an adversarial loss to make it less sensitive to sensitive attributes such as gender. OSCar, proposed by Dev et al. (2021), is another algorithm-based method that utilizes the idea of disentangling "problematic concepts", like the occupation-gender relationship, instead of removing them altogether. MABEL (He et al., 2022) has both algorithm- and data-based components, as it first augments the training data by swapping gender words and then applies a contrastive learning objective and alignment via entailment pairs. Their data augmentation strategy is similar in spirit to the data intervention techniques we propose; however, our analysis does not require training auxiliary models and uses significantly less data.
Data-based methods include the "Equalization" technique proposed by Bolukbasi et al. (2016), which aims to equalize the representation of gender-specific words in the embedding space, the "Counterfactual Data Augmentation" (CDA) method proposed by Zimmermann and Hoffmann (2022), which generates counterfactual examples to improve the model's robustness to bias, and "Name-Based Counterfactual Data Substitution" proposed by Maudslay et al. (2019) which reduces gender bias by replacing gender-informative names in the dataset with gender-neutral names. Our proposed method is also a data-based method, which aims to effectively reduce gender bias by taking inspiration from different techniques such as uncertainty sampling and name-based counterfactual data substitution (Maudslay et al., 2019).
Figure 1: Our method can be summarized as a combination of bias discovery and mitigation. First, we use a pre-trained LLM to find the most gender-biased samples. Then, we apply our data intervention techniques and use these modified training samples to fine-tune the model. Experiments show that our method is very effective at reducing gender bias, outperforming three state-of-the-art baselines and being comparable to two other baselines.
## 3 Probing Bias in Large Language Models
Pre-trained LLMs are biased towards different genders, as can be seen in a simple mask-fill experiment using BERT. (Here, and in the rest of the paper, we assume a binary treatment of gender for simplicity.) The task is to mask out the gender-related nouns and pronouns (such as he, she, her, woman, etc.) and get BERT to predict the masked words for the affected sequences in the dataset. Here, we consider a fixed list of gender-specific words curated from previous work Lu et al. (2018); Zmigrod et al. (2019) and a neutral words list1. We finally compute the "total confidence difference" as the sum of differences in the model's prediction confidence for each gender-word pair (such as the confidence of predicting he \(-\) she, man \(-\) woman, etc.). Formally, we define the total confidence difference as \(|\sum_{i=0}^{N}(f(x_{female}^{(i)})-f(x_{male}^{(i)}))|\), where \(f(x)\) denotes the confidence of the model's prediction, \(N\) is the total number of tokens in the dataset and \(x\) is the tokenized gender word. The higher this number, the more biased the model is concluded to be. We compute the metric at the token level and ensure that each of the gender words gets tokenized into exactly one token by initially extending the tokenizer with our gender word list. The top 3 biased gender-word pairs in StereoSet are shown in Table 1. Intuitively, our technique for gauging bias in LLMs is sensitive to the fixed word list used to represent the sensitive attributes (here, gender). In Table 2, we show the number of words covered by the word list used for both the WikiText-2 and StereoSet datasets.
Footnote 1: [https://github.com/joelparkerhenderson/inclusive-language](https://github.com/joelparkerhenderson/inclusive-language)
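As an illustration, a minimal sketch of this probing step using the HuggingFace `transformers` fill-mask pipeline might look as follows; the short word-pair list is a stand-in for the full curated list, and the exact scoring details of our implementation may differ:

```python
from transformers import pipeline

# Stand-in for the full curated gender word-pair list described above.
GENDER_PAIRS = [("he", "she"), ("man", "woman"), ("boy", "girl")]

unmasker = pipeline("fill-mask", model="bert-base-uncased")

def total_confidence_difference(masked_sentences):
    """|sum_i (f(x_female) - f(x_male))| over all masked positions,
    where f is the model's prediction confidence for the masked token."""
    total = 0.0
    for sentence in masked_sentences:  # each sentence contains one [MASK]
        for male, female in GENDER_PAIRS:
            # Restrict fill-mask scoring to the two words of the pair.
            preds = unmasker(sentence, targets=[male, female])
            scores = {p["token_str"]: p["score"] for p in preds}
            total += scores.get(female, 0.0) - scores.get(male, 0.0)
    return abs(total)

print(total_confidence_difference(["The [MASK] worked as a nurse."]))
```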
## 4 Data Interventions
In order to reduce gender bias in pre-trained models, we carefully select diverse, strongly biased examples and then replace gender words with more neutral or equality-focused phrases. This is achieved by using a wordlist to find gender terms in sentences and then segregating those words into name and non-name words.
We call our initial approach naive-masking as it does not require a word list for mapping gender words to gender-neutral words. Instead, it replaces all gender words with the fixed word "person." In our next approach, neutral-masking, we swap words in a slightly more semantically accurate manner. In this, we use a word-pair list that goes from gender words to gender-neutral words. With both approaches, we intend to introduce new words in a model's vocabulary to make it more likely to choose a more neutral word in gender-biased sentences.
In our final approach, we exploit the existing vocabulary of the model and try to balance the confidence of prediction on opposite-gender words by using phrases instead. Thus, we call our final approach random-phrase-masking, as we substitute words with phrases that reflect the equality of gender. This approach not only reduces gender bias but also preserves the original meaning of the sentence in most cases. We choose the phrases and the order of gender words at random with equal probability.
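A minimal sketch of the three interventions, with abbreviated stand-ins for the word lists described above, might look as follows:

```python
import random

# Abbreviated stand-ins for the curated word lists described above.
NEUTRAL_MAP = {"he": "they", "she": "they", "her": "their", "him": "them"}
OPPOSITE = {"he": "she", "she": "he", "boy": "girl", "girl": "boy"}

def naive_masking(tokens):
    # Replace every gender word with the fixed word "person".
    return ["person" if t in OPPOSITE else t for t in tokens]

def neutral_masking(tokens):
    # Map gender words to gender-neutral counterparts via a word-pair list.
    return [NEUTRAL_MAP.get(t, t) for t in tokens]

def random_phrase_masking(tokens):
    # Substitute a gender word with an equality phrase such as "he or she",
    # choosing the connector and the word order at random with equal probability.
    out = []
    for t in tokens:
        if t in OPPOSITE:
            pair = [t, OPPOSITE[t]]
            random.shuffle(pair)
            out.append(f"{pair[0]} {random.choice(['or', 'and'])} {pair[1]}")
        else:
            out.append(t)
    return out

print(" ".join(random_phrase_masking("the boy went home".split())))
```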
| Gender-word pair | Confidence difference (mean) | Confidence difference (std. dev.) |
| --- | --- | --- |
| he, she | 0.317 | 0.288 |
| Will, May | 0.316 | 0.225 |
| boy, girl | 0.219 | 0.218 |

Table 1: Confidence difference for the top 3 gender-word pairs in StereoSet.
| Dataset | Samples | Affected words (mean) |
| --- | --- | --- |
| WikiText-2 | 10 | 191 |
| WikiText-2 | 50 | 627 |
| WikiText-2 | 100 | 1028 |
| StereoSet | 10 | 55 |
| StereoSet | 50 | 227 |
| StereoSet | 100 | 463 |

Table 2: Number of words (mean) covered by the word list, by dataset and number of sequences sampled from each dataset.
| Intervention | Input word | Converted word |
| --- | --- | --- |
| naive-masking | he | person |
| naive-masking | she | person |
| naive-masking | boy | person |
| neutral-masking | he | they |
| neutral-masking | her | their |
| neutral-masking | schoolgirl | schoolkid |
| random-phrase-masking | he | he or she |
| random-phrase-masking | she | she and he |
| random-phrase-masking | boy | either girl or boy |

Table 3: Example conversions for the three methods. In random-phrase-masking, the phrase and its word order were chosen at random.
Additionally, we hypothesize that the choice of the dataset for fine-tuning is also essential. We choose two datasets: the WikiText-2 Merity et al. (2017) dataset, which has implicit gender bias since it is sourced from Wikipedia articles, and the StereoSet dataset Nadeem et al. (2021), which has explicit/more gender bias as it has been designed to evaluate gender bias. WikiText-2\({}^{2}\) has 600 train articles and roughly 2M tokens, while StereoSet\({}^{3}\) (dev) has 2123 samples, of which we only consider the 800 samples that are not unrelated. Naturally, our data intervention method should work best on a dataset whose training examples exhibit gender bias while being devoid of meaningful gender associations like "She needs a gynecologist," where the gender of the person is important. By testing our method on both datasets, we can understand the sensitivity of our approach to the quality of the training samples used.
Footnote 2: An English language dataset (Creative Commons Attribution-ShareAlike License).
Footnote 3: An English language dataset available at bias-bench (Creative Commons Attribution-ShareAlike 4.0 International Public License)
## 5 Bias Evaluation Metrics
We focus on evaluating the bias of a model while also measuring its language modeling capability. The ideal model would not just be the one with the least bias but also one which does not compromise its language modeling performance. The dual estimation of bias and performance of a model was proposed in the StereoSet benchmark Nadeem et al. (2021), with the Language Modeling Score (LMS) measuring the percentage of times a meaningful token is predicted for the mask as opposed to a meaningless token, the Stereotype Score (SS) measuring the percentage of times the model predicted a stereotypical word as compared to an anti-stereotypical word, and an idealized CAT score (ICAT) combining the LMS and SS scores into a single metric. An ideal model has an ICAT score of 100, while the worst biased model has an ICAT score of 0. We additionally evaluate on the CrowS-Pairs benchmark Nangia et al. (2020), which captures data with greater diversity in both the stereotypes expressed and the structure of sentences (50 is ideal). However, we note that the CrowS-Pairs benchmark is much more limited compared to StereoSet Nadeem et al. (2021) in terms of both the volume and variety of linguistic phenomena relating to gender bias it covers.
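For reference, the ICAT score combines the two StereoSet metrics as \(\mathrm{ICAT}=\mathrm{LMS}\cdot\min(\mathrm{SS},100-\mathrm{SS})/50\) (Nadeem et al., 2021); a direct transcription of that formula:

```python
def icat(lms: float, ss: float) -> float:
    """Idealized CAT score (Nadeem et al., 2021); lms and ss are percentages."""
    return lms * min(ss, 100.0 - ss) / 50.0

# An ideal model (LMS = 100, SS = 50) scores 100;
# the worst biased model (SS = 0 or 100) scores 0.
assert icat(100.0, 50.0) == 100.0
assert icat(100.0, 100.0) == 0.0
```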
## 6 Experiments
We compare our proposed interventions with five baselines: four state-of-the-art methods and the original pre-trained model. Our first baseline is the application of dropout to neural networks, **Dropout**, proposed by Webster et al. (2020). Next, we consider an algorithmic de-biasing technique, the **INLP** technique proposed by Ravfogel et al. (2020). Then, we consider a sentence-embedding de-biasing approach, **SentenceDebias** Liang et al. (2020). Finally, we consider a data-based approach, **CDA** Zmigrod et al. (2019), that is closest to our work. For a fairer comparison, we run the baselines with the same training set size (100) as our method. For all of our experiments, we consider the "bert-base-uncased" pre-trained model available from HuggingFace. For fine-tuning our model, we select a varying number of most-biased training samples (10, 50, and 100) from the WikiText-2 and StereoSet (we only use the dev set) datasets, as discussed in section 4. We also compare this to a random selection of data points as an ablation study. On the selected dataset, we apply our interventions and obtain the modified dataset, which is then used to fine-tune our pre-trained model using the masked language modeling (MLM) loss. The key point is that we only fine-tune the model on the gender words conditioned on the remaining text, significantly reducing the fine-tuning time; a sketch of this step is given after the footnote below. We perform ablations on various types of interventions as discussed in Table 7. The model is trained for 30 epochs, with a learning rate of 0.001 and the AdamW optimizer. We ran all of our experiments on an NVIDIA Tesla T4 GPU on Google Colab for roughly 48 hours. For all experiments, we report the numbers as the mean and standard deviation (Table 6) of 3 different runs. Our experiment code can be found here.4
Footnote 4: [https://github.com/himansh005/data_debias](https://github.com/himansh005/data_debias)
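To make the gender-word fine-tuning step concrete, here is a minimal sketch (ours, not the released code linked in footnote 4) that computes the MLM loss only at gender-word positions; the wordlist is a toy stand-in:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

GENDER_WORDS = {"he", "she", "him", "her", "his", "man", "woman"}  # toy wordlist

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

def mlm_step(sentence: str) -> float:
    enc = tok(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    labels = torch.full_like(input_ids, -100)       # -100 = ignored by the loss
    for i, t in enumerate(tok.convert_ids_to_tokens(input_ids[0].tolist())):
        if t in GENDER_WORDS:                       # only gender words are
            labels[0, i] = input_ids[0, i]          # predicted back from a
            input_ids[0, i] = tok.mask_token_id     # [MASK], given the context
    out = model(input_ids=input_ids, attention_mask=enc["attention_mask"],
                labels=labels)
    out.loss.backward()
    opt.step(); opt.zero_grad()
    return out.loss.item()

print(mlm_step("she went to work while he stayed home."))
```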
## 7 Results
Table 4 shows the StereoSet and Crow-S scores for our baselines and our best-performing interventions on the WikiText-2 dataset. On the StereoSet benchmark, we observe that random-phrase-masking obtains a lower SS than all other baselines. On the Crow-S benchmark, random-phrase-masking does better than three of the baselines; the exception is SentenceDebias, which achieves slightly better scores. While random-phrase-masking results in lower SS scores than neutral-masking, it also obtains very low LMS scores. We attribute this performance degradation to the blunt substitution of phrases that our method uses, which might lead to odd-sounding sentences. On the Crow-S benchmark, we see similar behavior and find that random-phrase-masking does better than neutral-masking. Since we believe that our method is sensitive to the choice of the dataset, we also present results on the StereoSet (dev) dataset (Table 6). In Figure 2, we perform a qualitative analysis of our proposed approach and find that random-phrase-masking is able to flip the predictions on fill-mask tasks for stereotypical sentences.
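Such a fill-mask probe is easy to reproduce; the sketch below (with an assumed template sentence, not one from the paper) prints the top mask fillers, so the he/she probabilities can be compared before and after fine-tuning:

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for cand in unmasker("The nurse said that [MASK] would be back soon.", top_k=5):
    print(f"{cand['token_str']:>8s}  {cand['score']:.3f}")
```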
## 8 Conclusion
In this paper, we show that simple data interventions on limited training data effectively reduce gender bias in LLMs. We also show that a biased pre-trained LLM can be used to mine the most effective de-biasing training examples. Evaluation of our methods on state-of-the-art bias benchmarks empirically suggests that our methods effectively reduce gender bias. Given that our methods can work in a few-shot manner and do not require any auxiliary model training, we hope that our work benefits further research in the domain of human-in-the-loop bias mitigation techniques by making the creation of bias mitigation datasets feasible.
## 9 Limitations
Our proposed method has the following main limitations which we believe are important directions for future work to address:
1. **Gender dependency:** Our approach does not account for sentences that only make sense for a single gender. For example, sentences like "She needs to see a gynecologist" would not be captured by our method. This is a common problem encountered by most debiasing algorithms, as such gender-dependent sentences are difficult to distinguish automatically.
2. **Finite wordlist:** The wordlist does not contain all gender-based words as the language continues to evolve. We believe that future works could employ better approaches that can automatically mine gender words relevant to a dataset.

[Table 4: StereoSet scores (SS ↓, LMS ↑, ICAT ↑) and Crow-S scores (↓) per type and method; the table body is not recoverable.]
3. **Blunt substitution:** The phrase substitution method is an improvement over direct word substitution, but there are still plenty of instances where the new sentence might be semantically incorrect. This does not have any major implication on inference as we are only doing few-shot learning, but it should not be extended to the entire dataset.
4. **Binary gender:** The method only focuses on the male and female genders. It does not consider non-binary or gender-neutral pronouns such as "ze/hir." This could be solved by using an updated wordlist, but the authors could not find one at the time of writing.
5. **Downstream analyses:** While our work proposes methods that show reduced gender bias as per a set of metrics, the work in no way claims to reduce gender bias in general, especially on downstream tasks. However, we strongly believe that this technique holds potential to reduce gender bias on downstream tasks as well since we adopt a regular fine-tuning approach and focus mainly on better data interventions. Moreover, recent research has shown that fine-tuning-based debiasing approaches do not damage a model's internal representations to a critical extent Meade et al. (2022).
Overall, these limitations suggest that our approach may not be suitable for use in contexts where gender-specific or non-binary language is prevalent, and the underlying wordlist should be frequently updated.
## 10 Ethics Statement
This study was conducted in accordance with ethical principles and guidelines. The study was designed to provide beneficial knowledge and not harm any group or individual. We recognize that the wordlist we use might not represent all contexts of gender bias and that our debiasing method does not cover all contexts of occurrences of gender bias. However, we made sure to consider the ethical implications of our methodologies and the results of our analysis. The authors have tried to ensure the method does not amplify any other inherent bias but also acknowledge that our approach may have limitations. We take responsibility for any ethical concerns that may arise as a result of our research.
## Acknowledgments
This material is based upon work partially supported by the National Science Foundation (Awards #1722822 and #1750439) and National Institutes of Health (Awards #R01MH125740, #R01MH096951, and #U01MH116925). PPL is partially supported by a Facebook PhD Fellowship and a Carnegie Mellon University's Center for Machine Learning and Health Fellowship. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF, NIH, Facebook, or CMLH, and no official endorsement should be inferred. Additionally, we express our appreciation to the anonymous reviewers for their insightful suggestions, which greatly improved our work. Furthermore, we would like to acknowledge the contributions of our colleagues, Atishay Jain and Praneeta Vaddamanu, who played a significant role in the development of this research.
|
2303.07898 | ISLE: A Framework for Image Level Semantic Segmentation Ensemble | One key bottleneck of employing state-of-the-art semantic segmentation
networks in the real world is the availability of training labels. Conventional
semantic segmentation networks require massive pixel-wise annotated labels to
reach state-of-the-art prediction quality. Hence, several works focus on
semantic segmentation networks trained with only image-level annotations.
However, when scrutinizing the results of state-of-the-art in more detail, we
notice that they are remarkably close to each other on average prediction
quality: different approaches perform better in different classes while
providing low quality in others. To address this problem, we propose a novel
framework, ISLE, which employs an ensemble of the "pseudo-labels" for a given
set of different semantic segmentation techniques on a class-wise level.
Pseudo-labels are the pixel-wise predictions of the image-level semantic
segmentation frameworks used to train the final segmentation model. Our
pseudo-labels seamlessly combine the strong points of multiple segmentation
techniques to reach superior prediction quality. We reach up to 2.4%
improvement over ISLE's individual components. An exhaustive analysis was
performed to demonstrate ISLE's effectiveness over state-of-the-art frameworks
for image-level semantic segmentation. | Erik Ostrowski, Muhammad Shafique | 2023-03-14T13:36:36Z | http://arxiv.org/abs/2303.07898v5 | # Autoensemble: Automated Ensemble Search Framework for Semantic Segmentation using Image Labels
###### Abstract
A key bottleneck of employing state-of-the-art semantic segmentation networks in the real world is the availability of training labels. Standard semantic segmentation networks require massive pixel-wise annotated labels to reach state-of-the-art prediction quality. Hence, several works focus on semantic segmentation networks trained with only image-level annotations. However, when scrutinizing the state-of-the-art results in more detail, we notice that although they are very close to each other on average prediction quality, different approaches perform better in different classes while providing low quality in others. To address this problem, we propose a novel framework, AutoEnsemble, which employs an ensemble of the "pseudo-labels" for a given set of different segmentation techniques on a class-wise level. Pseudo-labels are the pixel-wise predictions of the image-level semantic segmentation frameworks used to train the final segmentation model. Our pseudo-labels seamlessly combine the strong points of multiple segmentation techniques to reach superior prediction quality. We reach up to 2.4% improvement over AutoEnsemble's components. An exhaustive analysis was performed to demonstrate AutoEnsemble's effectiveness over state-of-the-art frameworks for image-level semantic segmentation.
Erik Ostrowski Institute of Computer Engineering,
Technische Universität Wien (TU Wien),
Austria
[email protected] _Muhammad Shafique_
Division of Engineering,
New York University Abu Dhabi,
United Arab Emirates
[email protected]
Semantic Segmentation, Weakly Supervised, Ensemble, Deep Learning, Class Activation Maps
## 1 Introduction
Generating high-quality semantic segmentation predictions using only models trained on image-level annotated datasets would enable a new level of applicability. The progress of fully supervised semantic segmentation networks has already helped provide many useful tools and applications. For example, in autonomous and self-driving vehicles [1, 2], remote sensing [3, 4], facial recognition [5], agriculture [6, 7], and in the medical field [8, 9], etc. The downside of those fully supervised semantic segmentation networks (FSSS) is that they require large amounts of pixel-wise annotated images. Generating such a training set is very time-consuming and tedious work. For instance, one frame of the Cityscapes dataset, which contains thousands of pixel-wise frame annotations of street scenes from cities, requires more than an hour of manual user-driven annotation [10]. Furthermore, medical imaging and molecular biology fields require the knowledge of highly qualified and experienced individuals capable of interpreting and annotating the images.
Therefore, to reduce the time and resources required for generating pixel-wise masks, a wide range of research works focus on developing approaches that focus on weaker kinds of supervision. In this work, we will focus on weak supervision in the form of image-level labels. Image-level labels give the least amount of supervision for semantic segmentation but are the easiest to acquire.
Several works already focus on image-level semantic segmentation techniques, and they consistently reach new high scores. Most works are based on Class Activation Maps (CAMs) [11]. CAMs localize the object by training a DNN model with classification loss and then reusing the learned weights to highlight the image areas responsible for its classification decision. Most image-level segmentation approaches aim to improve the CAM baseline by adding additional regularizations to the classification loss or refining the CAM mask afterward. As more and more methods emerge for improving CAM quality, state-of-the-art methods are usually composed of combinations of regularizations and after-the-fact refinement. However, when analyzing different image-level segmentation techniques on a class-by-class basis, we observed that the differences between those approaches vary significantly on specific classes, although those methods generate predictions that reach comparable scores on average.

Figure 1: Pseudo labels from (a) DRS, (b) PMM, (c) Puzzle-CAM, (d) CLIMS, (e) AutoEnsemble (Ours), (f) Ground truth
Therefore, we are proposing our AutoEnsemble framework. In our framework, we combine the pseudo-labels of multiple image-level segmentation techniques based on the respective class scores to generate a superset of pseudo-labels, combining the upsides of multiple different approaches. Fig. 1 visualizes the possible gains of AutoEnsemble compared to its best component. We perform extensive experiments on the PASCAL VOC2012 dataset [12] to prove the effectiveness of the proposed framework in various experimental settings and compare them with a wide range of state-of-the-art techniques to illustrate the benefits of our approach. The **key contributions** of this work are:
1. Our novel AutoEnsemble framework improves the prediction quality of the segmentation masks by combining state-of-the-art pseudo-labels on a class-by-class basis.
2. Our AutoEnsemble is not limited in the number or type of image-level segmentation frameworks whose pseudo-labels it combines. Since AutoEnsemble is only used for generating pseudo-labels, it adds no computation to inference predictions.
3. We have presented detailed ablation studies and analysis of the results comparing AutoEnsemble to state-of-the-art methods on the VOC2012 dataset to evaluate our method's efficacy and the improvements achieved using our framework.
## 2 Related Work
This section discusses the current state-of-the-art image-level semantic segmentation.
PCA [13] trains a second network to learn pixel similarities, generating a transition matrix that is iteratively combined with the CAM to refine its activation coverage. Puzzle-CAM [14] introduces a regularization loss by sub-dividing the input image into multiple parts, forcing the network to predict image segments that contain the non-discriminative parts of an object. CLIMS [15] trains the network by matching text labels to the correct image. Hence, the network tries to maximize and minimize the distance between correct and wrong pairs, respectively, instead of just giving a binary classification result. PMM [16] uses Coefficient of Variation Smoothing to smooth the CAMs and Proportional Pseudo-mask Generation, which introduces a new metric that highlights the importance of each class at each location, in contrast to the scores trained from the binary classifier. Furthermore, they employ Pretended Under-fitting, which improves training with noisy labels, and Cyclic Pseudo-mask, which iteratively trains the final semantic segmentation network with its own predictions. DRS [17] aims to extend the image's activation area to less discriminative regions. The authors of [17] achieve this by suppressing the attention on discriminative regions, thus guiding the attention to adjacent non-discriminative regions to generate a complete attention map of the target object.
## 3 Our Framework
Fig. 2 presents an overview of the AutoEnsemble framework. We start by collecting the pseudo-labels of our candidate methods. In the next step, we can employ several refinement methods to improve the pseudo-label quality beforehand. In our case, we used PCA [13] and a dense Conditional Random Field (dCRF) [18] for the candidates whose provided pseudo-labels had not already undergone refinement. Then we combine the pseudo-labels on a class-wise basis, where we only copy the predictions of particular classes of the candidate labels to our ensemble if the candidate has the highest score in that class. Finally, we use the generated pseudo-labels to train an FSSS network. Our proposed version uses the four state-of-the-art methods introduced in the previous section, and for all of them except CLIMS, we also refine their baseline with PCA. PCA uses a random walk in combination with pixel affinities. Furthermore, as is common practice, we apply dCRF to the PCA predictions to improve their quality. The refinement of the pseudo-labels is not limited to PCA, and any combination of additional refinement methods can be employed within our AutoEnsemble framework.

Table 1: Component of AutoEnsemble achieving the highest score per VOC2012 class.

| Class | PMM | DRS | CLIMS | Puzzle |
| --- | --- | --- | --- | --- |
| Bus | ✓ | ✗ | ✗ | ✗ |
| Car | ✓ | ✗ | ✗ | ✗ |
| Bottle | ✗ | ✓ | ✗ | ✗ |
| Chair | ✗ | ✓ | ✗ | ✗ |
| Train | ✗ | ✓ | ✗ | ✗ |
| Bike | ✗ | ✗ | ✓ | ✗ |
| Boat | ✗ | ✗ | ✓ | ✗ |
| Table | ✗ | ✗ | ✓ | ✗ |
| Motor | ✗ | ✗ | ✓ | ✗ |
| Person | ✗ | ✗ | ✓ | ✗ |
| Sofa | ✗ | ✗ | ✓ | ✗ |
| TV | ✗ | ✗ | ✓ | ✗ |
| Aero | ✗ | ✗ | ✗ | ✓ |
| Bird | ✗ | ✗ | ✗ | ✓ |
| Cat | ✗ | ✗ | ✗ | ✓ |
| Cow | ✗ | ✗ | ✗ | ✓ |
| Dog | ✗ | ✗ | ✗ | ✓ |
| Horse | ✗ | ✗ | ✗ | ✓ |
| Plant | ✗ | ✗ | ✗ | ✓ |
| Sheep | ✗ | ✗ | ✗ | ✓ |
In the next step, we will evaluate the different candidate pseudo-label sets on a class-wise basis and determine which candidate is used for which class in the ensemble. The combination is done by transferring, for every class \(x\), every pixel classified as \(x\) by the candidate network \(n\) that has the highest score on \(x\) to our ensemble pseudo-labels (a minimal sketch of this rule is given below). Furthermore, we tested a naive version, in which we ranked every class by its number of instances in the training set and then used the complete CAM of candidate \(n\) for the whole image if \(n\) has the highest score on the highest-ranked class \(x\) present in the image. The naive AutoEnsemble performed worse than our final version but still better than its components.
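The following minimal sketch (with an assumed data layout: one HxW label map and one per-class score array per candidate) implements this class-wise transfer rule; the background class is excluded, and overlapping assignments are resolved by later classes overwriting earlier ones, which is one source of the minor overlap losses discussed in Section 4:

```python
import numpy as np

def autoensemble(preds: dict, class_scores: dict, num_classes: int) -> np.ndarray:
    """Combine candidate pseudo-labels class-wise.

    preds:        candidate name -> HxW integer label map.
    class_scores: candidate name -> per-class score array (e.g., mIoU).
    """
    names = list(preds)
    h, w = preds[names[0]].shape
    out = np.zeros((h, w), dtype=np.int64)           # 0 = background
    for c in range(1, num_classes):                  # background excluded
        best = max(names, key=lambda n: class_scores[n][c])
        out[preds[best] == c] = c                    # transfer class-c pixels
    return out

# Toy usage with two candidates on a 2x2 image and 3 classes.
preds = {"A": np.array([[1, 2], [0, 1]]), "B": np.array([[2, 2], [1, 0]])}
scores = {"A": np.array([0, 0.9, 0.3]), "B": np.array([0, 0.4, 0.8])}
print(autoensemble(preds, scores, num_classes=3))
```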
We excluded the _background_ class from our ensemble since the background is the inverse of all classes combined. Note that we can perform this class selection method since we already assign the correct class labels to each prediction instead of using a classification network for the assignment. Therefore, training a fully supervised semantic segmentation network with those pseudo-labels is necessary. Nevertheless, the FSSS training guarantees that collecting multiple pseudo-label sets is a one-time effort per dataset. Fig. 2 illustrates an overview of the process.
## 4 Experiments
First, we will discuss our experimental setup. We completed the experiments on a CentOS 7.9 operating system executing on an Intel Core i7-8700 CPU with 16GB RAM and 2 Nvidia GeForce GTX 1080 Ti GPUs. The CLIMS pseudo-labels were used as provided by the official Github, and we performed PCA with a ResNet50 backbone and dCRF on the pseudo-labels provided on the DRS and PMM Githubs. All DeepLabV3+ results were generated using a ResNet50 backbone. The mean Intersection-over-Union (mIoU) ratio is the evaluation metric for all experiments. We used the PASCAL VOC2012 semantic segmentation benchmark for evaluating our framework. It comprises 21 classes, including a background class, and most images include multiple objects. Following the commonly adopted experimentation protocol for semantic segmentation, we use the \(10,528\) augmented images, including the image-level labels, for training. Our model is evaluated on the validation set with \(1,464\) images and the test set of \(1,456\) images to ensure a consistent comparison with the state-of-the-art.
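For completeness, the mIoU metric used throughout can be computed from a confusion matrix; a standard sketch (not the exact evaluation script):

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean Intersection-over-Union over all classes."""
    conf = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(conf).astype(float)              # true positives per class
    union = conf.sum(0) + conf.sum(1) - inter        # GT + predicted - overlap
    ious = inter / np.maximum(union, 1)              # avoid division by zero
    return float(ious.mean())

print(mean_iou(np.array([[0, 1], [1, 1]]), np.array([[0, 1], [0, 1]]), 2))
```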
### Semantic Segmentation Performance on VOC2012
Next, we compare our ensemble with its components, consisting of recent works using image-level supervision, in Table 2. We trained all pseudo-labels with the same DeepLabV3+ model using a ResNet50 backbone for comparability. We notice that the ensemble outperforms its components by a margin of at least \(2\%\), although the individual components do not show this amount of variance among themselves. AutoEnsemble-2 is the ensemble of just PuzzleCAM and CLIMS, and AutoEnsemble is the ensemble of all four methods. DRS is the best-performing component, and AutoEnsemble reaches a \(2\%\) higher mIoU score.
Figure 2: Overview of the AutoEnsemble framework. The first stage is the collection of Image-level semantic segmentation; In the second stage, we can use a number of refinement methods to improve the mask quality; In the third stage, we combine the refinement masks on a class-wise basis to generate the pseudo-labels that reach the best prediction for each class; In the final stage, we are training an FSSS with the pseudo labels
Next, we present a more comprehensive analysis and class-wise mIoU breakdown for all classes in the VOC2012 training dataset. On the one hand, we see in Table 2 that the difference in the average mIoU score between our four component methods is relatively small, with the lowest-scoring PuzzleCAM reaching 69.7% and the highest-scoring DRS at 71.3%. On the other hand, the ensemble of all four methods reaches 74.1%, and the ensemble of just PuzzleCAM and CLIMS, at 73.6%, achieves a significant gain compared to its components. However, we also notice that the gain from adding more image-level segmentation pseudo-labels to the ensemble shrinks over time, which needs to be considered when choosing the components for AutoEnsemble.
Let us take a closer look at the performance of the individual components on a class-wise basis. PMM reaches the best score in only two classes, with an average \(0.77\%\) improvement in those classes. Although only reaching the highest score in three classes, DRS provides an average \(6.04\%\) improvement in those three classes. CLIMS is the best in seven classes and achieves an average improvement of \(6.04\%\) as well. PuzzleCAM, while the lowest-scoring method on average, reaches the highest score in eight classes but improves them only by \(2.62\%\) on average. Therefore, we conclude that CLIMS and PuzzleCAM contribute the most and PMM the least to the ensemble. Hence, we also evaluated the combination of only CLIMS and PuzzleCAM to see how much improvement we gain while combining the minimum number of pseudo-label sets. We call this version AutoEnsemble-2. We notice that most high scores translated to the ensemble, with only minor losses in some classes, most probably due to overlap with other classes.
## 5 Conclusion
In this paper, we have proposed our AutoEnsemble framework, which combines the pseudo-labels of several image-level segmentation techniques on a class-wise basis to leverage the strong points of its different components. The combined pseudo-labels reach at least 2% higher mIoU scores than those of the components. Most of those gains stem from larger variances within particular classes, as we observed that different approaches have different strengths and weaknesses. The AutoEnsemble framework can combine any number of pseudo-label sets to boost the quality of the pseudo-labels used for the final training. We showed that the predictions generated by the model trained with the pseudo-labels of AutoEnsemble achieve state-of-the-art performance on the VOC2012 dataset, which demonstrates its effectiveness.
## Acknowledgments
This work is part of the Moore4Medical project funded by the ECSEL Joint Undertaking under grant number H2020-ECSEL-2019-IA-876190.
## Copyright
(c) 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
|
2308.14228 | Broadcast Channels with Heterogeneous Arrival and Decoding Deadlines:
Second-Order Achievability | A standard assumption in the design of ultra-reliable low-latency
communication systems is that the duration between message arrivals is larger
than the number of channel uses before the decoding deadline. Nevertheless,
this assumption fails when messages arrive rapidly and reliability constraints
require that the number of channel uses exceed the time between arrivals. In
this paper, we consider a broadcast setting in which a transmitter wishes to
send two different messages to two receivers over Gaussian channels. Messages
have different arrival times and decoding deadlines such that their
transmission windows overlap. For this setting, we propose a coding scheme that
exploits Marton's coding strategy. We derive rigorous bounds on the achievable
rate regions. Those bounds can be easily employed in point-to-point settings
with one or multiple parallel channels. In the point-to-point setting with one
or multiple parallel channels, the proposed achievability scheme outperforms
the Normal Approximation, especially when the number of channel uses is smaller
than $200$. In the broadcast setting, our scheme agrees with Marton's strategy
for sufficiently large numbers of channel uses and shows significant
performance improvements over standard approaches based on time sharing for
transmission of short packets. | Homa Nikbakht, Malcolm Egan, Jean-Marie Gorce, H. Vincent Poor | 2023-08-27T23:20:59Z | http://arxiv.org/abs/2308.14228v1 | # Broadcast Channels with Heterogeneous Arrival and Decoding Deadlines: Second-Order Achievability
###### Abstract
A standard assumption in the design of ultra-reliable low-latency communication systems is that the duration between message arrivals is larger than the number of channel uses before the decoding deadline. Nevertheless, this assumption fails when messages arrive rapidly and reliability constraints require that the number of channel uses exceed the time between arrivals. In this paper, we consider a broadcast setting in which a transmitter wishes to send two different messages to two receivers over Gaussian channels. Messages have different arrival times and decoding deadlines such that their transmission windows overlap. For this setting, we propose a coding scheme that exploits Marton's coding strategy. We derive rigorous bounds on the achievable rate regions. Those bounds can be easily employed in point-to-point settings with one or multiple parallel channels. In the point-to-point setting with one or multiple parallel channels, the proposed achievability scheme outperforms the Normal Approximation, especially when the number of channel uses is smaller than \(200\). In the broadcast setting, our scheme agrees with Marton's strategy for sufficiently large numbers of channel uses and shows significant performance improvements over standard approaches based on time sharing for transmission of short packets.
## I Introduction
Mobile wireless networks in 5G and in 6G proposals are increasingly intended for use in latency-critical and high-reliability systems, notably in industrial control applications, autonomous vehicles and remote surgery [1, 2, 3, 4, 5, 6]. In such ultra-reliable low-latency communications (URLLC), packets are typically short. As a consequence, data transmission cannot be made reliable by increasing the channel code blocklength arbitrarily.
A key challenge is, therefore, to design coding schemes that support high reliability requirements under finite blocklength. In recent years, a number of channel coding schemes have been proposed to address such requirements including short LDPC and polar codes [7, 8]. At the same time, new characterizations of fundamental tradeoffs among the size of the message set, the probability of error, and the length of the code have been obtained via achievability and converse bounds, thereby building on the work of [9, 10, 11].
A standard assumption in the design of coding schemes, even for URLLC, is that consecutive messages have distinct arrival times and decoding deadlines. As such, there is no choice but to encode the messages independently. However, this assumption is violated when a message arrives before the decoding deadline of a prior message. For example, a sensor in an unstable control system may send rapid measurements in order to stabilize the system [12]. In order to ensure reliability of the sensor observations, the channel uses allocated to consecutive observations may partially overlap.
It is therefore desirable to consider joint encoding of multiple sensor observations, albeit with heterogeneous decoding deadlines. That is, if the channel uses for two separate observations overlap, it is not possible to wait until the entire transmission for both sensor observations is received before decoding. In this situation, code design must account for two issues:
1. messages with close arrival times; and
2. messages with heterogeneous decoding deadlines.
In addition to heterogeneous arrival times and decoding deadlines, a transmitter may seek to send messages to distinct receivers. While this setting has clear relevance for URLLC applications, there are currently no known designs for appropriate coding schemes.
### _Related Work_
#### I-A1 Finite Blocklength Regime
To capture both reliability and latency, investigation of coding bounds in the finite blocklength regime is a requirement that dates back to the work of Shannon, Gallager, and Berlekamp [13]. Most of the existing literature focuses on identifying the limits of communication between a single transmitter and a single receiver for a coding block of
size \(n\). Among the proposed bounds, the widely used bound is the _Normal Approximation_ that for a given \(n\) approximates the point-to-point transmission rate \(R\) by [9, 14]:
\[R\approx C-\sqrt{\frac{V}{n}}\log(e)Q^{-1}(\epsilon)+\frac{\log n}{2n}, \tag{1}\]
where \(C\) is the channel capacity, \(V\) is the channel dispersion coefficient, \(\epsilon\) is the average error probability and \(Q^{-1}(\cdot)\) is the inverse of the Gaussian cumulative distribution function. This approximation has been proved to be a valid \(O(n^{-1})\) asymptotic approximation for converse and achievability bounds [15], but an \(O(n^{-1})\) bound is not necessarily reliable for the small values of \(n\) corresponding to URLLC applications. Therefore, exact bounds are of considerable interest.
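This Normal Approximation is straightforward to evaluate; a minimal sketch for an AWGN channel, with \(C\) and \(V\) expressed directly in bits (absorbing the \(\log e\) factor of (1)) and illustrative parameters:

```python
import math
from statistics import NormalDist

def normal_approximation(n: int, snr: float, eps: float) -> float:
    """Rate in bits per channel use from (1) for an AWGN channel."""
    C = 0.5 * math.log2(1.0 + snr)                                    # capacity
    V = snr * (2 + snr) / (2 * (1 + snr) ** 2) * math.log2(math.e) ** 2
    Qinv = NormalDist().inv_cdf(1.0 - eps)                            # Q^{-1}(eps)
    return C - math.sqrt(V / n) * Qinv + math.log2(n) / (2 * n)

print(normal_approximation(n=200, snr=1.0, eps=1e-3))  # SNR = 0 dB
```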
#### I-A2 Heterogeneous Decoding Deadlines
The problem of designing a code to handle heterogeneous decoding deadlines was first considered in the context of static broadcasting [16], where a single message is decoded at multiple receivers under different relative decoding delay constraints. The work in [16] was recently generalized to multi-source and multi-terminal networks by Langberg and Effros in [17]. In particular, the notion of a time-rate region was introduced, which accounted for different decoding delay constraints for each message at each receiver.
The work in both [16] and [17] focused on the asymptotic regime. In the finite blocklength regime, a coding scheme for the Gaussian broadcast channel with heterogeneous blocklength constraints was introduced in [18], which decodes the messages at time-instances that depend on the realizations of the random channel fading. By employing an early decoding scheme, the authors showed that significant improvements over standard successive interference cancellation are possible. In [19] achievable rates and latency of the early-decoding scheme in [18] are improved by introducing _concatenated shell codes_.
#### I-A3 Heterogeneous Arrival Times
The work in [16, 17, 18, 19] focuses on the case where both messages are available at the time of encoding. In our previous works [20, 21], we introduced a coding scheme for the Gaussian point-to-point channel that encodes the first message before the second message arrives. The scheme proposed in [20] exploited power sharing for symbols between the arrival time of the second message and the decoding deadline of the first message. Under a Gaussian interference assumption, bounds on the error probabilities for each message were established based on the message set size and finite decoding deadline constraints. In [21], a coding scheme was proposed that exploits dirty-paper coding (DPC) [22, 23, 24].We further derived rigorous bounds on the achievable error probabilities of the messages.
#### I-A4 Broadcast Channels in the Finite Blocklength Regime
Finite blocklength analysis of broadcast channels (BCs) was studied in [25, 26, 27, 28]. The second-order Gaussian BC setting was investigated in [26] where the authors studied the concatenate-and-code protocol [29] in which the transmitter concatenates the users' message bits into a single data packet and each user decodes the entire packet to extract its own bits. This scheme was shown to outperform superposition coding and time division multiplexing (TDM) schemes. The work in [28] is an extension of [26] to \(K\)-user BCs. The work in [27] considered a two-user static BC and showed that under per-user reliability constraint, superposition coding combined with a rate splitting technique in which the message intended for the user with the lowest signal-to-noise ratio (SNR) (the cloud center message) is allocated to either users gives the largest second-order rate region.
### _Main Contributions_
While coding schemes for heterogeneous arrival and decoding deadlines have been developed for point-to-point channels, adapting these codes to broadcast channels remains an open problem. In this paper, we address this question by developing and analyzing a coding scheme tailored to broadcast channels with heterogeneous arrival times and decoding deadlines.
The main contributions of this work are:
* We introduce a coding scheme for two-user Gaussian BCs with heterogeneous arrival times and decoding deadlines (the setting previously studied for point-to-point channels in [20]), which exploits Marton's coding strategy [30]. Accounting for finite decoding deadline constraints (corresponding to fixed blocklengths), we first derive rigorous bounds on the achievable transmission rate for each of the messages. This is achieved by combining techniques to analyze the Gel'fand-Pinsker channel in the finite blocklength regime [23] and multiple parallel channels [15].
* With the developed bounds, we obtain further rigorous bounds for point-to-point settings with one or multiple parallel channels. In the point-to-point setting with one channel, we show that our achievability scheme outperforms the Normal Approximation in (1) proposed by Polyanskiy, Poor and Verdu in [9] especially when the number of channel uses (the size of coding block) is smaller than \(200\). In the point-to-point setting with multiple parallel channels our achievability bound outperforms the Normal Approximation bound proposed by Erseghe in [15].
* In the broadcast setup, we provide a second-order analysis for the achievable rate regions. We show that our scheme agrees with Marton's bound in [30] for a sufficiently large number of channel uses.
* Finally, we show that our scheme outperforms the time-sharing scheme that transmits each message independently but over fewer number of channel uses.
### _Organization_
The rest of this paper is organized as follows. We end this section with some remarks on notation. Sections II and III describe the problem setup and the proposed coding scheme. Sections IV and V present our main results and discussions of the related works. Section VI concludes the paper. Some technical proofs are deferred to the appendices.
### _Notation_
The set of all integers is denoted by \(\mathbb{Z}\), the set of positive integers by \(\mathbb{Z}^{+}\) and the set of real numbers by \(\mathbb{R}\). For other sets we use calligraphic letters, e.g., \(\mathcal{X}\). Random variables are denoted by uppercase letters, e.g., \(X\), and their realizations by lowercase letters, e.g., \(x\). For vectors we use boldface notation, i.e., upper case boldface letters such as \(\mathbf{X}\) for random vectors and lower case boldface letters such as \(\mathbf{x}\) for deterministic vectors. Matrices are depicted with sans serif font, e.g., H. We also write \(X^{n}\) for the tuple of random variables \((X_{1},\ldots,X_{n})\) and \(\mathbf{X}^{n}\) for the tuple of random vectors \((\mathbf{X}_{1},\ldots,\mathbf{X}_{n})\).
## II Problem setup
Consider a transmitter S that seeks to communicate with two receivers Rx \(1\) and Rx \(2\). It wishes to transmit message \(m_{1}\) to receiver Rx \(1\) and message \(m_{2}\) to receiver Rx \(2\). At time \(t=a_{1}\), transmission commences for the first message \(m_{1}\). At time \(t=a_{2}\), transmission commences for the second message \(m_{2}\). The two messages \(m_{1},m_{2}\) are assumed to be drawn independently and uniformly on \(\{1,\ldots,M_{1}\}\) and \(\{1,\ldots,M_{2}\}\), respectively.
Each message is subject to different decoding delay constraints.\({}^{1}\) In particular, at time \(d_{1}\), Rx \(1\) attempts to reconstruct the message \(m_{1}\). Similarly, at time \(d_{2}>d_{1}\), Rx \(2\) attempts to reconstruct the message \(m_{2}\). See Fig. 1(a).
Footnote 1: In this work we do not consider the encoding/decoding time delays, but rather the delay due to transmission, i.e., in terms of number of channel uses.
Under the assumption that \(a_{1}<a_{2}\) and \(a_{2}<d_{1}<d_{2}\), the encoder outputs symbols at time \(t\in\{a_{1},\ldots,d_{2}\}\) as
\[X_{t}=\begin{cases}h_{t}(m_{1}),&t\in\{a_{1},\ldots,a_{2}-1\}\\ \psi_{t}(m_{1},m_{2}),&t\in\{a_{2},\ldots,d_{1}\}\\ \phi_{t}(m_{2}),&t\in\{d_{1}+1,\ldots,d_{2}\},\end{cases} \tag{2}\]
where \(h,\psi,\phi\) are the encoding functions corresponding to the channel uses where only message \(m_{1}\) has arrived but not \(m_{2}\), where both \(m_{1},m_{2}\) are present, and \(m_{1}\) has been decoded at Rx \(1\). We highlight that \(m_{2}\) is not known before time \(t=a_{2}\); i.e., encoding is causal. Define
\[n:=d_{2}-a_{1}+1,\quad n_{1,1}:=a_{2}-a_{1},\quad n_{1,2}:=d_{1}-a_{2}+1\quad \text{and}\quad n_{2,2}:=d_{2}-d_{1}. \tag{3}\]
Fig. 1: System model. (a) Arrival times and decoding deadlines of \(m_{1}\) and \(m_{2}\), (b) Problem setup with one transmitter and two receivers.
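As a quick numerical illustration of (3), with assumed arrival times and deadlines satisfying \(a_{1}<a_{2}<d_{1}<d_{2}\):

```python
a1, a2, d1, d2 = 1, 31, 70, 100   # illustrative values
n = d2 - a1 + 1                   # total number of channel uses
n11 = a2 - a1                     # only m1 has arrived
n12 = d1 - a2 + 1                 # m1 and m2 overlap
n22 = d2 - d1                     # only m2 remains after d1
assert n == n11 + n12 + n22
print(n, n11, n12, n22)           # 100 30 40 30
```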
Choose \(\beta_{1,1}\), \(\beta_{\text{c}}\) and \(\beta_{2,2}\) in \([0,1]\) such that
\[n_{1,1}\beta_{1,1}+n_{1,2}\beta_{\text{c}}+n_{2,2}\beta_{2,2}=n. \tag{4}\]
We assume that the encoding functions satisfy an average block power constraint; namely,
\[\frac{1}{n}\sum_{i=a_{1}}^{d_{2}}X_{i}^{2}\leq\mathsf{P} \tag{5}\]
and consequently
\[\frac{1}{n_{1,1}}\sum_{i=a_{1}}^{a_{2}-1}X_{i}^{2} \leq\beta_{1,1}\mathsf{P}, \tag{6}\] \[\frac{1}{n_{1,2}}\sum_{i=a_{2}}^{d_{1}}X_{i}^{2} \leq\beta_{\text{c}}\mathsf{P},\] (7) \[\frac{1}{n_{2,2}}\sum_{i=d_{1}+1}^{d_{2}}X_{i}^{2} \leq\beta_{2,2}\mathsf{P}. \tag{8}\]
Denote the channel inputs by
\[\mathbf{X}_{1,1} =\{X_{a_{1}},\ldots,X_{a_{2}-1}\},\] \[\mathbf{X}_{\text{c}} =\{X_{a_{2}},\ldots,X_{d_{1}}\},\] \[\mathbf{X}_{2,2} =\{X_{d_{1}+1},\ldots,X_{d_{2}}\}, \tag{9}\]
and the corresponding channel outputs at Rx \(1\) by \(\mathbf{Y}_{1,1}\) and \(\mathbf{Y}_{1,2}\) and the channel outputs at Rx \(2\) by \(\mathbf{Y}_{2,1}\) and \(\mathbf{Y}_{2,2}\). The conditional distributions governing the four channels are then denoted by \(f_{\mathbf{Y}_{1,1}|\mathbf{X}_{1,1}}\), \(f_{\mathbf{Y}_{1,2}|\mathbf{X}_{\text{c}}}\), \(f_{\mathbf{Y}_{2,1}|\mathbf{X}_{\text{c}}}\) and \(f_{\mathbf{Y}_{2,2}|\mathbf{X}_{2,2}}\). We assume that each channel is additive, memoryless, stationary, and Gaussian; that is,
\[\mathbf{Y}_{1,1} =\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}, \tag{10}\] \[\mathbf{Y}_{1,2} =\mathbf{X}_{\text{c}}+\mathbf{Z}_{1,2},\] (11) \[\mathbf{Y}_{2,1} =\mathbf{X}_{\text{c}}+\mathbf{Z}_{2,1},\] (12) \[\mathbf{Y}_{2,2} =\mathbf{X}_{2,2}+\mathbf{Z}_{2,2}, \tag{13}\]
where \(\mathbf{Z}_{i,j}\sim\mathcal{N}(\mathbf{0},\sigma_{i,j}^{2}\mathbf{I})\) for \(i=1,2\) and \(j=1,2\).
This setup is illustrated in Fig. 1(b).
Receiver Rx \(1\) attempts to reconstruct message \(m_{1}\) based on the channel outputs \(\mathbf{Y}_{1,1}\) and \(\mathbf{Y}_{1,2}\) via the decoding function \(g_{1}\); i.e.,
\[\hat{m}_{1}=g_{1}(\mathbf{Y}_{1,1},\mathbf{Y}_{1,2}). \tag{14}\]
Receiver Rx \(2\) attempts to reconstruct message \(m_{2}\) based on the channel outputs \(\mathbf{Y}_{2,1}\) and \(\mathbf{Y}_{2,2}\) via the decoding function \(g_{2}\); i.e.,
\[\hat{m}_{2}=g_{2}(\mathbf{Y}_{2,1},\mathbf{Y}_{2,2}). \tag{15}\]
Observe that both receivers are causal.
The average probability of error for each of the messages is then
\[\epsilon_{1}=\mathbb{P}[\hat{m}_{1}\neq m_{1}],\quad\epsilon_{2}=\mathbb{P}[ \hat{m}_{2}\neq m_{2}]. \tag{16}\]
The focus of the remainder of this paper is on characterizing the tradeoff among the size of the message sets \(M_{1}\) and \(M_{2}\), the error probabilities \(\epsilon_{1},\epsilon_{2}\), and the decoding deadlines \(d_{1},d_{2}\). Formally, we study the achievable region defined as follows.
**Definition 1**: _Given the power constraint \(\mathsf{P}\), a tuple \((a_{1},a_{2},d_{1},d_{2},M_{1},M_{2},\epsilon_{1},\epsilon_{2})\) is achievable if the messages \(m_{1}\) and \(m_{2}\) of cardinality \(M_{1}\) and \(M_{2}\), respectively, arriving at the \(a_{1}\)-th and \(a_{2}\)-th channel uses can be decoded by the \(d_{1}\)-th and \(d_{2}\)-th channel uses with an average probability of error not exceeding \(\epsilon_{1}\) and \(\epsilon_{2}\), respectively._
## III Random Coding Scheme
In this section, we introduce our random coding scheme. Notice that instead of employing Gaussian codebooks, our analysis relies on the use of power-shell codebooks. A power-shell codebook of length \(n\) consists of codewords that are uniformly distributed on the centered \((n-1)\)-dimensional sphere with radius \(\sqrt{n\mathsf{P}}\) where \(\mathsf{P}\) is the average input power constraint.
### _Encoding_
The encoding process consists of three phases.
#### III-A1 Transmitting only \(m_{1}\)
In the first channel, consisting of \(n_{1,1}\) channel uses, only \(m_{1}\) is known to the encoder. The channel input \(\mathbf{X}_{1,1}\) corresponding to message \(m_{1}\) is a codeword \(\mathbf{X}_{1,1}(m_{1})\in\mathbb{R}^{n_{1,1}}\), which is independently distributed on the sphere \(\mathbb{S}^{n_{1,1}-1}\) with power \(n_{1,1}\beta_{1,1}\mathsf{P}\). That is, the probability density function of \(\mathbf{X}_{1,1}\) is given by
\[f_{\mathbf{X}_{1,1}}(\mathbf{x}_{1,1})=\frac{\delta\left(||\mathbf{x}_{1,1}||^{2}-n_{1,1} \beta_{1,1}\mathsf{P}\right)}{S_{n_{1,1}}(\sqrt{n_{1,1}\beta_{1,1}\mathsf{P}})}, \tag{17}\]
where \(\delta(\cdot)\) is the Dirac delta function, and \(S_{n}(r)\) is the surface area of a sphere of radius \(r\) in \(n\)-dimensional space.
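Sampling from (17) is simple: a standard Gaussian vector rescaled to the target radius is uniformly distributed on the power shell. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def power_shell_codeword(n: int, beta: float, P: float) -> np.ndarray:
    """Codeword uniform on the sphere with squared norm n * beta * P."""
    g = rng.standard_normal(n)
    return np.sqrt(n * beta * P) * g / np.linalg.norm(g)

x = power_shell_codeword(n=30, beta=1.0, P=1.0)
print(np.dot(x, x))  # equals n * beta * P up to floating point
```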
#### III-A2 Transmitting both \(m_{1}\) and \(m_{2}\)
Over the next \(n_{1,2}\) channel uses, the encoder exploits Marton's coding strategy [30] to jointly encode \(m_{1}\) and \(m_{2}\). To this end, we choose \(\beta_{2,1}\in[0,1]\) and \(\beta_{1,2}\in[0,1]\) such that for a given \(\rho\in[0,1]\),
\[\beta_{1,2}+\beta_{2,1}+\rho\sqrt{\beta_{1,2}\beta_{2,1}}=\beta_{\text{c}} \tag{18}\]
The parameter \(\rho\) is defined shortly.
The following two codebooks are then generated.
_Codebook Generation:_
* Denote by \(L_{1}\) the random coding parameter illustrating the number of auxiliary codewords for each message \(m_{1}\in\{1,\ldots,M_{1}\}\). A random codebook \(C_{\mathbf{X}_{1,2}}\) containing \(M_{1}L_{1}\) auxiliary codewords \(\{\mathbf{X}_{1,2}(m_{1},\ell_{1})\}\) with \(\ell_{1}\in\{1,\ldots,L_{1}\}\) and \(m_{1}\in\{1,\ldots,M_{1}\}\) is generated where each codeword is independently distributed on the sphere \(\mathbb{S}^{n_{1,2}-1}\) with power \(n_{1,2}\beta_{1,2}\mathsf{P}\).
* Denote by \(L_{2}\) the random coding parameter illustrating the number of auxiliary codewords for each message \(m_{2}\in\{1,\ldots,M_{2}\}\). A random codebook \(C_{\mathbf{X}_{2,1}}\) containing \(M_{2}L_{2}\) auxiliary codewords \(\{\mathbf{X}_{2,1}(m_{2},\ell_{2})\}\) with \(\ell_{2}\in\{1,\ldots,L_{2}\}\) and \(m_{2}\in\{1,\ldots,M_{2}\}\) is generated where each codeword is independently distributed on the sphere \(\mathbb{S}^{n_{1,2}-1}\) with power \(n_{1,2}\beta_{2,1}\mathsf{P}\).
The codebooks are revealed to all terminals.
The transmitter chooses the pair \((\ell_{1},\ell_{2})\) such that \(\mathbf{X}_{1,2}(m_{1},\ell_{1})\in C_{\mathbf{X}_{1,2}}\) and \(\mathbf{X}_{2,1}(m_{2},\ell_{2})\in C_{\mathbf{X}_{2,1}}\) satisfy
\[\langle\mathbf{X}_{1,2}(m_{1},\ell_{1}),\mathbf{X}_{2,1}(m_{2},\ell_{2})\rangle\in \mathcal{D} \tag{19}\]
where
\[\mathcal{D}\triangleq\left[n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P} \rho:n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\right] \tag{20}\]
and \(\rho\) is the correlation parameter. If more than one such pair exists, then one pair is selected arbitrarily.
The channel input over the \(n_{1,2}\) symbols allocated for the joint transmission of \(m_{1}\) and \(m_{2}\) is given by
\[\mathbf{X}_{\text{c}}=\alpha\left(\mathbf{X}_{1,2}(m_{1},\ell_{1})+\mathbf{X}_{2,1}(m_{2}, \ell_{2})\right), \tag{21}\]
where
\[\alpha:=\sqrt{\frac{\beta_{\text{c}}}{\beta_{\text{c}}^{\star}}}. \tag{22}\]
with
\[\beta_{\text{c}}^{\star}:=\beta_{1,2}+\beta_{2,1}+\rho^{\star}\sqrt{\beta_{1, 2}\beta_{2,1}}, \tag{23}\]
and \(\rho^{\star}\in[\rho,\ 1]\) is the correlation coefficient between the chosen codewords \(\mathbf{X}_{1,2}(m_{1},\ell_{1})\) and \(\mathbf{X}_{2,1}(m_{2},\ell_{2})\). Notice that \(\alpha\) is a power normalization coefficient that ensures that the transmit signal satisfies the power constraint in (5).
We have the encoding error event:
\[\mathcal{E}_{1,2}\triangleq\{\text{no pair }(\ell_{1},\ell_{2})\text{ exists such that (19) is satisfied}\}. \tag{24}\]
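The pair-selection step (19)-(23) can be illustrated as follows (a sketch with assumed parameter values, not the paper's implementation); an empty search corresponds to the encoding error event \(\mathcal{E}_{1,2}\) in (24):

```python
import numpy as np

rng = np.random.default_rng(1)
n12, P, rho = 40, 1.0, 0.2                 # illustrative values
b12 = b21 = 0.4                            # assumed beta_{1,2} = beta_{2,1}

def shell(num, n, power):
    """`num` codewords, each uniform on the sphere of squared norm n*power."""
    g = rng.standard_normal((num, n))
    return np.sqrt(n * power) * g / np.linalg.norm(g, axis=1, keepdims=True)

C1 = shell(64, n12, b12 * P)               # auxiliary codewords for one m1
C2 = shell(64, n12, b21 * P)               # auxiliary codewords for one m2

lo = n12 * np.sqrt(b12 * b21) * P * rho    # left endpoint of D in (20); the
ip = C1 @ C2.T                             # right endpoint is the Cauchy-
pairs = np.argwhere(ip >= lo)              # Schwarz maximum, so ip >= lo
if len(pairs) == 0:                        # suffices for membership in D
    raise RuntimeError("encoding error event (24): no admissible pair")
l1, l2 = pairs[0]                          # take the first admissible pair
rho_star = ip[l1, l2] / (n12 * np.sqrt(b12 * b21) * P)
beta_c = b12 + b21 + rho * np.sqrt(b12 * b21)          # from (18)
alpha = np.sqrt(beta_c / (b12 + b21 + rho_star * np.sqrt(b12 * b21)))  # (22)-(23)
x_c = alpha * (C1[l1] + C2[l2])            # channel input (21)
print(float(rho_star), x_c.shape)
```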
#### III-A3 Transmitting only \(m_{2}\)

Over the last \(n_{2,2}\) channel uses, only \(m_{2}\) remains to be transmitted. Analogously to (17), the channel input \(\mathbf{X}_{2,2}\) is a codeword \(\mathbf{X}_{2,2}(m_{2})\in\mathbb{R}^{n_{2,2}}\) drawn uniformly on the sphere \(\mathbb{S}^{n_{2,2}-1}\) with power \(n_{2,2}\beta_{2,2}\mathsf{P}\).

### _Decoding_

#### III-C1 Decoding \(m_{1}\)
Given observations \(\mathbf{Y}_{1,1}\) and \(\mathbf{Y}_{1,2}\), Rx \(1\) estimates \(m_{1}\) according to the pair \((\hat{m}_{1},\hat{\ell}_{1})\), such that the corresponding sequences \(\mathbf{X}_{1,2}(\hat{m}_{1},\hat{\ell}_{1})\) and \(\mathbf{X}_{1,1}(\hat{m}_{1})\) maximize
\[i_{1}\left(\{\mathbf{x}_{1,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{1,j}\}_{j\in\{1,2\}}\right) :=\log\prod_{j=1,2}\frac{f_{\mathbf{Y}_{1,j}|\mathbf{X}_{1,j}}(\mathbf{y}_{1,j}|\mathbf{x}_{1, j})}{f_{\mathbf{Y}_{1,j}}(\mathbf{y}_{1,j})} \tag{25}\]
over all pairs of \(\mathbf{x}_{1,1}\) and \(\mathbf{x}_{1,2}\in C_{\mathbf{X}_{1,2}}\). We have the following error event while decoding \(m_{1}\):
\[\mathcal{E}_{1}\triangleq\{\text{Rx }1\text{ chooses a message }\hat{m}_{1}\neq m_{1}\}. \tag{26}\]
Thus the average error of decoding \(m_{1}\) is bounded by
\[\epsilon_{1}\leq\mathbb{P}[\mathcal{E}_{1,2}]+\mathbb{P}[\mathcal{E}_{1}| \mathcal{E}_{1,2}^{c}]. \tag{27}\]
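To illustrate the decision rule (25), the sketch below scores every candidate pair with Gaussian information densities; using i.i.d. Gaussian surrogate output distributions and folding the other user's signal into the effective noise variance of the overlap phase are simplifying assumptions of this sketch:

```python
import numpy as np

def info_density(y, x, noise_var, out_var):
    """log f(y|x)/Q(y) in nats, for y = x + z, z ~ N(0, noise_var I),
    with an i.i.d. Gaussian surrogate output Q = N(0, out_var I)."""
    n = len(y)
    return (0.5 * n * np.log(out_var / noise_var)
            - np.sum((y - x) ** 2) / (2 * noise_var)
            + np.sum(y ** 2) / (2 * out_var))

def decode_m1(y11, y12, cb11, cb12, s11, s12_eff, b11, b12, P):
    """Return the m1 maximizing the sum of densities over all (m1, l1).
    cb11[m1] is X_{1,1}(m1); cb12[m1][l1] is X_{1,2}(m1, l1).
    s12_eff is the effective noise variance of the overlap phase
    (channel noise plus the other user's signal)."""
    best, best_m1 = -np.inf, None
    for m1, x11 in enumerate(cb11):
        for x12 in cb12[m1]:
            val = (info_density(y11, x11, s11, b11 * P + s11)
                   + info_density(y12, x12, s12_eff, b12 * P + s12_eff))
            if val > best:
                best, best_m1 = val, m1
    return best_m1

# Toy usage with two messages, one auxiliary index each.
rng = np.random.default_rng(2)
cb11 = [rng.standard_normal(8) for _ in range(2)]
cb12 = [[rng.standard_normal(4)] for _ in range(2)]
y11 = cb11[1] + 0.1 * rng.standard_normal(8)
y12 = cb12[1][0] + 0.1 * rng.standard_normal(4)
print(decode_m1(y11, y12, cb11, cb12, 0.01, 0.01, 1.0, 1.0, 1.0))  # -> 1
```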
#### III-C2 Decoding \(m_{2}\)
Given observations \(\mathbf{Y}_{2,1}\) and \(\mathbf{Y}_{2,2}\), Rx \(2\) estimates \(m_{2}\) according to the pair \((\hat{m}_{2},\hat{\ell}_{2})\), such that the corresponding sequences \(\mathbf{X}_{2,1}(\hat{m}_{2},\hat{\ell}_{2})\) and \(\mathbf{X}_{2,2}(\hat{m}_{2})\) maximize
\[i_{2}\left(\{\mathbf{x}_{2,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{2,j}\}_{j\in\{1,2\}} \right):=\log\prod_{j=1,2}\frac{f_{\mathbf{Y}_{2,j}|\mathbf{X}_{2,j}}(\mathbf{y}_{2,j}|\bm {x}_{2,j})}{f_{\mathbf{Y}_{2,j}}(\mathbf{y}_{2,j})} \tag{28}\]
over all pairs of \(\mathbf{x}_{2,1}\in C_{\mathbf{X}_{2,1}}\) and \(\mathbf{x}_{2,2}\). We have the following error event while decoding \(m_{2}\):
\[\mathcal{E}_{2}\triangleq\{\text{Rx }2\text{ chooses a message }\hat{m}_{2}\neq m_{2}\}. \tag{29}\]
Thus the average error of decoding \(m_{2}\) is bounded by
\[\epsilon_{2}\leq\mathbb{P}[\mathcal{E}_{1,2}]+\mathbb{P}[\mathcal{E}_{2}| \mathcal{E}_{1,2}^{c}]. \tag{30}\]
**Remark 1**: _To improve the finite blocklength performance, all the codewords are uniformly distributed on the power shell. According to Shannon's observation, the optimal decay of the probability of error near capacity of the point-to-point Gaussian channel is achieved by codewords on the power-shell [31]. As a result of this code construction, the induced output distributions \(f_{\mathbf{Y}_{i,j}}(\mathbf{y}_{i,j})\), with \(i=1,2\), \(j=1,2\), \(f_{\mathbf{Y}_{1,2}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})\) and \(f_{\mathbf{Y}_{2,1}|\mathbf{X}_{2,1}}(\mathbf{y}_{2,1}|\mathbf{x}_{2,1})\) are non-i.i.d.; thus we propose to bound the corresponding information density measure \(i_{i}\left(\{\mathbf{x}_{i,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{i,j}\}_{j\in\{1,2\}}\right)\), for each \(i\in\{1,2\}\), by_
\[\tilde{i}_{i}\left(\{\mathbf{x}_{i,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{i,j}\}_{j\in\{1,2\} }\right):=\log\frac{f_{\mathbf{Y}_{i,i}|\mathbf{X}_{i,i}}(\mathbf{y}_{i,i}|\mathbf{x}_{i,i}) \mathcal{Q}_{\mathbf{Y}_{i,j\neq i}|\mathbf{X}_{i,j\neq i}}(\mathbf{y}_{i,j\neq i}|\mathbf{x}_ {i,j\neq i})}{\prod_{j=1}^{2}Q_{\mathbf{Y}_{i,j}}(\mathbf{y}_{i,j})}, \tag{31}\]
_where the \(Q\)s are i.i.d Gaussian distributions. For each \(i\in\{1,2\}\), we thus introduce a parameter \(J_{i}\), chosen such that_
\[i_{i}\left(\{\mathbf{x}_{i,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{i,j}\}_{j\in\{1,2\}}\right) \geq\tilde{i}_{i}\left(\{\mathbf{x}_{i,j}\}_{j\in\{1,2\}};\{\mathbf{y}_{i,j}\}_{j\in\{1,2\}}\right)+\log J_{i}. \tag{32}\]
## IV Main Results
Define \(n_{2,1}=n_{1,2}\). For each \(i\in\{1,2\}\) let \(j\in\{1,2\}\backslash i\), and define
\[\Omega_{i,i} :=\frac{\beta_{i,i}\mathsf{P}}{\sigma_{i,i}^{2}}, \tag{33a}\] \[\Omega_{i,j} :=\frac{(\beta_{i,j}+\rho^{2}\beta_{j,i}+\rho\sqrt{\beta_{i,j} \beta_{j,i}})\mathsf{P}}{\sigma_{i,j}^{2}+(1-\rho^{2})\beta_{j,i}\mathsf{P}},\] (33b) \[J_{i} :=e^{\frac{n_{i,j}}{2}(\log 2-\beta_{j,i}\mathsf{P})}\frac{4\sqrt{ \pi\beta_{i,j}\beta_{j,i}}(1+2\Omega_{i,i})}{243(1+\Omega_{i,i})(\beta_{i,j}+ \beta_{j,i})}. \tag{33c}\]
Our first result gives upper bounds on the achievable rates of the first and the second message.
**Theorem 1**: _Given \(n_{1,1},n_{1,2},n_{2,2}\) and \(\mathsf{P}\), for each \(i\in\{1,2\}\), let \(j\in\{1,2\}\backslash i\), \(\log L_{i}:=n_{i,j}R_{L_{i}}\) and \(\log M_{i}:=R_{i}\sum_{j=1}^{2}n_{i,j}\). For each \(i\in\{1,2\}\), we then have the following upper bound:_
\[\log M_{i}+\log L_{i}\leq\sum_{j=1}^{2}n_{i,j}C\left(\Omega_{i,j} \right)-\sqrt{\sum_{j=1}^{2}n_{i,j}V\left(\Omega_{i,j}\right)}\mathbb{Q}^{-1} \left(\epsilon_{i}-\Delta_{i}\right)+K_{i}\log\left(\sum_{j=1}^{2}n_{i,j}\right) \tag{34}\]
_subject to_
\[\sum_{i=1}^{2}\log L_{i}\geq\log\frac{\log\left(\epsilon_{1,2}\right)}{\log\left(1-(1-\rho^{2})^{n_{1,2}-1}\right)}, \tag{35}\]
where \(C(x):=\frac{1}{2}\log(1+x)\) and \(V(x):=\frac{x(2+x)}{2(1+x)^{2}}\log^{2}e\) with
\[\Delta_{i}:=\frac{6T_{\max,i}}{\sqrt{(\sum_{j=1}^{2}n_{i,j}V(\Omega_{i,j}))^{3}} }+\frac{\prod_{j=1}^{2}(1+\Omega_{i,j})^{\frac{n_{i,j}}{2}}}{2^{E_{\min,i}}J_{i }\left(\sum_{j=1}^{2}n_{i,j}\right)^{-K_{i}}}+\left(1-\left(1-\rho^{2}\right)^{ n_{i,j}-1}\right)^{L_{1}\cdot L_{2}}, \tag{36}\]
where \(T_{\max,i}\) and \(E_{\min,i}\) are defined in (37) and \(\Phi(\cdot,\cdot,\cdot)\) is the Hurwitz Lerch transcendent, \(\Gamma_{i,i}:=\Gamma(0.5n_{i,i}+0.5)/\Gamma(0.5n_{i,i})\), \(\Gamma_{i,j}:=\Gamma(0.5n_{i,j}+0.5)/\Gamma(0.5n_{i,j})\), \(\tilde{\beta}_{\rm c}:=\beta_{1,2}+\beta_{2,1}+\sqrt{\beta_{1,2}\beta_{2,1}}\), and for any \(\kappa_{i}>1\), \(\zeta_{i}>1\) and \(\tilde{\zeta}_{i}>1\).
Proof: See Appendix A.
**Proposition 1**: _For sufficiently large \(n_{1,1},n_{1,2},n_{2,2}\), and for \(\rho>0\), we have_
\[\log M_{i}+\log L_{i}\leq\sum_{j=1}^{2}n_{i,j}C\left(\Omega_{i,j}\right)-\sqrt{\sum_{j=1}^{2}n_{i,j}V\left(\Omega_{i,j}\right)}\,\mathbb{Q}^{-1}\left(\epsilon_{i}\right)+O\left(\log\left(\sum_{j=1}^{2}n_{i,j}\right)\right) \tag{38}\]
_subject to_
\[\sum_{i=1}^{2}\log L_{i}\geq\log(-\log\epsilon_{1,2})-n_{1,2}\log (1-\rho^{2}). \tag{39}\]
Proof: See Appendix B.
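Numerically, the second-order expression (38) and the constraint (39) are straightforward to evaluate; a sketch with illustrative blocklengths, SNRs \(\Omega_{i,j}\), and error targets (rates in bits, \(O(\cdot)\) terms dropped):

```python
import math
from statistics import NormalDist

def C(x):  # capacity, bits per channel use
    return 0.5 * math.log2(1 + x)

def V(x):  # dispersion, bits^2 per channel use
    return x * (2 + x) / (2 * (1 + x) ** 2) * math.log2(math.e) ** 2

def second_order_bits(n_ii, n_ij, om_ii, om_ij, eps):
    """log M_i + log L_i from (38), dropping the O(log) term."""
    Qinv = NormalDist().inv_cdf(1 - eps)
    mean = n_ii * C(om_ii) + n_ij * C(om_ij)
    disp = math.sqrt(n_ii * V(om_ii) + n_ij * V(om_ij))
    return mean - disp * Qinv

print(second_order_bits(60, 40, 1.0, 0.5, 1e-3))

# Minimum total auxiliary-codebook size from (39), in bits:
n12, rho, eps12 = 40, 0.2, 1e-6
print(math.log2(-math.log(eps12)) - n12 * math.log2(1 - rho ** 2))
```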
## V Discussion on the Main Results and Related Works
In this section, we review related settings that can be covered by Theorem 1.
\[T_{\max,i} := 2^{3}\kappa_{i}+2^{4}\max\left\{\frac{1}{\Gamma\left(\frac{n_{i, i}}{2}\right)}\zeta_{i}2^{\frac{n_{i,i}-2}{4}}e^{-c_{i}}A(n_{i,i},k_{i},b_{i},c_{i}),\frac{1}{\Gamma\left(\frac{n_{i,j}}{2}\right)}\tilde{\zeta}_{i}2^{\frac{n_{ i,j}-2}{4}}e^{-\tilde{c}_{i}}A(n_{i,j},\tilde{k}_{i},\tilde{b}_{i},\tilde{c}_{i}) \right\}, \tag{37a}\] \[E_{\min,i} := \sum_{j=1}^{2}\frac{n_{i,j}}{2}\log(1+\Omega_{i,j})+\frac{(1- \sigma_{i,i}^{2})n_{i,i}\Omega_{i,i}}{2(1+\Omega_{i,i})}-\sqrt{2}\sigma_{i,i}^{ 2}\Gamma_{i,i}k_{i}b_{i}-\frac{n_{i,j}\sigma_{i,j}^{2}\beta_{\rm c}\mathsf{P} }{2(\sigma_{i,j}^{2}+\beta_{\rm c}\mathsf{P})}-\frac{\beta_{\rm c}\mathsf{P} ^{2}n_{i,j}\beta_{j,i}}{2\sigma_{i,j}^{2}(\beta_{\rm c}\mathsf{P}+\sigma_{i,j }^{2})}\] (37b) \[+ \frac{\beta_{\rm c}\mathsf{P}n_{i,j}\beta_{j,i}}{2\tilde{\beta}_ {\rm c}\sigma_{i,j}^{2}(\beta_{\rm c}\mathsf{P}+\sigma_{i,j}^{2})}\left(\sigma_ {i,j}^{2}+(\beta_{\rm c}\mathsf{P}-\sigma_{i,j}^{2})\left(\sqrt{\tilde{\beta}_ {\rm c}\beta_{\rm c}^{-1}}-1-\rho\sqrt{\beta_{j,i}\beta_{j,i}^{-1}}\right)^{2} \frac{\sqrt{2}\mathsf{P}\tilde{\beta}_{\rm c}\Gamma_{i,j}}{\sqrt{n_{i,j}\beta_ {j,i}}}\right)\] \[+ \sqrt{n_{i,j}\beta_{i,j}\mathsf{P}}\left(\sqrt{\beta_{\rm c}\tilde {\beta}_{\rm c}^{-1}}\left(\frac{1}{\sigma_{i,j}^{2}+\beta_{\rm c}\mathsf{P}}+1 +\rho\sqrt{\beta_{j,i}\beta_{i,j}^{-1}}\right)-\frac{1}{\sigma_{i,j}^{2}} \right)\left(\rho\sqrt{n_{i,j}\beta_{j,i}\mathsf{P}}-\sigma_{i,j}^{2}\sqrt{2} \Gamma_{i,j}\right),\] \[A(n,k,b,c) := \left((\tilde{\kappa}+1)^{3}-\frac{k^{3}b^{6}}{8}-\frac{3}{2}( \tilde{\kappa}+1)kb^{2}(\tilde{\kappa}+1-\frac{1}{2}kb^{2})\right)\Phi(e^{-1},- \frac{n}{2}+1,c)\] (37c) \[- 3kb\left(5k^{2}b^{4}+4(\tilde{\kappa}+1)((\tilde{\kappa}+1)-3kb^{ 2})\right)\Phi(e^{-1},-\frac{n}{2},c)-96\sqrt{2}k^{3}b\Phi(e^{-1},-\frac{n}{2} -\frac{3}{2},c)\] \[- 3kb\left(\frac{k^{2}b^{4}}{\sqrt{2}}+2\sqrt{2}(\tilde{\kappa}+1)b ((\tilde{\kappa}+1)-b^{2}k)\right)\Phi(e^{-1},-\frac{n}{2}+\frac{1}{2},c)-64k^{ 3}\Phi(e^{-1},-\frac{n}{2}-2,c)\] \[+ 8\sqrt{2}k^{2}b(6(\tilde{\kappa}+1)-5kb^{2})\Phi(e^{-1},-\frac{n}{ 2}-\frac{1}{2},c)+24k(2(\tilde{\kappa}+1)^{2}-5k^{2}b^{2})\Phi(e^{-1},-\frac{n} {2}-1,c),\] \[\tilde{\kappa}_{i} := \frac{k_{i}b_{i}^{2}}{4}+\frac{\tilde{k}_{i}\tilde{b}_{i}^{2}}{4}- \frac{n_{i,j}}{2}\log(1+\Omega_{i,i})-\frac{n_{i,j}}{2}\log\frac{\beta_{\rm c} \mathsf{P}+\sigma_{i,j}^{2}}{\sigma_{i,j}^{2}}-\frac{n_{i,i}\Omega_{i,i}}{2(1+ \Omega_{i,i})}\] (37d) \[- \tilde{k}_{i}\tilde{b}_{i}-\frac{n_{i,j}\beta_{i,j}\mathsf{P}}{2( \beta_{\rm c}\mathsf{P}+\sigma_{i,j}^{2})}+\frac{\sigma_{i,j}^{2}}{2}\left( \tilde{k}_{i}(\tilde{b}_{i}-\sqrt{n_{i,j}\beta_{j,i}\mathsf{P}})-\frac{\sqrt{n _{i,j}\beta_{j,i}\mathsf{P}}}{\beta_{\rm c}\mathsf{P}+\sigma_{i,j}^{2}} \right)^{2},\] \[c_{i} := \frac{1}{2}\left(\sqrt{\frac{\tilde{\kappa}_{i}+\kappa_{i}}{2k_{i }}}-\frac{b_{i}}{2}\right)^{2},\,\tilde{c}_{i}:=\frac{1}{2}\left(\sqrt{\frac{ \tilde{\kappa}_{i}+\kappa_{i}}{2\tilde{k}_{i}}}-\frac{\tilde{b}_{i}}{2}\right)^ {2},\,k_{i}:=\frac{2+\Omega_{i,i}}{2\sigma_{i,i}^{2}(1+\Omega_{i,i})},\, \tilde{k}_{i}:=\frac{2\sigma_{i,j}^{2}+\beta_{\rm c}\mathsf{P}}{2\sigma_{i,j}^{2 }(\beta_{\rm c}\mathsf{P}+\sigma_{i,j}^{2})},\] (37e) \[b_{i} := \frac{\sqrt{n_{i,i}\Omega_{i,i}}}{k_{i}\sigma_{i,i}(1+\Omega_{i,i} )},\,\,\,\,\,\,\,\,\tilde{b}_{i}:=\sqrt{n_{i,j}\beta_{j,i}\mathsf{P}}+\frac{ \sqrt{n_{i,j}\beta_{i,j}\mathsf{P}}}{\tilde{k}_{i}\sigma_{i,j}^{2}}\left(1- \sqrt{\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}}\left(1+\rho\sqrt{\frac{ 
\beta_{j,i}}{\beta_{i,j}}}\right)\right)+\frac{\sqrt{n_{i,j}\beta_{i,j}\mathsf{ P}}}{\tilde{k}_{i}(\beta_{\rm c}\mathsf{P}+\sigma_{i,j}^{2})}. \tag{37f}\]
### _Point-to-Point Settings_
#### V-A1 Transmission of only \(m_{1}\) over \(n_{1,1}\) channel uses
In this setting, we have only the following channel outputs:
\[\boldsymbol{Y}_{1,1}=\boldsymbol{X}_{1,1}+\boldsymbol{Z}_{1,1} \tag{40}\]
with \(||\boldsymbol{X}_{1,1}||^{2}=n_{1,1}\mathsf{P}\) and \(\boldsymbol{Z}_{1,1}\sim\mathcal{N}(0,\sigma_{1,1}^{2}I_{n_{1,1}})\). Set
\[\Omega_{1,1}=\frac{\mathsf{P}}{\sigma_{1,1}^{2}}. \tag{41}\]
**Theorem 2** (P2P, Only \(m_{1}\)): _Given \(n_{1,1}\) and \(\mathsf{P}\), \(\log M_{1}:=n_{1,1}R_{1}\) is upper bounded as_
\[\log M_{1}\leq n_{1,1}C\left(\Omega_{1,1}\right)-\sqrt{n_{1,1}V\left(\Omega_{ 1,1}\right)}\mathbb{Q}^{-1}\left(\epsilon_{1}-\Delta_{1}\right)+K_{1}\log \left(n_{1,1}\right), \tag{42}\]
where
\[\Delta_{1}:=\frac{6T_{\max,1}}{\sqrt{(n_{1,1}V(\Omega_{1,1}))^{3}}}+\frac{n_{ 1,1}^{K_{1}}(1+\Omega_{1,1})^{\frac{n_{1,1}}{2}}}{J_{1}\cdot 2^{E_{\min,1}}}, \tag{43}\]
and by [32, Proposition 2]:
\[J_{1}:=\frac{\sqrt{8(\sigma_{1,1}^{2}+2\mathsf{P})}}{27\sqrt{\pi}(\sigma_{1,1 }^{2}+\mathsf{P})} \tag{44}\]
with \(K_{1}\) a constant. Notice that \(T_{\max,1}\) and \(E_{\min,1}\) can be easily calculated from (37a) and (37b) by setting \(i=1\) and \(n_{i,j}=0\).
Proof:: The proof of this result follows the proof of Theorem 1 by setting
\[n_{1,2}=n_{2,1}=n_{2,2}=0\quad\text{and}\quad\beta_{1,1}=1. \tag{45}\]
_Remark 2:_ By choosing \(K_{1}\) such that \(\Delta_{1}=O(n_{1,1}^{-\frac{3}{2}})\), the bound in (42) matches the Normal Approximation bound in [9, Eq. 223] for Gaussian channels.
We now make a more specific comparison with the work of Polyanskiy, Poor and Verdu in [9]. To this end, note that the authors of [9] show that, for both discrete memoryless channels and Gaussian channels, the Normal Approximation achieves
Fig. 2: Converse and achievability bounds on \(R_{1}\) versus \(n_{1,1}\) in the point-to-point case with \(\Omega_{1,1}=0\) dB and \(\epsilon_{1}=10^{-3}\). The value of \(n_{1,1}\) varies from \(70\) to \(2000\) with a step size of \(10\).
\[\log M_{1}=n_{1,1}C\left(\Omega_{1,1}\right)-\sqrt{n_{1,1}V\left(\Omega_{1,1} \right)}\mathbb{Q}^{-1}\left(\epsilon_{1}\right)+O\left(\log\left(n_{1,1} \right)\right). \tag{46}\]
See [9, Eq. 223]. The authors then show that, for Gaussian channels with equal-power and maximal-power constraints, the \(O\left(\log\left(n_{1,1}\right)\right)\) term can be bounded as
\[O\left(\log\left(n_{1,1}\right)\right)\leq\frac{1}{2}\log n_{1,1}+O(1), \tag{47}\]
and with the average power constraint as
\[O\left(\log\left(n_{1,1}\right)\right)\leq\frac{3}{2}\log n_{1,1}+O(1). \tag{48}\]
See [9, Eq. 294 and Eq. 295]. To compare our achievability bound in (42) with the Normal Approximation, Fig. 2 illustrates the bound in (42) for \(K_{1}=0.5\) and \(K_{1}=1\), as well as the Normal Approximation with \(O\left(\log\left(n_{1,1}\right)\right)\) equal to \(0\) and \(0.5\log n_{1,1}\). The converse bound is the meta-converse bound proposed in [9, Theorem 41]. As can be seen from this figure, our achievability scheme with \(K_{1}=1\) matches the converse bound for large values of \(n_{1,1}\) and improves on the Normal Approximation, especially for values of \(n_{1,1}\) smaller than \(200\).
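The curves of Fig. 2 are driven by the first three terms of (42). The following minimal Python sketch evaluates them; it assumes logarithms in bits and neglects the correction \(\Delta_{1}\), which (43) makes small at the plotted blocklengths, so it is an illustration rather than the full bound:

```python
import numpy as np
from scipy.stats import norm

def C(omega):
    # Gaussian capacity C(x) = 0.5*log2(1+x), bits per channel use
    return 0.5 * np.log2(1.0 + omega)

def V(omega):
    # Gaussian dispersion V(x) = x(2+x)/(2(1+x)^2) * (log2 e)^2
    return omega * (2.0 + omega) / (2.0 * (1.0 + omega) ** 2) * np.log2(np.e) ** 2

def rate_42(n, omega, eps, K1):
    # leading terms of (42), normalized by n; norm.isf(eps) is Q^{-1}(eps)
    return C(omega) - np.sqrt(V(omega) / n) * norm.isf(eps) + K1 * np.log2(n) / n

omega = 10 ** (0.0 / 10.0)   # 0 dB, as in Fig. 2
for n in (70, 200, 500, 2000):
    print(n, rate_42(n, omega, 1e-3, K1=1.0), rate_42(n, omega, 1e-3, K1=0.5))
```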
#### IV-B2 Transmission of only \(m_{1}\) over two parallel channels of \(n_{1,1}\) and \(n_{1,2}\) channel uses
In this setting, we have the following two channel outputs:
\[\boldsymbol{Y}_{1,1}=\boldsymbol{X}_{1,1}+\boldsymbol{Z}_{1,1},\quad \boldsymbol{Y}_{1,2}=\boldsymbol{X}_{1,2}+\boldsymbol{Z}_{1,2}, \tag{49}\]
with \(||\boldsymbol{X}_{1,1}||^{2}=n_{1,1}\beta_{1,1}\mathsf{P}\), \(||\boldsymbol{X}_{1,2}||^{2}=n_{1,2}\beta_{1,2}\mathsf{P}\), and \(\boldsymbol{Z}_{1,1}\sim\mathcal{N}(0,\sigma_{1,1}^{2}I_{n_{1,1}})\) and \(\boldsymbol{Z}_{1,2}\sim\mathcal{N}(0,\sigma_{1,2}^{2}I_{n_{1,2}})\). The power sharing parameters \(\beta_{1,1}\in[0,1]\) and \(\beta_{1,2}\in[0,1]\) are chosen such that
\[\beta_{1,1}n_{1,1}+\beta_{1,2}n_{1,2}=n. \tag{50}\]
Set
\[\Omega_{1,1}=\frac{\beta_{1,1}\text{P}}{\sigma_{1,1}^{2}}\quad\text{and}\quad \Omega_{1,2}=\frac{\beta_{1,2}\text{P}}{\sigma_{1,2}^{2}}. \tag{51}\]
**Theorem 3** (P2P, Parallel Channels, Only \(m_{1}\)): _Given \(n_{1,1},n_{1,2},\beta_{1,1},\beta_{1,2}\) and P, \(\log M_{1}:=R_{1}(n_{1,1}+n_{1,2})\) is upper bounded as_
\[\log M_{1}\leq\sum_{j=1}^{2}n_{1,j}C\left(\Omega_{1,j}\right)-\sqrt{\sum_{j=1}^{2}n_{1,j}V\left(\Omega_{1,j}\right)}\mathbb{Q}^{-1}\left(\epsilon_{1}-\Delta_{1}\right)+K_{1}\log\left(\sum_{j=1}^{2}n_{1,j}\right), \tag{52}\]
_where_
\[\Delta_{1}:=\frac{6T_{\max,1}}{\sqrt{\left(\sum_{j=1}^{2}n_{1,j}V(\Omega_{1,j})\right)^{3}}}+\frac{\left(\sum_{j=1}^{2}n_{1,j}\right)^{K_{1}}\prod_{j=1}^{2}(1+\Omega_{1,j})^{\frac{n_{1,j}}{2}}}{J_{1}\cdot 2^{E_{\min,1}}} \tag{53}\]
_with_
\[J_{1}:=\frac{\sqrt{8(\sigma_{1,1}^{2}+2\beta_{1,1}\text{P})}}{27\sqrt{\pi}( \sigma_{1,1}^{2}+\beta_{1,1}\text{P})}\cdot\frac{\sqrt{8(\sigma_{1,2}^{2}+2 \beta_{1,2}\text{P})}}{27\sqrt{\pi}(\sigma_{1,2}^{2}+\beta_{1,2}\text{P})} \tag{54}\]
_and \(K_{1}\) a constant. Notice that \(T_{\max,1}\) and \(E_{\min,1}\) can be easily calculated from (37a) and (37b) by setting \(i=1\), \(\beta_{2,1}=0\) and \(\beta_{\rm c}=\beta_{1,2}\)._
Proof:: The proof follows the proof of Theorem 1 by setting
\[n_{2,2}=0,\quad\beta_{2,1}=0,\quad\text{and}\quad\beta_{2,2}=0. \tag{55}\]
**Remark 3**: _By choosing \(K_{1}\) such that \(\Delta_{1}=O(n_{1,1}^{-\frac{3}{2}})\) and \(n_{1,1}=n_{1,2}\), the bound in (52) matches the Normal Approximation bound in [15, Eq. 54] for two parallel additive white Gaussian noise (AWGN) channels. The setting can easily be extended to more than two parallel channels._
We now make a more detailed comparison with the work of Erseghe in [15]. The leading idea of [15] is that a probability of the form \(\mathbb{P}[\sum_{i=1}^{n}u_{i}\geq n\tilde{\lambda}]\), where the \(u_{i}\)s are i.i.d. continuous random variables, can be written and numerically evaluated using standard Laplace transforms. The author further employs this idea to propose new asymptotic approximations for the meta-converse and random-coding union (RCU) achievability bounds for parallel AWGN channels. In [15, Theorem 10] the author shows that, for \(K\) parallel AWGN channels, each of \(\frac{n}{K}\) channel uses, the meta-converse bound proposed by Polyanskiy, Poor and Verdu in [9, Theorem 41] is consistent with the normal approximation (1) with
\[V=\frac{1}{K}\sum_{k=1}^{K}\frac{\Omega_{k}(2+\Omega_{k})}{2(1+\Omega_{k})^{2} }\quad\text{and}\quad C=\frac{1}{K}\sum_{k=1}^{K}\frac{1}{2}\log(1+\Omega_{k}), \tag{56}\]
where \(\Omega_{k}\) is the SNR of the \(k\)-th channel. We now numerically compare our bound in (52) with the normal approximation bound in [15] for the case of \(2\) parallel channels. Since in [15] all \(K\) channels are of equal blocklength, we assume \(n_{1,1}=n_{1,2}\) and \(\Omega_{1,1}=\nu\cdot\Omega_{1,2}\) for some \(\nu>0\). Fig. 3 illustrates this comparison for \(\Omega_{1,1}=20\) dB, \(\nu=0.5\) and \(\nu=1\), \(\epsilon_{1}=10^{-6}\) and \(K_{1}=1.5\). In the case where the channels have equal SNRs, i.e., \(\nu=1\), the Normal Approximation for \(K\) parallel channels, each of blocklength \(\frac{n}{K}\), is equivalent to the Normal Approximation for one channel of blocklength \(n\). As can be seen from Fig. 3, for the case of \(\nu=1\), the Normal Approximation is not a good approximation of the meta-converse bound. Moreover, our achievability bound in (52) with \(K_{1}=1.5\) outperforms the normal approximation for \(\nu=1\) and \(\nu=0.5\), especially at small values of \(n=n_{1,1}+n_{1,2}\).
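Equation (56) is straightforward to evaluate numerically. The sketch below, a rough companion to this comparison (assuming rates in bits and \(\Omega_{1,2}=\nu\cdot\Omega_{1,1}\), our reading of the setup), computes the resulting normal-approximation rate for \(K\) parallel channels:

```python
import numpy as np
from scipy.stats import norm

def normal_approx_parallel(n_per, snrs, eps):
    # eq. (56): average capacity C and dispersion V over the K parallel channels
    K = len(snrs)
    C = np.mean([0.5 * np.log2(1 + o) for o in snrs])
    V = np.mean([o * (2 + o) / (2 * (1 + o) ** 2) for o in snrs]) * np.log2(np.e) ** 2
    return C - np.sqrt(V / (K * n_per)) * norm.isf(eps)   # bits per channel use

omega11 = 10 ** (20 / 10)     # 20 dB, as in the comparison above
for nu in (1.0, 0.5):
    print(nu, normal_approx_parallel(100, [omega11, nu * omega11], eps=1e-6))
```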
### _Broadcast Setting: Marton's Bounds_
In this section, we compare our results with Marton's inner bound in [30] proposed for a general two-receiver broadcast channel. In [30], the Tx wishes to transmit message \(m_{1}\) to Rx \(1\) and message \(m_{2}\) to Rx \(2\) each over \(n\) channel uses. The Tx thus encodes \(m_{1}\) using the codeword \(\mathbf{U}_{1}^{n}(m_{1})\) and encodes \(m_{2}\) using the codeword \(\mathbf{U}_{2}^{n}(m_{2})\) where \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\) are distributed according to \(f_{\mathbf{U}_{1}\mathbf{U}_{2}}(\mathbf{u}_{1},\mathbf{u}_{2})\) and are correlated with a correlation parameter \(\rho\) such that
\[\langle\mathbf{U}_{1},\mathbf{U}_{2}\rangle=n\rho\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}. \tag{57}\]
With the codewords \(\mathbf{U}_{1}\) and \(\mathbf{U}_{2}\), the following codeword is formed:
\[\mathbf{X}=\mathbf{U}_{1}+\mathbf{U}_{2}, \tag{58}\]
and is transmitted to Rx \(1\) over the channel \(f_{\mathbf{Y}_{1}|\mathbf{X}}\) and to Rx \(2\) over the channel \(f_{\mathbf{Y}_{2}|\mathbf{X}}\). Here, \(\mathbf{Y}_{1}\) and \(\mathbf{Y}_{2}\) denote the output sequences of the first and second channels, respectively.
Let \(R_{1}:=\log M_{1}/n\) and \(R_{2}:=\log M_{2}/n\), then the following inner bounds hold for this setting:
**Theorem 4** (Marton's bound): _An inner bound on the capacity region \(\mathcal{C}\) of this setting is given by the set of rate pairs \((R_{1},R_{2})\) satisfying_
\[R_{1} \leq I(\mathbf{U}_{1};\mathbf{Y}_{1}), \tag{59a}\] \[R_{2} \leq I(\mathbf{U}_{2};\mathbf{Y}_{2}),\] (59b) \[R_{1}+R_{2} \leq I(\mathbf{U}_{1};\mathbf{Y}_{1})+I(\mathbf{U}_{2};\mathbf{Y}_{2})-I(\mathbf{U}_{1}; \mathbf{U}_{2}). \tag{59c}\]
_To compare this setup with ours, let \(m_{1}\) and \(m_{2}\) be jointly sent over the entire \(n\) channel uses. Rx \(1\) observes the channel outputs \(\mathbf{Y}_{1,1}\) and Rx \(2\) observes the channel outputs \(\mathbf{Y}_{2,2}\) given by_
\[\mathbf{Y}_{1,1}=\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,1},\quad\mathbf{Y}_{2,2}= \alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{2,2} \tag{60}\]
where \(\mathbf{X}_{1,2}\sim\mathcal{N}(0,\beta_{1,2}\text{P}I_{n})\) and \(\mathbf{X}_{2,1}\sim\mathcal{N}(0,\beta_{2,1}\text{P}I_{n})\), and where \(\beta_{1,2}\) and \(\beta_{2,1}\) are chosen such that
\[\beta_{1,2}+\beta_{2,1}+\rho\sqrt{\beta_{1,2}\beta_{2,1}}=1. \tag{61}\]
For this Gaussian case, Theorem 4 can be written as the following proposition.
**Proposition 2**: _Let \(R_{1}:=\frac{\log M_{1}}{n}\), \(R_{2}:=\frac{\log M_{2}}{n}\), \(R_{L_{1}}:=\frac{\log L_{1}}{n}\), \(R_{L_{2}}:=\frac{\log L_{2}}{n}\), \(\mathbf{X}_{1,2}\sim\mathcal{N}(0,\beta_{1,2}\text{P}I_{n})\) and \(\mathbf{X}_{2,1}\sim\mathcal{N}(0,\beta_{2,1}\text{P}I_{n})\), we have the following inner bounds:_
\[R_{1}+R_{L_{1}}\leq\frac{1}{2}\log\left(\frac{\sigma_{1,1}^{2}+ \text{P}}{\sigma_{1,1}^{2}+(1-\rho^{2})\beta_{2,1}\text{P}}\right)\] (62a) (62b)
_and_
\[R_{2}+R_{L_{2}}\leq\frac{1}{2}\log\left(\frac{\sigma_{2,2}^{2}+ \text{P}}{\sigma_{2,2}^{2}+(1-\rho^{2})\beta_{1,2}\text{P}}\right) \tag{62c}\]
_subject to_
\[R_{L_{1}}+R_{L_{2}}\geq-\frac{1}{2}\log(1-\rho^{2}). \tag{63}\]
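The Gaussian Marton region of Proposition 2 is easy to explore numerically. A minimal sketch (the power, noise variances, power split and \(\rho\) below are illustrative choices, not values from the proposition) evaluates (62a), (62c) and the binning constraint (63):

```python
import numpy as np

def marton_gaussian(P, s11, s22, rho, b12, b21):
    # RHS of (62a) and (62c), plus the minimum binning sum-rate from (63), in bits
    r1 = 0.5 * np.log2((s11 + P) / (s11 + (1 - rho ** 2) * b21 * P))  # R1 + R_L1
    r2 = 0.5 * np.log2((s22 + P) / (s22 + (1 - rho ** 2) * b12 * P))  # R2 + R_L2
    lmin = -0.5 * np.log2(1 - rho ** 2)                               # R_L1 + R_L2
    return r1, r2, lmin

r1, r2, lmin = marton_gaussian(P=10.0, s11=1.0, s22=1.0, rho=0.5, b12=0.4, b21=0.4)
print("R1+R_L1 <=", r1, " R2+R_L2 <=", r2, " sum-rate <=", r1 + r2 - lmin)
```

Subtracting the mandatory binning rate from \(r_{1}+r_{2}\) exposes the trade-off in \(\rho\) discussed next.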
**Remark 4**: _By setting \(n_{1,1}=n_{2,2}=0\), \(n_{1,2}=n\) and_
\[\epsilon_{1,2}=\frac{1}{2^{(1-\rho^{2})^{\frac{n_{1,2}}{2}}}}, \tag{64}\]
_Proposition 1 matches Proposition 2 in the asymptotic regime (i.e., when \(n_{1,2}\rightarrow\infty\)) and when all the codewords are i.i.d Gaussian._
**Remark 5**: _If all the codewords are i.i.d Gaussian, by setting either \(L_{1}\) or \(L_{2}\) to \(0\), then in the asymptotic regime Proposition 1 matches the DPC results of Costa [22] and in the finite blocklength regime matches the results of Scarlett [23, Theorem 2]._
The correlation parameter \(\rho\) is of vital importance in both our scheme and Marton's scheme. This is because increasing \(\rho\) increases the right-hand sides of (34) and (62) as well as the right-hand sides of (35) and (63) of Theorem 1 and Proposition 2, respectively. More specifically, increasing \(\rho\) increases the upper bound on \(\log M_{i}+\log L_{i}\) as well as the lower bound on \(\sum_{i=1}^{2}\log L_{i}\). Hence, increasing \(\rho\) does not always increase the upper bound on \(R_{i}\). Fig. 4 illustrates the effect of increasing \(\rho\) on the upper bounds on \(R_{1}\) and \(R_{2}\) in (34) and on Marton's bounds in Proposition 2. To compare with Marton's bounds, in this figure we set \(n=n_{1,2}\), \(n_{1,1}=n_{2,2}=0\), \(\epsilon_{1,2}=10^{-3}\), \(K_{1}=K_{2}=1.5\), \(\beta_{1,2}=\beta_{2,1}\), \(L_{1}=L_{2}\), \(R_{1}=R_{2}\), and \(\sigma_{1,2}^{2}=\sigma_{2,1}^{2}\). Fig. 4(a) is for the case where \(n=5000\) and \(\text{P}/\sigma_{1,2}^{2}\) is equal to \(10\) dB and \(0\) dB. As can be seen from Fig. 4(a), when \(\text{P}/\sigma_{1,2}^{2}\) is equal to \(10\) dB, the transmission rate under both our scheme and Marton's scheme is maximized around \(\rho=0.9\) and decays for values of \(\rho\) larger than \(0.9\). For \(\text{P}/\sigma_{1,2}^{2}\) equal to \(0\) dB, on the other hand, the transmission rate under both schemes is maximized at \(\rho=0\). A similar trend can be seen for very small values of \(n\); see Fig. 4(b). Notice that the upper bound corresponds to the case where \(R_{L_{1}}\) (or \(R_{L_{2}}\)) is set to \(0\), so that \(R_{1}\) (or \(R_{2}\)) is maximized.

Fig. 4: Effect of the correlation parameter \(\rho\) on the bounds in Theorem 1 and Proposition 2.
### _Time-sharing: Transmission of both \(m_{1}\) and \(m_{2}\)_
In this setting, we divide the \(n_{1,2}\) channel uses into two parts \(\eta n_{1,2}\) and \((1-\eta)n_{1,2}\) for \(\eta\in[0,1]\). Therefore, \(m_{1}\) is transmitted over \(n_{1,1}+\eta n_{1,2}\) channel uses and \(m_{2}\) is transmitted over \((1-\eta)n_{1,2}+n_{2,2}\) channel uses. Transmissions of \(m_{1}\) and \(m_{2}\) are thus independent. In this setting, we have the following two channel outputs:
\[\mathbf{Y}_{1,1}=\mathbf{X}_{1,1}+\mathbf{Z}_{1,1},\quad\mathbf{Y}_{2,2}=\mathbf{X}_{2,2}+\mathbf{Z}_{2,2}, \tag{65}\]
with \(||\mathbf{X}_{1,1}||^{2}=(n_{1,1}+\eta n_{1,2})\beta_{1,1}\text{P}\), \(||\mathbf{X}_{2,2}||^{2}=((1-\eta)n_{1,2}+n_{2,2})\beta_{2,2}\text{P}\), and \(\mathbf{Z}_{1,1}\sim\mathcal{N}(0,\sigma_{1,1}^{2}I_{n_{1,1}+\eta n_{1,2}})\) and \(\mathbf{Z}_{2,2}\sim\mathcal{N}(0,\sigma_{2,2}^{2}I_{(1-\eta)n_{1,2}+n_{2,2}})\). The power sharing parameters \(\beta_{1,1}\in[0,1]\) and \(\beta_{2,2}\in[0,1]\) are chosen such that
\[(n_{1,1}+\eta n_{1,2})\beta_{1,1}+((1-\eta)n_{1,2}+n_{2,2})\beta_{2,2}=n. \tag{66}\]
Set
\[\Omega_{1,1}=\frac{\beta_{1,1}\text{P}}{\sigma_{1,1}^{2}}\quad\text{and}\quad \Omega_{2,2}=\frac{\beta_{2,2}\text{P}}{\sigma_{2,2}^{2}}. \tag{67}\]
**Theorem 5** (Time-Sharing, Both \(m_{1}\) and \(m_{2}\)): _Given \(n_{1,1}\), \(n_{2,2}\), \(\beta_{1,1}\), \(\beta_{2,2}\) and P, \(\log M_{1}:=(n_{1,1}+\eta n_{1,2})R_{1}\) and \(\log M_{2}:=(n_{2,2}+(1-\eta)n_{1,2})R_{2}\) are upper bounded as_
\[\log M_{1}\leq(n_{1,1}+\eta n_{1,2})C\left(\Omega_{1,1}\right)-\sqrt{(n_{1,1} +\eta n_{1,2})V\left(\Omega_{1,1}\right)}\mathbb{Q}^{-1}\left(\epsilon_{1}- \Delta_{1}\right)+\tilde{K}_{1}\log\left((n_{1,1}+\eta n_{1,2})\right), \tag{68}\]
_and_
\[\log M_{2}\leq(n_{2,2}+(1-\eta)n_{1,2})C\left(\Omega_{2,2}\right)-\sqrt{(n_{2, 2}+(1-\eta)n_{1,2})V\left(\Omega_{2,2}\right)}\mathbb{Q}^{-1}\left(\epsilon_{2 }-\Delta_{2}\right)+\tilde{K}_{2}\log\left((n_{2,2}+(1-\eta)n_{1,2})\right), \tag{69}\]
_where_
Fig. 5: Example of comparing our scheme with the time-sharing scheme for \(n=300\) and \(n_{1,2}\) taking values from \(200\) to \(0\) with a step size of \(50\).

\[\Delta_{1}:=\frac{6T_{\max,1}}{\sqrt{\left((n_{1,1}+\eta n_{1,2})V(\Omega_{1,1})\right)^{3}}}+\frac{(n_{1,1}+\eta n_{1,2})^{\tilde{K}_{1}}(1+\Omega_{1,1})^{\frac{(n_{1,1}+\eta n_{1,2})}{2}}}{J_{1}\cdot 2^{E_{\min,1}}} \tag{70}\]

_and_

\[\Delta_{2}:=\frac{6T_{\max,2}}{\sqrt{\left((n_{2,2}+(1-\eta)n_{1,2})V(\Omega_{2,2})\right)^{3}}}+\frac{(n_{2,2}+(1-\eta)n_{1,2})^{\tilde{K}_{2}}(1+\Omega_{2,2})^{\frac{(n_{2,2}+(1-\eta)n_{1,2})}{2}}}{J_{2}\cdot 2^{E_{\min,2}}} \tag{71}\]
with
\[J_{1}:=\frac{\sqrt{8(\sigma_{1,1}^{2}+2\beta_{1,1}\text{P})}}{27 \sqrt{\pi}(\sigma_{1,1}^{2}+\beta_{1,1}\text{P})},\quad J_{2}:=\frac{\sqrt{8( \sigma_{2,2}^{2}+2\beta_{2,2}\text{P})}}{27\sqrt{\pi}(\sigma_{2,2}^{2}+\beta_ {2,2}\text{P})}, \tag{73}\]
with \(\tilde{K}_{1}\) and \(\tilde{K}_{2}\) being constants. Notice that \(T_{\max,1}\) and \(E_{\min,1}\) can be easily calculated from (37a) and (37b) by setting \(i=1\), \(n_{1,2}=0\) and replacing \(n_{1,1}\) by \(n_{1,1}+\eta n_{1,2}\). Similarly, \(T_{\max,2}\) and \(E_{\min,2}\) can be calculated from (37a) and (37b) by setting \(i=2\), \(n_{2,1}=0\) and replacing \(n_{2,2}\) by \(n_{2,2}+(1-\eta)n_{1,2}\).
Proof:: Follows the proof of Theorem 1 by setting
\[n_{1,2}=n_{2,1}=0,\quad\beta_{1,2}=0,\quad\beta_{2,1}=0, \tag{74}\]
and replacing \(n_{1,1}\) by \(n_{1,1}+\eta n_{1,2}\) and \(n_{2,2}\) by \(n_{2,2}+(1-\eta)n_{1,2}\).
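Before turning to the numerical example, the bookkeeping behind (66)-(67) can be summarized in a short sketch (the power and noise values below are illustrative, not taken from the theorem):

```python
P, s11, s22 = 5.0, 1.0, 1.0        # illustrative power and noise variances

def time_sharing_setup(n11, n12, n22, eta, beta11):
    # solve the power-sharing constraint (66) for beta_{2,2}, then return the SNRs (67)
    n = n11 + n12 + n22
    m1 = n11 + eta * n12           # channel uses carrying m1
    m2 = (1 - eta) * n12 + n22     # channel uses carrying m2
    beta22 = (n - m1 * beta11) / m2
    return beta11 * P / s11, beta22 * P / s22, m1, m2

print(time_sharing_setup(n11=100, n12=100, n22=100, eta=0.5, beta11=1.2))
```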
To compare our scheme with the time-sharing scheme we consider a scenario where the total number of available channel uses is equal to \(n=300\). The first message \(m_{1}\) arrives at the beginning of the first channel use (i.e., \(a_{1}=1\)) and has to be decoded by the end of channel use \(200\) (i.e., \(d_{1}=200\)). The second message \(m_{2}\) arrives at time \(a_{2}\in[1:200]\) and has to be decoded by the end of channel use \(300\) (i.e., \(d_{2}=300\)). Fig. 5 illustrates a schematic representation of this example under our scheme (Fig. 5.a) and the time-sharing scheme with \(\eta=0.5\) (Fig. 5.b).
In Fig. 6, we plot the transmission rates \(R_{1}\) and \(R_{2}\) of our scheme and the time-sharing scheme for this example. Each point corresponds to a different value of \(n_{1,2}\) shown in the example, starting from \(n_{1,2}=200\) to \(n_{1,2}=0\) with a step size of 50. At \(n_{1,2}=0\), our scheme coincides with the time-sharing scheme. In this figure, \(\mathsf{P}=5\), \(\rho=0.9\), \(\sigma_{1,1}^{2}=\sigma_{1,2}^{2}=\sigma_{2,1}^{2}=\sigma_{2,2}^{2}=1\), \(K_{1}=K_{2}=\tilde{K}_{1}=\tilde{K}_{2}=0.5\), and the values of the parameters \(\beta_{1,1},\beta_{\text{c}},\beta_{2,2},\beta_{1,2},\beta_{2,1}\) are optimized to obtain the maximum sum transmission rates. In Fig. 6(a), in our scheme \(R_{1}=\log M_{1}/(n_{1,1}+n_{1,2})\) and \(R_{2}=\log M_{2}/(n_{1,1}+n_{2,2})\), and in the time-sharing scheme \(R_{1}=\log M_{1}/(n_{1,1}+\eta n_{1,2})\) and \(R_{2}=\log M_{2}/(n_{1,1}+(1-\eta)n_{1,2})\). In Fig. 6(b), in both schemes \(R_{1}=\log M_{1}/n\) and \(R_{2}=\log M_{2}/n\). As can be seen from this figure, our scheme significantly outperforms the time-sharing scheme.
Fig. 6: Comparison between the transmission rate pairs \((R_{1},R_{2})\) obtained in our scheme and the time-sharing scheme under the example shown in Fig. 5. (a) In our scheme: \(R_{1}=\log M_{1}/(n_{1,1}+n_{1,2})\) and \(R_{2}=\log M_{2}/(n_{1,1}+n_{2,2})\); in the time-sharing scheme: \(R_{1}=\log M_{1}/(n_{1,1}+\eta n_{1,2})\) and \(R_{2}=\log M_{2}/(n_{1,1}+(1-\eta)n_{1,2})\) with \(\eta=0.5\). (b) In both schemes: \(R_{1}=\log M_{1}/n\) and \(R_{2}=\log M_{2}/n\).

An important difference between our scheme and the time-sharing scheme appears when the channels to one receiver are stronger than the channels to the other receiver. Under our scheme, it is possible to adjust the design parameters \(L_{1}\) and \(L_{2}\) such that the rate to the weaker receiver increases while keeping the sum-rate approximately constant. In Theorem 1, the parameters \(L_{1}\) and \(L_{2}\) are required to be chosen such that the condition in (35) is satisfied. Clearly, large values of \(L_{1}\) and \(L_{2}\) reduce the transmission rates \(R_{1}\) and \(R_{2}\), and vice versa. In the analysis provided in the previous Section V-B, \(L_{1}\) and \(L_{2}\) are set to be equal. In this section, we focus on the effect of changing the values of \(L_{1}\) and \(L_{2}\) on the achievable rates \(R_{1}\) and \(R_{2}\). To this end, we introduce a parameter \(\lambda\) as
\[\lambda:=\frac{1}{n_{1,2}}\log\left(\frac{L_{2}}{L_{1}}\right). \tag{75}\]
By (35),
\[\log L_{1}\geq\frac{1}{2}\log\frac{\log\left(\epsilon_{1,2}\right)}{\log \left(1-\left(1-\rho^{2}\right)^{n_{1,2}-1}\right)}-\frac{n_{1,2}\lambda}{2}, \tag{76}\]
and
\[\log L_{2}\geq\frac{1}{2}\log\frac{\log\left(\epsilon_{1,2}\right)}{\log \left(1-\left(1-\rho^{2}\right)^{n_{1,2}-1}\right)}+\frac{n_{1,2}\lambda}{2}. \tag{77}\]
Therefore, increasing \(\lambda\) increases \(L_{2}\) (and decreases \(L_{1}\)), which results in decreasing \(R_{2}\) (and increasing \(R_{1}\)).
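Taking (76) and (77) with equality gives the smallest admissible bin sizes for a given \(\lambda\). The sketch below evaluates them (assuming logarithms in bits; `log1p` keeps the tiny term \((1-\rho^{2})^{n_{1,2}-1}\) from being lost to rounding):

```python
import numpy as np

def bin_sizes(eps12, rho, n12, lam):
    # equality versions of (76)-(77); lam = (1/n12) * log2(L2/L1)
    base = 0.5 * np.log2(np.log(eps12) / np.log1p(-(1.0 - rho ** 2) ** (n12 - 1)))
    return base - n12 * lam / 2.0, base + n12 * lam / 2.0   # (log2 L1, log2 L2)

for lam in (0.0, 0.2, 0.4):
    print(lam, bin_sizes(eps12=1e-7, rho=0.8, n12=60, lam=lam))
```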
To numerically evaluate the effect of \(\lambda\), we consider a case where the channels of the first receiver are weaker than the channels of the second receiver. In Fig. 7, we assume that \(\sigma_{1,1}^{2}=\sigma_{1,2}^{2}=1\) and \(\sigma_{2,1}^{2}=\sigma_{2,2}^{2}=0.1\). In Fig. 7(a), we set \(n_{1,1}=n_{1,2}=n_{2,2}=60\), and in Fig. 7(b) we set \(n_{1,1}=n_{1,2}=n_{2,2}=180\). In this figure, \(\mathsf{P}=1\), \(\epsilon_{1}=\epsilon_{2}=10^{-5}\), \(\epsilon_{1,2}=10^{-7}\), \(K_{1}=K_{2}=0.5\), \(\rho=0.8\), and the values of the power coefficients \(\beta_{1,1},\beta_{c},\beta_{2,2},\beta_{1,2},\beta_{2,1}\) are set such that the sum-rate \(R_{1}+R_{2}\) is maximized. We then increase \(\lambda\) from \(0\) (i.e., \(L_{1}=L_{2}\)) to \(1\) with a step size of \(0.2\). As can be seen from this figure, by increasing \(L_{2}\) and consequently decreasing \(L_{1}\), it is possible to increase the rate \(R_{1}\) while keeping the sum-rate \(R_{1}+R_{2}\) constant. For example, under our scheme, it is possible to achieve \(R_{1}=R_{2}=0.31\) by setting \(\lambda\) to approximately \(0.95\).
## VI Conclusions
We have considered a broadcast setting in which a transmitter sends two different messages to two receivers. The messages have different arrival times and decoding deadlines such that their transmission windows overlap. For this setting, we have proposed a coding scheme that exploits Marton's coding strategy. We have derived rigorous bounds on the achievable rate regions for the broadcast setting and for point-to-point settings with one or multiple parallel channels. In the point-to-point setup, with a single channel as well as with parallel channels, the proposed achievability scheme outperforms the Normal Approximation, especially when the number of channel uses is smaller than \(200\). In the broadcast setup, our scheme agrees with Marton's strategy for sufficiently large numbers of channel uses. Our numerical analysis has shown significant performance improvements over standard approaches based on time sharing for the transmission of short packets.
## Appendix A Proof of Theorem 1
In the following subsections we analyze the probability that events \(\mathcal{E}_{1,2}\) and \(\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}\) occur. The analysis related to \(\mathbb{P}[\mathcal{E}_{2}|\mathcal{E}_{1,2}^{c}]\) is similar to that of \(\mathbb{P}[\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}]\).
### _Analyzing \(\mathbb{P}[\mathcal{E}_{1,2}]\)_
Recall the definition of the error event \(\mathcal{E}_{1,2}\) from (24). Let
\[\cos(\theta)=\frac{\langle\mathbf{X}_{1,2}(m_{1},\ell_{1}),\mathbf{X}_{2,1}(m_{2},\ell_{ 2})\rangle}{n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}}. \tag{78}\]
Then,
\[\mathbb{P}[\mathcal{E}_{1,2}] =\mathbb{P}\left[\langle\mathbf{X}_{1,2},\mathbf{X}_{2,1}\rangle\notin\mathcal{D}\right] \tag{79}\] \[=\mathbb{P}\left[\langle\mathbf{X}_{1,2},\mathbf{X}_{2,1}\rangle\notin\left[n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\rho:n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\right]\right]\] (80) \[=\prod_{\ell_{1}=1}^{L_{1}}\prod_{\ell_{2}=1}^{L_{2}}\left(1-\mathbb{P}\left[\langle\mathbf{X}_{1,2}(m_{1},\ell_{1}),\mathbf{X}_{2,1}(m_{2},\ell_{2})\rangle\in\left[n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\rho:n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\right]\right]\right)\] (81) \[=\left(1-\mathbb{P}\left[\langle\mathbf{X}_{1,2}(m_{1},1),\mathbf{X}_{2,1}(m_{2},1)\rangle\in\left[n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\rho:n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\mathsf{P}\right]\right]\right)^{L_{1}L_{2}}\] (82) \[=\left(1-\mathbb{P}\left[\rho\leq\cos(\theta)\leq 1\right]\right)^{L_{1}L_{2}}\] (83) \[=\left(1+\mathbb{P}\left[\cos(\theta)\leq\rho\right]-\mathbb{P}\left[\cos(\theta)\leq 1\right]\right)^{L_{1}L_{2}}\] (84) \[=\left(1+F_{\cos(\theta)}(\rho)-F_{\cos(\theta)}\left(1\right)\right)^{L_{1}L_{2}}\] (85) \[=\left(F_{\cos^{2}(\theta)}(\rho^{2})\right)^{L_{1}L_{2}}\] (86) \[=\left(I_{\rho^{2}}(1,n_{1,2}-1)\right)^{L_{1}L_{2}}\] (87) \[=\left(1-\left(1-\rho^{2}\right)^{n_{1,2}-1}\right)^{L_{1}L_{2}}. \tag{88}\]
Note that in (85), \(\cos^{2}(\theta)\sim\text{Beta}(1,n_{1,2}-1)\). As such, it follows that
\[F_{\cos^{2}(\theta)}(x)=I_{x}(1,n_{1,2}-1)=1-(1-x)^{n_{1,2}-1}, \tag{89}\]
where \(I_{x}(\cdot,\cdot)\) is the regularized incomplete beta function. It then follows that
\[\mathbb{P}[\mathcal{E}_{1,2}]=\left(1-\left(1-\rho^{2}\right)^{n_{1,2}-1} \right)^{L_{1}L_{2}}. \tag{90}\]
In order to upper bound this error probability by a given threshold \(\epsilon_{1,2}\), \(L_{1}\) and \(L_{2}\) should be chosen such that
\[L_{1}\cdot L_{2}\geq\frac{\log(\epsilon_{1,2})}{\log(1-\left(1-\rho^{2}\right) ^{n_{1,2}-1})}. \tag{91}\]
This proves the bound in (35).
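Both (90) and (91) are in closed form and can be checked directly. The sketch below (with illustrative parameter values) also cross-checks the Beta CDF of (89) against scipy's regularized incomplete beta function:

```python
import numpy as np
from scipy.special import betainc

def p_E12(rho, n12, L1, L2):
    # (90): P[E_{1,2}] = (1 - (1-rho^2)^(n12-1))^(L1*L2)
    F = betainc(1.0, n12 - 1.0, rho ** 2)   # I_{rho^2}(1, n12-1), cf. (89)
    assert np.isclose(F, 1.0 - (1.0 - rho ** 2) ** (n12 - 1))
    return F ** (L1 * L2)

def min_L1L2(eps12, rho, n12):
    # smallest product L1*L2 satisfying (91)
    return np.log(eps12) / np.log1p(-(1.0 - rho ** 2) ** (n12 - 1))

print(p_E12(rho=0.5, n12=20, L1=64, L2=64))    # ~3e-8
print(min_L1L2(eps12=1e-3, rho=0.5, n12=20))   # ~1.6e3
```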
### _Analyzing \(\mathbb{P}[\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}]\)_
To analyze \(\mathbb{P}[\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}]\), we use the threshold-based metric bound [9]. Let \(\gamma_{1}\in\mathbb{R}\). Since the first decoder selects among \(M_{1}L_{1}\) codewords, we have
\[\mathbb{P}[\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}]\leq\mathbb{P} [i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\leq\gamma_{1}]+M_{1 }L_{1}\mathbb{P}[i_{1}(\bar{\mathbf{X}}_{1,1},\bar{\mathbf{X}}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y }_{2,1})\geq\gamma_{1}] \tag{92}\]
where \(\bar{\mathbf{X}}_{1,1}\sim f_{\mathbf{X}_{1,1}}(\mathbf{x}_{1,1})\) and \(\bar{\mathbf{X}}_{1,2}\sim f_{\mathbf{X}_{1,2}}(\mathbf{x}_{1,2})\) and are independent of \((\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\). Throughout our analysis, we interpret \(\mathbb{P}[i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\leq\gamma _{1}]\) as the _missed-detection_ probability and \(\mathbb{P}[i_{1}(\bar{\mathbf{X}}_{1,1},\bar{\mathbf{X}}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1 })\geq\gamma_{1}]\) as the _false alarm_ probability.
#### IV-B1 Analyzing the missed-detection probability
To calculate the missed-detection probability, i.e., \(\mathbb{P}[i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\leq \gamma_{1}]\), first we define the following Gaussian distributions:
\[Q_{\mathbf{Y}_{1,j}}(\mathbf{y}_{1,j}) \sim\mathcal{N}(\mathbf{y}_{1,j}:\mathbf{0},\sigma_{y_{1,j}}^{2}\text{I}_{n_{1,j}}),\quad\text{for }j=1,2 \tag{93}\] \[Q_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2}) \sim\mathcal{N}(\mathbf{y}_{1,2}:\mu_{1,2},\sigma_{y_{1,2}|x_{1,2}}^{2}\text{I}_{n_{1,2}}), \tag{94}\]
where
\[\sigma_{y_{1,1}}^{2} =\beta_{1,1}\mathsf{P}+\sigma_{1,1}^{2}, \tag{95a}\] \[\sigma_{y_{1,2}}^{2} =\beta_{\rm c}\mathsf{P}+\sigma_{1,2}^{2},\] (95b) \[\sigma_{1,2}^{2}\leq\sigma_{y_{1,2}|x_{1,2}}^{2} \leq(1-\rho^{2})\beta_{2,1}\mathsf{P}+\sigma_{1,2}^{2}, \tag{95c}\]
\[\sqrt{\frac{\beta_{c}}{\tilde{\beta_{c}}}}\left(1+\rho\sqrt{\frac{\beta_{2,1}}{ \beta_{1,2}}}\right)\mathbf{X}_{1,2}\leq\mu_{1,2}\leq\left(1+\sqrt{\frac{\beta_{2,1 }}{\beta_{1,2}}}\right)\mathbf{X}_{1,2}. \tag{95d}\]
with \(\tilde{\beta_{c}}:=\beta_{1,2}+\beta_{2,1}+\sqrt{\beta_{1,2}\beta_{2,1}}\). We then introduce
\[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\triangleq\log\frac{f_{\mathbf{Y}_{1,1}|\mathbf{X}_{1,1}}(\mathbf{y}_{1,1}|\mathbf{x}_{1,1})Q_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})}{\prod_{j=1}^{2}Q_{\mathbf{Y}_{1,j}}(\mathbf{y}_{1,j})}. \tag{96}\]
**Lemma 1**: _The following inequality holds:_
\[i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\geq \tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})+\log J_{1}, \tag{97}\]
_where \(J_{1}\) is defined in (33c)._
See Appendix D.
Thus
\[\mathbb{P}[i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\leq\gamma_{1}] \tag{98}\] \[\leq\mathbb{P}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1, 1},\mathbf{Y}_{2,1})\leq\gamma_{1}-\log J_{1}]. \tag{99}\]
Denote by \(E_{1}\), \(V_{1}\) and \(T_{1}\) the first, second and third central moments of \(\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\). By the Berry-Esseen theorem, we have
\[\mathbb{P}\left[i_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y }_{2,1})\leq\gamma_{1}\right]\] \[\leq\mathbb{P}\left[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y }_{1,1},\mathbf{Y}_{2,1})\leq\gamma_{1}-\log J_{1}\right]\] \[\leq\mathbb{Q}(-K)+\frac{6T_{1}}{\sqrt{V_{1}^{3}}}, \tag{100}\]
where \(K>0\) is chosen such that
\[\gamma_{1}=K\sqrt{V_{1}}+E_{1}+\log J_{1}. \tag{101}\]
We now seek lower bounds on the first and second central moments \(E_{1}\) and \(V_{1}\) and an upper bound on the third central moment \(T_{1}\) since
\[K=\frac{1}{\sqrt{V}_{1}}\left(\gamma_{1}-\log J_{1}-E_{1}\right), \tag{102}\]
and
\[\mathbb{P}\left[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{ 2,1})-E_{1}\leq K\sqrt{V_{1}}\right]-\mathbb{Q}(-K)\leq\frac{6T_{1}}{\sqrt{V_ {1}^{3}}}. \tag{103}\]
**Lemma 2**: _The following inequalities hold:_
\[E_{1} \geq E_{\min,1}, \tag{104a}\] \[V_{1} \geq n_{1,1}V(\Omega_{1,1})+n_{1,2}V(\Omega_{1,2}),\] (104b) \[T_{1} \leq T_{\max,1}, \tag{104c}\]
_where \(V(x):=\frac{x(2+x)}{2(1+x)^{2}}\log^{2}e\) and \(T_{\max,1}\), \(E_{\min,1}\) are defined in (37a) and (37b), respectively._
See Appendix C.
Set
\[\gamma_{1}:=\log M_{1}+\log L_{1}-K_{1}\log(n_{1,1}+n_{1,2})-n_{1,1}C(\Omega_{ 1,1})-n_{1,2}C(\Omega_{1,2})+E_{1}+\log J_{1} \tag{105}\]
for some \(K_{1}\) and where \(C(x):=\frac{1}{2}\log(1+x)\). Thus
\[K=\frac{\log M_{1}+\log L_{1}-K_{1}\log(n_{1,1}+n_{1,2})-n_{1,1}C(\Omega_{1,1} )-n_{1,2}C(\Omega_{1,2})}{\sqrt{V_{1}}}. \tag{106}\]
We then have
\[\mathbb{P}\left[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1, 1},\mathbf{Y}_{2,1})\leq\gamma_{1}-\log J_{1}\right]\] \[=\mathbb{P}\left[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{ 1,1},\mathbf{Y}_{2,1})-E_{1}\leq\log M_{1}+\log L_{1}-K_{1}\log(n_{1,1}+n_{1,2})-n_ {1,1}C(\Omega_{1,1})-n_{1,2}C(\Omega_{1,2})\right]\] \[\leq\frac{6T_{1}}{\sqrt{V_{1}^{3}}}+\mathbb{Q}\left(\frac{-\log M _{1}-\log L_{1}+K_{1}\log(n_{1,1}+n_{1,2})+n_{1,1}C(\Omega_{1,1})+n_{1,2}C( \Omega_{1,2})}{\sqrt{V_{1}}}\right) \tag{107}\]
We next bound the false-alarm probability. Since \(\bar{\mathbf{X}}_{1,1}\) and \(\bar{\mathbf{X}}_{1,2}\) are independent of the channel outputs, their joint density satisfies \(f_{\bar{\mathbf{X}}_{1,1}}(\bar{\mathbf{x}}_{1,1})f_{\bar{\mathbf{X}}_{1,2}}(\bar{\mathbf{x}}_{1,2})=2^{-i_{1}(\bar{\mathbf{x}}_{1,1},\bar{\mathbf{x}}_{1,2};\mathbf{y}_{1,1},\mathbf{y}_{1,2})}f_{\mathbf{X}_{1,1}|\mathbf{Y}_{1,1}}(\bar{\mathbf{x}}_{1,1}|\mathbf{y}_{1,1})f_{\mathbf{X}_{1,2}|\mathbf{Y}_{1,2}}(\bar{\mathbf{x}}_{1,2}|\mathbf{y}_{1,2})\). Moreover, \(\mathbb{1}\{i_{1}\geq\gamma_{1}\}\leq 2^{i_{1}-\gamma_{1}}\) with \(2^{i_{1}}=\prod_{j=1}^{2}\frac{f_{\mathbf{Y}_{1,j}|\mathbf{X}_{1,j}}(\mathbf{y}_{1,j}|\bar{\mathbf{x}}_{1,j})}{f_{\mathbf{Y}_{1,j}}(\mathbf{y}_{1,j})}\). Hence, for fixed channel outputs,

\[\mathbb{P}[i_{1}(\bar{\mathbf{X}}_{1,1},\bar{\mathbf{X}}_{1,2};\mathbf{y}_{1,1},\mathbf{y}_{1,2})\geq\gamma_{1}]\leq\int_{\mathbb{R}^{n}}2^{-i_{1}(\bar{\mathbf{x}}_{1,1},\bar{\mathbf{x}}_{1,2};\mathbf{y}_{1,1},\mathbf{y}_{1,2})}\prod_{j=1}^{2}\frac{f_{\mathbf{Y}_{1,j}|\mathbf{X}_{1,j}}(\mathbf{y}_{1,j}|\bar{\mathbf{x}}_{1,j})}{f_{\mathbf{Y}_{1,j}}(\mathbf{y}_{1,j})}2^{-\gamma_{1}}\cdot f_{\mathbf{X}_{1,1}|\mathbf{Y}_{1,1}}(\bar{\mathbf{x}}_{1,1}|\mathbf{y}_{1,1})f_{\mathbf{X}_{1,2}|\mathbf{Y}_{1,2}}(\bar{\mathbf{x}}_{1,2}|\mathbf{y}_{1,2})\,\mathrm{d}\bar{\mathbf{x}}_{1,1}\mathrm{d}\bar{\mathbf{x}}_{1,2}=2^{-\gamma_{1}}. \tag{110}\]
As a consequence,
\[\mathbb{P}[i_{1}(\bar{\mathbf{X}}_{1,1},\bar{\mathbf{X}}_{1,2}; \mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\geq\gamma_{1}]\leq 2^{-\gamma_{1}}. \tag{111}\]
By combining (108) and (111), we can bound the error probability in (92) by
\[\mathbb{P}[\mathcal{E}_{1}|\mathcal{E}_{1,2}^{c}] \leq\frac{6T_{\max,1}}{\sqrt{(n_{1,1}V(\Omega_{1,1})+n_{1,2}V( \Omega_{1,2}))^{3}}}\] \[\quad+\mathbb{Q}\left(\frac{-\log M_{1}-\log L_{1}+K_{1}\log(n_{ 1,1}+n_{1,2})+n_{1,1}C(\Omega_{1,1})+n_{1,2}C(\Omega_{1,2})}{\sqrt{n_{1,1}V( \Omega_{1,1})+n_{1,2}V(\Omega_{1,2})}}\right)\] \[\quad+\frac{(n_{1,1}+n_{1,2})^{K_{1}}(1+\Omega_{1,1})^{\frac{n_{1,1}}{2}}(1+\Omega_{1,2})^{\frac{n_{1,2}}{2}}}{J_{1}2^{E_{\min,1}}}. \tag{112}\]
As a result
\[\epsilon_{1} \leq\frac{6T_{\max,1}}{\sqrt{(n_{1,1}V(\Omega_{1,1})+n_{1,2}V( \Omega_{1,2}))^{3}}}\] \[\quad+\mathbb{Q}\left(\frac{-\log M_{1}-\log L_{1}+K_{1}\log(n_{ 1,1}+n_{1,2})+n_{1,1}C(\Omega_{1,1})+n_{1,2}C(\Omega_{1,2})}{\sqrt{n_{1,1}V( \Omega_{1,1})+n_{1,2}V(\Omega_{1,2})}}\right)\] \[\quad+\frac{(n_{1,1}+n_{1,2})^{K_{1}}(1+\Omega_{1,1})^{\frac{n_{1,1}}{2}}(1+\Omega_{1,2})^{\frac{n_{1,2}}{2}}}{J_{1}2^{E_{\min,1}}}+\left(1- \left(1-\rho^{2}\right)^{n_{1,2}-1}\right)^{L_{1}\cdot L_{2}}. \tag{113}\]
Equivalently,
\[\log M_{1}+\log L_{1} \leq n_{1,1}C(\Omega_{1,1})+n_{1,2}C(\Omega_{1,2})+K_{1}\log(n_{ 1,1}+n_{1,2})\] \[\quad-\sqrt{n_{1,1}V(\Omega_{1,1})+n_{1,2}V(\Omega_{1,2})}\mathbb{ Q}^{-1}\left(\epsilon_{1}-\Delta_{1}\right) \tag{114}\]
where \(\Delta_{1}\) is defined in (36). This concludes the proof of Theorem 1.
## Appendix B Proof of Proposition 1
For large values of \(n_{1,1}\) and \(n_{1,2}\), choose \(K_{1}\) such that
\[\lim_{n_{1,1},n_{1,2}\to\infty}\frac{(n_{1,1}+n_{1,2})^{K_{1}}(1+\Omega_{1,1})^{\frac{n_{1,1}}{2}}(1+\Omega_{1,2})^{\frac{n_{1,2}}{2}}}{J_{1}2^{E_{\min,1}}}=0. \tag{115}\]
Hence,
\[\Delta_{1}=O\left(\frac{1}{\sqrt{(n_{1,1}V(\Omega_{1,1})+n_{1,2}V(\Omega_{1,2 }))^{3}}}\right) \tag{116}\]
as \(n_{1,1},n_{1,2}\to\infty\). This proves the bound in (38). To prove the bound in (39), define
\[A:=1-\left(1-\rho^{2}\right)^{n_{1,2}-1}. \tag{117}\]
We then use the approximation \(\log(1-x)\approx-x\) which holds when \(x\) is very small. Hence, for large values of \(n_{1,2}\)
\[\log L_{1}+\log L_{2}\geq\log(-\log\epsilon_{1,2})-n_{1,2}\log(1-\rho^{2}) \tag{118}\]
which concludes the proof.
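Numerically, the exact requirement (91) and its asymptotic form (118) are close already at moderate \(n_{1,2}\). A quick check (in bits, with illustrative parameters):

```python
import numpy as np

def exact_bits(eps12, rho, n12):
    # log2 of the exact requirement (91) on L1*L2
    return np.log2(np.log(eps12) / np.log1p(-(1.0 - rho ** 2) ** (n12 - 1)))

def asymptotic_bits(eps12, rho, n12):
    # (118), obtained via log(1-x) ~ -x
    return np.log2(-np.log(eps12)) - n12 * np.log2(1.0 - rho ** 2)

for n12 in (20, 60, 200):
    print(n12, exact_bits(1e-7, 0.8, n12), asymptotic_bits(1e-7, 0.8, n12))
```

The residual gap of about \(\log_{2}\frac{1}{1-\rho^{2}}\) bits comes from the exponent \(n_{1,2}-1\) in (91) versus \(n_{1,2}\) in (118).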
## Appendix C Proof of Lemma 2
### _Bounding the first moment_
We start by lower bounding the first moment. To this end, we have
\[\mathbb{E}\left[\overline{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{ Y}_{1,1},\mathbf{Y}_{2,1})\right] \tag{119}\] \[\stackrel{{(i)}}{{=}}\mathbb{E}\Bigg{[}\frac{n_{1, 1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log \frac{\sigma_{y_{1,2}}^{2}}{\sigma_{y_{1,2}|x_{1,2}}^{2}}-\frac{||\mathbf{Z}_{1,1} ||^{2}}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}||^{2}}{2\sigma_{ y_{1,1}}^{2}}\] \[-\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}-\mu_{1,2} ||^{2}}{2\sigma_{y_{1,2}|x_{1,2}}^{2}}+\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2, 1})+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}\Bigg{]}\] (120) \[\stackrel{{(ii)}}{{\geq}}\frac{n_{1,1}}{2}\log\frac{ \sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{ 1,2}}^{2}}{((1-\rho^{2})\beta_{2,1}\mathsf{P}+\sigma_{1,2}^{2})}-\frac{\mathbb{ E}[||\mathbf{Z}_{1,1}||^{2}]}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}||^{2}+\mathbb{E}[||\mathbf{Z}_{1,1 }||^{2}]+2\mathbb{E}(\mathbf{Z}_{1,1},\mathbf{X}_{1,1})}{2\sigma_{y_{1,1}}^{2}}\] \[-\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}-\sqrt{ \frac{\beta_{c}}{\beta_{c}}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})\bm {X}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}+\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2, 1})+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}\] (121) \[= \frac{n_{1,1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2} }+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2}}^{2}}{((1-\rho^{2})\beta_{2,1} \mathsf{P}+\sigma_{1,2}^{2})}+\frac{1}{2}\left(\frac{1}{\sigma_{y_{1,1}}^{2}}- \frac{1}{\sigma_{1,1}^{2}}\right)\mathbb{E}[||\mathbf{Z}_{1,1}||^{2}]+\frac{||\mathbf{ X}_{1,1}||^{2}}{2\sigma_{y_{1,1}}^{2}}\] \[+\frac{\mathbb{E}[(\mathbf{Z}_{1,1},\mathbf{X}_{1,1})]}{\sigma_{y_{1,1}}^ {2}}+\frac{1}{2}\left(\frac{\alpha^{2}}{\sigma_{y_{1,1}}^{2}}-\frac{\left( \alpha\sqrt{\tilde{\beta}_{c}}-\sqrt{\beta_{c}}(1+\rho\sqrt{\frac{\beta_{2,1} }{\beta_{1,2}}})^{2}\right)}{\tilde{\beta}_{c}\sigma_{1,2}^{2}}\right)||\mathbf{X}_{1,2}||^{2}\] \[+\left(\frac{\alpha}{\sigma_{y_{1,1}}^{2}}-\frac{\alpha\sqrt{ \tilde{\beta}_{c}}-\sqrt{\beta_{c}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2} }})}{\sqrt{\tilde{\beta}_{c}}\sigma_{1,2}^{2}}\right)\left(\mathbb{E}[(\mathbf{X}_ {1,2},\mathbf{X}_{2,1})]+\mathbb{E}[(\mathbf{X}_{1,2},\mathbf{Z}_{1,2})]\right)\] \[+\frac{1}{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma _{1,2}^{2}}\right)\left(\alpha^{2}||\mathbf{X}_{2,1}||^{2}+\mathbb{E}[||\mathbf{Z}_{1, 2}||^{2}]+2\mathbb{E}[\langle\mathbf{X}_{2,1},\mathbf{Z}_{1,2}\rangle]\right)\] \[\stackrel{{(iii)}}{{\geq}}\frac{n_{1,1}}{2}\log\frac{ \sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2} }^{2}}{((1-\rho^{2})\beta_{2,1}\mathsf{P}+\sigma_{1,2}^{2})}+\frac{\sigma_{1,1}^ {4}n_{1,1}}{2}\left(\frac{1}{\sigma_{y_{1,1}}^{2}}-\frac{1}{\sigma_{1,1}^{2}} \right)+\frac{n_{1,1}\beta_{1,1}\mathsf{P}}{2\sigma_{y_{1,1}}^{2}}\] \[-\frac{\sigma_{1,1}^{2}\sqrt{2n_{1,1}\beta_{1,1}\mathsf{P}}}{ \sigma_{y_{1,1}}^{2}}\frac{\Gamma\left(\frac{n_{1,1}+1}{2}\right)}{\Gamma \left(\frac{n_{1,1}}{2}\right)}+\frac{1}{2}\left(\frac{\beta_{c}}{\tilde{ \beta}_{c}\sigma_{y_{1,1}}^{2}}-\frac{\left(\sqrt{\tilde{\beta}_{c}}-\sqrt{ \beta_{c}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})\right)^{2}}{\tilde{ \beta}_{c}\sigma_{1,2}^{2}}\right)n_{1,2}\beta_{1,2}\mathsf{P}\]
\[+\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{\sqrt{\tilde{\beta}_{\rm c}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{1,2}^{2}}\right)\left(n_{1,2}\sqrt{\beta_{1,2}\beta_{2,1}}\rho\mathsf{P}-\sigma_{1,2}^{2}\sqrt{2n_{1,2}\beta_{1,2}\mathsf{P}}\frac{\Gamma\left(\frac{n_{1,2}+1}{2}\right)}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\right)\] \[+\frac{1}{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)\left(n_{1,2}\beta_{2,1}\mathsf{P}+n_{1,2}\sigma_{1,2}^{4}-2\sqrt{2}\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}\frac{\Gamma\left(\frac{n_{1,2}+1}{2}\right)}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\right) \tag{123}\] \[= \frac{n_{1,1}}{2}\log(1+\Omega_{1,1})+\frac{n_{1,2}}{2}\log(1+\Omega_{1,2})+\frac{(1-\sigma_{1,1}^{2})n_{1,1}\Omega_{1,1}}{2(1+\Omega_{1,1})}-\Gamma_{1,1}\cdot\frac{\sqrt{2n_{1,1}\sigma_{1,1}^{2}\Omega_{1,1}}}{1+\Omega_{1,1}}-\frac{n_{1,2}\sigma_{1,2}^{2}\beta_{\rm c}\mathsf{P}}{2(\sigma_{1,2}^{2}+\beta_{\rm c}\mathsf{P})}\] (124) \[+\sqrt{n_{1,2}\beta_{1,2}\mathsf{P}}\left(\sqrt{\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}}\left(\frac{1}{\sigma_{1,2}^{2}+\beta_{\rm c}\mathsf{P}}+1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}}\right)-\frac{1}{\sigma_{1,2}^{2}}\right)\left(\rho\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}-\sigma_{1,2}^{2}\sqrt{2}\Gamma_{1,2}\right)\] \[+\frac{\beta_{\rm c}n_{1,2}\beta_{1,2}\mathsf{P}\left(\sigma_{1,2}^{2}+(\beta_{\rm c}\mathsf{P}-\sigma_{1,2}^{2})\left(\sqrt{\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}}-1-\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}}\right)^{2}\right)}{2\tilde{\beta}_{\rm c}\sigma_{1,2}^{2}(\beta_{\rm c}\mathsf{P}+\sigma_{1,2}^{2})}-\frac{\beta_{\rm c}\mathsf{P}\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}}{2\sigma_{1,2}^{2}(\beta_{\rm c}\mathsf{P}+\sigma_{1,2}^{2})}\left(\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}-2\sqrt{2}\sigma_{1,2}^{2}\Gamma_{1,2}\right)\]
where \((i)\) is by (96), \((ii)\) is by (95), and \((iii)\) is by (20) and the fact that \(||\mathbf{Z}_{1,1}||^{2}/\sigma_{1,1}^{4}\) and \(||\mathbf{Z}_{1,2}||^{2}/\sigma_{1,2}^{4}\) follow central chi-squared distributions with \(n_{1,1}\) and \(n_{1,2}\) degrees of freedom, respectively, while \(||\mathbf{Z}_{1,1}||/\sigma_{1,1}^{2}\) and \(||\mathbf{Z}_{1,2}||/\sigma_{1,2}^{2}\) follow central chi distributions with \(n_{1,1}\) and \(n_{1,2}\) degrees of freedom, respectively. The last equality is by (33) and by defining \(\Gamma_{1,1}:=\Gamma\left(\frac{n_{1,1}+1}{2}\right)/\Gamma\left(\frac{n_{1,1}}{2}\right)\) and \(\Gamma_{1,2}:=\Gamma\left(\frac{n_{1,2}+1}{2}\right)/\Gamma\left(\frac{n_{1,2}}{2}\right)\).
### _Bounding the second moment_
We continue by lower bounding the variance of \(\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\). One can write
\[\text{Var}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})]\] (125) \[=\text{Var}\Bigg{[}\frac{n_{1,1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2}}^{2}}{\sigma_{y_{1,2}|x_{1,2}}^{2}}-\frac{||\mathbf{Z}_{1,1}||^{2}}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}||^{2}}{2\sigma_{y_{1,1}}^{2}}-\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}-\mu_{1,2}||^{2}}{2\sigma_{y_{1,2}|x_{1,2}}^{2}}+\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}\Bigg{]}\] (126) \[=\text{Var}\Bigg{[}\frac{1}{2}\left(\frac{1}{\sigma_{y_{1,1}}^{2}}-\frac{1}{\sigma_{1,1}^{2}}\right)||\mathbf{Z}_{1,1}||^{2}+\frac{1}{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)||\mathbf{Z}_{1,2}||^{2}+\frac{\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle}{\sigma_{y_{1,1}}^{2}}+\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)\langle\mathbf{X}_{2,1},\mathbf{Z}_{1,2}\rangle+\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1-\sqrt{\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{\sigma_{1,2}^{2}}\right)\langle\mathbf{X}_{1,2},\mathbf{Z}_{1,2}\rangle\Bigg{]}\] (127) \[\stackrel{{(ii)}}{{\geq}}\frac{\sigma_{1,1}^{4}n_{1,1}}{2}\left(\frac{1}{\sigma_{y_{1,1}}^{2}}-\frac{1}{\sigma_{1,1}^{2}}\right)^{2}+\frac{\sigma_{1,2}^{4}n_{1,2}}{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}+\frac{\text{Var}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]}{(\sigma_{y_{1,1}}^{2})^{2}}+\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}\text{Var}[\langle\mathbf{X}_{2,1},\mathbf{Z}_{1,2}\rangle]+\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1-\sqrt{\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{\sigma_{1,2}^{2}}\right)^{2}\text{Var}[\langle\mathbf{X}_{1,2},\mathbf{Z}_{1,2}\rangle].\]

It remains to lower bound the variance of each inner-product term. For any \(c_{1}>0\), Chebyshev's inequality gives \(\text{Var}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]\geq c_{1}^{2}\,\mathbb{P}\left[|\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle-\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]|\geq c_{1}\right]\), where

\[\mathbb{P}\left[|\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle-\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]|\geq c_{1}\right]\]
\[= \mathbb{P}\left[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle\geq c_{1}+ \mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]\right]+\mathbb{P}\left[ \langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle\leq-c_{1}+\mathbb{E}[\langle\mathbf{X}_{1, 1},\mathbf{Z}_{1,1}\rangle]\right] \tag{132}\] \[= \mathbb{P}\left[||\mathbf{Z}_{1,1}||\cos(\theta)\geq\frac{c_{1}+ \mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]}{||\mathbf{X}_{1,1}||}\right] +\mathbb{P}\left[||\mathbf{Z}_{1,1}||\cos(\theta)\leq\frac{-c_{1}+\mathbb{E}[ \langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]}{||\mathbf{X}_{1}||}\right]\] (133) \[\geq \mathbb{P}\left[-||\mathbf{Z}_{1,1}||\geq\frac{c_{1}+\mathbb{E}[ \langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]}{||\mathbf{X}_{1,1}||}\right]+\mathbb{P} \left[||\mathbf{Z}_{1,1}||\leq\frac{-c_{1}+\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{ 1,1}\rangle]}{||\mathbf{X}_{1,1}||}\right]\] (134) \[= \mathbb{P}\left[||\mathbf{Z}_{1,1}||\leq-\frac{c_{1}+\mathbb{E}[ \langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]}{||\mathbf{X}_{1,1}||}\right]+\mathbb{P} \left[||\mathbf{Z}_{1,1}||\leq\frac{-c_{1}+\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{ 1,1}\rangle]}{||\mathbf{X}_{1,1}||}\right]\] (135) \[\overset{(a)}{\geq} \mathbb{P}\left[||\mathbf{Z}_{1,1}||\leq\frac{-c_{1}-||\mathbf{X}_{1,1}|| \cdot\mathbb{E}[||\mathbf{Z}_{1,1}||]}{||\mathbf{X}_{1,1}||}\right]+\mathbb{P}\left[ ||\mathbf{Z}_{1,1}||\leq\frac{-c_{1}+||\mathbf{X}_{1,1}||\cdot\mathbb{E}[||\mathbf{Z}_{1,1 }||]}{||\mathbf{X}_{1,1}||}\right]\] (136) \[\overset{(b)}{=} 2F_{||\mathbf{Z}_{1,1}/\sigma_{1,1}^{2}||}\left(-\frac{c_{1}}{ \sigma_{1,1}^{2}\sqrt{n_{1,1}\beta_{1,1}\mathbb{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}}\frac{\Gamma(\frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)\] (137) \[= 2G\left(\frac{n_{1,1}}{2},\frac{1}{2}\left(-\frac{c_{1}}{\sigma_{ 1,1}^{2}\sqrt{n_{1,1}\beta_{1,1}\mathbb{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}} \frac{\Gamma(\frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)^{2}\right) \tag{138}\]
In step \((a)\) we used the following bounds on \(\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]\):
\[-||\mathbf{X}_{1,1}||\cdot\mathbb{E}[||\mathbf{Z}_{1,1}||]\leq\mathbb{E}[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle]\leq||\mathbf{X}_{1,1}||\cdot\mathbb{E}[||\mathbf{Z}_{1,1}||] \tag{139}\]
In step \((b)\) we used the fact that \(||\mathbf{Z}_{1,1}||/\sigma_{1,1}^{2}\) follows a chi distribution with \(n_{1,1}\) degrees of freedom, and \(F_{||\mathbf{Z}_{1,1}/\sigma_{1,1}^{2}||}(\cdot)\) is the CDF of the chi distribution with \(n_{1,1}\) degrees of freedom, which is equal to
\[F(x;n)=G\left(\frac{n}{2},\frac{x^{2}}{2}\right) \tag{140}\]
where \(G(\cdot,\cdot)\) is the regularized lower incomplete gamma function.
Thus
\[\text{Var}\left[\langle\mathbf{X}_{1,1},\mathbf{Z}_{1,1}\rangle\right]\geq 2c_{1}^{2}G \left(\frac{n_{1,1}}{2},\frac{1}{2}\left(-\frac{c_{1}}{\sigma_{1,1}^{2}\sqrt{n_ {1,1}\beta_{1,1}\mathbb{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}}\frac{\Gamma( \frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)^{2}\right) \tag{141}\]
for all \(c_{1}>0\).
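Since \(c_{1}>0\) in (141) is free, one may simply take the value that maximizes the right-hand side. A small sketch (with illustrative parameters) does this on a grid, using the chi CDF \(F(x;n)=G(n/2,x^{2}/2)\) from (140) via scipy's regularized lower incomplete gamma:

```python
import numpy as np
from scipy.special import gammainc, gammaln

def var_lb(n, sigma2, beta, P, c):
    # RHS of (141): 2 c^2 G(n/2, t^2/2); zero when the chi-CDF argument t is negative
    gratio = np.exp(gammaln((n + 1) / 2.0) - gammaln(n / 2.0))  # Gamma((n+1)/2)/Gamma(n/2)
    t = -c / (sigma2 * np.sqrt(n * beta * P)) + np.sqrt(2.0) * gratio / sigma2
    return 0.0 if t <= 0 else 2.0 * c ** 2 * gammainc(n / 2.0, t ** 2 / 2.0)

n, sigma2, beta, P = 100, 1.0, 1.0, 5.0
grid = np.linspace(1e-3, 60.0, 4000)
best = max(grid, key=lambda c: var_lb(n, sigma2, beta, P, c))
print("best c1:", best, " variance lower bound:", var_lb(n, sigma2, beta, P, best))
```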
Following the same procedure, we prove that
\[\text{Var}[\langle\mathbf{X}_{2,1},\mathbf{Z}_{1,2}\rangle]\geq 2c_{2}^{2}G \left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{2}}{\sigma_{1,2}^{2}\sqrt{n_ {1,2}\beta_{2,1}\mathbb{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma( \frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right) \tag{142}\] \[\text{Var}[\langle\mathbf{X}_{1,2},\mathbf{Z}_{1,2}\rangle]\geq 2c_{3}^{2}G \left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{3}}{\sigma_{1,2}^{2}\sqrt{n_ {1,2}\beta_{1,2}\mathbb{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma( \frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right) \tag{143}\]
for all \(c_{2}>0\) and \(c_{3}>0\). Thus
\[\text{Var}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})]\] (144) \[\geq \frac{\sigma_{1,1}^{4}n_{1,1}}{2}\left(\frac{1}{\sigma_{y_{1,1}}^{2}}-\frac{1}{\sigma_{1,1}^{2}}\right)^{2}+\frac{\sigma_{1,2}^{4}n_{1,2}}{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}\] \[+\frac{2c_{1}^{2}}{(\sigma_{y_{1,1}}^{2})^{2}}G\left(\frac{n_{1,1}}{2},\frac{1}{2}\left(-\frac{c_{1}}{\sigma_{1,1}^{2}\sqrt{n_{1,1}\beta_{1,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}}\frac{\Gamma(\frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)^{2}\right)\] \[+2c_{2}^{2}\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{2}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right)\] \[+2c_{3}^{2}\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1-\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{3}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{1,2}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right)\] (145) \[= n_{1,1}V(\Omega_{1,1})+n_{1,2}V(\Omega_{1,2})-\frac{n_{1,1}\Omega_{1,1}}{(1+\Omega_{1,1})^{2}}-\frac{n_{1,2}\left(2\sigma_{1,2}^{2}\beta_{\rm c}\mathsf{P}-(1-\rho^{2})\beta_{2,1}\mathsf{P}(2\sigma_{1,2}^{2}+(1-\rho^{2})\beta_{2,1}\mathsf{P})\right)}{2(\sigma_{1,2}^{2}+\beta_{\rm c}\mathsf{P})^{2}}\]
\[+\frac{2c_{1}^{2}}{(\sigma_{y_{1,1}}^{2})^{2}}G\left(\frac{n_{1,1}}{2},\frac{1}{2}\left(-\frac{c_{1}}{\sigma_{1,1}^{2}\sqrt{n_{1,1}\beta_{1,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}}\frac{\Gamma(\frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)^{2}\right)\] \[+2c_{2}^{2}\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{2}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right)\] \[+2c_{3}^{2}\left(\frac{\sqrt{\beta_{\rm c}}}{\sqrt{\tilde{\beta}_{\rm c}}\sigma_{y_{1,2}}^{2}}-\frac{1-\frac{\beta_{\rm c}}{\tilde{\beta}_{\rm c}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{3}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{1,2}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right) \tag{146}\]
for all \(c_{1}>0\), \(c_{2}>0\) and \(c_{3}>0\), with \(V(x):=\frac{x(2+x)}{2(1+x)^{2}}\log^{2}e\). Given \(n_{1,1},n_{1,2},\mathsf{P},\beta_{1,1},\beta_{1,2},\beta_{2,1},\rho\), since \(c_{1},c_{2},c_{3}>0\) are arbitrary, we may choose them such that
\[\frac{n_{1,1}\Omega_{1,1}}{(1+\Omega_{1,1})^{2}}+\frac{n_{1,2}\left(2\sigma_{1,2}^{2}\beta_{\mathrm{c}}\mathsf{P}-(1-\rho^{2})\beta_{2,1}\mathsf{P}(2\sigma_{1,2}^{2}+(1-\rho^{2})\beta_{2,1}\mathsf{P})\right)}{2(\sigma_{1,2}^{2}+\beta_{\mathrm{c}}\mathsf{P})^{2}}\] \[=\frac{2c_{1}^{2}}{(\sigma_{y_{1,1}}^{2})^{2}}G\left(\frac{n_{1,1}}{2},\frac{1}{2}\left(-\frac{c_{1}}{\sigma_{1,1}^{2}\sqrt{n_{1,1}\beta_{1,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,1}^{2}}\frac{\Gamma(\frac{n_{1,1}+1}{2})}{\Gamma(\frac{n_{1,1}}{2})}\right)^{2}\right)\] \[+2c_{2}^{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}-\frac{1}{\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{2}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{2,1}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right)\] \[+2c_{3}^{2}\left(\frac{1}{\sigma_{y_{1,2}}^{2}}+\frac{\frac{\beta_{\mathrm{c}}}{\tilde{\beta}_{\mathrm{c}}}(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}})}{(1-\rho^{2})\beta_{2,1}\mathsf{P}+\sigma_{1,2}^{2}}\right)^{2}G\left(\frac{n_{1,2}}{2},\frac{1}{2}\left(-\frac{c_{3}}{\sigma_{1,2}^{2}\sqrt{n_{1,2}\beta_{1,2}\mathsf{P}}}+\frac{\sqrt{2}}{\sigma_{1,2}^{2}}\frac{\Gamma(\frac{n_{1,2}+1}{2})}{\Gamma(\frac{n_{1,2}}{2})}\right)^{2}\right). \tag{147}\]
Note that such \(c_{1},c_{2},c_{3}>0\) always exist since the corresponding factors in (147) are positive and \(\Omega_{1,1},\Omega_{1,2}>0\). Therefore,
\[\text{Var}[\tilde{i}_{1}(\boldsymbol{X}_{1,1},\boldsymbol{X}_{1,2};\boldsymbol{ Y}_{1,1},\boldsymbol{Y}_{2,1})]\geq n_{1,1}V(\Omega_{1,1})+n_{1,2}V(\Omega_{1,2}) \tag{148}\]
with \(V(x):=\frac{x(2+x)}{2(1+x)^{2}}\log^{2}e\). This proves (104b).
### _Bounding the third moment_
In this section, we upper bound the third moment. To this end, we employ the following inequality:
\[\mathbb{E}[|\tilde{i}_{1}(\boldsymbol{X}_{1,1},\boldsymbol{X}_{1,2};\boldsymbol{ Y}_{1,1},\boldsymbol{Y}_{2,1})-\mathbb{E}[\tilde{i}_{1}(\boldsymbol{X}_{1,1}, \boldsymbol{X}_{1,2};\boldsymbol{Y}_{1,1},\boldsymbol{Y}_{2,1})]|^{3}]\leq 2^{3} \mathbb{E}[|\tilde{i}_{1}(\boldsymbol{X}_{1,1},\boldsymbol{X}_{1,2}; \boldsymbol{Y}_{1,1},\boldsymbol{Y}_{2,1})|^{3}], \tag{149}\]
which follows from the Minkowski inequality [34].
Let \(f_{\tilde{i}}(\cdot)\) be the probability density function of \(\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\) and \(F_{\tilde{i}}\) its cumulative distribution function. For simplicity, define \(Z:=\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\). For any \(\kappa>1\) we have
\[\mathbb{E}[|Z|^{3}]\] \[=\int_{-\infty}^{\infty}|z|^{3}f_{\tilde{i}}(z)\mathrm{d}z\] \[\leq\kappa+\int_{\kappa}^{\infty}z^{3}f_{\tilde{i}}(z)\mathrm{d}z+\int_{-\infty}^{-\kappa}|z|^{3}f_{\tilde{i}}(z)\mathrm{d}z\] \[=\kappa+\int_{\kappa}^{\infty}z^{3}(f_{\tilde{i}}(z)+f_{\tilde{i}}(-z))\mathrm{d}z\] \[=\kappa+\sum_{j=0}^{\infty}\int_{\kappa+j}^{\kappa+j+1}z^{3}(f_{\tilde{i}}(z)+f_{\tilde{i}}(-z))\mathrm{d}z\] \[\leq\kappa+\sum_{j=0}^{\infty}(\kappa+j+1)^{3}\int_{\kappa+j}^{\kappa+j+1}(f_{\tilde{i}}(z)+f_{\tilde{i}}(-z))\mathrm{d}z\] \[=\kappa+\sum_{j=0}^{\infty}(\kappa+j+1)^{3}\left(F_{\tilde{i}}(\kappa+j+1)-F_{\tilde{i}}(\kappa+j)+F_{\tilde{i}}(-\kappa-j)-F_{\tilde{i}}(-\kappa-j-1)\right)\] \[\leq\kappa+\sum_{j=0}^{\infty}(\kappa+j+1)^{3}(1-F_{\tilde{i}}(\kappa+j)+F_{\tilde{i}}(-\kappa-j)). \tag{150}\]
Notice that
\[1-F_{\tilde{i}}(\kappa+j)=\mathbb{P}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})>\kappa+j] \tag{151}\]
and
\[F_{\tilde{i}}(-\kappa-j)=\mathbb{P}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})\leq-\kappa-j]. \tag{152}\]
Hence,
\[\mathbb{E}[|\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1} )|^{3}]\leq\kappa+2\sum_{j=0}^{\infty}(\kappa+j+1)^{3}\mathbb{P}\left[|\tilde{ i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})|>\kappa+j\right]. \tag{153}\]
**Lemma 3**: _The following inequality holds:_
\[\mathbb{P}[|\tilde{i}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})|>\kappa+j]\] \[\leq\max\left\{\frac{2^{1-\frac{n_{1,1}}{2}}\zeta}{\Gamma\left(\frac{n_{1,1}}{2}\right)}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{n_{1,1}-2}e^{-\frac{1}{2}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{2}},\right.\] \[\left.\frac{2^{1-\frac{n_{1,2}}{2}}\tilde{\zeta}}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{n_{1,2}-2}e^{-\frac{1}{2}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{2}}\right\} \tag{154}\]
_for \(\zeta>1\) and \(\tilde{\zeta}>1\) satisfying_
\[\frac{\zeta}{(\zeta-1)(\frac{n_{1,1}}{2}-1)}<\frac{1}{2}\left( \sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{b}{2}\right)^{2}, \tag{155}\] \[\frac{\tilde{\zeta}}{(\tilde{\zeta}-1)(\frac{n_{1,2}}{2}-1)}<\frac {1}{2}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b} }{2}\right)^{2}, \tag{156}\]
_where \(k,\tilde{k},b,\tilde{b}\) and \(\tilde{\kappa}\) are defined in (37)._
See Appendix E.
**Lemma 4**: _It holds that_
\[\sum_{j=0}^{\infty}(\kappa+j+1)^{3}\mathbb{P}\left[|\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})|>\kappa+j\right] \tag{157}\] \[\leq\max\left\{\frac{\zeta\sqrt{2}^{\frac{n_{1,1}}{2}-1}}{\Gamma\left(\frac{n_{1,1}}{2}\right)}e^{-c}A(n_{1,1},k,b,c),\frac{\tilde{\zeta}\sqrt{2}^{\frac{n_{1,2}}{2}-1}}{\Gamma\left(\frac{n_{1,2}}{2}\right)}e^{-\tilde{c}}A(n_{1,2},\tilde{k},\tilde{b},\tilde{c})\right\}\]
_where \(c,\tilde{c}\) and \(A(n,k,b,c)\) are defined in (37)._
The proof is based on the following equality [35]:
\[\sum_{j=c}^{\infty}j^{n}e^{-j}=e^{-c}\Phi(e^{-1},-n,c). \tag{158}\]
Hence,
\[\mathbb{E}[|\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})|^{3}] \tag{159}\] \[\leq\kappa+2\max\left\{\frac{\zeta\sqrt{2}^{\frac{n_{1,1}}{2}-1}}{\Gamma\left(\frac{n_{1,1}}{2}\right)}e^{-c}A(n_{1,1},k,b,c),\frac{\tilde{\zeta}\sqrt{2}^{\frac{n_{1,2}}{2}-1}}{\Gamma\left(\frac{n_{1,2}}{2}\right)}e^{-\tilde{c}}A(n_{1,2},\tilde{k},\tilde{b},\tilde{c})\right\}. \tag{160}\]
As a result
\[\mathbb{E}\left[\left|\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})-\mathbb{E}[\tilde{i}_{1}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})]\right|^{3}\right]\] \[\leq 2^{3}\kappa+2^{4}\max\left\{\frac{\zeta\sqrt{2}^{\frac{n_{1,1}}{2}-1}}{\Gamma\left(\frac{n_{1,1}}{2}\right)}e^{-c}A(n_{1,1},k,b,c),\frac{\tilde{\zeta}\sqrt{2}^{\frac{n_{1,2}}{2}-1}}{\Gamma\left(\frac{n_{1,2}}{2}\right)}e^{-\tilde{c}}A(n_{1,2},\tilde{k},\tilde{b},\tilde{c})\right\} \tag{161}\]
for any \(\kappa>1\). This concludes the proof.
## Appendix D Proof of Lemma 1
By [32, Proposition 2], we have the following bounds:
\[\frac{f_{\mathbf{Y}_{1,1}}(\mathbf{y}_{1,1})}{Q_{\mathbf{Y}_{1,1}}(\mathbf{y}_{1,1})}\leq 27\sqrt{\frac{\pi}{8}}\frac{\sigma_{1,1}^{2}+\beta_{1,1}\mathsf{P}}{\sqrt{\sigma_{1,1}^{2}+2\beta_{1,1}\mathsf{P}}} \tag{162}\]
\[\frac{f_{\mathbf{Y}_{2,1}}(\mathbf{y}_{1,2})}{Q_{\mathbf{Y}_{2,1}}(\mathbf{y}_{1,2})}\leq\frac{9(\beta_{1,2}+\beta_{2,1})}{2\pi\sqrt{2\beta_{1,2}\beta_{2,1}}}. \tag{163}\]
By [33, Lemma 5], we have the following bound:
\[\frac{f_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})}{Q_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})}\geq 2^{\frac{n_{1,2}-2}{2}}e^{-\frac{n_{1,2}\beta_{2,1}\mathsf{P}}{2}}. \tag{164}\]
Thus,
\[\log\frac{f_{\mathbf{Y}_{1,1}|\mathbf{X}_{1,1}}(\mathbf{y}_{1,1}|\mathbf{x}_{1,1} )f_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})}{f_{\mathbf{Y}_{1,1}}( \mathbf{y}_{1,1})f_{\mathbf{Y}_{2,1}}(\mathbf{y}_{1,2})} \tag{165}\] \[\geq\log\frac{f_{\mathbf{Y}_{1,1}|\mathbf{X}_{1,1}}(\mathbf{y}_{1,1}|\mathbf{x}_{1,1})Q_{\mathbf{Y}_{2,1}|\mathbf{X}_{1,2}}(\mathbf{y}_{1,2}|\mathbf{x}_{1,2})}{Q_{\mathbf{Y}_{1,1}} (\mathbf{y}_{1,1})Q_{\mathbf{Y}_{2,1}}(\mathbf{y}_{1,2})}+\log\frac{2^{\frac{n_{1,2}+4}{2 }}\sqrt{\pi\beta_{1,2}\beta_{2,1}(\sigma_{1,1}^{2}+2\beta_{1,1}\mathsf{P})}}{ e^{\frac{n_{1,2}\beta_{2,1}P}{2}}243(\sigma_{1,1}^{2}+\beta_{1,1}\mathsf{P})( \beta_{1,2}+\beta_{2,1})} \tag{166}\]
which concludes the proof.
## Appendix E Proof of Lemma 3
Notice that
\[\tilde{i}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})= \frac{n_{1,1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2}}^{2}}{\sigma_{y_{1,2}|x_{1,2}}^{2}}-\frac{||\mathbf{Z}_{1,1}||^{2}}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}||^{2}}{2\sigma_{y_{1,1}}^{2}}\] \[-\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}-\mu_{1,2}||^{2}}{2\sigma_{y_{1,2}|x_{1,2}}^{2}}+\frac{||\alpha(\mathbf{X}_{1,2}+\mathbf{X}_{2,1})+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}, \tag{167}\]
and by (95),
\[|\tilde{i}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})| \leq\frac{n_{1,1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2}}^{2}}{\sigma_{1,2}^{2}}+\frac{||\mathbf{Z}_{1,1}||^{2}}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}||^{2}}{2\sigma_{y_{1,1}}^{2}}\] \[+\frac{\left\|\mathbf{X}_{2,1}+\mathbf{Z}_{1,2}+\left(1-\sqrt{\frac{\tilde{\beta}_{\kappa}}{\tilde{\kappa}}}\left(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}}\right)\right)\mathbf{X}_{1,2}\right\|^{2}}{2\sigma_{1,2}^{2}}+\frac{||\mathbf{X}_{1,2}+\mathbf{X}_{2,1}+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}. \tag{168}\]
Recall the definitions of \(k,\tilde{k},b,\tilde{b}\) and \(\tilde{\kappa}\) from (37). Thus,
\[\mathbb{P}[|\tilde{i}(\mathbf{X}_{1,1},\mathbf{X}_{1,2};\mathbf{Y}_{1,1},\mathbf{Y}_{2,1})|>\kappa+j] \tag{169}\] \[\leq\mathbb{P}\Bigg{[}\frac{n_{1,1}}{2}\log\frac{\sigma_{y_{1,1}}^{2}}{\sigma_{1,1}^{2}}+\frac{n_{1,2}}{2}\log\frac{\sigma_{y_{1,2}}^{2}}{\sigma_{1,2}^{2}}+\frac{||\mathbf{Z}_{1,1}||^{2}}{2\sigma_{1,1}^{2}}+\frac{||\mathbf{X}_{1,1}+\mathbf{Z}_{1,1}||^{2}}{2\sigma_{y_{1,1}}^{2}}\] \[+\frac{\left\|\mathbf{X}_{2,1}+\mathbf{Z}_{1,2}+\left(1-\sqrt{\frac{\tilde{\beta}_{\kappa}}{\tilde{\kappa}}}\left(1+\rho\sqrt{\frac{\beta_{2,1}}{\beta_{1,2}}}\right)\right)\mathbf{X}_{1,2}\right\|^{2}}{2\sigma_{1,2}^{2}}+\frac{||\mathbf{X}_{1,2}+\mathbf{X}_{2,1}+\mathbf{Z}_{1,2}||^{2}}{2\sigma_{y_{1,2}}^{2}}>\kappa+j\Bigg{]}\] \[\leq\mathbb{P}\left[k\left(\|\mathbf{Z}_{1,1}\|+\frac{b}{2}\right)^{2}+\tilde{k}\left(\|\mathbf{Z}_{1,2}\|+\frac{\tilde{b}}{2}\right)^{2}>\tilde{\kappa}+\kappa+j\right]\] (170) \[\leq\max\left\{\mathbb{P}\left[k\left(\|\mathbf{Z}_{1,1}\|+\frac{b}{2}\right)^{2}>\frac{\tilde{\kappa}+\kappa+j}{2}\right],\mathbb{P}\left[\tilde{k}\left(\|\mathbf{Z}_{1,2}\|+\frac{\tilde{b}}{2}\right)^{2}>\frac{\tilde{\kappa}+\kappa+j}{2}\right]\right\} \tag{171}\]
\[=1-\min\left\{\mathbb{P}\left[||\mathbf{Z}_{1,1}||\leq\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right],\mathbb{P}\left[||\mathbf{Z}_{1,2}||\leq\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right]\right\} \tag{173}\] \[\overset{(i)}{=}1-\min\left\{1-\frac{\Gamma\left(\frac{n_{1,1}}{2},\frac{1}{2\sigma_{1,1}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{2}\right)}{\Gamma\left(\frac{n_{1,1}}{2}\right)},1-\frac{\Gamma\left(\frac{n_{1,2}}{2},\frac{1}{2\sigma_{1,2}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{2}\right)}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\right\}\] (174) \[=\max\left\{\frac{\Gamma\left(\frac{n_{1,1}}{2},\frac{1}{2\sigma_{1,1}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{2}\right)}{\Gamma\left(\frac{n_{1,1}}{2}\right)},\frac{\Gamma\left(\frac{n_{1,2}}{2},\frac{1}{2\sigma_{1,2}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{2}\right)}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\right\}\] (175) \[\leq\max\left\{\frac{(\sigma_{1,1}^{2}\sqrt{2})^{2-n_{1,1}}\zeta}{\Gamma\left(\frac{n_{1,1}}{2}\right)}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{n_{1,1}-2}e^{-\frac{1}{2\sigma_{1,1}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2k}}-\frac{b}{2}\right)^{2}},\right.\] \[\left.\frac{(\sigma_{1,2}^{2}\sqrt{2})^{2-n_{1,2}}\tilde{\zeta}}{\Gamma\left(\frac{n_{1,2}}{2}\right)}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{n_{1,2}-2}e^{-\frac{1}{2\sigma_{1,2}^{2}}\left(\sqrt{\frac{\tilde{\kappa}+\kappa+j}{2\tilde{k}}}-\frac{\tilde{b}}{2}\right)^{2}}\right\}, \tag{176}\]
where in \((i)\) we use the fact that \(||\mathbf{Z}_{1,1}||/\sigma_{1,1}\) and \(||\mathbf{Z}_{1,2}||/\sigma_{1,2}\) follow a central chi-distribution with \(n_{1,1}\) and \(n_{1,2}\) degrees of freedom, respectively. In the last inequality, we use the following bound [36]:
\[\Gamma(a,x)<\zeta x^{a-1}e^{-x}, \tag{177}\]
for \(a>1\), \(\zeta>1\), and \(x>\frac{\zeta}{\zeta-1}(a-1)\). This concludes the proof.
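As an illustrative numerical check of this incomplete gamma bound (the sample values of \(a\) and \(\zeta\) below are arbitrary):

```python
# Check Gamma(a, x) < zeta * x^(a-1) * e^(-x) over the region of validity.
import numpy as np
from scipy.special import gamma, gammaincc

a, zeta = 3.0, 2.0
x = np.linspace(zeta / (zeta - 1) * (a - 1) + 0.1, 20.0, 50)
Gamma_ax = gammaincc(a, x) * gamma(a)  # unregularized upper incomplete gamma
print(np.all(Gamma_ax < zeta * x ** (a - 1) * np.exp(-x)))  # True
```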
|
2306.14784 | $α_s$ as an input parameter in the SMEFT | The QCD coupling, $\alpha_s$, has a critical role in Hadron collider studies
of the Standard Model Effective Field Theory (SMEFT). Patterns of measurements
can be modified by local contact operators in the SMEFT that change the
measured value of a Lagrangian parameter from the case of the Standard Model;
this is known as an input parameter correction. When such a parameter is then
used to predict another observable, this modifies the relationship between
observables. In this paper, we begin the process of characterizing $\alpha_s$
as an input parameter. | Michael Trott | 2023-06-26T15:43:09Z | http://arxiv.org/abs/2306.14784v1 | # \(\alpha_{s}\) as an input parameter in the SMEFT
###### Abstract
The QCD coupling, \(\alpha_{s}\), has a critical role in Hadron collider studies of the Standard Model Effective Field Theory (SMEFT). Patterns of measurements can be modified by local contact operators in the SMEFT that change the measured value of a Lagrangian parameter from the case of the Standard Model; this is known as an input parameter correction. When such a parameter is then used to predict another observable, this modifies the relationship between observables. In this paper, we begin the process of characterizing \(\alpha_{s}\) as an input parameter.
**I. Introduction:** The Standard Model Effective Field Theory (SMEFT)1 is a model independent extension of the Standard Model (SM) with local contact operators built out of the SM fields added to the mass dimension (\(d\leq 4\)) SM Lagrangian. The SMEFT is based on a few infrared (IR) assumptions: that physics beyond the SM is present at scales \(\Lambda>\bar{v}_{T}\equiv\sqrt{2\left\langle H^{\dagger}H\right\rangle}\), that no light hidden states/weakly interacting states are lurking undiscovered with masses \(M<\Lambda\), and that a SU(2)\({}_{\rm L}\) scalar doublet (\(H\)) with Hypercharge \({\rm y}_{h}=1/2\) is present in the particle spectrum.
Footnote 1: See Refs. [1; 2] for reviews.
The SMEFT Lagrangian is defined as
\[{\cal L}_{\rm SMEFT} = {\cal L}_{\rm SM}+{\cal L}^{(5)}+{\cal L}^{(6)}+{\cal L}^{(7)}+\ldots, \tag{1}\] \[{\cal L}^{(d)} = \sum_{i}\frac{C_{i}^{(d)}}{\Lambda^{d-4}}{\cal Q}_{i}^{(d)}\ \ \ \ { \rm for}\ d>4.\]
Operators (\({\cal Q}_{i}^{(d)}\)) define SMEFT corrections to the SM predictions, and carry a mass dimension \(d\) superscript. We will generally indicate a perturbative loop correction with a \(\Delta\sim 1/16\pi^{2}\) and an operator correction as a \(\delta\sim 1/\Lambda^{2}\) perturbation. The operators multiply Wilson coefficients \(C_{i}^{(d)}\), which take on specific values as a result of the Taylor expanded effects of physics beyond the SM in a UV matching. The sum over \(i\), runs over the operators in a particular operator basis. We use the non-redundant Warsaw basis [3; 4] for \({\cal L}^{(6)}\). We adopt the convention of hat superscripts for Lagrangian parameters that are numerically fixed from some experimental measurement, or inferred from a combination of such measurements. We also use the notation \(\hat{C}_{i}^{(d)}=\bar{v}_{T}^{d-4}\,C_{i}^{(d)}/\Lambda^{d-4}\).
The extension of the SM with \({\cal Q}_{i}^{(d)}\) introduces several expansions proportional to ratios of scales \(q^{2}/\Lambda^{2}<1\), with \(q^{2}\) a kinematic invariant associated with experimental measurements studied with the EFT. This defines "power countings" [1; 2; 5; 6] in the SMEFT. Here the term power counting refers to ratios of scales in the EFT itself, and we use this term drawing a distinction from the case where such ratios of scales are convolved with specific UV matching patterns [7; 8; 9]. For the case of input parameter corrections, functionally the kinematic invariant is set to the numerical value of the vev with SM coupling factors then absorbed into the Wilson coefficients. This is because precise input parameter measurements are made on on-shell observables, so that \(q^{2}\simeq m_{SM}^{2}\simeq\left\langle H^{\dagger}H\right\rangle\), up to coupling dependence. This follows from the phase space dominance of SM resonances in precisely measured observables.
Generalizing the SM interactions to include higher dimensional operators leads to non-canonical normalization of the gauge fields, and modifies the gauge couplings, including the QCD coupling \(g_{3}\). These modifications are proportional to \(\left\langle H^{\dagger}H\right\rangle\). We follow the approach developed in Refs. [10; 11; 12; 13] which deals with such effects at all orders in the \(\sqrt{2\langle H^{\dagger}H\rangle/\Lambda}\) expansion for low \(n\leq 3\)-point functions, and the embedding of this approach into Ref. [14]. Using this geometric generalization of the SM (the geoSMEFT), we then know the leading effects to follow on \(\hat{\alpha}_{s}\) extractions to consider input parameter corrections. Notationally, bar superscripts will correspond to canonically normalized Lagrangian parameters in the geoSMEFT and \(\bar{g}_{3}\) is defined as the canonically normalized strong coupling in this theory [12].
**II. Input parameters, \(\alpha_{s}\) and \(g_{3}\)** To make predictions for observables, the dimensionless parameters of the SM Lagrangian, and the dimensionful scale \(\sqrt{2\left\langle H^{\dagger}H\right\rangle}\), must be fixed to numerical values. The method used to fix these parameters defines an input parameter scheme. Two schemes are in common use in the literature in the SMEFT: \(\{\hat{M}_{W},\hat{M}_{Z},\hat{G}_{F},\hat{M}_{h},\hat{\alpha}_{s}\}\)[15; 16] or \(\{\hat{\alpha}_{ew},\hat{M}_{Z},\hat{G}_{F},\hat{M}_{h},\hat{\alpha}_{s}\}\)[1; 17; 18]. Input parameter corrections exist, as the method used to measure a Lagrangian parameter is distinct from the appearance of the same Lagrangian parameter in all observable predictions. It is important to not "hard wire" a particular chosen input parameter correction into the formulation of the
SMEFT, so that this distinction is not lost [15; 20; 21].
Consider the schematic example of an observable \(\mathcal{O}\) fixing \(\bar{\alpha}_{s}\) via
\[\langle\mathcal{O}\rangle=\langle in|S_{SM}|out\rangle+\frac{C_{k}^{(6)}}{ \Lambda^{2}}\,\langle in|\mathcal{Q}_{k}^{(6)}|out\rangle+\cdots \tag{2}\]
with \(\langle in|S_{SM}|out\rangle\) the corresponding SM \(S\) matrix for the chosen in and out states at some order in perturbation theory. In the SM, the \(\bar{\alpha}_{s}=\bar{g}_{3}^{2}/4\pi\) loop expansion also defines a numerical series
\[\langle in|S_{SM}|out\rangle\simeq a_{0}+a_{1}\frac{\bar{\alpha}_{s}}{4\pi}+\cdots \tag{3}\]
If this relation is used to infer a numerical value for \(\alpha_{s}\)
\[\hat{\alpha}_{s}\equiv\frac{4\pi}{a_{1}}\left(\langle\mathcal{O}\rangle-a_{0} \right), \tag{4}\]
or equivalently
\[\hat{g}_{s}\equiv\frac{4\pi}{\sqrt{a_{1}}}\left(\langle\mathcal{O}\rangle-a_{0}\right)^{1/2}. \tag{5}\]
In the presence of the local contact operators \(\mathcal{Q}_{i}^{(6)}\) in Eqn. 2, the inferred numerical value is shifted by
\[\bar{\alpha}_{s}\rightarrow\hat{\alpha}_{s}+\delta\alpha_{s}^{\mathcal{O}}, \tag{6}\]
with
\[\delta\alpha_{s}^{\mathcal{O}}\equiv-\frac{4\pi\,C_{k}^{(6)}}{a_{1}\,\Lambda^ {2}}\,\langle in|\mathcal{Q}_{k}^{(6)}|out\rangle. \tag{7}\]
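For concreteness, the extraction in Eqn. 4 and the shift in Eqn. 7 amount to the following schematic sketch (all numerical inputs below are illustrative placeholders, not fitted values):

```python
import numpy as np

def extract_alpha_s(obs, a0, a1):
    """Invert the truncated series <O> ~ a0 + a1 * alpha_s / (4*pi), Eqn. 4."""
    return 4 * np.pi * (obs - a0) / a1

def delta_alpha_s(C6, Lam, matrix_element, a1):
    """Input parameter correction of Eqn. 7 (Lam is the EFT cutoff, in GeV)."""
    return -4 * np.pi * C6 / (a1 * Lam**2) * matrix_element

print(extract_alpha_s(obs=1.05, a0=1.0, a1=5.3))   # ~0.119, illustrative
print(delta_alpha_s(C6=1.0, Lam=2000.0, matrix_element=10.0, a1=5.3))
```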
One can account for the shift in the input observable used to fix \(\hat{\alpha}_{s}\) by inserting Eqn. 6 into the prediction for another observable. For example, performing this replacement for gluon fusion Higgs production at leading order in \(\alpha_{s}\) in the \(m_{t}\rightarrow\infty\) limit, one finds
\[\frac{\sigma_{LO}^{SMEFT}}{\sigma_{LO}^{SM}}\simeq 1-\frac{8\pi}{\bar{ \alpha}_{s}\,a_{1}}\,\text{Re}\left[\frac{C_{k}^{(6)}}{\Lambda^{2}}\,\langle pp |\mathcal{Q}_{k}^{(6)}|h\rangle\right]. \tag{8}\]
This correction can affect global fits of Higgs properties in the SMEFT, and the size of the correction depends on \(\langle in|\mathcal{Q}_{k}^{(6)}|out\rangle/\Lambda^{2}\) in the EFT, in addition to the UV matching pattern inducing \(C_{k}^{(6)}\). Here \(\Lambda\) is the cut off scale in the EFT, and can take on various values in a particular matching, fixing \(C_{k}^{(6)}/M_{UV}^{2}\). Noting this distinction clarifies current debates in the literature on the validity of the SMEFT from a model independent [1; 22; 23; 24; 5; 6], or model dependent perspective [8; 9].
**III. Extractions of \(\hat{\alpha}_{s}\)** Input parameters can be fixed by one measurement, or by the combination of measurements. Many input parameters are chosen so that the former condition holds, while \(\hat{\alpha}_{s}\) is inferred from a combination of measurements. See Ref. [25] for a recent review. This is the key problem to overcome in including input parameter extractions for \(\hat{\alpha}_{s}\).
Consider a set of \(\mathcal{O}_{a}\) measurements of \(\hat{\alpha}_{s}\) with \(a=1,\cdots,n\) and experimental/statistical and theoretical errors added in quadrature defining \(\epsilon_{a}\). Each measurement also has a SMEFT input parameter correction \(\delta\alpha_{s}^{\mathcal{O}_{a}}\). In least squares error propagation the combined input parameter correction is
\[\delta\alpha_{s}^{\mathcal{O}_{(a)}}=\frac{\sum_{i=1}^{n}\prod_{a=1}^{n} \epsilon_{a}^{2}\,\delta\alpha^{\mathcal{O}_{i}}/\epsilon_{i}^{2}}{\sum_{i=1}^ {n}\prod_{a=1}^{n}\epsilon_{a}^{2}/\epsilon_{i}^{2}}. \tag{9}\]
For a SM input parameter to be fixed in a useful fashion from a measurement or a set of measurements, the observables should be consistent with the SM prediction(s). If a set of measurements is used, they should be consistent with one another, and all consistent with the SM. Usually this occurs when one experimental extraction also has the smallest error, so that the case \(\epsilon_{i}\ll\epsilon_{j}\) for all \(j\neq i\) is of interest. In this case, \(\delta\alpha_{s}^{\mathcal{O}_{(a)}}\sim\delta\alpha^{\mathcal{O}_{i}}\).
The input parameter dependence of \(\hat{\alpha}_{s}\) is a non-trivial effect on SMEFT global measurements that has not been appropriately characterized in the literature, or taken into account in global fits. See Refs. [26; 27; 28; 29] for recent examples of such fits. The key challenge is the vastly different input parameter effects that different extractions of \(\hat{\alpha}_{s}\) induce, and the further complication that global combinations of such extractions lead to.
Consider the case of the latest PDG global average of \(\hat{\alpha}_{s}(\hat{m}_{Z})=0.1179\pm 0.0009\). This average results from the combination of \(\hat{\alpha}_{s}\) measurements summarized in Table 1. Using Eqn. 9 this results in a net input parameter correction in terms of these sub-classes of extractions
\[\delta\alpha_{s}^{\mathcal{O}_{(a)}}\simeq 0.6\,\delta\alpha^{\text{Lat}}+0.1\, \delta\alpha^{\tau/Q^{2}}+0.1\,\delta\alpha^{PDF}+0.05\,\delta\alpha^{\text{ had}}+0.05\,\delta\alpha^{\text{ew}}+0.04\,\delta\alpha^{e^{+}e^{-}}+0.03\, \delta\alpha^{Q\bar{Q}}. \tag{10}\]
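The weights in Eqn. 10 follow directly from Eqn. 9, which for uncorrelated measurements reduces to inverse-variance weighting. A short numerical check using the errors in Table 1:

```python
import numpy as np

# Per-class errors from Table 1, in the order of Eqn. 10.
labels = ["Lat", "tau/Q^2", "PDF", "had", "ew", "e+e-", "QQbar"]
eps = np.array([0.0008, 0.0019, 0.0020, 0.0028, 0.0028, 0.0031, 0.0037])

w = (1 / eps**2) / np.sum(1 / eps**2)  # Eqn. 9 for uncorrelated errors
for lab, wi in zip(labels, w):
    print(f"{lab:8s} {wi:.2f}")
# Lat 0.62, tau/Q^2 0.11, PDF 0.10, had 0.05, ew 0.05, e+e- 0.04, QQbar 0.03
```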
The detailed input parameter dependence requires each of the sub-classes of \(\hat{\alpha}_{s}\) extractions to themselves be expanded into specific observables, as each measurement can have different SMEFT input parameter corrections. It is important to minimize the number of input parameter effects to include in global SMEFT fits.
The most important effect of this input parameter correction is on inclusive \(\sigma(\mathcal{GG}\to h)\) production in the SMEFT. This is the actual result related to the schematic Eqn. 8. Building on the analytic NLO result (including \(\delta^{2}\) corrections beyond quadratic terms) recently developed in Refs. [31; 32], we add the \(\hat{\alpha}_{s}\) input parameter correction to the SMEFT perturbation, resulting in Eqn. 11.
In some matching scenarios fixing Wilson coefficients in the SMEFT, it is possible to expect that this input parameter correction can dominate over other SMEFT corrections included in global fits. In particular, this can occur for some \(\delta\Delta\) corrections included in the last two lines of Eqn. 11. Minimizing this possibility, with an eye towards the most robust model independent approach, is an interesting global fit methodology to adopt.
\[\frac{\sigma^{\hat{\alpha}}_{\text{SMEFT}}(\mathcal{GG}\to h)}{ \hat{\sigma}_{\text{SM},m_{t}\to\infty}(\mathcal{GG}\to h)}\simeq 1+658\,\tilde{C}^{(6)}_{HG}+289\,\tilde{C}^{(6)}_{HG} \Big{(}\tilde{C}^{(6)}_{H\square}-\frac{1}{4}\tilde{C}^{(6)}_{HD}\Big{)}+4.68 \times 10^{4}\,(\tilde{C}^{(6)}_{HG})^{2}+289\,\tilde{C}^{(8)}_{HG}\] \[\phantom{\frac{\sigma^{\hat{\alpha}}_{\text{SMEFT}}(\mathcal{GG} \to h)}{\hat{\sigma}_{\text{SM},m_{t}\to\infty}(\mathcal{GG}\to h)}}+17\, \delta\alpha^{\mathcal{G}_{(\alpha)}}_{s}+0.85\Big{(}\tilde{C}^{(6)}_{H \square}-\frac{1}{4}\tilde{C}^{(6)}_{HD}\Big{)}-0.91\,\tilde{C}^{(6)}_{uH}-7.2 6\,\text{Re}\,\tilde{C}^{(6)}_{uG}-0.60\,\delta G^{(6)}_{F}\] \[\phantom{\frac{\sigma^{\hat{\alpha}}_{\text{SMEFT}}(\mathcal{GG} \to h)}{\hat{\sigma}_{\text{SM},m_{t}\to\infty}(\mathcal{GG}\to h)}}-4.42\, \text{Re}\,\tilde{C}^{(6)}_{uG}\,\log\Big{(}\frac{\hat{m}^{2}_{h}}{\Lambda^{2 }}\Big{)}-0.126\,\text{Re}\,\tilde{C}^{(6)}_{dG}\,\log\Big{(}\frac{\hat{m}^{2 }_{h}}{\Lambda^{2}}\Big{)}-0.057\,\text{Re}\,\tilde{C}^{(6)}_{dG}+2.06\,\tilde {C}^{(6)}_{dH}. \tag{11}\]
As expected, extractions of \(\hat{\alpha}_{s}\) with the smallest error dominate the input parameter dependence. Lattice QCD determinations have the smallest quoted errors and result from mapping Lattice simulation results to \(\hat{\alpha}_{s}\). In this case, the SMEFT corrections to SM extractions are due to how the simulation and subsequent mapping to \(\hat{\alpha}_{s}\) is modified in the SMEFT. We argue in the following section that \(\delta\alpha^{\text{Lat}}\sim 0\) for current extractions of this parameter and that using the Lattice derived value in SMEFT global fits is a promising approach. In the Appendix we demonstrate how the input parameter dependence is modified when one uses a Lattice derived value combined with an \(\hat{\alpha}_{s}\) extraction via \(R(Q)\) where \(Q^{2}\ll\hat{m}^{2}_{z}\).
**IV. Lattice QCD Extractions** Lattice QCD determinations of \(\hat{\alpha}_{s}(\mu)\) quote \(\hat{\alpha}_{s}(m_{z})=0.1182\pm 0.0008\). This is a \(\lesssim 1\%\) error; see Ref. [33] for details. In such determinations, non-perturbative parameters such as \(\Lambda_{QCD}\) (or related decay constants \(f_{\pi}\), \(f_{K}\)) are simulated, leading to extractions of \(\hat{\alpha}_{s}(\mu)\) by a non-perturbative definition of these parameters in terms of \(\hat{\alpha}_{s}\) and its running
\[\frac{\Lambda}{\mu}\equiv(b_{0}\bar{g}_{3}^{2})^{-b_{1}/(2b_{0}^{2})}\ e^{-1/(2b_{0}\,\bar{g}_{3}^{2})}\,\exp\left[-\int_{0}^{\bar{g}_{3}(\mu)}\mathrm{d}x\left(\frac{1}{\beta(x)}+\frac{1}{b_{0}\,x^{3}}-\frac{b_{1}}{b_{0}^{2}\,x}\right)\right], \tag{12}\]
where \(N_{f}\) is the number of active flavours and
\[\beta = -b_{0}\,x^{3}-b_{1}\,x^{5}+\cdots \tag{13}\] \[b_{0} = (11-2N_{f}/3)\,/(4\pi)^{2},\ b_{1}=(102-38N_{f}/3)\,/(4\pi)^{4}.\]
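A sketch of evaluating Eqn. 12 numerically at two-loop order follows; we assume \(N_{f}=5\) and \(\hat{\alpha}_{s}(\hat{m}_{Z})=0.118\), and drop higher beta coefficients, so the result is illustrative only.

```python
import numpy as np
from scipy.integrate import quad

Nf = 5
b0 = (11 - 2 * Nf / 3) / (4 * np.pi) ** 2
b1 = (102 - 38 * Nf / 3) / (4 * np.pi) ** 4

def integrand(x):
    # 1/beta + 1/(b0 x^3) - b1/(b0^2 x) with beta = -b0 x^3 - b1 x^5,
    # simplified algebraically to avoid cancellation near x = 0.
    return -b1**2 * x / (b0**2 * (b0 + b1 * x**2))

def Lambda_over_mu(g2):           # g2 = g_3(mu)^2, Eqn. 12 at two loops
    integral, _ = quad(integrand, 0.0, np.sqrt(g2))
    return (b0 * g2) ** (-b1 / (2 * b0**2)) * np.exp(-1 / (2 * b0 * g2) - integral)

g2 = 4 * np.pi * 0.118            # assumed alpha_s(m_Z)
print(Lambda_over_mu(g2) * 91.19, "GeV")  # ~0.21 GeV
```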
These determinations include errors associated to the non-perturbative Hadronic effects, finite Lattice spacing, and statistical and systematic simulation errors. In addition, perturbative truncation errors of the map from the non-perturbative parameter to the series in \(\bar{\alpha}_{s}\) are also assigned. A useful characterization is summarized in two types of errors on \(\Lambda_{QCD}\), which is used to map to \(\bar{\alpha}_{s}(\mu)\) [33]
\[\left(\frac{\nabla\Lambda}{\Lambda}\right)_{\nabla\alpha_{s}} = \frac{\nabla\bar{\alpha}_{s}(\mu)}{8\pi b_{0}\bar{\alpha}_{s}( \mu)^{2}}\left[1+\mathcal{O}(\bar{\alpha}_{s}(\mu))\right], \tag{14}\] \[\left(\frac{\nabla\Lambda}{\Lambda}\right)_{trunc} = k\bar{\alpha}_{s}(\mu)^{n_{1}}+\mathcal{O}(\bar{\alpha}_{s}^{n_{ 1}+1}(\mu)). \tag{15}\]
\begin{table}
\begin{tabular}{|c|c|c|} \hline Lattice & \(0.1182\pm 0.0008\) & \(\delta\alpha^{\text{Lat}}\) \\ \hline \(\tau\) decay, low \(Q^{2}\) & \(0.1178\pm 0.0019\) & \(\delta\alpha^{\tau/Q^{2}}\) \\ \hline PDF fits & \(0.1162\pm 0.0020\) & \(\delta\alpha^{PDF}\) \\ \hline Hadronic Obs. & \(0.1165\pm 0.0028\) & \(\delta\alpha^{\text{had}}\) \\ \hline EW fits & \(0.1208\pm 0.0028\) & \(\delta\alpha^{\text{ew}}\) \\ \hline \(e^{+}e^{-}\) jets/shapes & \(0.1171\pm 0.0031\) & \(\delta\alpha^{e^{+}e^{-}}\) \\ \hline \(Q\bar{Q}\) decays & \(0.1181\pm 0.0037\) & \(\delta\alpha^{QQ}\) \\ \hline \end{tabular}
\end{table}
Table 1: Preaveraged results for sub-classes of measurements of \(\hat{\alpha}_{s}\) used in the PDG [30]. The first column labels the sub-class of measurements, the second the averaged result for such \(\hat{\alpha}_{s}\) determinations reported in the PDG, and the last column is the label for the corresponding input parameter correction.
The first error captures statistical and systematic errors, and the latter is a truncation error with \(k\propto b_{n_{1}+1}\) and is \(\mathcal{O}(1)\) in \(\overline{\text{MS}}\). The perturbative error is truncated at fourth order, i.e. \(n_{1}=5\). Here we have changed notation from Ref. [33], \(\Delta\to\nabla\), in the equations above to maintain our notational convention that \(\Delta\) indicates a loop correction. Numerically, both forms of error are of similar order of magnitude, but the former (statistical) errors are currently dominant, see Refs. [33; 34].
For Lattice \(\hat{\alpha}_{s}(m_{z})\) extractions, the leading SMEFT corrections are not due to the appearance of higher dimensional operators in an experimental observable, but due to the fact that the simulation results determining this parameter are inaccurate because the SM is the wrong low energy theory to simulate.
The leading SMEFT effects in Lattice determinations in the geoSMEFT are the modified running effects changing the scale dependence of the bare canonically normalized coupling \(\bar{g}_{3}\), and how such running corrections feed into the non-perturbative determination of \(\Lambda_{QCD}\) and related decay constants mapping to an extracted \(\hat{\alpha}_{s}(m_{z})\).
The effect of SMEFT corrections on the running of \(\bar{g}_{3}\) is limited, as such renormalizations can be calculated in the phase of the theory with manifest \(\text{SU}(2)_{\text{L}}\times\text{U}(1)_{\text{Y}}\) symmetry [35; 36; 37; 38]. The only effects from \(\mathcal{L}^{(6,8,\cdots)}\) that can modify the SM running of \(\bar{g}_{3}\) must be proportional to \(m_{h}^{2}\), as this is the only mass scale in the theory in this phase. This fact limits modifications of the running of the SM parameters to a small set of operators, and these effects come about only from closed scalar loops.
Defining the SMEFT operators
\[\mathcal{Q}^{(6)}_{HG} =(H^{\dagger}H)\,G^{A}_{\mu\nu}\,G^{\mu\nu}_{A}, \qquad\mathcal{Q}^{(8)}_{HG} =(H^{\dagger}H)^{2}\,G^{A}_{\mu\nu}\,G^{\mu\nu}_{A},\] \[\mathcal{Q}^{(6)}_{H\Box} =(H^{\dagger}H)\Box(H^{\dagger}H),\qquad\mathcal{Q}^{(6)}_{HD} =\left(H^{\dagger}D_{\mu}H\right)^{*}\left(H^{\dagger}D^{\mu}H\right),\]
the leading SMEFT correction to the running of \(\bar{g}_{3}\) comes from Fig. 1 (upper left) and is [39]
\[\left(\mu\frac{d\bar{g}_{3}}{d\mu}\right)_{SMEFT}=\left(\mu\frac{d\bar{g}_{3} }{d\mu}\right)_{SM}-\frac{\lambda\,\bar{g}_{3}}{2\pi^{2}}\,\tilde{C}^{(6)}_{ HG}+\cdots \tag{16}\]
This correction scales as a loop correction and an operator correction, i.e. \(\sim\Delta\delta\). This SMEFT effect is usefully thought of as a \(\delta\Delta b_{0}\) correction in the relationship
\[\frac{1}{\bar{\alpha}_{s}(\mu_{2})}=\frac{1}{\bar{\alpha}_{s}(\mu_{1})}+8\pi\left(b_{0}+\delta\Delta b_{0}\right)\log\frac{\mu_{2}}{\mu_{1}}+\cdots \tag{17}\]
where
\[\delta\Delta b_{0}\equiv\frac{1}{8\pi^{3}}\frac{\lambda}{\bar{\alpha}_{s}}\,\tilde{C}^{(6)}_{HG}. \tag{18}\]
The calculable dependence on the IR physics parameters of the SM gives \(\delta\Delta b_{0}\simeq 4\times 10^{-3}\times\tilde{C}^{(6)}_{HG}\). As \(\Lambda\gg\bar{v}_{T}\) is required for the SMEFT approach to be predictive, this correction will be even further suppressed in observables. A useful comparison is \(\delta\Delta b_{0}\ll b_{1}\bar{g}_{3}\) for \(\Lambda\gg\bar{v}_{T}\) when \(C^{(6)}_{HG}\lesssim 1\). However, for \(\Lambda\sim 2\,\text{TeV}\) and \(C^{(6)}_{HG}\sim 1\), numerically \(\delta\Delta b_{0}\sim b_{2}\bar{g}_{3}^{2}\). The domination of statistical errors (i.e. Eqn. 14) in current Lattice determinations over the size of these corrections is a critical numerical fact justifying taking \(\delta\alpha^{\text{Lat}}\sim 0\).
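The quoted size of this correction can be reproduced directly from Eqn. 18. The SM inputs below, including the normalization \(\lambda\simeq m_{h}^{2}/(2\bar{v}_{T}^{2})\), are assumptions of this estimate (the self-coupling normalization is convention dependent):

```python
import numpy as np

mh, vT = 125.25, 246.22            # GeV, assumed SM inputs
lam = mh**2 / (2 * vT**2)          # Higgs self-coupling, ~0.129
alpha_s = 0.118

print(lam / (8 * np.pi**3 * alpha_s))  # ~4.4e-3 per unit tilde-C_HG^(6), Eqn. 18
```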
In the case that the matching coefficient \(\tilde{C}^{(6)}_{HG}\sim\Delta\), i.e. is loop suppressed [40], as can be expected in a purely renormalizable perturbative UV completion of the SM at higher scales [41; 42], the errors in Lattice determinations are even farther in excess of the SMEFT corrections induced on the running of \(\bar{\alpha}_{s}\). In this case, it is safe to use the global average value of \(\bar{\alpha}_{s}\) with no input parameter correction. However, as such an extra loop suppression can still be numerically significant, the effect of \(\mathcal{Q}^{(8)}_{HG}\) modifying the running of \(\bar{g}_{3}\) is also of interest. (The loop suppression pattern for renormalizable UV completions in matching coefficients can differ at \(\mathcal{L}^{(8)}\)[42] compared to \(\mathcal{L}^{(6)}\).) The naive expectation is that in principle, this can lead to a \(1/\epsilon^{2}\) pole at two loops modifying the running of \(\bar{g}_{3}\) via Fig. 1 (upper right). Consistent with dimensional regularization conventions at two loops in Refs. [43; 44], such poles are not present and the renormalization takes place via the counterterm introduced to \(\mathcal{Q}^{(6)}_{HG}\) at one loop (see Fig. 1 bottom), given in Ref. [45]
Figure 1: Running effects of \(C^{(6)}_{HG}\) and \(C^{(8)}_{HG}\) modifying the QCD \(\beta\) function. The operator insertion is indicated with a black dot.
\[\mu\frac{d}{d\mu}\frac{C^{(6)}_{HG}}{\Lambda^{2}}\supset\frac{m_{H}^{2}}{16\pi^{2} \,\Lambda^{4}}\left(-14\,C^{(6)}_{H\Box}\,C^{(6)}_{HG}+4C^{(6)}_{HD}\,C^{(6)}_{ HG}-12(C^{(6)}_{HG})^{2}-6\,C^{(8)}_{HG}\right). \tag{19}\]
Such \({\cal L}^{(8)}\) corrections are absorbed, via "mixing down", into the numerical effects of \(C^{(6)}_{HG}\) at lower mass scales.
**V. Conclusions** As a summary of this discussion, these results indicate that currently one can take \(\delta\alpha^{\rm Lat}\sim 0\). Our recommendation for dealing with the input parameter dependence of \(\hat{\alpha}_{s}\) is to use the Lattice average value as the numerical input parameter, and to neglect \(\delta\alpha_{s}^{{\cal O}_{(a)}}\) when this is done in global SMEFT fits.
## Acknowledgements
We acknowledge the Villum Fund, project number 00010102 and thank Perimeter Institute and the University of Toronto for hospitality. We thank T. Corbett, A. Martin and A. Manohar for comments on the draft.
**Appendix: \(e^{+}e^{-}\) ratios at \(Q^{2}<m_{Z}^{2}\)** As an example of the Wilson coefficient parameter proliferation that occurs when \(\alpha_{s}\) extractions from other sources are combined with Lattice extractions (with no improvement in the resulting error on the input parameter), consider the study of \(e^{+}e^{-}\) event shapes and \(\sigma(e^{+}e^{-}\to\) hadrons). Several extractions of \(\hat{\alpha}_{s}\) follow from studying inclusive/exclusive \(\sigma(e^{+}e^{-}\to\) Hadrons) at low energy. These \(\sigma\) are dominated by single \(\gamma\) exchange for \(Q^{2}<m_{Z}^{2}\).
Consider the case of inclusive
\[R(Q)=\frac{\sigma(e^{+}e^{-}\to{\rm had})}{\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})}=\sum_{f}N_{c}^{f}\;Q_{f}^{2}\left(1+\Delta_{QCD}+\delta_{SMEFT}^{f}\right). \tag{20}\]
Using the geoSMEFT, the leading \(\hat{\alpha}_{s}\) input parameter corrections can be characterized as follows: the canonically normalized photon coupling to fermion fields \(f\) is given by
\[\langle\gamma|\bar{f}_{p}f_{r}\rangle = -\bar{e}\,Q_{f}\,\delta_{pr}\,\bar{f}_{p}\,\not{\epsilon}_{\gamma}\,f_{r}. \tag{21}\]
The canonically normalized electric coupling \(\bar{e}\) cancels out in the constructed ratio. This coupling is defined in the SMEFT in Refs. [11; 12]. The leading SMEFT contribution is [46]
\[\delta_{SMEFT}^{f} = -\frac{Q^{2}/(8\,\pi\,\bar{v}_{T}^{2})}{\hat{\alpha}_{ew}\,Q_{f}}\,{\rm Re}\left(\bar{C}^{e,f}_{LL,RR}+\bar{C}^{e,f}_{LR}\right) - \frac{Q^{2}/(8\,\pi\,\bar{v}_{T}^{2})}{\hat{\alpha}_{ew}\,\sum_{f}N_{c}^{f}\,Q_{f}^{2}}\,{\rm Re}\left(\bar{C}^{e,\mu}_{LL,RR}+\bar{C}^{e,\mu}_{LR}\right) \tag{22}\]
for \(f\) summed over the final state quarks leading to the Hadrons, \(\alpha_{ew}\) is taken to be real, and each \(C\) is summed over the contributing set of chiral four fermion operators in the Warsaw basis. If considering this input parameter correction to order \(\delta^{2}\), the \(\delta\) difference between the measured \(\hat{\alpha}_{ew}\) and \(\bar{e}\) defined via Eqn. 21 needs to be included, modifying Eqn. 22, in a chosen input parameter scheme.
The series in \(\bar{\alpha}_{s}\) defining \(\Delta_{QCD}\) is known up to \({\cal O}(\alpha_{s}^{4})\)[47; 48] and this series can be used to extract \(\hat{\alpha}_{s}\) from experimental measurements of \(R(Q)\). The first two terms in the series are
\[\Delta_{QCD}=\frac{\bar{\alpha}_{s}}{\pi}+(1.9857-0.1152\,N_{f})\frac{\bar{ \alpha}_{s}^{2}}{\pi^{2}}+\cdots \tag{23}\]
Extractions of \(\hat{\alpha}_{s}\) from measurements of this ratio assign errors due to the neglected \({\cal O}(\alpha_{s}^{5})\) terms and also from corrections of \({\cal O}(\Lambda_{QCD}^{4}/Q^{4})\) due to neglected power corrections and light quark masses. The latter corrections are known and reported in Refs. [49; 50; 51]. Although the series in \(\bar{\alpha}_{s}\) generally converges more slowly than a naive estimate based on \(\bar{\alpha}_{s}/4\pi\), it is instructive to compare the following ratios
\[\frac{\alpha_{s}^{5}}{\pi^{5}}\sim 8\times 10^{-8},\quad\frac{\Lambda_{QCD}^{4}}{Q^{4}}\sim 8\times 10^{-7}\,\frac{\Lambda_{QCD}^{4}}{[300\,{\rm MeV}]^{4}}\,\frac{[10\,{\rm GeV}]^{4}}{Q^{4}},\]

\[\frac{C}{8\,\pi\hat{\alpha}_{ew}}\frac{Q^{2}}{\Lambda^{2}}\sim 5\times 10^{-4}\,C\,\frac{[1\,{\rm TeV}]^{2}}{\Lambda^{2}}\,\frac{Q^{2}}{[10\,{\rm GeV}]^{2}}.\]
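These reference numbers are straightforward to reproduce; the values of \(\bar{\alpha}_{s}\) and \(\hat{\alpha}_{ew}\) below are assumptions of this sketch:

```python
import numpy as np

alpha_s, alpha_ew = 0.118, 1 / 137.036          # assumed reference values
LambdaQCD, Q, Lam, C = 0.3, 10.0, 1000.0, 1.0   # GeV units

print((alpha_s / np.pi) ** 5)                        # ~8e-8
print((LambdaQCD / Q) ** 4)                          # ~8e-7
print(C / (8 * np.pi * alpha_ew) * (Q / Lam) ** 2)   # ~5e-4
```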
For interesting values of \(\Lambda\) to constrain from Hadron collider studies, the last correction is larger than the quoted current theoretical errors, due to its enhancement by an inverse power of \(\hat{\alpha}_{ew}\). Consider the case that an averaged value of \(\hat{\alpha}_{s}\) to use in global SMEFT studies is defined by a Lattice extraction of \(\hat{\alpha}_{s}(\hat{m}_{z})\) and an extraction from CLEO data [52] using \(R(Q)\) reported in Ref. [47] as \(\hat{\alpha}_{s}(\hat{m}_{z})=0.1198\pm 0.0015\). In this (hypothetical) case the input parameter correction for this \(R(Q)\) extraction would be
\[\delta\alpha_{s}^{R(Q)}\simeq-\frac{\pi}{\sum_{f}N_{c}^{f}Q_{f}^{2}}\delta_{SMEFT} ^{f} \tag{24}\]
and the total \(\hat{\alpha}_{s}(\hat{m}_{z})\) input parameter correction is
\[\delta\alpha_{s}^{\mathcal{O}_{(a)}}\simeq 0.22\,\delta\alpha_{s}^{R(Q)} \tag{25}\]
as the Lattice input parameter correction is taken to be negligible, and the global fit correction is scaled by the relative errors of the measurements feeding into the average. \(\hat{\alpha}_{s}\) extractions from \(R(Q)\) are pre-averaged with other exclusive Hadronic measurements that are not completely independent in the recent PDG global \(\hat{\alpha}_{s}\) average [30]. Such combinations and pre-averages complicate determining the input parameter correction, as this leads to weighted sums over the particular \(\delta_{SMEFT}^{f}\) for a Hadronic decay, and other input parameter corrections for alternate extractions of \(\hat{\alpha}_{s}\) similar to Eqn. 10.
We also note that these are extractions of \(\hat{\alpha}_{s}\) from multi-dimensional fits to experimental observables, with a non-perturbative parameter, or quark mass, simultaneously fit to. The input parameter dependence for \(\hat{\alpha}_{s}\) can be degenerate with a shifted value of a non-perturbative parameter simultaneously fit to in such cases, and with the errors assigned to such parameters. The input parameter dependence in such measurements would appear in predictions if such simultaneously fitted parameters (such as quark masses) are also used to make other collider predictions.
|
2310.09755 | Beyond Segmentation: Road Network Generation with Multi-Modal LLMs | This paper introduces an innovative approach to road network generation
through the utilization of a multi-modal Large Language Model (LLM). Our model
is specifically designed to process aerial images of road layouts and produce
detailed, navigable road networks within the input images. The core innovation
of our system lies in the unique training methodology employed for the large
language model to generate road networks as its output. This approach draws
inspiration from the BLIP-2 architecture arXiv:2301.12597, leveraging
pre-trained frozen image encoders and large language models to create a
versatile multi-modal LLM.
Our work also offers an alternative to the reasoning segmentation method
proposed in the LISA paper arXiv:2308.00692. By training the large language
model with our approach, the necessity for generating binary segmentation
masks, as suggested in the LISA paper arXiv:2308.00692, is effectively
eliminated. Experimental results underscore the efficacy of our multi-modal LLM
in providing precise and valuable navigational guidance. This research
represents a significant stride in bolstering autonomous navigation systems,
especially in road network scenarios, where accurate guidance is of paramount
importance. | Sumedh Rasal, Sanjay Kumar Boddhu | 2023-10-15T06:46:15Z | http://arxiv.org/abs/2310.09755v1 | # Beyond Segmentation: Road Network Generation with Multi-Modal LLMs +
###### Abstract
This paper introduces an innovative approach to road network generation through the utilization of a multi-modal Large Language Model (LLM). Our model is specifically designed to process aerial images of road layouts and produce detailed, navigable road networks within the input images. The core innovation of our system lies in the unique training methodology employed for the large language model to generate road networks as its output. This approach draws inspiration from the BLIP-2 architecture [1], leveraging pre-trained frozen image encoders and large language models to create a versatile multi-modal LLM.
Our work also offers an alternative to the reasoning segmentation method proposed in the LISA paper [2]. By training the large language model with our approach, the necessity for generating binary segmentation masks, as suggested in the LISA paper [2], is effectively eliminated. Experimental results underscore the efficacy of our multi-modal LLM in providing precise and valuable navigational guidance. This research represents a significant stride in bolstering autonomous navigation systems, especially in road network scenarios, where accurate guidance is of paramount importance.
Multi-Modal Language Models, Road Network Generation, Autonomous Navigation
## 1 Introduction
In recent years, the field of large language models (LLMs) has witnessed a remarkable transformation, transitioning from text-based generation to the generation of diverse modalities, including text, images, audio, and video, all through a single LLM. This evolution, from ChatGPT-4 [3] to a myriad of multi-modal LLMs, has significantly advanced the capabilities of AI systems to process and understand various forms of data.
These multi-modal LLMs are designed to emulate the holistic perceptual abilities of humans, enabling them to process and generate content in more versatile ways. Unlike previous models, such as ChatGPT-4 [3], MiniGPT-4 [4], LISA [2], and others [5], which aimed to be general-purpose multi-modal models [6][7], our work introduces a novel approach that tailors the training of such models to address a specific challenge: generating navigable road networks from aerial images.
In the groundbreaking LISA paper [2], the authors introduced a novel concept of utilizing both text and images within a large language model, pioneering the use of segmentation masks. While innovative, this concept prompted us to question whether segmentation masks are indispensable for this task. Could we achieve similar outcomes by training a large language model differently, while adhering to the general multi-modal architecture principles highlighted in models like MiniGPT-4 [4], NextGPT [8], BLIP-2 [1], and others?
Our model, known as NavGPT, is built upon the architecture of MiniGPT-4, pairing the Vicuna language model [9] with the visual components introduced in BLIP-2 [1]. NavGPT reuses the crucial modification introduced in MiniGPT-4's [4] architecture: a projection layer that aligns the encoded visual features with the Vicuna language model while keeping all other vision and language components frozen [10][11][12]. During training, we provide the model with a JSON file listing the
image name and the precise coordinates of all road networks found in the image. The Q-Former module is trained to bridge its output with the frozen large language model (Vicuna).
This innovative training approach not only allows researchers to address unique problems using open-sourced pre-trained large language models but also serves as a testament to the versatility of large language models. In this paper, we highlight one such use case. As we transition this system into production, we aim to share insights into the challenges we've encountered, thereby demonstrating the validity of our novel training technique for mastering a wide array of tasks using an LLM. Training our model required just one A100 GPU for approximately 26 hours, highlighting the cost-effectiveness of retraining such models to cater to personalized use cases. Our objective is to showcase that large language models offer a novel solution to addressing the challenges of map-making, potentially paving the way for achieving an autonomous navigable world.
In essence, our paper makes the following pivotal contributions:
* _Reconstructing Perception_: Leveraging the potential of large language models, we propose a novel image-instruction pair that includes the image's identifier and precise coordinates of the road network(s). This pairing empowers the large language model to develop an intrinsic comprehension of road network identification when confronted with aerial view images.
* _Segmentation Simplified_: Our unique training approach obviates the need for segmentation masks in large language models. By dispensing with the step of generating training data for region-of-interest segmentation and architecturally eschewing the production of segmented masks as model outputs, we streamline the training process and enhance efficiency.
* Our model builds upon the robust architecture of MiniGPT-4 [4]. However, it distinguishes itself by requiring a mere 10,000 image-instruction pairs for training. Remarkably, even in a zero-shot setting, our model demonstrates commendable performance, underscoring its efficiency in relation to the retraining effort. This demonstrates that upcoming multi-modal large language models can effectively address a wide array of challenges, many of which may not be solely text-based.
In summary, our research builds upon the recent advancements in multi-modal LLMs [2][13][1][8][14][4][15], providing a focused and innovative solution to the task of generating navigable road networks from aerial images. By leveraging a tailored training approach, NavGPT aims to empower autonomous navigation systems, particularly in scenarios where precise navigational guidance is essential.
## 2 Related Works
### Evolution of Large Language Models
The quest for progress in Artificial General Intelligence (AGI) has been an enduring aspiration within the research community, with numerous tools and methodologies explored [16][17]. However, a true breakthrough remained elusive. Everything changed with the introduction of GPT-3, particularly the emergence of ChatGPT [18], which harnessed the power of GPT-3. The journey of GPT-based models has been a remarkable evolution, marked by continuous advancements. The most recent stride, embodied in ChatGPT powered by GPT-4 [3], has redefined the landscape of general artificial intelligence.
Yet, the inner workings of GPT-4 [3] remained shrouded in mystery for the broader research community. This enigma persisted until the advent of open-sourced models such as Vicuna [9] and LLaMa [19][20][21]. Each of these models brought noteworthy enhancements in terms of retraining possibilities and the quality of inferred model outputs. Notably, Meta's recent release of LLaMa-2 [22] designed for commercial applications, represents a significant development. This newfound accessibility is expected to foster innovation across various domains.
### Foundational Multi-Modal Large Language Models
As Large Language Models (LLMs) [9][23][21][19][22][17][24][25][26][27][28][29][30][31][32] began to conquer text generation challenges, a world of new possibilities unfurled. Notably, they exhibited an improved grasp of contextual nuances, leading to more coherent text-to-text conversations. Extensive efforts were dedicated to incorporating human-in-the-loop feedback, refining the conversational finesse of LLMs. However, it was inevitable that the research landscape would expand beyond text-to-text generation alone.
Soon enough, the research community shifted its focus to text-and-image conversations as inputs for LLMs, heralding the era of Multi-Modal Large Language Models. The fundamental idea driving this approach involved training the
models to comprehend images by processing visual features through dedicated encoders. These features were then seamlessly integrated into LLMs as input. This methodology greatly facilitated the LLMs' capacity to interpret images, harnessing the valuable information embedded in image-caption pairs during the training phase.
### Advancing Multi-Modal Large Language Models
In recent times, the latest iterations of multi-modal large language models [2][13][1][14][8][4] have embraced versatility, accommodating an array of input formats. These formats encompass text, images, videos, and audio, reflecting the dynamic nature of contemporary data sources [15]. Correspondingly, these models exhibit a harmonious synergy between inputs and outputs. For each distinct format, a dedicated encoder and decoder are seamlessly integrated. This structural architecture empowers advanced systems like ChatGPT-4 [3] to seamlessly process input data and generate corresponding outputs.
Nevertheless, even these advanced systems are not immune to imperfections, particularly within their encoders and decoders. These systems may encounter challenges related to information loss during data transformation, leading to instances where the broader contextual understanding is compromised. This paper introduces novel solutions to address a few of these challenges.
## 3 Method
While multi-modal large language models offer numerous advantages, a significant hurdle lies in the generation of training data. Our model is rooted in the architectural framework of MiniGPT-4 [4], designed to establish synergy between visual data from a pre-trained vision encoder and a sophisticated large language model (LLM). We leverage Vicuna [9] as the language decoder, introducing a pioneering training method for road network identification within images.
In terms of visual perception, we adopt a similar approach to the visual encoder employed in BLIP-2 [1], harnessing a Vision Transformer [33] backbone in conjunction with their pre-trained Q-Former [1]. To facilitate seamless communication between the visual encoder and the LLM, we introduce a novel training procedure, augmenting the MiniGPT-4 [4] projection layer. This innovative approach bridges the gap, enabling effective collaboration between the two components.
Figure 1: NavGPT architecture, based on MiniGPT-4. Source: [4].
### Data Collection and Preprocessing
Automating road navigation presents a formidable challenge, primarily due to the labor-intensive nature of data collection. However, our work benefits from operating within the spatial domain, where obtaining a portion of the required training data is relatively straightforward. We are privileged to have access to satellite imagery for specific global regions. In this study, we focus on training the model using imagery sourced from the Western European region.
To infuse a higher level of naturalness into the generated language and elevate the model's overall utility, we advocate for the importance of road navigation segment training. Unfortunately, datasets suitable for the vision-language domain, especially in the context of road navigation, are virtually non-existent. To overcome this gap, we painstakingly assembled a detailed image description dataset, crafted with the sole purpose of facilitating the alignment of vision and language. During the alignment phase of our NavGPT model, this dataset is deployed to fine-tune the model for enhanced performance.
### Novel Training Approach
NavGPT, our innovative approach, deviates from the reliance on segmentation masks suggested by LISA [2]. Our model draws inspiration from MiniGPT-4 [4], which is itself influenced by BLIP-2 [13][1]. The pivotal component, the projection layer coupled with the Q-Former [1], empowers our model to assess the presence or absence of a road network in a given image.
This accomplishment is realized through training the model with an image-instruction pair file that explicitly indicates whether a given image contains a road network. Here is a snippet of the JSON file contents, which sheds light on our training data.
{ "image_id": "54534_33840", "caption": "Found a road" }, { "image_id": "54537_33868", "caption": "No roads found" } We conducted model training for 5,000 steps and observed promising results in a zero-shot setting. This initial success ignited our curiosity about the model's potential. What if, in addition to recognizing the presence of a road network within an image, we could train it to pinpoint the image coordinates of the road network? To explore this hypothesis, we required a substantial dataset.
Fortunately, we had access to various road geometries in the Western European region. Leveraging HERE's internal aerial imagery service, we initiated image queries in regions where road network geometry overlapped. The images had dimensions of 1280 by 1280 pixels. These overlapping regions corresponded to areas with well-defined road networks. A Python function was employed to perform these queries and extract the image coordinates of the road network (in the form of a line string) for 10,000 such scenarios.
Notably, approximately 40% of the images did not contain overlapping road network geometry. However, these images were still included in the original set of 10,000 instructions. This inclusion served the dual purpose of enabling the model to discern the presence or absence of a road network and, if present, to provide accurate image coordinates.
{ "image_id": "54537_33867", "caption": Found 1 road. Image coordinates are as follows: [[(219, 114), (283, 271)]]" }, { "image_id": "54537_33879", "caption": "Found 2 roads. Image coordinates are as follows: [[(0, 775), (0, 731), (644, 28)], [(365, 0), (629, 3), (644, 28)]]" }
## 4 Experiment
In this section, we embark on a journey to unravel the diverse and emerging capabilities of our NavGPT model. Through a series of qualitative experiments, we shed light on NavGPT's remarkable proficiency in a spectrum of navigation-based tasks, showcasing its advanced abilities compared to traditional vision-language models. Refer figures 2 3
To assess our model, we implemented a straightforward method to compare its output with the ground truth data acquired during the training phase. We reserved a set of 100 image-instruction pairs for the testing phase.
Table 1 presents the model's accuracy in discerning the presence or absence of a road network in an image, with a score of 0.69.
In Table 2, we focus on the model's ability to identify the number of roads within a given image, achieving an accuracy score of 0.37. We believe that further refinement can be achieved by extending the model's training to 2,000 - 5,000 additional steps.
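A sketch of the scoring behind these tables follows; the caption-parsing rules are our assumption, since the paper specifies only a straightforward comparison of model output against ground truth:

```python
import re

def predicts_road(caption):
    """Parse road presence from a generated caption (parsing rule assumed)."""
    return not caption.lower().startswith("no roads")

def road_count(caption):
    m = re.match(r"Found (\d+) road", caption)
    return int(m.group(1)) if m else 0

def accuracy(predictions, ground_truths, parse):
    """predictions/ground_truths: caption strings for the 100 held-out pairs."""
    hits = sum(parse(p) == parse(t) for p, t in zip(predictions, ground_truths))
    return hits / len(ground_truths)
```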
Unlike prior multi-modal large language models, which were crafted to be general-purpose, NavGPT steps beyond this constraint to address real-world challenges in a relevant context.
### Limitations
Given the NavGPT model training approach, we are moving away from a general-purpose multi-modal language model [2][13][1][14][8] towards a navigation-based model, which strips away a small amount of the general-purpose utility of this model. In exchange, the context of our training data improves the model's overall representation for solving and understanding one of the most critical problems we face from an autonomous-world perspective.
The model exhibits a reasonable performance, although it does encounter some limitations. In our forthcoming research within this domain, we intend to expand the range of scenarios presented in the model and undertake retraining to enhance its overall accuracy.
## 5 Conclusion
In conclusion, NavGPT presents a novel way to train multi-modal large language models. By diverging from traditional segmentation masks and leveraging the architecture of MiniGPT-4 [4], influenced by BLIP-2 [13][1], NavGPT empowers itself to discern and describe road networks in images. Our approach, founded on the projection layer and the Q-Former [1], offers a nuanced understanding of visual content, surpassing the capabilities of its predecessors.
Through extensive training with image-instruction pairs, NavGPT has demonstrated impressive abilities in identifying and describing road networks within images. The model's success, even in zero-shot settings, has inspired further exploration. By introducing image coordinates into the training process, we aim to unlock even greater potential.
Our work showcases the advancements in multi-modal large language models and addresses a real-world problem under the right context. NavGPT's contribution extends beyond the scientific community, offering practical applications for developing autonomous navigation systems.
In summary, NavGPT represents a remarkable leap in multi-modal AI, heralding a new era in understanding and generating diverse modalities from text and images, with the potential to transform the field of AI in ways we are only beginning to fathom.
## Acknowledgments
We extend our gratitude to HERE North America LLC for generously providing the hardware necessary for model training and conducting our experiments. We also appreciate HERE for granting us access to their aerial imagery service and the road network line strings that were instrumental in NavGPT's training.
Figure 3: Experiment 2: Identify road network |
2302.06677 | System identification of neural systems: If we got it right, would we
know? | Artificial neural networks are being proposed as models of parts of the
brain. The networks are compared to recordings of biological neurons, and good
performance in reproducing neural responses is considered to support the
model's validity. A key question is how much this system identification
approach tells us about brain computation. Does it validate one model
architecture over another? We evaluate the most commonly used comparison
techniques, such as a linear encoding model and centered kernel alignment, to
correctly identify a model by replacing brain recordings with known ground
truth models. System identification performance is quite variable; it also
depends significantly on factors independent of the ground truth architecture,
such as stimuli images. In addition, we show the limitations of using
functional similarity scores in identifying higher-level architectural motifs. | Yena Han, Tomaso Poggio, Brian Cheung | 2023-02-13T20:32:37Z | http://arxiv.org/abs/2302.06677v2 | # System identification of neural systems:
###### Abstract
Artificial neural networks are being proposed as models of parts of the brain. The networks are compared to recordings of biological neurons, and good performance in reproducing neural responses is considered to support the model's validity. A key question is how much this system identification approach tells us about brain computation. Does it validate one model architecture over another? We evaluate the most commonly used comparison techniques, such as a linear encoding model and centered kernel alignment, to correctly identify a model by replacing brain recordings with known ground truth models. System identification performance is quite variable; it also depends significantly on factors independent of the ground truth architecture, such as stimuli images. In addition, we show the limitations of using functional similarity scores in identifying higher-level architectural motifs.
Machine Learning, Neural Networks
## 1 Introduction
Over the last two decades, the dominant approach for machine learning engineers in search of better performance has been to use standard benchmarks to rank networks from most relevant to least relevant. This practice has driven much of the progress in the machine learning community. A standard comparison benchmark enables the broad validation of successful ideas. Recently such benchmarks have found their way into neuroscience with the advent of experimental frameworks like Brain-Score (Schrimpf et al., 2020) and Algonauts (Cichy et al., 2021), where artificial models compete to predict recordings from real neurons in animal brains. Can engineering approaches like this be helpful in the natural sciences?
The answer is clearly yes: the "engineering approach" described above ranks models that predict neural responses better as better models of animal brains. While such rankings may be a good measure of absolute performance in approximating the neural responses, which on its own is valuable for various applications (Bashivan et al., 2019), it is an open question whether they are sufficient. In neuroscience, understanding natural intelligence at the level of the underlying neural circuits requires developing model systems that reproduce the abilities of their biological analogs while respecting the constraints provided by biology, including anatomy and biophysics (Marr and Poggio, 1976; Schaeffer et al., 2022). A model that reproduces neural responses well but turns out to require connectivity or biophysical mechanisms that are different from the biological ones is thereby falsified.
Consider the conjecture that the similarity of responses between model units and brain neurons allows us to conclude that brain activity fits better, for instance, a convolutional motif rather than a dense architecture. If this were true, it would mean that functional similarity over large data sets effectively constrains architecture. Then the need for a separate test of the model at the level of anatomy would become, at least in part, less critical for model validation. Therefore, we ask the question: could functional similarity be a reliable predictor of architectural similarity?
We describe an attempt to benchmark the most popular similarity techniques by replacing the brain recordings with data generated by various known networks with drastically different architectural motifs, such as convolution vs. attention. Such a setting provides a valuable upper bound to the identifiability of anatomical differences.
### System identification from leaderboards
When artificial models are compared against common biological benchmarks for predictivity (Yamins and DiCarlo, 2016), models with the top score are deemed better models for neuroscience. As improvements to scores are made over time, ideally, more relevant candidates emerge. Nevertheless, if two artificial models with distinctly different architectures, trained on the same data, happen to be similar in reproducing neural activities (target model), then it would be impossible to conclude what accounts for the
similarity. It could be the biologically relevant _motifs from each architecture_, the properties of the _stimulus input_, or the _similarity metric_. Such ambiguity is due to the many-to-one mapping of a model onto a leaderboard score. Our work shows that multiple factors play a role in representational similarities.
An interesting example is offered by Chang et al. (2021), which compares many different models with respect to their ability to reproduce neural responses in the inferotemporal (IT) cortex to face images. The study concludes that the 2D morphable model is the best, even though the operations required in the specific model, such as correspondence and vectorization, do not have an apparent biological implementation in terms of neurons and synapses. This highlights that there are multiple confounds besides the biological constraints which affect neural predictivity.
## 2 Related Work
While the analogy between neural network models and the brain has been well validated (Bashivan et al., 2019), the extent of this correspondence across multiple levels (Marr and Poggio, 1976) has been taken for granted. This assumed correspondence could be attributed to methodological limitations of evaluating such models simultaneously across all levels. Jonas and Kording (2017) investigated the robustness of standard analysis techniques in neuroscience with a microprocessor as a ground-truth model to determine the boundaries of what conclusions could be drawn about a known system. The presumption of correspondence could also be attributed to underappreciated variability from model hyperparameters (Schaeffer et al., 2022). In a similar spirit to Jonas and Kording (2017); Lazebnik (2002), we evaluate system identification on a known ground-truth model to establish the boundaries of what architectural motifs can be reliably uncovered. We perform our analysis under favorable experimental conditions to establish an upper bound.
As modern neural network models have grown more prominent in unison with the corresponding resources to train these models, pre-trained reference models have become more widely available in research (Wightman, 2019). Consequently, the need to compare these references along different metrics has followed suit. Kornblith et al. (2019); Morcos et al. (2018) explored using different similarity measures between the layers of artificial neural network models. Kornblith et al. (2019) propose various properties a similarity measure should satisfy, such as invariance to orthogonal transformations and isotropic scaling, while not being invariant to invertible linear transformations. Kornblith et al. (2019) found centered kernel alignment (CKA), a method very similar to Representation Similarity Analysis (Kriegeskorte et al., 2008), to best satisfy these requirements. Ding et al. (2021) explored the sensitivity of methods like canonical correlation analysis, CKA, and orthogonal procrustes distance to changes in factors that do not impact the functional behavior of neural network models.
## 3 Background and Methods
The two predominant approaches to evaluating computational models of the brain are using metrics based on linear encoding analysis for neural predictivity and population-level representation similarity. The first measures how well a model can predict the activations of individual units, whereas the second metric measures how correlated the variance of internal representations is. We study the following neural predictivity scores consistent with the typical approaches: Linear Regression and Centered Kernel Alignment (CKA).
In computational neuroscience, we usually have a neural system (brain) that we are interested in modeling. We call this network a _target_ and the proposed candidate model a _source_. Formally, for a layer with \(p_{x}\) units in a source model, let \(X\in\mathbb{R}^{n\times p_{x}}\) be the matrix of representations with \(p_{x}\) features over \(n\) stimulus images. Similarly, let \(Y\in\mathbb{R}^{n\times p_{y}}\) be a matrix of representations with \(p_{y}\) features of the target model (or layer) on the same \(n\) stimulus images. Unless otherwise noted, we subsample 3000 target units to test an analogous condition as in biology, where recordings are far from exhaustive. Our analyses partially depend upon the target coverage, and we later examine the effect of increasing the number of target units.
### Encoding Model: Linear Regression
Following the procedure developed by previous works (Schrimpf et al., 2020; Yamins et al., 2014; Conwell et al., 2021; Kar et al., 2019; Mitchell et al., 2008), we linearly project the feature space of a single layer in a source model to map onto a single unit in a target model (a column of \(Y\)). The linear regression score is the Pearson correlation coefficient \(r(\cdot,\cdot)\) between the predicted responses of a source model and the ground-truth target responses to a set of stimulus images.
\[\hat{\beta}=\text{argmin}_{\beta}||Y-XS\beta||_{F}^{2}+\lambda||\beta||_{F}^{2} \tag{1}\]

\[LR(X,Y)=r(XS\hat{\beta},Y) \tag{2}\]
We first extract activations on the same set of stimulus images for source and target models. To reduce computational costs without sacrificing predictivity, we apply sparse random projection \(S\in\mathbb{R}^{p_{x}\times q_{x}}\) for \(q_{x}<<p_{x}\), on the activations of the source model (Conwell et al., 2021). This projection reduces the dimensionality of the features to \(q_{x}\) while still preserving relative distances between points (Li
et al., 2006). Unlike principal component analysis, sparse random projection is a dataset-independent dimensionality reduction method. This removes any data-dependent confounds from our processing pipeline for linear regression and isolates dataset dependence into our variables of interest: linear regression and candidate model.
We apply ridge regression on every layer of a source model to predict a target unit using these features. We use nested cross-validations in which the regularization parameter \(\lambda\) is chosen in the inner loop and a linear model is fitted in the outer loop. The list of tested \(\lambda\) is \([0.01,0.1,1.0,10.0,100]\). We use 5-fold cross-validation for both inner and outer loops. As there are multiple target units, the median of Pearson's correlation coefficients between predicted and true responses is the aggregate score for layer-wise comparison between source and target models. Note that a layer of a target model is usually assumed to correspond to a visual area, e.g., V1 or IT, in the visual cortex. For a layer-mapped model, we report maximum linear regression scores across source layers for target layers.
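For concreteness, the following is a minimal Python sketch of this scoring pipeline, assuming scikit-learn's `SparseRandomProjection` and `RidgeCV`. For brevity it shares one ridge penalty across all target units per training split, whereas the procedure above fits each target unit separately; it is an illustration rather than the authors' exact code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from sklearn.random_projection import SparseRandomProjection

def linear_regression_score(X, Y, q_x=1024, lambdas=(0.01, 0.1, 1.0, 10.0, 100.0)):
    """Median Pearson r between predicted and true target responses.

    X: (n, p_x) source-layer activations; Y: (n, p_y) target-unit responses.
    """
    # Dataset-independent dimensionality reduction of the source features (S).
    Xs = SparseRandomProjection(n_components=q_x, random_state=0).fit_transform(X)

    preds = np.zeros_like(Y, dtype=float)
    # Outer 5-fold CV yields held-out predictions; RidgeCV's internal CV
    # selects the regularization parameter lambda on each training split.
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(Xs):
        model = RidgeCV(alphas=lambdas, cv=5).fit(Xs[train], Y[train])
        preds[test] = model.predict(Xs[test])

    # One correlation per target unit; the median is the layer-wise score.
    rs = [pearsonr(preds[:, j], Y[:, j])[0] for j in range(Y.shape[1])]
    return float(np.median(rs))
```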
### Centered Kernel Alignment
Another widely used type of metric builds upon the idea of measuring the representational similarity between the activations of two neural networks for each pair of images. While variants of this metric abound, including RSA or re-weighted RSA (Kriegeskorte et al., 2008; Khaligh-Razavi et al., 2017), we use CKA (Cortes et al., 2012) as Kornblith et al. (2019) showed strong correspondence between layers of models trained with different initializations, which we will further discuss as a validity test we perform. We consider linear CKA in this work:
\[\text{CKA}(X,Y)=\frac{||Y^{T}X||_{F}^{2}}{||X^{T}X||_{F}||Y^{T}Y||_{F}} \tag{3}\]
Kornblith et al. (2019) showed that the variance explained by (unregularized) linear regression accounts for the singular values of the source representation. In contrast, linear CKA depends on the singular values of both target and source representations. Recent work (Diedrichsen et al., 2020) notes that linear CKA is equivalent to a whitened representational dissimilarity matrix (RDM) in RSA under certain conditions. We also call CKA a neural predictivity score because a target network is observable, whereas a source network gives predicted responses.
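Eq. (3) is straightforward to implement; below is a minimal Python sketch (our illustration, not released code) that column-centers both representations first, since linear CKA is defined on centered features.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representations X (n, p_x) and Y (n, p_y), Eq. (3)."""
    # Center each feature column; CKA assumes centered representations.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Squared Frobenius norm of the cross-covariance, normalized by the
    # Frobenius norms of the two auto-covariances.
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```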
### Identifiability Index
To quantify how selective predictivity scores are when a source matches the target architecture compared to when the architecture differs between source and target networks, we define an identifiability index as:
\[\text{Identifiability Index}=\frac{\text{score}(s=t)-\overline{\text{score}}(s \neq t)}{\text{score}(s=t)+\overline{\text{score}}(s\neq t)} \tag{4}\]
where \(s\) is the source or candidate model, and \(t\) is the target model. In brief, it is a normalized difference between the score for the true positive and the mean score (\(\overline{\text{score}}\)) for the true negatives. Previous works (Dobs et al., 2022; Freiwald and Tsao, 2010) defined selectivity indices in the same way in similar contexts, such as the selectivity of a neuron to specific tasks.
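In code, Eq. (4) is a one-liner; the sketch below (our illustration) takes the score of the matched source and the scores of all mismatched sources.

```python
import numpy as np

def identifiability_index(score_match, scores_mismatch):
    """Eq. (4): normalized contrast between the true-positive score
    and the mean true-negative score."""
    mean_mismatch = float(np.mean(scores_mismatch))
    return (score_match - mean_mismatch) / (score_match + mean_mismatch)

# Example: a matched score of 0.9 against mismatched scores around 0.8
# gives a small positive index, reflecting weak identifiability.
print(identifiability_index(0.9, [0.82, 0.78, 0.80]))  # ~0.059
```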
### Identification of architectures in an ideal setting
### Simulated Environment
If a target network is a brain, it is essentially a black box, making it challenging to understand the properties or limitations of the comparison metrics. Therefore, we instead use artificial neural networks of our choice as targets for our experiments.
We investigate the reliability of a metric to compare models, mainly to discriminate the underlying computations specified by the model's architecture. We intentionally create favorable conditions for identifiability in a simulated environment where the ground truth model is a candidate among the source models. Taking these ideal conditions further, our target and source models are deterministic and do not include adverse conditions typically encountered in biological recordings, such as noise and temporal processing. We consider the following architectures:
**Convolutional:** AlexNet (Krizhevsky et al., 2012), VGG11 (Simonyan and Zisserman, 2014), ResNet18 (He et al., 2016)

**Recurrent:** CORnet-S (Kubilius et al., 2019)

**Transformer:** ViT-B/32 (Dosovitskiy et al., 2020)

**Mixer:** MLP-Mixer-B/16 (Tolstikhin et al., 2021)

These architectures are emblematic of the vision-based models used today. Each architecture has a distinct motif, making it unique from other models. For example, transformer networks use the soft-attention operation as a core motif, whereas convolutional networks use convolution. Recurrent networks implement feedback connections which may be critical in the visual cortex for object recognition (Kubilius et al., 2019; Kar et al., 2019). Moreover, mixer networks (Tolstikhin et al., 2021; Touvron et al., 2021; Melas-Kyriazi, 2021) uniquely perform fully-connected operations over image patches, alternating between the feature and patch dimensions.

Figure 1: Linear regression scores of deep neural networks for brain activations in the macaque visual cortex. Architecture list in Section A.1. For V1, the top performing three models are in the VOneNet family (Dapello et al., 2020), which are explicitly designed to mimic the known properties of V1.
## 4 Results
### Different models with equivalent neural predictivity
We compare various artificial neural networks with publicly shared neural recordings in primates (Majaj et al., 2015; Freeman et al., 2013) via the Brain-Score framework (Schrimpf et al., 2020). Our experiments show that the differences between markedly different neural network architectures are minimal after training (Figure 1), consistent with the previous work (Schrimpf et al., 2020; Kubilius et al., 2019; Conwell et al., 2021). Previous works focused on the relative ranking of models and investigated which model yields the highest score. However, if we take a closer look at the result, the performance difference is minimal, with the range of scores having a standard deviation \(<0.03\) (for V2=0.021, V4=0.023, IT=0.016) except for V1. For V1, VOneNets (Dapello et al., 2020), which explicitly build in properties observed from experimental works in neuroscience, significantly outperform other models. Notably, the models we consider have quite different architectures based on combinations of various components, such as convolutional layers, attention layers, and skip connections. This suggests that architectures with different computational operations reach almost equivalent performance after training on the same large-scale dataset, i.e., ImageNet.
One potential interpretation of the result would be that different neural network architectures are equally good (or bad) models of the visual cortex. An alternative explanation would be that the method we use to compare models with the brain has limitations in identifying precise computational operations. To test this hypothesis, we focus on settings where the underlying target neural networks are known instead of being a black box, as with biological brains. Specifically, we replace the target neural recordings with artificial neural network activations. By examining whether the candidate source model with the highest predictivity is identical to the target model, we can evaluate to what extent we can identify architectures with the current approach.

Figure 2: Linear regression scores as a function of target network layer. Different initialization seeds are used for source networks, except for one (MLP-Mixer-B/16*), which uses identical weights. Black dots \(\bullet\) indicate that the correct architecture does not outperform others with statistically significant differences (\(p>0.01\)). Red dots \(\bullet\) indicate that the median scores for those networks are higher than the median for the correct architecture. The ranking of source models is typically based on median scores (Schrimpf et al., 2020).
#### 4.1.1 Linear Regression
We first compare various source models (AlexNet, VGG11, ResNet18, CORnet-S, ViT-B/32, MLP-Mixer-B/16) with a target network that has the same architecture as one of the source models and is trained on the same dataset but initialized with a different seed. We test a dataset composed of 3200 images of synthetic objects studied in (Majaj et al., 2015) to be consistent with the evaluation pipeline of Brain-Score. The ground-truth source model will yield a higher score than other models if the model comparison pipeline is reliable. For most target layers, source networks with the highest median score are the correct network (Figure 2). However, strikingly, for several layers in VGG11, ResNet18, and CORnet-S, the best-matched layers belong to a source model that is not the correct architecture. In other words, given the activations of ResNet18, for instance, and based on linear regression scores, we would make an incorrect prediction that the system's underlying architecture is closest to the recurrent neural network CORnet-S. This is especially noteworthy in that the prediction leads to an incorrect inference about the presence of recurrent connections in the target network.
In addition, because of our ideal setting, where an identical network is one of the source models, we expect to see a significant difference between matching and non-matching models. However, for multiple target layers, linear regression scores for the non-identical architectures, when compared with those for the identical one, do not show a significant decrease in predictivity based on Welch's t-test with \(p<0.01\) applied as a threshold (Figure 2). This result suggests that the identification of the underlying architectures of unknown neural systems is far from perfect.
#### 4.1.2 Centered Kernel Alignment
Next, we examine another widely used metric, CKA, for comparing representations. Again, we compare different source models to a target model, also an artificial neural network. For the target models we tested, the ground-truth source models achieve the highest score (Figure 3). Still, some unmatched source networks lead to scores close to the matched networks, even for the target MLP-Mixer-B/16, where the source network of the same architecture type also has identical weights.
When applying CKA to compare representations, we subsample a set (3000) of target units to mimic the limited coverage of single-unit recordings. Assuming we can increase the coverage for future experiments with more advanced
Figure 3: CKA for different source and target networks. The experimental setup is identical to Figure 2 besides using CKA instead of linear regression as the metric. As in Figure 2, when we test MLP-Mixer-B/16 as a target and the source network type matches the target, weights are identical. We show the results for MLP-Mixer-B/16 in bar plots with a pattern to indicate the difference from other targets.
Figure 4: **Top** Sample images of each stimulus image type. **(a)** Identifiability index using CKA and **(b)** linear regression for different types of stimulus images and target networks.
measurement techniques, we test whether the identifiability improves if we include all target units. Additionally, methods similar to CKA, such as RSA, are often applied to interpret neural recordings, including fMRI, MEG, and EEG (Cichy and Oliva, 2020; Cichy and Pantazis, 2017), which can have full coverage of the entire brain or a specific region of interest. Therefore, we simulate such analyses by having all units in the targets. Overall, the ground-truth source models outperform the other source models with a significant margin (Figure 6. More experimental results are in the Appendix). This suggests system identification can be more reliable with more recording units and complete target coverage.
### Effects of the stimulus distribution on identifiability
A potentially significant variable overlooked in comparing computational models of the brain is the type of stimulus images. What types of stimulus images are suited for evaluating competing models? In Brain-Score, stimulus images for comparing models of the high-level visual areas, V4 and IT, are images of synthetic objects (Majaj et al., 2015). In contrast, those for the lower visual areas, V1 and V2, are images of texture and noise (Freeman et al., 2013). To examine the effect of using different stimulus images, we test images of synthetic objects (3200 images), texture and noise (135 images), and ImageNet (3000 images), which are more natural images than the first two datasets.
In Figure 4, we analyze the Identifiability Index for different stimulus images. More realistic stimulus images (i.e., synthetic objects and ImageNet) show higher identifiability than texture and noise images for all target models. We observe that identifiability increases with layer depth. Notably, even for early layers in target models, which would correspond to V1 and V2 in the visual cortex, texture and noise images fail to give higher identifiability. Also, between images of synthetic objects and ImageNet, ImageNet shows higher identifiability. As target models are trained on ImageNet, our results suggest that using stimulus images closer to the images that targets see will help identify architectures better.
It is important to note that the images of texture and noise we use in the experiment help characterize certain aspects of V1 and V2 in the previous work (Freeman et al., 2013). More specifically, the original work investigated the functional role of V2 in comparison with V1 by showing that naturalistic structure modulates V2. Although the image set plays an influential variable in a carefully designed experiment for a more targeted hypothesis, it does not translate as a sufficient test set for any hypothesis, such as evaluating different neural networks.
### Challenges of identifying key architectural motifs
Interesting hypotheses for a more biologically plausible design principle of brain-like models often involve key high-level architectural motifs. For instance, potential questions are whether recurrent connections are crucial in visual processing or, with the recent success of transformer models in deep learning, whether the brain similarly implements computations like attention layers in transformers. The details beyond the key motif, such as the number of layers or exact type of activation functions, may vary and be underdetermined within the scope of such research questions. Likewise, it is unlikely that candidate models proposed by scientists align with the brain at every level, from low-level specifics to high-level computation. Therefore, an ideal methodology for comparing models should help separate the key properties of interest while being invariant to other confounds.

Figure 5: Different architectural variants (12 CNNs and 14 ViTs) are compared with two CNNs and ViT target networks. Each data point is the maximum score of an architecture for corresponding target layers. Solid markers indicate the mean score of the corresponding model class, and error bars are standard deviations. Open markers (\(\circ\)) indicate that corresponding layers do not show a statistically significant difference between model classes.
Considering it is a timely question, with the increased interest in transformers as models of the brain in different domains (Schrimpf et al., 2021; Berrios and Deza, 2022; Whittington et al., 2021), we focus on the problem of identifying convolution vs. attention. We test 12 Convolutional Networks and 14 Vision Transformers of different architectures (list in Appendix A.2), and to maximize identifiability, we use ImageNet stimulus images. Note that an identical architecture with the target network is not included as a source network.
Figure 5 shows that for both CKA and regression, there is high inter-class variance for many target layers. For CKA, 1 layer in VGG13, 7 layers in ResNet34, and 7 layers in ViT-L/16, and for regression, 3 layers in VGG13, 6 layers in ResNet34, and 1 layer in ViT-L/16 do not show a statistically significant difference between the two model classes based on Welch's t-test with \(p<0.01\) used as a threshold. The significant variance among source models suggests that model class identification can be incorrect depending on the precise architectural variant we choose, especially if we rely on a limited set of models.
## 5 Discussion
Under idealized settings, we tested the identifiability of various artificial neural networks with differing architectures. We present two contrasting interpretations of model identifiability based on our results, one optimistic (Glass half full) and one pessimistic (Glass half empty).
**Glass half full:** Despite the many factors that can lead to variable scores, linear regression and CKA give reasonable identification capability under unrealistically ideal conditions. Across all the architectures tested, identifiability improves as a function of depth.
**Glass half empty:** However, system identification is highly variable and dependent on the properties of the target architecture and the stimulus data used to probe the candidate models. For architecture-wide motifs, like convolution vs. attention, scores overlap significantly across almost all layers. This indicates that such distinct motifs do not play a significant role in the score.
Our results suggest two future directions for improving system identification with current approaches: 1) Using stimuli images that are more natural, i.e., closer to the inputs to the target network (brain) in a natural setting. 2) With more neurons recorded in the brain, neural predictivity scores can be more reliable in finding the underlying architecture.
On the other hand, it is worthwhile to note that we may have reached close to the ceiling using neural predictivity scores for system identification. As an example, when our source network is AlexNet, its regression scores against the brain (Figure 1) are on par with, or slightly higher than, the scores against another AlexNet (Figure 2). In other words, based on the current methods, AlexNet predicts the brain as well as, if not better than, predicting itself. This observation is not limited to AlexNet but applies to other target networks. This fundamental limitation of present evaluation techniques, such as the linear encoding analysis used in isolation, emphasizes the need to develop new approaches beyond comparing functional similarities.
As we argued earlier, ranking models in terms of their agreement with neural recordings is the first step in verifying or falsifying a neuroscience model. Since several different models are very close in ranking, the next step - architectural validation - is the key. Furthermore, it may have to be done independently of functional validation and with little guidance from it, using standard experimental tools in neuroscience. A parallel direction is, however, to try to develop specially designed, critical stimuli to distinguish between different architectures instead of measuring the overall fit to data. As a simple example, it may be possible to discriminate between dense and local (e.g., CNN) network architectures by measuring the presence or absence of interactions between parts of a visual stimulus that are spatially separated.
## Acknowledgements
This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. Y. Han is a recipient of the Lore Harp McGovern Fellowship.
|
2301.06327 | Scientific value of the quantum tests of equivalence principle in light
of Hilbert's sixth problem | In his sixth problem, Hilbert called for an axiomatic approach to theoretical
physics with an aim to achieve precision and rigour in scientific reasoning,
where logic and language (semantics) of physics play the pivotal role. It is
from such a point of view, we investigate the scientific value of the modern
experiments to perform quantum tests of equivalence principle. Determination of
Planck constant involves the use of acceleration due to gravity of the earth
$(g)$ that results in the force on a test mass. The equivalence between
inertial mass and gravitational mass of a test object is assumed in the process
of logically defining $g$ from the relevant hypotheses of physics.
Consequently, if Planck constant is used as input in any experiment (or in the
associated theory that founds such an experiment) that is designed to test the
equivalence between inertial and gravitational mass, then it is equivalent to
establish a scientific truth by implicitly assuming it i.e. a tautology. There
are several notable examples which plague the frontiers of current scientific
research which claim to make quantum test of equivalence principle. We question
the scientific value of such experiments from Hilbert's axiomatic point of
view. This work adds to the recently reported semantic obstacle in any
axiomatic attempt to put "quantum" and "gravity" together, albeit with an
experimental tint. | Abhishek Majhi, Gopal Sardar | 2023-01-16T09:38:31Z | http://arxiv.org/abs/2301.06327v1 | # Scientific value of the quantum tests of equivalence principle in light of Hilbert's sixth problem
###### Abstract
In his sixth problem, Hilbert called for an axiomatic approach to theoretical physics with an aim to achieve precision and rigour in scientific reasoning, where logic and language (semantics) of physics play the pivotal role. It is from such a point of view, we investigate the scientific value of the modern experiments to perform quantum tests of equivalence principle. Determination of Planck constant involves the use of acceleration due to gravity of the earth (\(g\)) that results in the force on a test mass. The equivalence between inertial mass and gravitational mass of a test object is assumed in the process of logically defining \(g\) from the relevant hypotheses of physics. Consequently, if Planck constant is used as input in any experiment (or in the associated theory that founds such an experiment) that is designed to test the equivalence between inertial and gravitational mass, then it is equivalent to establish a scientific truth by implicitly assuming it i.e. a tautology. There are several notable examples which plague the frontiers of current scientific research which claim to make quantum test of equivalence principle. We question the scientific value of such experiments from Hilbert's axiomatic point of view. This work adds to the recently reported semantic obstacle in any axiomatic attempt to put "quantum" and "gravity" together, albeit with an experimental tint.
## I Introduction
The significance of logic and language (semantics) in physics becomes evident from the study of Hilbert's sixth problem, namely, _Mathematical Treatment of the Axioms of Physics_[1; 2], and the associated modern research that has germinated from such roots, e.g., see ref.[3] and the references therein (also see refs.[4; 5; 6; 7]). If we consider Einstein's views[8], then "_physics constitutes a logical system of thought_" and "_the justification (truth content) of the system rests in the proof of usefulness of the resulting theorems on the basis of sense experiences, where the relations of the latter to the former can only be comprehended intuitively._" While the experimental observations ("_sense experiences_") must certainly have the final say on the essence of some theoretical construction, the inner consistency of the "_logical system of thought_" that underlies the theory is necessary for the whole process of reasoning to be considered scientific. Hilbert's call for axiomatization was aimed at achieving more precision and rigour in such reasoning. Such a call is obviously associated with a tension between the exactness of logical truths and the uncertain nature of experimental truths, the significance of which is manifested through Born's statement on p.81 of ref.[9]: "_a physical situation must be described by means of real numbers in such a way that the natural uncertainty in all observations is taken into account_". While such tension has motivated lines of research investigation that concern the intuitive refinement of the language of physics itself[4; 5; 6; 7; 10; 11], Hilbert's axiomatic point of view continues to retain its value in the mainstream literature[2; 3; 12; 13; 14; 15], where the equations of physics are used to discuss physical phenomena and working principles of experiments with exact quantities devoid of the natural uncertainties involved in the experimental measurements. Recently, by adopting such a Hilbertian point of view, one of us (A. M.[10]) has pointed out a semantic dilemma that one has to encounter in any attempt to axiomatically treat together "quantum" and "gravity" so as to pen down any theory of "quantum gravity"[16]. It is now our intent to investigate, from a similar point of view, the logic behind the modern experiments on the "quantum" tests of equivalence principle, which are, nevertheless, very much motivated by the "quantum gravity" mindset[17; 18; 19; 20; 21; 22].
An experiment is considered to have scientific value when (a) it verifies some hypothesis or theoretical prediction, or (b) it brings forth some newly observed phenomenon that is yet to be explained by theory. However, an experiment of type (a) can not depend on any input that explicitly or implicitly depends on the concerned hypothesis. If it does, then such an experiment verifies the truth by assuming it in the process. We consider this as a physical demonstration of a logical tautology. Such an experiment is devoid of any scientific value (unless it brings forth some hitherto unknown phenomenon that is not related to the concerned theory, i.e. type (b)). Here, we intend to discuss one such scenario that involves a certain class of experiments (type (a)) which are designed to perform "quantum" tests of the equivalence principle [17; 18; 19; 20; 21; 22], _where and henceforth, by "equivalence principle" we mean "equivalence between inertial mass and gravitational mass of an object" [23]_. These experiments inevitably involve the Planck constant (henceforth, to be denoted as "\(h\)"), which justifies the word "quantum". To be more particular, such experiments rely on atom interferometry, where the De Broglie wavelength (\(\lambda\)) of the respective matter wave, with momentum \(p\), needs to be calculated from the formula \(\lambda=h/p\), and for this calculation \(h\) is considered as a given input - see refs. [17; 18; 19; 20; 21; 22] and the relevant references therein.
We may express our concern in short as follows. Irrespective of the different measurement procedures to determine \(h\) employed till date, the corresponding theoretical analyses involve, either explicitly or implicitly, the use of \(g\), where by the symbol "\(g\)", we denote "acceleration due to gravity of the earth", or equivalently "gravitational field of the earth". For example, as we shall discuss here, in the case of the Kibble balance method, \(h\) is expressed in terms of \(g\). On the other hand, in the case of the photo-electron emission method, \(g\) gets implicitly incorporated into the analyses through the processes by which the "charge of an electron" is determined and used as an input. However, \(g\) can be _defined_ from the axioms and hypotheses of physics _if and only if the equivalence principle is assumed_. Therefore, the so-called "quantum tests of the equivalence principle" are attempts to physically demonstrate a logical tautology and are devoid of any scientific value from Hilbert's axiomatic point of view. We shall point out that the assumption of the equivalence between "inertial mass" and "gravitational mass" is exact, and not associated with uncertainties, in the experiments that determine \(h\). So, the "quantum tests of the equivalence principle" can not be interpreted as a recursive refinement of logic that verifies the equivalence principle with an uncertainty that is less than the uncertainty with which the equivalence principle is assumed to determine \(h\). In fact, the determination of \(h\) is based on the equivalence principle with zero or no uncertainty (i.e. exact), and then such \(h\) is used as an input in the "quantum tests of equivalence principle" to verify the equivalence principle with non-zero uncertainty, i.e., it is a case of adulteration (rather than refinement) of logic. Thus, if we take experimental uncertainty into account, the situation is even worse than a logical tautology.
To present our arguments, we begin with a discussion regarding the definition of \(g\), especially with a focus on how the assumption of equivalence principle is necessary for such a definition. Then, we discuss in what way the theoretical analyses for the different procedures of determining \(h\) are founded on the assumption of the equivalence principle due to the involvement of \(g\) and also due to the use of only one concept of "mass" regardless of any distinction as "inertial" and "gravitational". Finally, we conclude with a summary and some remarks regarding the status of this work in light of the new convention adopted by the science community in 2019[24].
## II Interpretations of the symbol "\(g\)"
There are two kinds of interpretations that we generally associate with the symbol "\(g\)". One is the _operational_ interpretation and the other is the _logical_ interpretation.
We provide the operational interpretation of "\(g\)" in terms of what we can measure and it is the "acceleration of a freely falling object due to earth's gravity" which we determine experimentally (e.g. by dropping objects). To mention, the "operational" viewpoint of any definition in physics was advocated by Mach[25]. We know that Einstein, who was highly influenced by Mach, took an "operational" approach to give meaning to "time" by stating our everyday experience of seeing the hands of a clock to mark the timing of an observed event like "arrival of the train at the station"[26]. The operational perspective was revisited in an elaborate fashion by Bridgman[27].
However, if we restrict ourselves to being _completely_ operational, we can not ask questions like the following - "how do we interpret that the acceleration is _due to earth's gravity_?" This is because, in order to have a reasonable answer to this question, we need to consider hypotheses, axioms, etc. of physics, namely Newton's laws of gravity and motion, to provide the logical explanations corresponding to "earth's gravity" through
mathematical expressions. So, we need to be both operational (i.e. induce from experience) and logical (i.e. deduce from assumed truths i.e. axioms, hypotheses, etc.) - we can not just stick to only one of the following stances - "a posteriori induction", "a priori deduction". One can consult ref.[28] for a discussion regarding such issues. We discuss, in what follows, that the equivalence principle is a _necessary_ proposition for the logical (deductive) definition of "\(g\)" [see ref.[29, 30] for some in depth discussion regarding such logical perspective of "definition"].
### Definition of "\(g\)" and the logical status of the equivalence principle
At first sight it may appear to be an utter stupidity to have a discussion regarding the definition of "\(g\)" as it is "too trivial". Nevertheless, keeping in mind that the discussion is regarding some logical issues, we believe that the definition must be put into formal terms, rather than in a colloquial language, so as to clear any doubts regarding any of the individual propositions involved in the process, irrespective of its apparent triviality[30]. Therefore, in order to avoid any misjudgment by the concerned reader regarding the definition of "\(g\)", we consider formal statements, regardless of its simplicity and familiarity in the science community. It is much like what Hadamard has done to clear the doubt regarding the statement of Huygens' principle in section (33) of ref.[31]. He has considered "simple formulae and statements" in the form of propositions and has made a formal logical analysis in order to resolve doubts: "_But, however simple the preceding formulae and statements, they have, nevertheless, opened somewhat important and lengthy scientific discussions, of which we have now to speak and which refer to what is called Huygens' Principle_."
Apart from the above mentioned reasons, we adopt such a path based on formal logic so as to put an emphasis on the necessity of the assumption of equivalence principle, as a formal statement, for the deductive definition of "\(g\)". In what follows, the symbols "\(\wedge,\iff\,.=\)" denote "logical conjunction, logical equivalence (if and only if), defined as" respectively. In the context of writing down the propositions \(A\) and \(B\) citing ref.[32, 33] would have been relevant. However, those references do not contain the equations in the form that we use today. So, we have assumed that the reader is already acquainted with the Newton's laws.
* **Proposition**\(A\): Newton's laws of motion apply to _an object whose motion is to be studied as a whole_, called _test object_. Associated concept of "mass" is called inertial mass (\(m_{I}\)). Acting force is given by \[F=m_{I}a,\] (1) where \(a\) is the acceleration of the test object.
* **Proposition**\(B\): _The_ test object obeys Newton's law of gravitation. Associated concept of "mass" is called gravitational mass (\(m_{G}\)). Force of gravitation on the test object due to the earth is given by \[F=G\frac{m_{G}M_{earth}}{r^{2}}.\] (2) Here, \(G\) is a proportionality constant, \(M_{earth}\) is the gravitational mass (ignoring "active/passive" distinction, unlike in ref.[23]) of the earth and \(r\) is the distance between the test object and the earth, considering them as two points (point-masses).
* **Proposition**\(C\): Equivalence principle holds: \[m_{I}=m_{G},\] (3) where the meaning of the symbols "\(m_{I}\)" and "\(m_{G}\)" are explained by \(A\) and \(B\) respectively. **Corollary:**\(G\) is a fundamental constant if and only if \(C\) is true for any type of material i.e. \(m_{I(type)}=m_{G(type)}\) for any type of material like iron, aluminium, copper, etc.
* **Proposition**\(D\): Concept of "gravitational field due to earth", represented by the symbol "\(g\)", which is conceived through the observation of vertically falling objects towards the earth, is definable.
* **Proposition**\(E\): \(g\) is definable _if and only if_ Newton's laws of motion, Newton's law of gravitation and the equivalence principle hold simultaneously for the test object. Formally, \(((A\wedge B)\wedge C)\iff D\).
**Corollary:**\(E\) implies the definition of \(g\). Formally,
\[E\implies g:=\frac{GM_{earth}}{r^{2}}. \tag{4}\]
* **Proposition**\(F\): Considering \(g\) as input, experiments are modeled to determine \(h\). We call such models "\(g\)-experiments", i.e. \[g\text{-experiments}:(g\wedge\cdots).\] (5) The "dots" stand for propositions, other than \(D\), which need to hold for the modeling. Such propositions can be both theoretical and experimental in nature. By "experimental propositions" we mean the choices of appropriate physical conditions made by the experimenter in the laboratory.
Let us explicate the relevance of the above propositions and the nature of the logical structure of the same as follows:
* **Step 1:**\(A\) provides meaning to the symbol "\(m_{I}\)". \(B\) provides meaning to the symbol "\(m_{G}\)". Hence, \(C\) is _meaningless_ without either of \(A\) and \(B\). Consequently, \(C\) can neither be validated nor invalidated without \(A\) and \(B\) together. Also, validation or invalidation of \(C\) does not affect the validity of \(A\) or of \(B\).
* **Step 2:**\(D\) is an assertion about the possibility of theoretically defining the concept of "gravitational field due to the earth", represented by the symbol "\(g\)". \(E\) states the condition when such definition is possible by using \(A,B\) and \(C\). Thus, \(E\)_implies_ the definition of \(g\). In other words, the definition of \(g\) is implied by all the propositions from \(A\) to \(E\), albeit a one-sided implication.
* **Step 3:**\(F\) is the modeling of the \(g\)-experiments used to determine \(h\). The meaning associated with the symbol "\(g\)" is explained by Step 1 and Step 2.
Therefore, we can draw the following conclusion:
_Since \(C\) is necessary for the definition of \(g\), \(F\) presupposes \(C\), i.e., the validity of the equivalence principle goes in as a necessary assumption when \(g\) is used as an input to determine \(h\)._
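To make the deductive skeleton explicit, here is a minimal propositional sketch in Lean 4 (the encoding and names are our illustration, not part of the original argument): given \(E\) and the fact that any \(g\)-experiment presupposes \(D\), the equivalence principle \(C\) follows.

```lean
-- A, B: Newton's laws of motion and gravitation; C: equivalence principle;
-- D: "g is definable"; F: "a g-experiment is modeled" (so F presupposes D).
variable (A B C D F : Prop)

-- Proposition E is ((A ∧ B) ∧ C) ↔ D; from it and F → D we obtain F → C,
-- i.e., every g-experiment presupposes the equivalence principle.
example (hE : ((A ∧ B) ∧ C) ↔ D) (hF : F → D) : F → C :=
  fun hf => (hE.mpr (hF hf)).2
```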
### "Completely operational" or "logical contradiction"?
Although we have briefly argued just a bit earlier that we can not be completely operational, we would like to emphasize that point once again in light of the above logical analysis. One can certainly object to the conclusion (drawn in the earlier section) by arguing that, since \(g\) is only implied by the successive propositions (\(E\implies g\)) and the reverse is not necessarily true (i.e. it is not an "if and only if" condition and rather a one-sided implication), then the tautology can be avoided. Instead of the above explained meaning of "\(g\)", a complete operational meaning can be assigned to "\(g\)"as follows:
_The acceleration of a vertically falling object can be measured to have a value \(9.81\) m/s\({}^{2}\) without referring to Newton's law of gravitation. Then, this explains the meaning of "\(g\)", and provides its numerical value that has been used in the \(g\)-experiments. In this way, there is no necessity of \(B\) and hence, \(C,D,E\) become meaningless. Only \(A\) and the symbol "\(a\)" suffice and the introduction of the symbol "\(g\)" is altogether unnecessary. We can have just "\(a\)-experiments" to determine \(h\)._
However, in such a way of reasoning, the operational arguer _denies_ the knowledge of the Newtonian[32; 33] (as well as Einsteinian[34]) theory of gravity that explains the phenomenon of falling objects and, consequently, denies the knowledge of the concept of "gravitational mass". So, if the arguer now claims to examine the validity of the equivalence principle by using \(h\), then he should _accept_ the knowledge that he has already _denied_. Hence, the operational arguer runs into a logical contradiction, which is unacceptable.
At this point a diligent reader may be wondering whether our arguments imply that Eotvos et. al., in ref.[35], have made a logical contradiction. The answer is certainly negative. It is true that the authors have considered \(g\) in operational terms i.e. they used a directly measured value of \(g\) in their analyses. However, their motive has been to judge how close are \(g\) and \(a\) if we _do not assume the equivalence principle_ (e.g. see
Box 1.2 on page 16 of ref.[36]). Their case is different from the scenario where people consider the logical definition of \(g\) by assuming the validity of the equivalence principle and use the value of \(g\) in operational terms, if required, to determine some quantity, which they use in turn to test the validity of the equivalence principle. The issue will become clearer as we proceed and discuss what follows.
## III Principles behind determination of \(h\)
Now, let us discuss how the assumption of the equivalence principle is involved either explicitly or implicitly in the determination of \(h\). Broadly, there are two categories of such experiments: (i) the Kibble balance experiments e.g. see refs.[37; 38] and the relevant references therein (see ref.[39] for Kibble's original work) (ii) photo-electron emission experiments e.g. see refs.[40; 41] and the relevant references in ref.[41].
### Kibble balance experiments
In Kibble balance experiments [37; 38], a crucial step is the balancing between the gravitational force (due to the earth) on a test mass (\(m\)) (which includes the tare mass) and the force generated by a coil, carrying a current (\(I\)), placed perpendicularly in a magnetic field. So, the relevant equation is[37]:
\[F=mg=BLI \tag{6}\]
where \(L\) is the length of the coil and \(B\) is the magnetic flux density. This is called the weighing phase.
The other phase, which is called the moving phase, is designed to measure \(BL\) from the voltage \(V\) developed in the coil when it passes with constant velocity \(v\) through the magnetic field. The relevant equation is \(V=BLv\), which is then used to eliminate \(BL\) from eq.(6), under the assumption that \(BL\) remains the same in both phases, to obtain the Kibble equation:
\[mg=\frac{IV}{v}\, \tag{7}\]
After some further theoretical manipulations corresponding to the measurement process (details can be found in ref.[37] and are unnecessary for the present purpose), the expression for \(h\) comes out to be the following:
\[h=\gamma gvm \tag{8}\]
where \(\gamma\) is a quantity with physical dimension \([T]^{2}\) (time squared) and it depends on a collection of measured variables that depend upon the scaling parameters within the measurement technique.
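As a purely numerical illustration of eqs. (6)-(7) (all values below are invented for the sake of the example, and the elaborate corrections of a real Kibble balance are ignored), consider the short Python sketch:

```python
# Hypothetical, illustrative values only.
m = 1.0      # kg, test mass
g = 9.81     # m/s^2, local acceleration due to gravity
v = 2.0e-3   # m/s, coil velocity in the moving phase
V = 1.0      # V, voltage induced in the moving phase

BL = V / v               # T*m, moving phase: V = BL*v
I = m * g / BL           # A, weighing phase: mg = BL*I, eq. (6)
assert abs(m * g - I * V / v) < 1e-9   # Kibble equation (7): mg = IV/v
print(f"balancing current I = {I:.5f} A")
```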
We note that in the theoretical analysis of the Kibble balance experiments, only _one_ concept of "mass", denoted by "\(m\)", has been used and no categorization like "inertial mass" and "gravitational mass" has been made. Furthermore, \(g\) appears explicitly in the expression for \(h\) i.e. in eq.(8). Therefore, the equivalence principle has been assumed and this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in refs.[37; 38]. [_Remark_: Since Joule balance technique is just an alternative of the Kibble balance and follows the same basic principle of balancing under gravity (see e.g. refs.[42; 43; 44]), we do not discuss it separately. Our views regarding the Kibble balance that concerns the equivalence principle as the founding premise of the theoretical analysis, also applies for the Joule balance.]
### Photo-electron emission experiments
In photo-electron emission experiments, as was first done by Millikan [40], the theoretical analysis is founded upon the photoelectric equation rooted in Einstein's work [45]: \(h\nu=\phi+E_{max}\), where \(E_{max}\) is the maximum kinetic energy of a photo-electron, \(\nu\) is the frequency of the incident light and \(\phi\) is the work function of the illuminated material.
It is found in such experiments that \(E_{max}\) is proportional to \(\nu\). \(E_{max}\) is determined by measuring the stopping potential \(V\) from the equation \(E_{max}=eV\), where \(e\) is the charge of an electron. Therefore, plotting \(V\) along \(y\)-axis and \(\nu\) along \(x\)-axis, \(h/e\) is obtained as the slope of the straight line. Hence, _the value of \(h\) is determined from this slope \(h/e\) by taking the value of \(e\) as input._
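As a hedged numerical illustration of this slope method (the data below are synthetic, generated from the photoelectric equation itself with an invented work function):

```python
import numpy as np

h_true = 6.626e-34   # J*s, used only to generate the synthetic data
e = 1.602e-19        # C, charge of an electron, taken as the given input
phi = 3.2e-19        # J, work function of the illuminated material (invented)

nu = np.linspace(8.0e14, 1.2e15, 8)   # Hz, incident light frequencies
V_stop = (h_true * nu - phi) / e      # stopping potentials: eV = h*nu - phi

slope, _ = np.polyfit(nu, V_stop, 1)  # slope of the V-vs-nu line is h/e
print(f"h = slope * e = {slope * e:.4e} J*s")  # recovers h, given e as input
```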
This fact does not change even in modern day experiments which involve photo-emission spectroscopy techniques to increase the precision of such experiments e.g. see ref.[41] and the relevant references there in. So, it is worth understanding how \(e\) is determined (without using \(h\) as input). We discuss the relevant methods in the following section.
[_Remark:_ There are examples of experiments, where experimenters claim to determine \(e\) by using some value of \(h\) as input in the process, without any elaborate discussion regarding how such value of \(h\) is determined in the first place. For example, the experiments like those discussed in ref.[46; 47], which use the Single Electron Tunneling (SET) mechanism[48; 49], use \(h\) as input to determine \(e\). We keep any discussion regarding such experiments out of the present context because our motive is to discuss the methods of determining \(h\) and not to assume its value as given.]
### Principles behind determination of \(e\) (without using \(h\) as input)
There are broadly two methods of determining \(e\) without using \(h\) as input, which have been explored till date, viz. the oil-drop experiment and the x-ray spectroscopy method. While the former is explicitly dependent on the use of \(g\), the latter is implicitly dependent on the use of \(g\) through the determination of the Avogadro constant in the process. Therefore, both procedures are based on the assumption of the equivalence principle, which we shall discuss in what follows.
#### ii.3.1 Oil-drop experiment
The oil-drop experiment is due to Millikan [50; 51] and Fletcher [52]. The theoretical analysis behind the experiment can be traced back to ref.[50], where the first equation has been written in the following way: "_The relations between the apparent mass \(\mathbf{m}\) of a drop, the charge \(e_{n}\), which it carries, its speed, \(v_{1}\) under gravity, and its speed \(v_{2}\) under the influence of an electrical field of strength \(E\), are given by the simple equation_
\[\frac{v_{1}}{v_{2}}=\frac{\mathbf{m}g}{e_{n}E-\mathbf{m}g}\quad\text{or}\quad e_{n}=\frac{\mathbf{m}g}{E}\left(\frac{v_{1}+v_{2}}{v_{1}}\right). \tag{9}\]"
The following clarification has been provided in a footnote: "_The term 'apparent mass' has been used to denote the difference between the actual mass and the buoyancy of the air._". Therefore, only _one_ concept of "mass" (called "actual mass") has been used in such an analysis. No distinction such as "inertial mass" and "gravitational mass" has been made. Furthermore, the involvement of \(g\) in the analysis is explicitly manifest from the expression for \(e_{n}\), where \(e_{n}\) stands for some integral multiple of \(e\). Therefore, the equivalence principle has been assumed and this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in refs.[50; 51].
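A hedged numerical illustration of eq. (9) follows (all values are invented; a real analysis includes, e.g., Stokes-law corrections):

```python
m = 1.6e-15               # kg, apparent mass of the drop (invented)
g = 9.81                  # m/s^2, acceleration due to gravity
E = 3.0e5                 # V/m, applied electric field
v1, v2 = 1.0e-4, 2.0e-4   # m/s, speeds under gravity and under the field

e_n = (m * g / E) * (v1 + v2) / v1   # eq. (9)
print(f"e_n = {e_n:.3e} C (about {e_n / 1.602e-19:.2f} electron charges)")
```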
#### ii.3.2 X-ray spectroscopy method and the principles behind determination of Avogadro constant
The second method of determining \(e\) relies on the use of x-ray spectroscopy to study crystal lattices as gratings, e.g., see refs. [53; 54] and the relevant references therein. This method of determining \(e\) requires the use of the Avogadro constant \(N_{A}\) as input [42]. The theoretical analyses, which underlie the experimental determination of \(N_{A}\) by various methods, are invariably founded upon only _one_ concept of "mass" along with explicit involvement of \(g\)[55; 56; 57], as we explain in what follows. Of course, by "various methods" we mean only those methods of measurement which neither use the value of \(h\) nor use the value of \(e\) as input to determine \(N_{A}\).
In ref.[55], the final expression for \(N_{A}\), that appears in the section called "_Precise Determination of
_Avogadro's Constant_", explicitly depends on \(g\):
\[2.303(RT/N_{A})\log_{10}(n_{0}/n)=(4/3)\pi a^{3}g(\Delta-\delta){\bf h}. \tag{10}\]
The significance of the other symbols in the equation can be found in ref.[55]; the symbol "h" represents some length and _not_ the Planck constant. The above expression is rooted in the buoyancy-related analysis that follows from the earlier sections of ref.[55]. Such analysis is based on only _one_ concept of "mass", which becomes apparent from only _one_ concept of "density". A distinction between "inertial mass" and "gravitational mass" would have led to two different notions of densities, viz. "inertial density" and "gravitational density".
To mention, \(\Delta,\delta\) are the densities of the granular material and the inter-granular liquid, respectively, which have been used to perform the experiment. Further, \(g\) is explicitly manifest in eq.(10). Hence, the equivalence principle has been assumed in the process and this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in the concerned references.
In ref.[56], the authors declare (in the second paragraph), the following: "_We have made, instead, readily available highly spherical steel artifacts as local and temporary "standards" of density. Their masses were determined in terms of the U.S. National Standard (kilogram replica number 20) by well understood procedures_." This "well understood procedure", described in ref.[58], is based on only _one_ concept of "mass" and the use of \(g\). Therefore, the equivalence principle has been assumed in the process and this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in ref.[58].
In a nutshell, the determination of \(N_{A}\) depends on the measured value of the density (\(\rho\)) of the associated crystal, and \(\rho\) is measured through buoyancy-related experiments[59, 60, 61, 62, 63, 64]. Any such experiment is based on Archimedes' principle [65], where \(g\) plays the role of defining "the weight of an object". Also, due to the inputs of the mass measurements, \(g\) gets involved in the process as well[58]. This is because mass measurements are done through mass comparators, which are just balancing instruments with a founding principle based on the use of \(g\) and only _one_ concept of "mass" [66, 67, 68]. Importantly, such mass measurements using mass comparators are involved in any modern experiment1 that intends to determine \(N_{A}\), e.g., the modern x-ray crystal density method discussed in refs.[69, 70] indeed relies on mass comparators for the mass measurement of the silicon crystal, which becomes clear from the relevant references therein and especially from refs.[71, 72].
Footnote 1: We consider only those experiments which do not use \(h\) as input in the process of determining Avogadro constant. This is because we are investigating the methods by which \(h\) is determined.
Therefore, we can conclude from the above discussion that the determination of Avogadro constant (\(N_{A}\)) is based on the assumption of the equivalence principle and this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in the concerned references.
## IV Concluding remarks
Let us conclude by providing a concise account of what we have discussed, followed by some crucial remarks regarding the status of this work in light of the new convention adopted in 2019[24]. While we define the symbol "\(g\)" to bear the meaning "acceleration due to gravity of the earth", we assume in the course of such a definition that the equivalence principle (equivalence between inertial and gravitational mass) holds. Any measurement procedure for the Planck constant (\(h\)) that involves the use of \(g\), either explicitly or implicitly, is therefore dependent on the assumption of the equivalence principle. While the Kibble balance method explicitly involves the use of \(g\), the photo-electron emission experiments implicitly involve the use of \(g\) through the measurement of the charge of an electron \(e\). This is because the determination of \(e\) through the oil-drop experiment explicitly depends on the use of \(g\), and the method of x-ray spectroscopy, to determine \(e\), implicitly depends on \(g\) due to the determination of the Avogadro constant (\(N_{A}\)) by measuring the crystal density through buoyancy-related experiments (which require "weight of an object" to be defined) and mass measurements through mass comparators (which are based on a balancing mechanism involving \(g\) in principle). In view of these, we conclude that the modern day experiments which claim to make quantum tests of the equivalence principle are just attempts to physically demonstrate a logical tautology, i.e., in such experiments one
assumes the equivalence principle to test the equivalence principle. Importantly, this assumption is exact because there is no mention of uncertainty regarding the equivalence of "inertial mass" and "gravitational mass" in the references that are concerned with the determination of \(h\) directly or indirectly. Therefore, use of \(h\), that has been determined with the assumption of the equivalence principle with no uncertainty, to test the equivalence principle with non-zero uncertainty, actually presents a case of adulteration of logic (worse than tautology).
Now, an apparently legitimate objection may be raised against this work, if one considers the new convention adopted by the scientific community in the year 2019 [24], in the following way:
**Objection:**_Since 2019 the Planck constant (\(h\)) has been considered as a defining constant, in terms of which (and other defining constants), the kilogram - unit of mass - is defined. The distinction between "inertial" mass and "gravitational" mass does not enter such a definition._
While such an objection may appear legitimate at first, it is vulnerable to simple counter-reasoning, as we demonstrate in what follows. It is true that \(h\) has been considered a defining constant since 2019, as clearly stated in the Preface of ref.[24]. So, the respective value (in some system of units like the SI system) is chosen to be exact and the base units like the kilogram are defined in terms of \(h\) and other defining constants. However, such a definition of the kilogram, as given in ref.[24], is _theoretical_ and yet to be realized through experiment. In fact, it is clearly mentioned, on page no. 131 of ref.[24], that:
"_The present definition fixes the numerical value of \(h\) exactly and the mass of the prototype has now to be determined by experiment._"
In fact, in the Appendix 2 of ref.[24], the expected uncertainty in such experimental realization of "kilogram", with reference to the mass of the international prototype of the kilogram, has also been mentioned.
That this objection, concerning the alleged redundancy of the equivalence between inertial mass and gravitational mass, is empty of any essence can be understood if we pose the following question regarding the above promise that "the mass of the prototype has now to be determined by experiment":
_What type of mass of the prototype is going to be determined by experiment - inertial or gravitational?_
In this work we have discussed precisely this issue regarding mass determinations in general and we have pointed out that one always assumes the equivalence between inertial mass and gravitational mass while performing such mass measurements (or even discussing the principles) because only one concept of "mass" is generally stated without any hint of such distinction. Importantly, this assumption of equivalence is exact and devoid of any uncertainty, which is distinct from the uncertainty expected to be associated with the experimental realization of kilogram reported in the Appendix 2 of ref.[24]. However, such an assumption is not stated formally and rather kept unmentioned or remains hidden.
Further, we may point out that the chosen value of \(h\) has been determined through years of experimental research, and only then has it been possible to reach a general consensus to consider an exact value, based on which the base units are defined in ref.[24]. What we have discussed in this article is that all such procedures to determine the value of \(h\), which have been performed over the years, are based on the assumption of the equivalence between inertial mass and gravitational mass, explicitly or implicitly. Now, if we consider the current convention of choosing \(h\) to be _exact_, then it is implied by such convention that inertial mass and gravitational mass are _exactly_ equivalent (irrespective of the expected uncertainty in the experimental realization of "kilogram" mentioned in the Appendix 2 of ref.[24]). Therefore, in light of the 2019 convention, the quantum tests of equivalence between inertial mass and gravitational mass [17, 18, 19, 20, 21, 22] are physical demonstrations of adulteration of logic (worse than tautology), because one starts with an exact \(h\) (possible if and only if an exact equivalence between inertial mass and gravitational mass is assumed) and uses it for quantum tests of the equivalence between inertial mass and gravitational mass where the equivalence is associated with some non-zero uncertainty.
In light of Hilbert's sixth problem, and therefore taking into account the importance of logic and language in physics, it has recently been reported by one of us (A.M.[10]) how one should face a semantic or logic-linguistic dilemma in the course of axiomatizing any theory that aims to take into account "quantum" and "gravity" in the same framework. The present work brings to light similar scientific queries, but now in association with the experimental aspect. Noting the recent upsurge of interest in the logic and language (semantics) of physics [3, 4, 5, 6, 7, 10], the present work adds to such research endeavours, with a direct bearing on questions concerning the experimental aspect of treating "quantum" and "gravity" in the same framework, and even includes the context of the most recent 2019 convention of redefining units[24]. In view of this we hope that this present work
can provide a different and fresh insight, for the concerned scientific community, as far as the experimental and the axiomatic aspect of "quantum gravity" is concerned and, in particular, may germinate the seeds of certain reasonable doubts concerning the scientific reasoning that underlies the modern quantum tests of equivalence between inertial mass and gravitational mass[17; 18; 19; 20; 21; 22].
_Acknowledgment:_ The authors thank R. Radhakrishnan for pointing out ref.[31]. The work was accomplished while G. S. was visiting The Indian Statistical Institute, Kolkata. A. M. is supported by the Department of Science and Technology of India through the INSPIRE Faculty Fellowship, Grant no.- IFA18-PH208.
_Conflict of interest statement:_ On behalf of all authors, the corresponding author (A. M.) states that there is no conflict of interest.
_Declaration on competing interest:_ The authors declare that there is no competing interest.
|
2302.07569 | Beam pattern evolution of accreting X-ray pulsar 1A 0535+262 during its
2020 giant outburst | We report on pulse profile decomposition analysis of a bright transient X-ray
pulsar 1A 0535+262 using the broadband Insight-HXMT observations during a giant
outburst of the source in 2020. We show that the observed pulse profile shape
can be described in terms of a combination of two symmetric single-pole
contributions for wide range of energies and luminosities for a fixed geometry
defining basic geometry of the pulsar. This corresponds to a slightly distorted
dipole magnetic field, i.e., one pole has to be offset by $\sim 12^{\circ}$
from the antipodal position of the other pole. We reconstruct the intrinsic
beam patterns of the pulsar assuming the geometry recovered from the
decomposition analysis, and find evidence for a transition between "pencil" and
"fan" beams in energy ranges above the cyclotron line energy which can be
interpreted as transition from sub- to super-critical accretion regimes
associated with onset of an accretion column. At lower energies the beam
pattern appears, however, to be more complex, and contains substantial "fan"
beam and an additional "pencil" beam component at all luminosities. The latter
is not related to the accretion rate and is stronger in the fading phase of the
outburst. We finally discuss results in context of other observational and
theoretical findings earlier reported for the source in the literature. | Y. F. Hu, L. Ji, C. Yu, P. J. Wang, V. Doroshenko, A. Santangelo, I. Saathoff, S. N. Zhang, S. Zhang, L. D. Kong | 2023-02-15T10:18:17Z | http://arxiv.org/abs/2302.07569v2 | # Beam pattern evolution of accreting X-ray pulsar 1A 0535+262 during its 2020 giant outburst
###### Abstract
We report on pulse profile decomposition analysis of the bright transient X-ray pulsar 1A 0535+262 using broadband _Insight_-HXMT observations during a giant outburst of the source in 2020. We show that the observed pulse profile shape can be described in terms of a combination of two symmetric single-pole contributions for a wide range of energies and luminosities, for a fixed geometry of the pulsar. This corresponds to a slightly distorted dipole magnetic field, i.e., one pole has to be offset by \(\sim 12^{\circ}\) from the antipodal position of the other pole. We reconstruct the intrinsic beam patterns of the pulsar assuming the geometry recovered from the decomposition analysis, and find evidence for a transition between "pencil" and "fan" beams in energy ranges above the cyclotron line energy, which can be interpreted as a transition from sub- to super-critical accretion regimes associated with the onset of an accretion column. At lower energies the beam pattern appears, however, to be more complex, and contains a substantial "fan" beam and an additional "pencil" beam component at all luminosities. The latter is not related to the accretion rate and is stronger in the fading phase of the outburst. We finally discuss the results in the context of other observational and theoretical findings earlier reported for the source in the literature.
X-rays: binaries -- pulsars -- individual: 1A 0535+262
## 1 Introduction
In high-mass X-ray binaries (HMXBs), compact objects accrete material from a companion star with a mass greater than 10 solar masses via the Roche lobe or winds (Iben, 1991; Davidson & Ostriker, 1973; Frank et al., 1992). If the compact star is a highly magnetized neutron star, the accreted matter will be channelled by its magnetic field onto the magnetic poles on the surface of the compact object, ultimately converting gravitational potential energy into X-rays, which can be pulsed if the spin axis and the magnetic axis of the neutron star are misaligned.
Pulse profiles of accreting X-ray pulsars are known to have complex morphology, which is in general highly variable with energy and luminosity (e.g., Alonso-Hernandez et al., 2022). Observed changes of pulse profile shapes with energy are believed to reflect details and angular dependence of radiation transfer in the emission region, often referred to as the intrinsic beam pattern of a pulsar1, while variations with luminosity indicate changes of the emission region geometry with the accretion rate (see, e.g., Mushtukov & Tsygankov, 2022).
Footnote 1: In this paper, the ”beam pattern” refers to the flux distribution as a function of the angle between the dipole magnetic axis and the line of sight to a distant observer. The ”intrinsic beam pattern” is the beam pattern without considering the relativistic light deflection.
In order to interpret the observed pulse profiles, several theoretical models were proposed (Wang & Welter, 1981; Meszaros & Nagel, 1985; Ferrigno et al., 2011; Cappallo et al., 2017). However, details of interaction
of magnetic field, light and matter near the polar caps are still poorly understood, which hampers detailed and physically motivated modeling of observed pulse profiles. An alternative approach to the direct modeling of pulse profiles has been proposed by Kraus et al. (1995) where the intrinsic beam pattern associated with each individual magnetic pole is assumed to be intrinsically symmetric. The asymmetric shape of observed pulse profiles is then attributed to offset of the magnetic dipole from the center of the pulsar. Kraus et al. (1995) exploit energy and luminosity variations of the observed pulse profiles to find a unique solution defining geometry of the pulsar, which needs to remain constant regardless on details of radiative transfer or changes of emission region geometry with accretion rate. This decomposition method has been successfully applied to several accretion pulsars, such as Cen X-3, Her X-1, EXO 2030+375, 1A 0535+262, 4U 0115+63 and V 0332+53 (Kraus et al., 1996; Blum and Kraus, 2000; Sasaki et al., 2010; Caballero et al., 2011; Sasaki et al., 2012). The advantage of the method is that interpretation of the reconstructed intrinsic beam patterns associated with individual poles is arguably easier than full modeling of the pulse profiles from theoretical perspective. It is also worth noting that Her X-1's geometry estimated by the pulse profile decomposition (Blum and Kraus, 2000) appears to be consistent with the recent polarization study performed by the Imaging X-ray Polarimetry Explorer (IXPE) (Doroshenko et al., 2022). This indicates that the decomposition method might indeed be a good approximation for reconstructing the beam pattern of pulsars.
The transient X-ray pulsar 1A 0535+262 was discovered by Ariel V during a giant outburst in 1975 (Rosenberg et al., 1975). It consists of a pulsating neutron star with a spin period of 103 s and a Be companion HD 245770 (Hudec, 1975). The distance to the source is estimated at about 2 kpc measured by Gaia (Bailer-Jones et al., 2018). 1A 0535+262 is an active Be X-ray binary that has shown frequent outbursts in its history (see Camero-Arranz et al., 2012, and references therein). Caballero et al. (2011) applied the decomposition method to the pulse profiles observed during the outburst in 2005 with _RXTE_ mission at luminosity level of about \(0.8\times 10^{37}\) erg s\({}^{-1}\). They estimated the polar angles defining geometry of the pulsar \(\Theta_{1}\approx 50^{\circ}\) and \(\Theta_{2}\approx 130^{\circ}\) (see below), and found a possible solution of the beam pattern interpreted as a hollow column plus a halo of radiation scattered off the neutron star surface.
In November 2020, the brightest outburst ever recorded from the source was observed, with a luminosity reaching \(1.2\times 10^{38}\) erg s\({}^{-1}\). Kong et al. (2021) found that the observed energy of the cyclotron absorption line was anti-correlated with the luminosity around the burst peak, which is an important change compared to historical observations at lower fluxes where no such correlation was observed, and which can be interpreted as evidence for the transition of accretion regimes. Kong et al. (2021) also reported significant differences in the observed broadband X-ray spectrum between the rising and fading phases of the outburst, suggesting a somewhat different emission region geometry even at the same accretion rate in two phases of the outburst. On the other hand, using _Insight_-HXMT observations, Wang et al. (2022) reported the complex pulse profile evolution throughout the outburst, which exhibits a strong energy and luminosity dependence. Here we investigate observed variations of the pulse profile more quantitatively and apply the decomposition method by Kraus et al. (1995) to _Insight_-HXMT data. This allows us to recover the evolution of intrinsic beam patterns as a function of luminosity (including both rising and fading phases) in 1A 0535+262, including the previously unexplored range of luminosities close to the peak of the outburst. The paper is organized as follows: Sections 2 and 3 briefly describe the data and the decomposition method used in this paper. Results are presented in Section 4. Finally, we discuss results and present conclusions in Section 5.
## 2 Available Data and Data Selection
_Insight_-HXMT (Zhang et al., 2014, 2020) performed a high-cadence observational campaign and obtained unprecedented high quality data of 1A 0535+262 during its giant outburst in 2020. In Figure 1 we show the observed evolution of the pulse profiles (adapted from Wang et al. (2022)), as a function of bolometric luminosity of the source estimated using the broadband spectroscopy in the energy range of 2-150 keV and assuming a distance of 2 kpc. During the outburst, as reported by Wang et al. (2022), the pulse profile shows a complex variation with energy and luminosity. This is an important prerequisite for our application of the decomposition method (Kraus et al., 1995).
The energy ranges we considered in this study are 15-30 keV, 30-40 keV, 40-50 keV and 50-70 keV. This is due to the fact that a significant fraction of low-energy photons comes from thermal components (Kong et al., 2021) and they may not be emitted directly from the polar caps (Poutanen et al., 2013; Tao et al., 2019). On the other hand, at higher energies the radiation is dominated by the instrumental background. In addition, the centroid energy (\(E_{\rm cyc}\)) of cyclotron resonant scattering features (CRSFs) in 1A 0535+262 is approximately
45 keV and the energy-dependent cross section results in dramatic changes of pulse profiles around \(E_{\rm cyc}\) (see Figure 4 in Wang et al. (2022)). The energy ranges we used reflect a trade-off between statistics and the variation of pulse profiles with energy. The details of data reduction and analysis are presented in Wang et al. (2022), and here we focus on the decomposition analysis of the obtained pulse profiles in several energy ranges.
To investigate possible changes of the emission region geometry, we decomposed pulse profiles observed at several characteristic luminosities marked in Figure 1 (A-F). Point C is at the outburst peak with a luminosity \(L\approx 11.5\times 10^{37}\) ergs s\({}^{-1}\). Point E corresponds to the critical luminosity (\(L\approx 6.7\times 10^{37}\) ergs s\({}^{-1}\)) proposed by Kong et al. (2021), above and below which the accretion regimes are expected to be different (see, e.g., Basko & Sunyaev, 1976; Becker et al., 2012; Mushtukov et al., 2015). In addition, as shown in Figure 2 of Wang et al. (2022), a transition of pulse profiles in 10-30 keV appears at \(L\approx 9.5\times 10^{37}\) ergs s\({}^{-1}\), and therefore point D is included in this study. Point F is also taken into account to represent a relatively low luminosity state (\(L\approx 2.6\times 10^{37}\) ergs s\({}^{-1}\)) 2. Finally, points A and B, which have almost the same luminosities as points F and E, respectively, are also included to assess possible differences in intrinsic beam patterns between the rising and fading phases of the outburst.
Footnote 2: Another transition of pulse profiles that occurs at \(L\approx 1.1\times 10^{37}\) ergs s\({}^{-1}\)(Wang et al., 2022) is not included in the paper. This is because 1) the analysis at a similar luminosity state has been done by Caballero et al. (2011); 2) the pulse phase at low luminosities can not be well aligned with those at high luminosities due to the sudden change of pulse profiles; 3) at low luminosities uncertainties of the background estimation significantly influence the ”non-negative” criterion (see text below).
Following Wang et al. (2022), after barycentric and binary corrections with the ephemeris provided by Camero-Arranz et al. (2012), we estimated the pulse period of each _Insight_-HXMT observation using the epoch-folding method. The pulse profiles were obtained by folding the background-subtracted light curves in the given energy ranges with 32 phase bins. The maximum of each of the 24 pulse profiles was normalized to unity. The pulse profiles obtained for different observations were aligned according to an averaged template in 30-120 keV (which is relatively simple and stable) with the FFTFIT routine (Taylor, 1992)3. The evolution of pulse profiles is shown in Figure 2 of Wang et al. (2022) and the detailed spin history is given in Table 2 of Hou et al. (2023). We note that the alignment may not be perfect due to variations in the pulse profile, and some phase offsets are expected in practice. We show all pulse profiles used in the following analysis in Figure 2.
Footnote 3: We tried other alignment methods, such as using a sharp feature as the reference, and eventually obtained comparable results.
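As a rough illustration of this alignment step, the sketch below shifts a profile onto a template by maximizing their circular cross-correlation computed in the Fourier domain. This is a simplified, whole-bin stand-in for FFTFIT (which fits the relative phase with sub-bin precision); the function is our own illustration and not part of the actual analysis pipeline.

```python
import numpy as np

def align_to_template(profile, template):
    """Circularly shift `profile` so that it best matches `template`.

    A coarse, whole-bin stand-in for the FFTFIT routine, which instead
    fits the relative phase from the Fourier-domain phases with sub-bin
    precision.
    """
    P = np.fft.rfft(profile)
    T = np.fft.rfft(template)
    # Circular cross-correlation evaluated via the Fourier domain
    xcorr = np.fft.irfft(P * np.conj(T), n=len(profile))
    shift = int(np.argmax(xcorr))
    return np.roll(profile, -shift)
```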
## 3 Decomposition Analysis
Here we briefly summarize the main assumptions and steps of the pulse profile decomposition analysis method proposed and comprehensively described by Kraus et al. (1995). The basic assumption of the method is that observed asymmetric pulse profiles of X-ray pulsars can be represented as a combination of two symmetric (in phase) components associated with emission from regions around two magnetic poles. Each single-pole pulse profile is then a function of \(\theta\), i.e., the angle between the magnetic axis and line of sight. It is symmetric with respect to two points \(\Phi\) and \(\Phi+\pi\), corresponding to the instant at which the pole is either closest to or furthest from the direction of observation. If the magnetic field is an ideal dipole field, both poles will have the same symmetry points, and thus the total pulse profile can only be symmetric as well. In general this is not, however, consistent with observations. So Kraus et al. (1995) proposed a distorted magnetic dipole field, i.e., the two magnetic poles are not located opposite to each other but are offset by some angle.
The corresponding basic geometry of the pulsar is shown in Figure 3. \(\Theta_{0}\) is the polar angle of the direction of observation. The magnetic poles are located at polar angles \(\Theta_{1}\), \(\Theta_{2}\). \(\theta\) is the angle between a magnetic pole and the direction of observation, which is a function of the rotation phase \(\Phi\). For each pole, the relation between \(\theta\), \(\Theta_{i}\), and \(\Phi_{i}\) can be determined using spherical triangles:
\[\cos\theta=\cos\Theta_{0}\cos\Theta_{i}+\sin\Theta_{0}\sin\Theta_{i}\cos(\Phi- \Phi_{i}) \tag{1}\]
The angular distance \(\delta\) between one magnetic pole and the point that is antipodal to the other magnetic pole represents the deviation from an ideal dipole magnetic field. The corresponding difference of the azimuthal angle is \(\Delta:=\pi-(\Phi_{1}-\Phi_{2})\). The angular distance \(\delta\) can be written as:
\[\cos\delta=-\cos\Theta_{2}\cos\Theta_{1}+\sin\Theta_{2}\sin\Theta_{1}\cos\Delta. \tag{2}\]
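Equations (1) and (2) translate directly into code. The minimal Python sketch below (our illustration) evaluates them, using the angles recovered later in Sect. 4 as a consistency check.

```python
import numpy as np

def theta(phi, Theta0, Theta_i, Phi_i):
    """Angle between magnetic pole i and the line of sight, Eq. (1)."""
    c = (np.cos(Theta0) * np.cos(Theta_i)
         + np.sin(Theta0) * np.sin(Theta_i) * np.cos(phi - Phi_i))
    return np.arccos(np.clip(c, -1.0, 1.0))

def offset_delta(Theta1, Theta2, Delta):
    """Offset of one pole from the point antipodal to the other, Eq. (2)."""
    c = (-np.cos(Theta2) * np.cos(Theta1)
         + np.sin(Theta2) * np.sin(Theta1) * np.cos(Delta))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Angles recovered later in Sect. 4: Theta1 ~ 50 deg, Theta2 ~ 130 deg,
# Delta ~ 15 deg; Eq. (2) then gives delta ~ 11.5 deg, i.e. the quoted ~12 deg.
print(np.degrees(offset_delta(np.radians(50), np.radians(130), np.radians(15))))
```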
### Decompositions
As discussed by Kraus et al. (1995), it is convenient to search for possible decompositions in Fourier rather than real space. The observed pulse profile \(F(\Phi)\) can be
expressed as a Fourier series (Kraus et al., 1995):
\[F(\Phi)=\frac{1}{2}u_{0}+\sum_{k=1}^{n/2-1}[u_{k}\cos(k\Phi)+v_{k}\sin(k\Phi)]+u_{n /2}\cos(\frac{n}{2}\Phi), \tag{3}\]
where \(n\) is the number of bins of pulse profiles, \(\Phi\) is the phase, and \(u_{k}\), \(v_{k}\) are coefficients that can be calculated by
\[u_{k}=\frac{1}{\pi}\int_{-\pi}^{+\pi}F(\Phi)\cos(k\Phi)d\Phi, \tag{4}\]
\[v_{k}=\frac{1}{\pi}\int_{-\pi}^{+\pi}F(\Phi)\sin(k\Phi)d\Phi. \tag{5}\]
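For a profile sampled on \(n\) equally spaced phase bins, Eqs. (4)-(5) reduce to simple discrete sums. A minimal sketch (ours) follows, with the truncation order left as a parameter (set to 10 in the analysis, as noted next):

```python
import numpy as np

def fourier_coeffs(F, kmax=10):
    """Discrete version of Eqs. (4)-(5) for a pulse profile F sampled on
    n equally spaced phase bins over one rotation (requires kmax < n/2)."""
    n = len(F)
    phase = np.linspace(-np.pi, np.pi, n, endpoint=False)
    u = np.array([2.0 / n * np.sum(F * np.cos(k * phase)) for k in range(kmax + 1)])
    v = np.array([2.0 / n * np.sum(F * np.sin(k * phase)) for k in range(kmax + 1)])
    return u, v
```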
As suggested by Kraus et al. (1995), we only considered the first 10 terms (\(k\leq 10\)) and omitted higher terms which are not really constrained by observations. \(F(\Phi)\) can be written as a sum of two single-pole pulse profiles \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\), which are assumed to be symmetric with respect to points \(\Phi_{1}\) and \(\Phi_{2}\), respectively. Therefore, their Fourier expansions can be written as
\[f_{1}(\Phi)=\frac{1}{2}c_{0}+\sum_{k=1}^{n/2}c_{k}\cos[k(\Phi-\Phi_{1})], \tag{6}\]
and
\[f_{2}(\Phi)=\frac{1}{2}d_{0}+\sum_{k=1}^{n/2}d_{k}\cos\{k[\Phi-(\Phi_{2}+\pi)]\}, \tag{7}\]
It can be shown that for an arbitrary choice of symmetry points \(\Phi_{1}\) and \(\Phi_{2}\), the coefficients in \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\), i.e., \(c_{k}\) and \(d_{k}\) (\(k\neq 0\)), can be uniquely determined by solving \(F(\Phi)=f_{1}(\Phi)+f_{2}(\Phi)\). On the other hand, the unmodulated flux (\(u_{0}=c_{0}+d_{0}\)), which represents the zeroth-order Fourier coefficient, cannot be determined in this way. We first defined, therefore, the minimum values \(c_{0,\rm min}\) and \(d_{0,\rm min}\) of each single-pole pulse profile by shifting the minima of \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\) to zero. The distribution of the residual flux \(u_{0}-c_{0,\rm min}-d_{0,\rm min}\) was estimated when combining the two beam patterns (see below). We note that although a decomposition exists for every choice of \(\Phi_{1}\) and \(\Phi_{2}\), not all decompositions make sense (Kraus et al., 1995). We selected, therefore, only the solutions that satisfy the following criteria (a numerical sketch of the coefficient solution follows the criteria list):
1. Non-negative: all values in \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\) are non-negative because they represent the flux.
Figure 1: The evolution of the pulse profiles in the energy ranges of 15-30 keV, 30-40 keV, 40-50 keV and 50-70 keV, respectively. The colors present the intensity of pulse profiles normalized in the [0,1] range. The long-term lightcurve is shown as the red line in each panel. Six representative observations (A-F) are selected for the following decomposition. The black dashed vertical line indicates the outburst peak and the blue line corresponds to the time when the source has the critical luminosity in the decay phase of the outburst.
2. No-ripples: \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\) are not expected to have small-scale features that cancel out in the sum. Also, the single-pole pulse profiles \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\) should not be much more complicated than the observed total pulse profile.
3. Same geometry: the symmetry points \(\Phi_{1}\) and \(\Phi_{2}\) should be acceptable for different energy bands of all observations.
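For a given pair of symmetry points, matching the \(\cos(k\Phi)\) and \(\sin(k\Phi)\) terms of \(F=f_{1}+f_{2}\) reduces to a \(2\times 2\) linear system per harmonic. The sketch below (our illustration, not the authors' code) solves these systems and also evaluates the residual unmodulated flux used in the non-negative criterion:

```python
import numpy as np

def decompose(u, v, Phi1, Phi2, nbins=512):
    """Split total-profile coefficients (u_k, v_k) into two single-pole
    profiles symmetric about Phi1 and Phi2 + pi, cf. Eqs. (6)-(7)."""
    kmax = len(u) - 1
    phase = np.linspace(-np.pi, np.pi, nbins, endpoint=False)
    f1 = np.zeros(nbins)
    f2 = np.zeros(nbins)
    for k in range(1, kmax + 1):
        # Matching cos(k*Phi) and sin(k*Phi) terms of F = f1 + f2 gives
        #   u_k = c_k cos(k*Phi1) + d_k cos(k*(Phi2 + pi))
        #   v_k = c_k sin(k*Phi1) + d_k sin(k*(Phi2 + pi))
        A = np.array([[np.cos(k * Phi1), np.cos(k * (Phi2 + np.pi))],
                      [np.sin(k * Phi1), np.sin(k * (Phi2 + np.pi))]])
        ck, dk = np.linalg.solve(A, np.array([u[k], v[k]]))
        f1 += ck * np.cos(k * (phase - Phi1))
        f2 += dk * np.cos(k * (phase - Phi2 - np.pi))
    # The split of the unmodulated flux u_0 = c_0 + d_0 is not unique:
    # shift each single-pole profile so that its minimum is zero and keep
    # the remaining flux u_0 - c_0,min - d_0,min as a free budget, which
    # must be non-negative (the first criterion above).
    c0_min, d0_min = -2.0 * f1.min(), -2.0 * f2.min()
    residual = u[0] - c0_min - d0_min
    return f1 - f1.min(), f2 - f2.min(), residual
```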
### Pulsar geometry
As the pulsar rotates, the angle \(\theta\) between one magnetic pole and the line of sight varies with the phase \(\Phi\) in the range \(\theta\in[\theta_{\rm min},\theta_{\rm max}]\), and the angle's range for another magnetic pole is \(\theta\in[\theta^{\prime}_{\rm min},\theta^{\prime}_{\rm max}]\), which may have an overlapping range with \([\theta_{\rm min},\theta_{\rm max}]\). \(\theta_{\rm min}\), \(\theta_{\rm max}\), \(\theta^{\prime}_{\rm min}\) and \(\theta^{\prime}_{\rm max}\) are related to the geometry of the pulsar, i.e., the polar angle (\(\Theta_{1}\) and \(\Theta_{2}\)) of the magnetic pole and the viewing angle \(\Theta_{0}\). Following Kraus et al. (1995), we assume that intrinsic beam patterns from both poles should be the same even if different parts of each may be observed. In this case, considering that the beam pattern from the two poles is only a function of \(\theta\), it is possible to identify an overlapping region of the beam pattern from both magnetic poles and thus recover a larger fraction of the full beam pattern. Although this assumption is probably oversimplified4, it was partially verified by overlapping regions found in Cen X-3 and Her X-1 (Kraus et al., 1996; Blum and Kraus, 2000), which are consistent with recent polarization observations (Doroshenko et al., 2022; Tsygankov et al., 2022).
Figure 2: Pulse profiles of 1A 0535+262 during its 2020 giant outburst observed by Insight-_HXMT_. All pulse profiles were normalized to their maximum values.
In practice, this means that at an instant \(\Phi\) one pole is observed at an angle \(\theta\), and the same angle is observed for the second pole at another instant \(\tilde{\Phi}\)(Kraus et al., 1995). In this case, the relation between \(\Phi\) and \(\tilde{\Phi}\) is
\[\cos(\Phi-\Phi_{1})=\frac{\cot\Theta_{0}(\cos\Theta_{2}-\cos\Theta_{1})}{\sin \Theta_{1}}+\frac{\sin\Theta_{2}}{\sin\Theta_{1}}\cos(\tilde{\Phi}-\Phi_{2}). \tag{8}\]
For convenience, we write this as
\[\cos(\Phi-\Phi_{1})=a+b\cos(\tilde{\Phi}-\Phi_{2}),\ b>0, \tag{9}\]
where \(a\) and \(b\) can be estimated by minimizing the deviation of single-pole pulse profiles in the overlapping region of the beam pattern (for details, see Eq. 35 in Kraus et al. (1995)). In this step, the distribution of the remaining flux \(u_{0}-c_{0,\rm min}-d_{0,\rm min}\) is also calculated. If
Figure 4: Acceptable decompositions of \(\Phi_{1}\) and \(\Delta\) after applying the non-negative criteria (blue points) and non-negative plus no-ripples criteria (red points).
Figure 3: Intrinsic geometry of the neutron star. Figures are adopted from Kraus et al. (1995).
the viewing angle \(\Theta_{0}\) is known independently, the geometry of the pulsar can finally be determined by
\[\tan\Theta_{1}=\frac{-2a\,\tan\Theta_{0}}{(a\,\tan\Theta_{0})^{2}+b^{2}-1}, \tag{10}\]
and
\[\tan\Theta_{2}=\frac{b\,\tan\Theta_{1}}{a\,\tan\Theta_{0}\,\tan\Theta_{1}+1}. \tag{11}\]
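Once \(a\), \(b\) and \(\Theta_{0}\) are fixed, Eqs. (10)-(11) give the polar angles in closed form. The sketch below (ours) reproduces the values quoted in Sect. 4 for \(a\approx-2.2\), \(b\approx 1\) and \(\Theta_{0}=37^{\circ}\):

```python
import numpy as np

def polar_angles(a, b, Theta0):
    """Polar angles of the two magnetic poles from Eqs. (10)-(11)."""
    t0 = np.tan(Theta0)
    Theta1 = np.arctan(-2.0 * a * t0 / ((a * t0) ** 2 + b ** 2 - 1.0)) % np.pi
    Theta2 = np.arctan(b * np.tan(Theta1)
                       / (a * t0 * np.tan(Theta1) + 1.0)) % np.pi
    return Theta1, Theta2

T1, T2 = polar_angles(a=-2.2, b=1.0, Theta0=np.radians(37))
print(np.degrees(T1), np.degrees(T2))   # ~50 deg and ~130 deg, as in Sect. 4
```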
## 4 Results
To search for acceptable decompositions, we need to consider all possibilities in the \((\Phi_{1},\Phi_{2})\) parameter space where \(0\leq\Phi_{1},\Phi_{2}\leq\pi\). For convenience, we replaced \(\Phi_{2}\) with an auxiliary variable \(\Delta:=\pi-(\Phi_{1}-\Phi_{2})\), which represents the azimuthal displacement between the two magnetic poles (see Figure 3). In this case, the searched parameter space becomes \(0\leq\Phi_{1}\leq\pi\) and \(0\leq\Delta\leq\pi/2\). We defined, therefore, grids with steps \(1^{\circ}\times 1^{\circ}\) covering this range, and tested the above criteria one by one for each \(\Phi_{1}\)-\(\Delta\) selection.
We first applied the non-negative criterion (blue points shown in Figure 4), which means that the remaining flux \(u_{0}-c_{0,\rm min}-d_{0,\rm min}\) should be positive when shifting the minimum points of both single-pole profiles (\(f_{1}\) and \(f_{2}\)) to zero (i.e., to ensure that the observed flux from each of the poles is positive at all phases for pulse profiles in all energy bands and in all observations). Figure 4 shows the decompositions that are acceptable for all energy bands of each observation (A-F). For the no-ripples criterion, we estimated the complexity of pulse profiles as follows.
Figure 5: Decompositions of original pulse profiles \(F(\Phi)\) (purple lines) into single-pole contributions \(f_{1}(\Phi)\) and \(f_{2}(\Phi)\) (red and blue lines) for all observations and energy bands. The unpulsed flux \(u_{0}-c_{0,min}-d_{0,min}\) is shown by the cyan dotted horizontal lines. Symmetry points of each single-pole pulse profile (i.e., \(\Phi_{1}\), \(\Phi_{1}+\pi\), \(\Phi_{2}\) and \(\Phi_{2}+\pi\)) are indicated by dotted vertical lines.
We counted the number of peaks in each single-pole profile using the Python module SciPy5. Following Kraus et al. (1995), we excluded decompositions if their single-pole pulse profiles have many more peaks than the total pulse profile. Figure 4 demonstrates acceptable solutions after applying both the non-negative and no-ripples criteria (red points). We then searched for a common solution of observations A-F in order to satisfy the "same geometry" criterion. However, no such solution could be found. We speculate that this might be due to imperfect alignment between different observations, because of the variable shape of pulse profiles. To account for this possibility, we assumed that there might be an additional systematic error of 5 degrees related to imperfect alignment of pulse profiles from different observations, as suggested by Caballero et al. (2011), and searched for possible decompositions again. Finally, some solutions were found, clustered around \(\Phi_{1}=11^{\circ}/191^{\circ}\) and \(\Phi_{2}=26^{\circ}/206^{\circ}\). The corresponding \(\Delta\) is \(\sim 15^{\circ}\) if the dipole magnetic field is not dramatically distorted. We note that the two solutions correspond to the two symmetry points (i.e., \(\Phi_{\rm i}\) and \(\Phi_{\rm i}+\pi\)) of each single-pole pulse profile. We cannot decide which point corresponds to the instant at which the pole is closest to (or farthest from) the line of sight, so both possibilities were considered in the following, and we call them the "_plus_ (+)" and "_minus_ (-)" solutions, respectively. Selecting the more realistic solution must then be based on extra arguments such as comparison with theoretical pulse profile models (see below). The finally obtained single-pole pulse profiles are presented in Figure 5 together with the un-pulsed flux.
Footnote 5: [https://docs.scipy.org/doc//scipy/reference/generated/scipy](https://docs.scipy.org/doc//scipy/reference/generated/scipy). signal.find_peaks.html
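As an illustration of this peak-counting step, a short sketch based on scipy.signal.find_peaks is given below; the prominence threshold and the tolerance of one extra peak are our own assumptions, not values quoted in the analysis.

```python
import numpy as np
from scipy.signal import find_peaks

def count_peaks(profile, prominence=0.05):
    """Count local maxima of a periodic pulse profile."""
    n = len(profile)
    tiled = np.tile(profile, 3)          # tile to handle wrap-around peaks
    peaks, _ = find_peaks(tiled, prominence=prominence)
    return int(np.sum((peaks >= n) & (peaks < 2 * n)))

def passes_no_ripples(f1, f2, total, extra=1):
    """No-ripples criterion: the single-pole profiles should not be much
    more structured than the observed total profile."""
    limit = count_peaks(total) + extra
    return count_peaks(f1) <= limit and count_peaks(f2) <= limit
```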
We finally calculated the beam pattern as seen by a distant observer from the single-pole pulse profiles. In practice, we first searched for the overlapping region of beam patterns based on the deviations between single-pole profiles for all observations and energy bands (for details, see Eq. 35 in Kraus et al. (1995)). However, unlike the cases of Her X-1 and Cen X-3 (Kraus et al., 1996; Blum & Kraus, 2000), no overlapping region could be found, and therefore both \(a\) and \(b\) in Eq. 9 could not be determined directly. This is likely due to the fact that the geometry of the pulsar only allows the observer to see different sections of the total beam pattern. We assumed that \(b\) is close to 1, which corresponds to the case where the magnetic field is not dramatically distorted (Kraus et al., 1995). On the other hand, following previous studies of EXO 2030+375 and 1A 0535+262 that show similar geometries (Sasaki et al., 2010; Caballero et al., 2011), \(a\) was estimated to be around -2.2, based on the assumption that the sections of the two single-pole beam patterns can almost be connected to each other with a small gap. In Figure 6, we show the reconstructed beam patterns for all observations and energy bands. Here both \(\theta_{+}\) and \(\theta_{-}\) represent the angle between the dipole magnetic axis and the line of sight to a distant observer, corresponding to the two possible solutions (_plus_ and _minus_) mentioned above. The relation between the two solutions is \(\theta_{+}=\pi-\theta_{-}\).
We calculated the polar angles of the pulsar using Eqs. 10 and 11. For a given \(\Theta_{0}=37^{\circ}\), estimated from the orbital inclination (Giovannelli et al., 2007), the resulting \(\Theta_{1}\) and \(\Theta_{2}\) are \(50^{\circ}\) and \(130^{\circ}\), respectively. Therefore, the angular distance \(\delta\) (between one magnetic pole and the point that is antipodal to the other magnetic pole) is \(12^{\circ}\) according to Eq. 2. Considering error propagation, the error of \(\delta\) is about \(3^{\circ}\) if typical errors of \(\Delta\), \(a\), \(\Theta_{0}\) are assumed to be \(5^{\circ}\), 0.1 and \(2^{\circ}\), respectively.
## 5 Discussion
Based on the extensive _Insight_-HXMT observations of the 2020 giant outburst of 1A 0535+262, we extracted energy-dependent pulse profiles at different luminosity states and decomposed them into single-pole contributions using the method proposed by Kraus et al. (1995). We considered several physically motivated criteria to select reliable decompositions (see Sect. 3.1), and found that only solutions defined by symmetry points \(\Phi_{1}=11^{\circ}/191^{\circ}\) and \(\Phi_{2}=26^{\circ}/206^{\circ}\) are acceptable for all pulse profiles considered in our study. The corresponding angle \(\Delta\), defining the offset of the dipole from the center, is found to be around \(15^{\circ}\), which is slightly smaller than the previous estimate (i.e., \(33^{\circ}\pm 5^{\circ}\)) inferred from different observations using the same method (Caballero et al., 2011). Since the \(\Delta\) angle is a system parameter of the pulsar, it is not expected to change significantly. Therefore, this deviation might reflect a systematic error of the decomposition method due to imperfect underlying assumptions. For example, recent studies reveal the presence of multi-pole magnetic fields which will also influence the observed pulse profiles (Monkkonen et al., 2022; Kong et al., 2022). In addition, 1A 0535+262 is the only source for which there are independent studies based on different pulse profiles obtained from different outbursts. The validity of the decomposition method needs to be further tested, for instance through polarimetric observations of more X-ray pulsars.
Nevertheless, we divided the total pulse profiles into single-pole contributions (Figure 5) and searched for
overlapping regions of the beam pattern for the recovered geometry. Eventually, no overlapping region was found, which is consistent with the previous report by Caballero et al. (2011). This suggests that the two single-pole profiles are responsible for different parts of the total beam pattern. Similar results have been obtained in other sources, e.g., EXO 2030+375 (Sasaki et al., 2010). We estimate the total beam pattern by assembling the two parts with a small gap (Figure 6). In an ideal situation, it should be possible to connect the two parts by adjusting the distribution of the un-pulsed flux between the two single-pole profiles. As shown in Figure 6, in most cases the two beam pattern parts can be connected, although there are some exceptions, such as in 50-70 keV in Observation C and in 15-30 keV in Observation D. The discontinuity of the beam pattern may be due to the fact that the gap we assumed is too small. On the other hand, we cannot rule out the possibility that the beam pattern indeed changes suddenly because of obscuration by the accretion column or the neutron star. In 1A 0535+262, we found that the dipole magnetic field is not significantly distorted, with a small offset \(\delta\sim 12^{\circ}\) between one pole and the antipodal position of the other pole. This offset is slightly larger than that in Her X-1 (Blum and Kraus, 2000), and is comparable to that in Cen X-3 (Kraus et al., 1996)6. Recent studies indicate that the geometries of Her X-1 and Cen X-3, inferred from the pulse profile decomposition, are consistent with polarization observations (Doroshenko et al., 2022; Tsygankov et al., 2022). Therefore, we strongly encourage polarization studies of 1A 0535+262 during future giant outbursts, which occur every few years, with observatories such as the _Imaging X-ray Polarimetry Explorer_ (Weisskopf et al., 2022) and the _enhanced X-ray Timing and Polarimetry_ mission (Zhang et al., 2019).
Footnote 6: In literature, large offsets were reported in 4U 0115+63, V 0332+53 and EXO 2030+375 (Sasaki et al., 2010, 2012). However, these results were based on unknown and assumed viewing angles, leading to large uncertainties of the resulting \(\delta\) (e.g., see Figure 5 in Sasaki et al. (2012)).
It is known that the relativistic light bending has a significant effect on observed pulse profiles. The specific radiation region of pulsars, for instance, the height of the accretion column, is still poorly known. If we assume that the radiation is mainly emitted around polar caps and on the surface of the neutron star, we can convert the apparent beam pattern to the intrinsic beam pattern (shown in Figure 7 and Figure 8) using the approximate formula (Beloborodov, 2002),
\[\mathrm{cos}\vartheta\approx\mathrm{cos}\theta(1-\frac{r_{\mathrm{g}}}{R})+ \frac{r_{\mathrm{g}}}{R}, \tag{12}\]
where \(\vartheta\) is the angle between the radiation direction and the normal to the stellar surface measured by a local observer in the comoving frame. \(r_{\mathrm{g}}\) and \(R\) are the Schwarzschild radius and the radius of the neutron star, which has \(R=2.4\,r_{\mathrm{g}}\) considering canonical values, i.e., \(R\)=10 km and the mass of the neutron star \(M=1.4M_{\odot}\).
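Equation (12) maps the apparent angle \(\theta\) to the local emission angle \(\vartheta\) in one line of code; a sketch with the canonical \(R=2.4\,r_{\mathrm{g}}\) follows.

```python
import numpy as np

def intrinsic_angle(theta, R_over_rg=2.4):
    """Local emission angle from the apparent angle theta, Eq. (12)
    (Beloborodov 2002), for a star of radius R = 2.4 r_g by default."""
    u = 1.0 / R_over_rg                  # r_g / R
    c = np.cos(theta) * (1.0 - u) + u
    return np.arccos(np.clip(c, -1.0, 1.0))

# A photon observed at theta = 90 deg left the surface at ~65 deg:
print(np.degrees(intrinsic_angle(np.radians(90))))
```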
The basic picture of the accretion process onto highly magnetized neutron stars has been proposed by many authors (e.g., Nagel, 1981; Meszaros, 1992). However, it is still difficult to reproduce the pulse profiles in theory. This is due to the fact that many non-linear effects need to be taken into account, most notably strong energy and magnetic field dependent scattering cross-sections defining plasma opacities and thus radiative pressure and the dynamical structure of the accretion flow, the gravitational light bending (for details, see Falkner, 2018), and more. Generally, the radiation we observed is emitted from the accretion mound/column directly or the reprocessing via the surface of the neutron star and/or the upper accretion stream. It is generally accepted that at low luminosities there is an accretion mound on the polar cap of the neutron star, and the emission is mainly transported (and scattered) through the infalling matter, forming a "pencil" beam parallel to the magnetic field lines. On the other hand, at high luminosities, an accretion column appears. As a result, photons can only escape from the sides of the column and perpendicularly to the magnetic field, leading to a "fan" beam (Basko and Sunyaev, 1976; Becker et al., 2012). As shown in Figure 8, the _minus_ solution has indeed two main components parallel and perpendicular to the magnetic field (i.e., \(\vartheta\sim 0^{\circ}/90^{\circ}\)) respectively, which mimics the combination of the canonical "pencil" and "fan" beam patterns, and therefore is more consistent with theoretical expectations (albeit rather simplistic). In addition, the _minus_ solution is similar to the beam patterns in Her X-1 and Cen X-3 (Blum and Kraus, 2000; Kraus et al., 1996), which also suggests that it is probably the correct one for the pulsar.
Kong et al. (2021) studied the evolution of cyclotron resonant scattering features (CRSFs) in 1A 0535+262 during its 2020 giant outburst and found that the CRSF energy is positively (negatively) correlated with luminosity when the luminosity is smaller (larger) than a critical value \(6.7\times 10^{37}\mathrm{ergs/s}\). This theoretically suggests the transition of accretion regimes between "pencil" and "fan" beam patterns. However, as shown in Figure 7 and Figure 8, the beam pattern is more complex and energy-dependent. The cyclotron line energy \(E_{\mathrm{cyc}}\) is \(\sim 45\) keV in 1A 0535+262 (Kong et al., 2021), resulting in dramatic changes of the cross section around this energy range and therefore significant variations of
pulse profiles (Wang et al., 2022). For the energy bands with \(E\gtrsim E_{\rm cyc}\) (i.e., 40-50 keV and 50-70 keV), the beam evolution is qualitatively consistent with the theoretical expectations mentioned above, i.e., dominated by the "pencil" beam when the source is relatively faint and dominated by the "fan" beam around the outburst peak. In Observations C, D and E, when the source is bright, we find that there is a significant fraction of high-energy X-rays emerging from the direction \(\vartheta>90^{\circ}\). We consider that this might originate from scattering in the upper accretion stream, as suggested by Kraus et al. (2003); Sasaki et al. (2010); Caballero et al. (2011); Sasaki et al. (2012). On the other hand, the beam pattern for \(E<E_{\rm cyc}\) is more complex. For example, in Observation A the 15-30 keV pulse profile presents a "fan" beam which is different from that at high energies. To our knowledge, this is the first time such a transition of beam patterns with energy has been observed. This is consistent with the theoretical prediction by Brainerd & Meszaros (1991), who interpret it as a result of scattering in the accretion column if the column is optically thin to Thomson scattering and optically thick to resonant Compton scattering. In addition, another "pencil" beam component also appears in pulse profiles at low energies (15-30 keV and 30-40 keV). It is stronger in the fading phase of the outburst than in the rising phase, even though the accretion rate is the same in both cases. This might be related to the hysteresis effects of spectral and temporal properties reported by other authors (e.g., Doroshenko et al., 2017; Wang et al., 2020; Kong et al., 2021). The physical mechanism is still poorly known. Nevertheless, we speculate that this "pencil" beam must be attributed to an accumulated effect, such as a gradual change of the shape of the accretion mound/column, which may influence the velocity of the in-falling matter near the accretion column's wall and therefore the illumination of the surface of the neutron star. As a result, the reflection (Lyubarskii & Syunyaev, 1988; Poutanen et al., 2013; Kylafis et al., 2021) might be stronger in the fading phase of the outburst, corresponding to the additional "pencil" beam.
## 6 Acknowledgments
This work is based on observations with _Insight_HXMT, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS). This work is supported by the National Natural Science Foundation of China under grants No. 12173103, U2038101, U1938103, 11733009. This work is also supported by International Partnership Program of Chinese Academy of Sciences (Grant No.113111KYSB20190020), the National SKA Program of China (Grant No. 2022SKA 0120101) and the National Key R&D Program of China (No. 2020YFC2201200), the science research grants from the China Manned Space Project (No. CMSCSST-2021-B09, CMSCSST-2021-B12 and CMS-CSST-2021-A10), and opening fund of State Key Laboratory of Lunar and Planetary Sciences (Macau University of Science and Technology) (Macau FDCT Grant No. SKL-LPS(MUST)-2021-2023). C.Y. has been supported by the National Natural Science Foundation of China (Grant Nos. 11521303, 11733010, and 11873103).
|
2303.08573 | Rapid in-situ quantification of rheo-optic evolution for cellulose
spinning in ionic solvents | It is critical to monitor the structural evolution during deformation of
complex fluids for the optimization of many manufacturing processes, including
textile spinning. However, in situ measurements in a textile spinning process
suffer from paucity of non-destructive instruments and interpretations of the
measured data. In this work, kinetic and rheo-optic properties of a
cellulose/ionic liquid solution were measured simultaneously while fibers were
regenerated in aqueous media from a miniature wet spinline equipped with a
customized polarized microscope. This system enables to control key spinning
parameters, while capturing and processing the geometrical and structural
information of the spun fiber in a real-time manner. We identified complex flow
kinematics of a deformed fiber during the coagulation process via feature
tracking methods, and visualized its morphology and birefringent responses
before and during regeneration at varying draw ratios and residence time.
Meanwhile, a three-dimensional physical rheological model was applied to
describe the non-linear viscoelastic behavior in a complex wet-spinning process
incorporating both shear and extensional flows. We subsequently compared the
birefringent responses of fibers under coagulation with the transient
orientation inferred from the rheological model, and identified a superposed
structure-optic relationship under varying spinning conditions. Such structural
characterizations inferred from the flow dynamics of spinning dopes are readily
connected with key mechanical properties of fully-regenerated fibers, thus
enabling to predict the spinning performance in a non-destructive protocol. | Jianyi Du, Javier Paez, Pablo Otero, Pablo B. Sanchez | 2023-03-15T12:50:50Z | http://arxiv.org/abs/2303.08573v1 | # Rapid in-situ quantification of rheo-optic evolution for cellulose spinning in ionic solvents
###### Abstract
It is critical to monitor the structural evolution during deformation of complex fluids for the optimization of many manufacturing processes, including textile spinning. However, _in situ_ measurements in a textile spinning process suffer from paucity of non-destructive instruments and interpretations of the measured data. In this work, kinetic and rheo-optic properties of a cellulose/ionic liquid solution were measured simultaneously while fibers were regenerated in aqueous media from a miniature wet spinline equipped with a customized polarized microscope. This system enables to control key spinning parameters, while capturing and processing the geometrical and structural information of the spun fiber in a real-time manner. We identified complex flow kinematics of a deformed fiber during the coagulation process via feature tracking methods, and visualized its morphology and birefringent responses before and during regeneration at varying draw ratios and residence time. Meanwhile, a three-dimensional physical rheological model was applied to describe the non-linear viscoelastic behavior in a complex wet-spinning process incorporating both shear and extensional flows. We subsequently compared the birefringent responses of fibers under coagulation with the
transient orientation inferred from the rheological model, and identified a superposed structure-optic relationship under varying spinning conditions. Such structural characterizations, inferred from the flow dynamics of spinning dopes, are readily connected with key mechanical properties of fully-regenerated fibers, thus enabling prediction of the spinning performance in a non-destructive protocol.
## 1 Introduction
Macromolecular systems undergoing highly non-linear deformation in manufacturing processes are subject to transient evolution of their internal structures, including polymer extension and chain orientations. Such structural evolution on the microscopic level results in temporally and spatially varying properties at larger lengthscales, which are usually accompanied by significant optical responses and can be captured readily through different microscopic or spectroscopic techniques [1]. Among all the optical phenomena, birefringent responses arising from flow-induced anisotropy are one of the most accessible optical indicators of structural properties [2]. Birefringence reflects the different refractive indices along the ordinary and extraordinary axes, and can be visualized using polarized microscopy if the materials are non-opaque [2, 3]. Well-established techniques, such as the Berek compensator and its variants, have been used to quantify static or slowly-varying birefringence. In contrast, in many manufacturing processes, transient birefringent responses are of critical importance to capture the evolution of internal structures during material deformation or phase change, which is key to the resulting properties. However, fast and accurate _in situ_ measurement of such rheo-optic properties has been addressed on very few occasions to the best of our knowledge [4, 5, 6, 7].
An emerging application that necessitates rapid and accurate monitoring of the transient physical and chemical responses is the regeneration of man-made cellulosic fibers (MMCFs) that is aimed to replace conventional cotton fibers with high carbon footprints [8]. In general, MMCFs are produced by dissolving cellulose and spinning the subsequent solutions in
non-solvent media to regenerate fibers. Suitable solvents for mass production are required to dissolve cellulose with minimal degradation, while producing fibers of high quality and offering operational and environmental benefits [9]. As a result, the dissolution and regeneration processes are commonly multi-staged and highly transient, with complex disruption and formation of inter-cellulose chain linkages. Monitoring such transient dynamics is key to following the evolution of the spinning process. A rapid measurement of the temporal evolution of the cellulose crystallinity and its internal structures is closely connected to the performance of the spinning and coagulation stages and helps optimize the overall process [7].
Over the past decades, a certain family of compounds named ionic liquids (ILs) has proven to dissolve cellulose with minimal polymer degradation [10]. ILs consist of large ions with highly delocalized charges [11]. This chemical structure confers them unique physicochemical properties for a wide variety of applications [12]. Given the huge number of ionic combinations, ILs are often referred to as solvents with tailored properties, which have been described in detail in a number of seminal works [13, 14, 11]. In real cellulose processing, the application of ILs is deemed an alternative to the more common Lyocell process to produce textile fibers from biomass [15, 8, 16]. In a typical cellulose/IL solution for spinning (referred to as a "spinning dope"), the dissolved cellulose, despite losing its network integrity, remains broadly entangled and dynamically interactive at high concentrations, retaining its spinnability and enhancing the fiber yield of the spinning process. The spinning performance is a result of this complex material evolution; hence, the mechanical and chemical properties of the cellulose/IL solutions need to be characterized in a local, real-time manner during the spinning and regeneration processes, in which a transient exchange of solvent and anti-solvent media occurs in the drawn fibers and progressively reconstructs the cellulose structure. To produce high-quality cellulose fibers, the spinning parameters need to be optimized based on accurate monitoring of the structural information in the process [9].
Conversion of dissolved cellulose into fibers is achieved via the wet spinning process [17], in which the dope is extruded through a spinneret (diameter \(D_{0}\)) at an average velocity
ranging approximately from 1 m/min to 5 m/min.[18] Of note, the dope often passes through an air gap (named "dry-jet wet-spinning") before entering the coagulation bath filled with an anti-solvent medium.[9] To impose a preferred conformation on the cellulose chains, a pulling rod named the _godet wheel_ collects the extruded filament at a linear speed \(v_{g}\) higher than the extrusion velocity \(v_{0}\) from the spinneret, where \(v_{\mathrm{g}}/v_{0}=\Gamma\) is referred to as the draw ratio. The spun fibers are simultaneously coagulated due to the exchange of solvent and anti-solvent over a sufficiently long residence time (RT), during which the cellulose chains link together via hydrogen bond formation to regenerate a polymer network.[9, 19]
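The basic kinematic quantities of the spinline follow directly from these parameters. The short Python sketch below (our illustration) computes the draw ratio \(\Gamma\), the accumulated Hencky strain \(\epsilon=\ln\Gamma\) used in the Results below, and the drawn filament diameter implied by mass conservation for an incompressible dope; the operating values are hypothetical.

```python
import numpy as np

def spinline_kinematics(v0, vg, D0):
    """Draw ratio, accumulated (Hencky) strain and drawn filament diameter
    from mass conservation, assuming an incompressible dope on a steady
    spinline."""
    gamma = vg / v0              # draw ratio
    strain = np.log(gamma)       # accumulated strain eps = ln(v_g / v_0)
    D = D0 / np.sqrt(gamma)      # continuity: (pi/4) D0^2 v0 = (pi/4) D^2 vg
    return gamma, strain, D

# Hypothetical operating point: extrusion at 2 m/min, godet at 6 m/min,
# 100-micron spinneret (values chosen purely for illustration).
print(spinline_kinematics(2.0, 6.0, 100e-6))   # -> (3.0, ~1.10, ~58 um)
```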
Dynamics of the coagulation process are normally inferred from characterizations of either the cellulose dopes prior to the spinning process, or the fully-coagulated spun fibers in a _post factum_ manner, and these measurements are retrospectively subsumed into a trial-and-error process to optimize the spinning design for targeted fiber properties.[6, 9, 20] The knowledge obtained from these measurements is thus statistics-based. As a result, spinning parameters obtained from the implications of specific cellulose-dope systems are not necessarily applicable to a wider variety of material and spinline configurations. Birefringent responses of the fibers in an ongoing regeneration process, in contrast, provide an easy and non-destructive probe of the cellulose structure, and can be readily connected to the resulting fiber dynamics and mechanical responses through a rheo-optic relationship. In previous studies, the birefringent responses during a fiber spinning process have been briefly captured for Lyocell processes[5, 7] and cellulose nanofiber systems.[21] However, the measured optical responses are mainly phenomenological and do not readily reveal the underlying morphological variations of cellulose chains under spinning, largely due to limitations in _in situ_ visualization tools and in rheo-optical interpretations of the constitutive parameters extracted from the complex rheology of spinning dopes. The lack of both instrumentation and fundamental understanding hampers the construction of an accurate structure-property relationship, thus delaying an optimal spinning process with great industrial potential.
To address this limitation, we bridge the gap between _a priori_ knowledge of the spinning-dope rheology and the structural evolution during the fiber spinning process by directly visualizing the birefringent responses of cellulose fibers during coagulation. A customized instrument is constructed, comprising a charge-coupled device (CCD) camera and a liquid-crystal (LC) compensator with tunable retardation, to allow for an accurate and scalable _in situ_ measurement of the flow kinematics and birefringent responses of extruded filaments during coagulation. The measured birefringence at varying spinning conditions can be readily connected to the averaged cellulose conformation predicted from the spinning-dope rheology and the corresponding flow kinematics. This relationship allows us to predict the mechanical properties of fully-coagulated fibers through simple online observation and independent rheological characterizations of the spinning dope. As a case study, we measured the kinematics of fiber spinning and the birefringent responses of a selected cellulose/ionic liquid (1-ethyl-3-methylimidazolium acetate) system at different spinning configurations and degrees of coagulation. We quantified the temporal evolution of an extruded filament, as well as the solvent/anti-solvent exchange during the spinning process. From these measurements, we predicted the morphological evolution in the filament along the spinning direction using a physical constitutive framework based on tube models under complex flow conditions [2, 22]. We thereby derive a more comprehensive, structure-based dynamic picture without performing complex scattering-based structural analysis. Outputs from this study can help accurately predict the evolution of fibers in more general and scaled-up spinning scenarios.
## 2 Results
A miniature spinline is configured with an optical setup perpendicular to the direction of fiber drawing for birefringence measurements (Figure 1a, and real setup in Figure 1c). In the optical setup, a polarizer and an analyzer are configured on either side of the sample with well-positioned angles. A liquid-crystal (LC) retarder is positioned along the optical path prior to the measured fiber as a retardance compensator controlled by an external
circuit for _in situ_ measurements (Figure 1b). Details of the optical setup are presented in the Methods section. Along the fiber direction, the spinning dope under shear stress is extruded from the spinneret, and subsequently spun under an extensional flow imposed by a faster-spinning godet wheel. Two close-up schematics under Figure 1a show the fiber kinematics near (1) and far from (2) the spinneret, respectively. Near the spinneret, fiber swelling is expected due to the non-trivial normal stress in the radial direction arising from the viscoelasticity of the spinning dope. At a draw ratio \(\Gamma>1\), the filament undergoes an extension with an imposed accumulated strain of \(\epsilon=\ln\left(v_{\mathrm{g}}/v_{0}\right)\), during which the cellulose chains reorient towards the drawing direction. Far from the spinneret, the fiber has reached the godet wheel velocity and moves broadly as a rigid body in an aqueous coagulation bath that serves as the anti-solvent. A prolonged period of travel (the residence time) in the coagulation bath is provided to allow for sufficient exchange of the solvents and anti-solvents, which reconstructs the hydrogen bonds between cellulose chains, thereby linking the cellulose chains to form stable internal structures.
In this study, we demonstrated cellulose regeneration with the prehydrolysis-kraft dissolving pulp _Eucalyptus urograndis_, dissolved in 1-ethyl-3-methylimidazolium acetate, [C\({}_{2}\)C\({}_{1}\)Im][OAc] (Proionic GmbH, Austria), at \(c=5\,\%\). This concentration of cellulose is selected above its entanglement concentration \(c_{\mathrm{e}}\) to optimize the fiber yield with the minimal amount of solvent needed [23, 24]. When \(c>c_{\mathrm{e}}\), cellulose chains start to entangle and form larger-scale networks, modifying the rheological responses due to the collective deformation and alignment of the chains [25]. As a result, it is critical to extract both the flow kinematics during the spinning process and the complex rheology in order to describe the fiber evolution in a spinning process more comprehensively.
Figure 2 shows the overall filament morphology under varying spinning configurations. In general, we noticed an expanded fiber diameter at the spinneret outlet (Figure 2a). We measured the fiber diameter at \(x=2\,\mathrm{mm}\), where \(x\) is the distance from the end of the spinneret along the fiber direction, at different draw ratios imposed by a constant flow rate (\(v_{0}=0.85\,\mathrm{mm/s}\)) from the spinneret but different godet wheel speeds. The extracted fiber diameters \(D\) at varying draw ratios (red circles in Figure 2b) exhibit a negative power-law trend against the draw ratio, and the values exceed the predicted diameter based on conservation of volume \(D_{\mathrm{CV}}=D_{0}/\sqrt{\Gamma}\) (solid line in Figure 2a and red reference line in Figure 2c and d; \(D_{0}=300\,\mathrm{\SIUnitSymbolMicro m}\)), which can be attributed to both the die-swelling effect of the spinning dope, which leads to a non-trivial radial normal stress, and the exchange of solvents and anti-solvents. The ratio \(D/D_{\mathrm{CV}}\) (blue triangles) remains above unity and steadily increases with the draw ratio. When the fiber travels far from the spinneret, the normal stress component in the radial direction due to the die-swelling effect rapidly relaxes (indicated in Figure 3b), and the fiber kinematics are progressively dominated by the specified spinning parameters. Figure 2c shows snapshots of the fiber morphology at \(\Gamma=5.7\), \(6.2\), and \(14.6\) and residence times of \(10\,\mathrm{s}\) and \(94\,\mathrm{s}\). The measured fiber diameter is significantly larger than the diameter under conservation of volume \(D_{\mathrm{CV}}\) (red reference lines). This indicates that the exchange of solvents and anti-solvents results in a net flow into the filament. In addition to the overall change in the fiber volume, the spatial distribution of the different components is radially heterogeneous due to the relatively low solvent/anti-solvent diffusivity in the coagulation process. This non-uniform radial profile can be visualized by observing the fully-coagulated fiber (\(\Gamma=5.7\)) under brightfield imaging (Figure 2d), in which a core-shell structure can be clearly identified. Similar fiber structures have been characterized in a number of previous studies [9, 26]. We plotted the fiber diameters at varying draw ratios and residence times (Figure 2e). At different residence times, the evolutions of the fiber diameters broadly overlap and progressively grow beyond the reference diameter \(D_{\mathrm{CV}}\) (black line) as the draw ratio increases. Of special note, the diameter of the fully-coagulated fiber decreases significantly below that under conservation of volume (Figure 2d). As a result, we conclude that the change of fiber diameter in a coagulation process is primarily attributable to the solvent/anti-solvent exchange. More quantitatively, the swelling ratios \(A/A_{\mathrm{CV}}=(D/D_{\mathrm{CV}})^{2}\) can be calculated under the assumption that the fiber swells uniformly along the radial direction (Figure 2f). From the figure, the degree of fiber swelling in a coagulation process is largely dominated by the draw ratio.
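For illustration, the geometric relations above can be evaluated with a minimal Python sketch; \(D_{0}\) is taken from the text, while the measured diameters below are hypothetical placeholders rather than data from this study.

```python
import numpy as np

# Conservation-of-volume diameter D_CV = D0 / sqrt(draw ratio) and
# area swelling ratio A/A_CV = (D / D_CV)^2.
# D0 is the spinneret diameter from the text; D_measured is an
# illustrative placeholder, not the measured data.
D0 = 300e-6                                      # spinneret diameter [m]
gamma = np.array([2.0, 5.7, 14.6])               # draw ratios
D_measured = np.array([250e-6, 180e-6, 130e-6])  # hypothetical values [m]

D_cv = D0 / np.sqrt(gamma)
swelling = (D_measured / D_cv) ** 2              # A / A_CV

for g, s in zip(gamma, swelling):
    print(f"draw ratio {g:5.1f}: A/A_CV = {s:.2f}")
```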
From Figure 2, the fiber geometries in a spinning process deviate significantly from the predictions under conservation of volume. Consequently, the fiber kinematics cannot be inferred faithfully from the evolution of the diameter. Therefore, we performed feature-tracking velocimetry using the disperse phases in the spinning dopes (Figure 3a). These features largely result from the partially-dissolved cellulose with a typical size ranging from \(10\,\mathrm{\SIUnitSymbolMicro m}\) to \(100\,\mathrm{\SIUnitSymbolMicro m}\). While ionic-liquid solvents can dissolve native cellulose at concentrations above \(15\,\%\), the preparation process requires delicate pretreatments at scaled-up production to facilitate dissolution with controllable derivatizing effects [27]. As a result, spinning dopes with partially-dissolved cellulose better represent the material systems used for industrial applications [16]. We justified that these disperse features can be used for particle-tracking velocimetry (PTV) by calculating the Stokes number defined as \(\mathrm{St}=\tau_{\mathrm{i}}/\tau_{\mathrm{f}}\). Here, \(\tau_{\mathrm{i}}\) is the relaxation time of a feature object in a flow field and is calculated from \(\tau_{\mathrm{i}}=\rho_{\mathrm{i}}d_{\mathrm{i}}^{2}/(18\eta)\), where \(\rho_{\mathrm{i}}\) and \(d_{\mathrm{i}}\) are the density and diameter of the feature object, respectively, and \(\eta\) is the viscosity of the fluid phase. In the denominator, \(\tau_{\mathrm{f}}=d_{\mathrm{i}}/U_{0}\) characterizes the time of flow past the feature object, and \(U_{0}\) is the field velocity. We calculated the Stokes number for a typical spinning process to be \(\mathrm{St}=1\times 10^{-10}\) to \(1\times 10^{-8}\ll 1\) based on independent measurements of the material properties, justifying the use of the feature objects as tracers. To recover the axial velocity along the spinning direction, we sampled multiple feature objects at different distances \(x=0\,\mathrm{mm}\) to \(11\,\mathrm{mm}\) from the spinneret, and calculated the "transient" axial velocity of each feature object from two adjoining frames (Figure 3a). The image processing was performed with the third-party computation package _trackpy_[28]. We measured the axial velocity at \(\Gamma=2\) at different locations and averaged the raw data into specified bin sizes (\(1\,\mathrm{mm}\)) for plot legibility (Figure 3b). We observed that the fiber accelerates to the godet wheel speed \(v_{\mathrm{g}}\) (blue solid line) within a short distance (\(x\approx 1\,\mathrm{mm}\)). A closer look in the range of \(0\,\mathrm{mm}\) to \(3\,\mathrm{mm}\) at varying draw ratios substantiates a consistent "acceleration length" \(L_{0}\approx 1\,\mathrm{mm}\) independent of the imposed draw ratios. As a result, the kinematics of extruded spinning dopes in a generic cellulose-fiber spinning process can be approximated as a piece-wise process: When \(x<L_{0}\), the fiber is extended rapidly to reach the desired terminal velocity (\(v_{\mathrm{g}}\)), during which an extensional strain of \(\epsilon_{0}=\ln\Gamma\) is accumulated. Beyond this acceleration period, the fiber moves broadly as a rigid body and undergoes a coagulation process over an extended time period. We further justified this separable acceleration-coagulation process via the distinct time scales for acceleration (approximately 1 s) and diffusion (approximately 100 s), the latter of which is calculated based on the diffusivity measurements during solvent/anti-solvent exchange from a number of previous studies [27, 29]. Because the kinematics evolve much faster than the diffusion, the deformation of the fibers is effectively instantaneous compared with their regeneration through solvent/anti-solvent exchange.
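The Stokes-number estimate can be reproduced in a few lines; the density and viscosity below are assumed order-of-magnitude values rather than the independently measured material properties.

```python
# Stokes number St = tau_i / tau_f for the disperse tracer features,
# with tau_i = rho_i * d_i**2 / (18 * eta) and tau_f = d_i / U0.
rho_i = 1.5e3   # feature density [kg/m^3] (assumed)
eta = 1.0e3     # dope viscosity [Pa s] (assumed order of magnitude)
U0 = 1.0e-3     # field velocity [m/s] (~ extrusion speed)

for d_i in (10e-6, 100e-6):                # feature sizes from the text
    tau_i = rho_i * d_i ** 2 / (18 * eta)  # tracer relaxation time [s]
    tau_f = d_i / U0                       # flow time scale [s]
    print(f"d_i = {d_i * 1e6:5.0f} um -> St = {tau_i / tau_f:.1e}")
```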
To understand the dynamics of spinning dopes prior to the onset of regeneration, we performed comprehensive rheological characterizations of the spinning dope under both shear and extensional flows, which correspond to the deformation in the spinneret and during the spinning process, respectively. The shear rheology was measured using a commercial rheometer (Physica MCR 101, Anton Paar), and the extensional rheology was characterized using a customized capillarity-driven breakup extensional rheometer (CaBER) [27]. Both measurements were performed at 80 \({}^{\circ}\)C. The CaBER works by rapidly imposing an extensional strain on a fluid sample that rests between two coaxial plates, which induces filament pinch-off as a result of the driving surface tension and the resistance from the material. An apparent extensional viscosity can thus be calculated from the measured filament diameter \(D(t)\) (Figure 3d) as \(\eta_{\mathrm{E,app}}=\sigma_{\mathrm{sd}}/[-\dot{D}(t)]\), where \(\sigma_{\mathrm{sd}}\) is the surface tension of the spinning dope (47 mN/m [30, 23]). The snapshots of the filament show a breakup time of approximately 57 s, during which the transient strain rates in the necking region of the filament increase from \(0.04\,\mathrm{s^{-1}}\) to \(1\,\mathrm{s^{-1}}\) (Figure 3e). In this process, the cellulose chains are forced to reorient and the material properties are significantly modified. Finally, the extracted shear and apparent extensional viscosities are plotted and compared (Figure 3f). The spinning dope shows rate-thinning behavior in both shear and extensional flows. To extract the structural evolution during the flow
deformation, we applied a physical constitutive model (the Rolie-Poly model) to fit the experimental data in both shear and extensional flows, following procedures identical to those of a previous study [30] (solid and dashed lines in Figure 3f). The fitted lines are in excellent agreement with the experimental data.
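As a sketch of the apparent-extensional-viscosity calculation, the following snippet differentiates a synthetic diameter trace; only \(\sigma_{\mathrm{sd}}\) is taken from the text, and the exponential decay of \(D(t)\) is a stand-in for the measured CaBER data.

```python
import numpy as np

# Apparent extensional viscosity eta_E,app = sigma_sd / (-dD/dt).
sigma_sd = 47e-3                 # surface tension [N/m], from the text
t = np.linspace(0.0, 50.0, 500)  # time [s]
D = 1e-3 * np.exp(-t / 30.0)     # synthetic filament diameter [m]

dDdt = np.gradient(D, t)         # numerical time derivative of D(t)
eta_E_app = sigma_sd / (-dDdt)   # apparent extensional viscosity [Pa s]

print(f"eta_E,app at t = 10 s: {np.interp(10.0, t, eta_E_app):.0f} Pa s")
```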
The predicted structural evolution from the measured spinning-dope kinematics and rheological responses is subsequently compared with the _in situ_ birefringence measurements of fibers in a spinning process using an alternating LC retarder. Briefly, the LC retarder generates a series of retardations within half the wavelength, which is subsequently superposed onto the unknown birefringent fibers. The birefringence measurement is obtained by extracting the phase difference between the evolution of the light intensity with and without fibers. Because the phase difference is independent of the overall brightness, such birefringence measurements apply to semi-opaque material systems as well. Using this measuring technique, we found the measured fiber birefringence to increase consistently with the draw ratio at varying residence times (Figure 4a), showing that the primary birefringence contribution comes from fiber extension. We subsequently fit the evolution of birefringence with a linear relationship expressed as
\[\Delta n=K(\Gamma-1)+\Delta n_{0}, \tag{1}\]
where \(\Delta n_{0}\) is the intercept of the birefringence in the absence of extension (\(\Gamma=1\), vertical dashed line). The fitted values of \(\Delta n_{0}\) (blue squares) remain broadly constant, while the slope \(K=\mathrm{d}\Delta n/\mathrm{d}\Gamma\) (black triangles) decreases as the residence time increases (Figure 4b). The constant non-zero intercept in the absence of extension can be attributed to the non-trivial residual cellulose alignment during the flow in the spinneret, which remains unaffected during the regeneration process. As we attribute the variations in birefringence to the flow-induced anisotropic structures resulting from the reorientation of semi-flexible cellulose chains under the drawing process, the decreased slope \(K\) at increased residence time represents enhanced resistance to a preferred cellulose alignment under external drawing as the fibers are increasingly coagulated, partially due to enhanced fiber stiffness and cellulose relaxation. In contrast, the birefringent responses of spun fibers are largely determined during the extensional deformation of the spinning dopes, parameterized by the draw ratios. Figure 3 has shown that such extensional deformation is imposed within a short acceleration length \(L_{0}\), when the majority of the filament remains uncoagulated. As a result, the structural properties of the spun fiber at \(x=L_{0}\) can be largely inferred from the rheology of the spinning dope using the previously measured flow kinematics, and become an accessible property indicator that readily connects to the fully-coagulated fibers. Quantitatively, an orientation tensor \(\mathbf{W}\) is commonly used to describe the ensemble-averaged orientation of the Kuhn steps of all the polymer chains in a solution, and has been integrated into a number of constitutive models based on kinetic theories to connect micro- and macroscopic properties for polymer solutions and polymer melts [25, 31]. Specifically, Owens et al. [30] applied the Rolie-Poly model to derive a unified mechanical framework to describe the flow behavior of cellulose dissolved in ionic liquids over a wide range of strain rates. A frame-invariant scalar can be derived from the double-dot product \(S=\mathbf{W}:\mathbf{W}\) to describe the macroscopic anisotropy arising from the preferred orientation of polymer-chain ensembles, where \(S=1/3\) corresponds to a randomly-oriented distribution, whereas \(S=1\) corresponds to a well-aligned distribution [32] (Figure 4c). In a spinning process, the evolution of \(\mathbf{W}\) can be calculated from the measured flow kinematics in the form of the axial velocity \(v(x)\). To demonstrate this relationship, we approximate the evolution of the axial velocity in a coagulation process using a simple linear form (dashed line in Figure 3c) as
\[v(x)=\begin{cases} v_{0}+\dfrac{(v_{\mathrm{g}}-v_{0})x}{L_{0}},& x\leq L_{0}\\ v_{\mathrm{g}},& x>L_{0}\end{cases} \tag{2}\]
where \(L_{0}\) is the acceleration length measured in Figure 3. The imposed strain rate during fiber acceleration remains constant and can be calculated as \(\dot{\epsilon}(x)=\mathrm{d}v/\mathrm{d}x=(v_{\mathrm{g}}-v_{0})/L_{0}\) for an extended period of \(t_{\mathrm{a}}=\int_{0}^{L_{0}}\mathrm{d}x/v=L_{0}\ln{(\Gamma)}/(v_{\mathrm{g }}-v_{0})\). The transient extensional
rate \(\dot{\epsilon}(x)\) can thus be substituted into the constitutive equation to calculate the evolution of the orientation tensor \(\mathbf{W}(x)\). The initial condition (\(x=0\) in a Eulerian frame) is inferred from the steady-state shear flow in the spinneret based on the imposed flow rate and the spinneret geometry. By plotting the measured birefringence against a peak orientation scalar \(S_{x=L_{0}}\) defined at \(x=L_{0}\) for each residence time (Figure 4d), we identified similar increasing trends in the birefringence as the structure becomes more aligned. Compared with the imposed spinning parameters, the birefringence provides a more generic and consolidated measure of the resulting fiber structure. To show this, we establish a superposing relationship between the birefringence and the structural parameters under varying spinning conditions. We notice that at a fixed residence time (hence \(v_{\text{g}}\)), the peak orientation scalar converges to a finite value \(S_{\Gamma\rightarrow\infty}\) as the draw ratio increases. This finite value can be determined by imposing a steady-state extensional rate of \(\dot{\epsilon}=v_{\text{g}}/L_{0}\), which can be shown to produce a mathematically equivalent flow dynamic (dashed vertical asymptotes in Figure 4d). We identified similar asymptotic trends of the birefringence measurements at different residence times. To render a superposed relationship across varying residence times, we replotted the birefringence due to spinning, \(\Delta n-\Delta n_{0}\), against \(S_{\Gamma\rightarrow\infty}-S_{x=L_{0}}\) (Figure 4e). We found that all curves exhibit a power-law decaying trend with a broadly constant power exponent of \(-1\). A horizontal shifting factor \(b_{\text{S}}\) based on the measurement at a residence time of \(10.0\,\text{s}\) is imposed to further consolidate a master curve at varying residence times (Figure 4f), and the shifting factor \(b_{\text{S}}\) extracted from the least-squares regression exhibits a clear correlation with the residence time (Figure 4g). The last shifting operation is not trivial, because the rate of relaxation of the cellulose orientation can vary significantly with the residence time, and needs to be described separately with an additional superposition. The extracted shifting factor \(b_{\text{S}}\) appears to be proportional to the logarithmic residence time (black line), which indicates a slow-down in cellulose relaxation as the residence time increases. As a result, we justify a universal rheo-optic relationship derived from the general constitutive model for entangled spinning dopes using the peak orientation scalar. Of special note, the birefringence of cellulose fibers is not only a function of the draw ratio and the residence time, but also of the extrusion speed at the spinneret, which dominates the cellulose orientation under rapid extension at the onset of fiber spinning. The mechanical properties of fully-coagulated fibers thus vary accordingly, even if the draw ratios are identical.
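The piecewise kinematics of Equation (2) make these quantities straightforward to tabulate. The following sketch uses \(L_{0}\) from Figure 3 and the disengagement time \(\tau_{\mathrm{d}}\) quoted later in the text, together with a representative draw ratio; it is illustrative rather than a reproduction of the analysis.

```python
import numpy as np

# Piecewise kinematics of Equation (2): constant strain rate over the
# acceleration length L0, then rigid-body motion.
L0 = 1e-3     # acceleration length [m], from Figure 3
tau_d = 0.21  # Rolie-Poly disengagement time [s], from the text

def kinematics(v0, gamma, L0=L0):
    """Imposed strain rate and acceleration time for draw ratio gamma."""
    vg = gamma * v0
    eps_dot = (vg - v0) / L0              # strain rate [1/s]
    t_a = L0 * np.log(gamma) / (vg - v0)  # acceleration time [s]
    return eps_dot, t_a

for v0 in (0.37e-3, 0.75e-3, 1.49e-3):    # studied extrusion speeds [m/s]
    eps_dot, t_a = kinematics(v0, gamma=6.0)
    flag = "t_a > tau_d" if t_a > tau_d else "t_a < tau_d"
    print(f"v0 = {v0 * 1e3:.2f} mm/s: eps_dot = {eps_dot:.2f} 1/s, "
          f"t_a = {t_a:.2f} s ({flag})")
```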
The spun fibers under varying spinning conditions were collected from the miniature spinline, fully coagulated and dried for structural and mechanical characterizations (Figure 5). Longitudinal (i, iii, v, vii) and cross-sectional (ii, iv, vi, viii) morphologies at varying draw ratios (\(\Gamma=1\), \(2\), \(4\), and \(6\) at \(v_{0}=0.75\,\mathrm{mm/s}\)) were captured by scanning electron microscopy (Figure 5a; JEOL USA), and stronger extension in the drawing direction can be identified as the draw ratio increases. The fiber cross-sections at high draw ratios are smoother and progressively deviate from circular shapes, which is compatible with deformations caused by the horizontal rods in the coagulation bath (see the experimental setup in Figure 1) under the strain imposed by the take-over rod (godet wheel), demonstrating increased plasticity due to enhanced cellulose alignment. The mechanical properties of the fully-coagulated fibers were measured using a standard mechanical tester (AGS-X STD, Shimadzu) equipped with a \(10\,\mathrm{N}\) load cell according to the standard measuring protocol [33]. To substantiate the effects of both the draw ratio and the extrusion speed on the spinning performance, three extrusion speeds (\(v_{0}=0.37\), \(0.75\), and \(1.49\,\mathrm{mm/s}\)) were tested. Figure 5b shows the specific force against the stroke at the distinct extrusion speeds of \(0.37\,\mathrm{mm/s}\) (thin dashed lines) and \(1.49\,\mathrm{mm/s}\) (thick solid lines) at varying draw ratios. We identified similar two-stage mechanical responses with elastic and plastic regions under varying spinning conditions. However, the magnitudes on both the abscissa and the ordinate show distinct trends. To quantify the mechanical responses of the spun fibers, three specific properties were extracted from the force-stroke curve: the stiffness, the tenacity, and the strain energy (Figure 5e). The stiffness, which describes the linear elasticity, increases at a higher draw ratio, but remains broadly unchanged at varying extrusion speeds (Figure 5c). Beyond the linear region, the tenacity and strain energy, which describe the fiber strength and toughness, respectively, increase at a higher draw ratio as well as at a slower extrusion speed. For an entangled polymer network such as regenerated cellulose fibers, the linear elasticity arises from variations in microscopic entropy due to the reorientation of cellulose chains [25]. As a result, the magnitude of the elastic moduli is readily connected to the structural conformation after spinning, regardless of the transient deformation throughout the process. In Figure 4b, the birefringence measurement under no extension (\(\Delta n_{0}\)) is broadly constant across varying residence times. As a result, the structural conformation of the regenerated cellulose is largely determined by the draw ratio, and hence so are the linear elastic properties. On the contrary, despite a number of studies that have addressed the dependence of tenacity on draw ratio [9, 34], these non-linear mechanical properties appear to be functions of the extrusion speed \(v_{0}\) as well. The enhanced tenacity and specific strain energy at lower extrusion speeds have been briefly reported previously [35, 36]; these reports attributed the more tenacious and ductile trends in the regenerated fibers at lower extrusion speeds to a smaller deformation rate in the spinneret, and thus less deformation energy in the spinning dope before spinning and coagulation. However, as stated previously, we did not observe a significant change in the birefringence of the spinning dope right after extrusion from the spinneret at \(\Gamma=1\) (\(\Delta n_{0}\) in Figure 4b). As a result, we attribute the increased tenacity and toughness to the prolonged acceleration time \(t_{\mathrm{a}}\) during the fiber drawing period (\(x<L_{0}\)) as the extrusion speed decreases at a constant draw ratio. By substituting the spinning parameters, we found that the acceleration times under all the studied spinning conditions are greater than \(0.24\,\mathrm{s}\), which remains larger than the disengagement time (\(\tau_{\mathrm{d}}\approx 0.21\,\mathrm{s}\)) in the Rolie-Poly model extracted from the rheological characterizations (Figure 3f). As a result, despite the fiber drawing induced by a non-trivial draw ratio, which induces significant reorientation of the cellulose chains via advection, these chains simultaneously undergo a disengaging process and are reorganized to reduce the free energy. This "annealing-like" process homogenizes the microstructures and gives rise to enhanced resistance to material failure at larger external deformation, and is critical for producing strong and tough fibers that may find commercial applications.
## Conclusions
In this work, we propose a universal rheo-optic framework to monitor the regeneration process of cellulose dissolved in ionic liquids via wet spinning. A mini-spinline was constructed and integrated with a polarized microscope to visualize the geometry and birefringence of spun cellulose fibers in real time at varying draw ratios and residence times. Using feature-tracking techniques, we identified a broadly constant distance within which the fibers are extended upon extrusion from the spinneret. Beyond this point, the fibers move in the coagulation bath with minimal deformation for an extended period (the residence time), during which the exchange of solvents and anti-solvents gradually regenerates the cellulose network.
We measured the birefringence of fibers in the spinning process to substantiate the microstructural variation. To quantify the flow-induced structural evolution during the spinning process, a rheo-optic framework based on the Rolie-Poly model was proposed, and the constitutive parameters were extracted from independent shear and extensional rheological characterizations. Based on this rheo-optic framework, a superposing relationship can be obtained between the optical measurements and the inferred structural anisotropy, hence providing accessible indicators of the cellulose structures from online birefringence measurements.
Finally, the mechanical properties of regenerated fibers at varying draw ratios and extrusion speeds were measured. While the linear elastic properties appear to be functions of the draw ratio alone, we identified enhanced tenacities and strain energies as the extrusion speed decreased. We attributed this trend in the non-linear region to the lower transient anisotropy of the cellulose structures throughout the spinning process, which allows for an enhanced degree of structural relaxation. As a result, the coagulation process is more homogenized, reducing the free energy of the formed cellulose chains and facilitating the growth of larger cellulose networks.
## 3 Experimental Section
### Material preparations
The material system applied in this work is a prehydrolysis-kraft dissolving pulp (_Eucalyptus urograndis_, 93% cellulose I, \(M_{\mathrm{w}}=269\,\mathrm{kDa}\) and \(M_{\mathrm{n}}=79\,\mathrm{kDa}\) with a polydispersity of 3.4) dissolved in 1-ethyl-3-methylimidazolium acetate ([C\({}_{2}\)C\({}_{1}\)Im][OAc]), provided courtesy of Prof. Michael Hummel from Aalto University. The concentration is selected at \(5\,\mathrm{wt}\%\) (corresponding to \(c/c_{\mathrm{e}}\approx 2.5\), where \(c_{\mathrm{e}}\) is the entanglement concentration). Material systems with \(c>c_{\mathrm{e}}\) are selected to be consistent with real spinning processes, in which concentrated cellulose spinning dopes are generally applied [9]. Spinning dopes were prepared following a standard procedure that has been illustrated elsewhere [30]. Briefly, a given amount of cellulose was dissolved at \(90\,\mathrm{\SIUnitSymbolDegree}\)C in a glass beaker sealed with a PTFE stirrer bearing under mechanical mixing for \(60\,\mathrm{min}\). After complete dissolution, the dopes were filtered at room temperature through a \(7\,\mathrm{\SIUnitSymbolMicro m}\) filter mesh and degassed at \(70\,\mathrm{\SIUnitSymbolDegree}\)C.
### Customized spin-line with _in situ_ birefringence measurement
A customized spin-line was constructed (Figure 1a), in which the spinning dope is extruded from a custom-designed spinneret with a nozzle diameter of \(300\,\mathrm{\SIUnitSymbolMicro m}\). The extruded dope undergoes the coagulation process in an aqueous anti-solvent bath between two pillars of identical size. The fibers, after a fixed residence time, are subsequently reeled onto a godet wheel and collected for _post factum_ characterizations. During the spinning and coagulation process, the spinning dope undergoes extensional deformation along the spinning direction. As the anti-solvent diffuses into the dope and induces the gelation of crystalline cellulose, an overall orientation of the cellulose structure is induced. Due to the resulting anisotropy in the overall structure, birefringent responses are generated (Figure 1a:1-2), where the slow axis points in the spinning direction. To quantify the birefringent responses, an optical setup
(Figure 1b) derived from the work of Honda et al. [37] was constructed. Here, a collimated light source (SOLIS-525A, Thorlabs) with a mean wavelength of 525 nm is incident through a polarizer with its slow axis configured at 45\({}^{\circ}\) to the horizontal plane. An analyzer is set with its slow axis at \(-45^{\circ}\) to the horizontal plane. Between the polarizer and the analyzer, the fiber to be measured is placed within the light beam. To allow for calibrated real-time measurements, a liquid-crystal (LC) retarder is installed along the optical path with its slow axis set at 0\({}^{\circ}\).
In a birefringence measurement, a well-modulated voltage profile is provided to the LC retarder, which induces a temporally-evolving, predetermined birefringence along the optical path. This birefringence induced by the LC retarder, superposed with the intrinsic birefringence of the sample, results in alternating bright-dark image snapshots at different root-mean-square (RMS) voltage levels (Figure 6a). The net optical retardance of the fiber can be determined by subtracting the measured retardance in the background from that at the fiber centerline. The fiber birefringence can subsequently be determined given the fiber geometry. We implemented a simple scheme to determine the fiber edges by locating the maximal image gradient along a cross-sectional cut line (Figure 6b). Subsequently, the fiber centerline and the background regions can be identified. Of note, some LC retarder voltage values are insufficient to generate a highly-contrasted background-fiber interface, leading to an incorrect estimation of the fiber diameter. To correct such miscalculations, the recorded fiber diameter is taken as the median measurement over a cycle of voltage iterations of the LC retarder.
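A minimal sketch of this edge-detection scheme is given below on synthetic cut-line profiles; the pixel size, noise level, and number of voltage steps are assumptions for illustration.

```python
import numpy as np

# Locate the fiber edges on a cross-sectional intensity cut as the
# extrema of the image gradient, then take the median diameter over
# one LC-retarder voltage cycle.

def fiber_diameter(profile, pixel_size):
    grad = np.gradient(profile.astype(float))
    left = np.argmax(grad)    # dark-to-bright edge
    right = np.argmin(grad)   # bright-to-dark edge
    return abs(right - left) * pixel_size

rng = np.random.default_rng(0)
x = np.arange(200)
profiles = [
    100 + 80 * ((x > 60) & (x < 140)) + rng.normal(0, 2, x.size)
    for _ in range(12)        # one synthetic profile per voltage step
]
diameters = [fiber_diameter(p, pixel_size=2e-6) for p in profiles]
print(f"median diameter: {np.median(diameters) * 1e6:.0f} um")
```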
For simplified calculation, we assumed negligible reflectance and absorbance for the polarizers. Given the previously-stated slow-axis orientations for all the optical components, the resulting light transmittance (dimensionless) is given by Equation 3 as
\[T_{\Delta\phi}=\frac{I_{\Delta\phi}}{I_{0}}=\sin^{2}(\pi\Delta\phi)=\sin^{2}(\pi\,\Delta n\,D/\lambda), \tag{3}\]
where \(\Delta\phi=\phi_{\rm f}-\phi_{\rm LC}=(\Delta n_{\rm f}-\Delta n_{\rm LC})D/\lambda\) is the dimensionless retardance difference between the fiber sample and the LC retarder at an incident light wavelength \(\lambda\). The birefringence of the fiber sample and the LC retarder are \(\Delta n_{\rm f}\) and \(\Delta n_{\rm LC}\), with the optical path lengths identical to the fiber diameter \(D\) and the LC retarder thickness, respectively.
For the LC retarder, the retardance \(\phi_{\rm LC}(V_{\rm RMS})\) is a function of the input voltage \(V_{\rm RMS}\) (Figure 6c, provided by the manufacturer). We note that in the accessible range of the retardance, the transmittance of the LC retarder \(T_{\Delta\phi}\) is non-monotonic with respect to \(V_{\rm RMS}\) (black line in Figure 6d), as is the transmittance when superposed with the birefringent fiber (green line in Figure 6d). The two measured image intensities are subsequently plotted against the retardance using an interpolated form of the retardance calibration curve (Figure 6e; black: LC retarder; green: LC retarder superposed with fiber; gray scale: phase difference). The two image intensity responses exhibit periodic patterns with a non-trivial phase difference. From Equation 3, the light intensity follows a sinusoidal form in the LC retardance \(\phi_{\rm LC}\) as
\[I_{\Delta\phi}(\phi_{\rm LC})=\frac{1-\cos[2\pi(\phi_{\rm f}-\phi_{\rm LC})]}{ 2}I_{0}. \tag{4}\]
In practice, the numerical intensity extracted from the image pixels slightly deviates from a sinusoidal form due to its non-linear correlation with the light intensity. Nevertheless, a phase difference between the two periodic patterns can still be identified via numerical fitting of a sinusoidal function \(f(\phi)=A\sin[2\pi(\phi+B)]+C\), where \(A\), \(B\) and \(C\) are the fitting parameters. Finally, the difference in the fitted values of \(B\) coincides with the retardance of the fiber.
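The phase-extraction step can be sketched with synthetic traces as follows; the fitting function matches the form \(f(\phi)=A\sin[2\pi(\phi+B)]+C\) above, and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def f(phi, A, B, C):
    return A * np.sin(2 * np.pi * (phi + B)) + C

phi_lc = np.linspace(0.0, 1.0, 60)  # LC retardance [waves]
phi_fiber = 0.18                    # "unknown" fiber retardance [waves]
rng = np.random.default_rng(1)
bg = f(phi_lc, 1.0, 0.25, 1.2) + rng.normal(0, 0.02, phi_lc.size)
fib = f(phi_lc, 0.9, 0.25 + phi_fiber, 1.1) + rng.normal(0, 0.02, phi_lc.size)

popt_bg, _ = curve_fit(f, phi_lc, bg, p0=[1.0, 0.2, 1.0])
popt_fib, _ = curve_fit(f, phi_lc, fib, p0=[1.0, 0.4, 1.0])
delta_B = (popt_fib[1] - popt_bg[1]) % 1.0  # phase difference, mod 1 wave
print(f"recovered fiber retardance: {delta_B:.3f} waves")
```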
**Acknowledgements**
P.O. and P.B.S. were supported by the Ministerio de Ciencia e Innovacion under the grants PRE2020-093158 and RYC2021-033826-I, respectively. J.D. and P.B.S. thank Crystal Owens and Prof. Gareth H. McKinley from MIT for the insightful discussions.
**Conflict of Interest**
The authors declare no conflict of interest.
**Data Availability Statement**
The data that support the findings of this study are available from the corresponding author upon reasonable request.
**References**
# Moduli stacks of Higgs bundles on stable curves

Oren Ben-Bassat, Sourav Das, Tony Pantev
## Abstract
In this article, we construct a flat degeneration of the derived moduli stack of Higgs bundles on smooth curves using the stack of expanded degenerations of Jun Li. We show that there is an intrinsic relative log-symplectic form on the degeneration and we compare it with the one constructed by the second author. We show that the Hitchin map of the degeneration we construct has complete fibers. Furthermore, we show that the Hitchin map is flat and that a suitable open subset of the smooth locus of the reduced nilpotent cone is Lagrangian. We also extend the construction of the moduli of Higgs bundles along with the relative log-symplectic form over the universal moduli stack of stable curves.
###### Contents
* 1 Introduction
* 1.1 Notations and Conventions
* 2 Acknowledgements
* 3 Preliminaries
* 3.1 Space of bounded Expanded degenerations
* 3.1.1 Degeneration of curves.
* 3.1.2 Space of expanded degenerations
* 3.1.3 Stack of bounded Expanded degenerations
* 3.1.4 An alternative construction of the stack \(\mathfrak{M}\)
* 3.2 Construction of the family of curves
* 3.3 Log structures on Derived Artin stacks
* 3.3.1 Log structures on Artin stacks
* 3.3.2 Locally free Log structures on Derived Artin stacks
* 3.3.3 Relative logarithmic cotangent complex
* 3.4 Relative shifted log-symplectic forms
* 4 Logarithmic Dolbeault Moduli stack and shifted log-symplectic form
* 5 Completeness of the Hitchin map
* 6 Flatness of the Hitchin map
* 7 On the relative logarithmic Dolbeault moduli over \(\overline{\mathcal{M}_{g}}\)
* 7.1 Log structures on \(\mathcal{M}_{g}^{ss}\)
* 7.1.1 Versal deformation space of \(\mathcal{M}_{g}^{ss}\) and the versal picture of the map \(\pi:\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\)
* 7.2 Relative log-cotangent complex of the map \(\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\)
* 7.3 Relative logarithmic Dolbeault shape and shifted symplectic forms
* 8 Appendix: Classical Artin stack of Gieseker-Higgs bundles and its local properties
* 8.1 The stacks of Gieseker vector bundles and Gieseker Higgs bundles
* 8.1.1 Stack of torsion-free Hitchin pairs
* 8.1.2 Classical Artin Stack of Gieseker-Higgs bundles
* 8.1.3 Construction of an atlas for \(N_{Gie}\)
* 8.1.4 Construction of an atlas for \(M_{Gie}^{cl}\)
* 8.2 Dimension and local properties of \(M_{Gie,0}^{cl}\)
* 8.2.1 Relative Log-Symplectic reduction
## 1 Introduction
It is well known that moduli stacks of vector bundles on nodal curves are not complete. Often their completions involve adding torsion-free sheaves as points of the boundary. Because we want to use the mapping space techniques of shifted symplectic geometry, we instead adopt the alternative approach of completing the moduli by adding boundary points which are bundles on a bubbling of the node. This approach goes back to the classical works of David Gieseker [11] and Jun Li [14]. In
our setting we will implement the approach by utilizing Jun Li's stack of expanded degenerations. This allows us to use only vector bundles as opposed to torsion-free coherent sheaves and apply techniques of shifted symplectic geometry [28] obtaining a relative symplectic structure on a complete moduli stack. We extend the Hitchin map to Higgs bundles on the bubbled curves parametrized by the stack of expanded degenerations and then use the completeness of this moduli stack to prove that our extended Hitchin map is complete.
The derived moduli stack of Higgs bundles on a smooth curve \(C\) has virtual dimension \(-\chi_{\mathrm{Dol}}(C,\mathcal{E}nd(E,\phi))=n^{2}\deg(K_{C})=2n^{2}(g-1)\), where \(g\) is the genus of \(C\), while the derived moduli stack of vector bundles (or any other Lagrangian in the derived moduli stack of Higgs bundles) has virtual dimension \(-\chi(C,\mathcal{E}nd(E))=-\int_{C}\mathrm{ch}(\mathcal{E}nd(E))\mathrm{td}(C)=n^{2}(g-1)\). These moduli stacks are homotopically locally of finite presentation and so exhibit the "hidden smoothness" envisioned in derived deformation theory. In view of this it is natural to study other geometric properties of these moduli. This article investigates 0-shifted symplectic structures when \(C\) degenerates to a nodal curve. After writing this article, we realized that there are many relations with the articles [2], [3] and significant overlap with a forthcoming article [10] of Ron Donagi and Andres Fernandez Herrero. In their work they give a construction of a good moduli space for the semistable locus, and semistable reduction relative to the Hitchin fibration (so the moduli space is proper over the Hitchin base in families). This uses the "infinite dimensional GIT" picture developed with Dan Halpern-Leistner. They also showed flatness of the Hitchin morphism, and that the relevant moduli (as classical stacks), including the symplectic leaves, are syntomic. They are pursuing the story with punctures: thinking about the log Poisson reduction picture for the relative log cotangent stack of framed Gieseker vector bundles.
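The dimension counts above can be checked directly with Riemann-Roch: since \(\deg\mathcal{E}nd(E)=0\), we have

\[\chi(C,\mathcal{E}nd(E))=n^{2}(1-g),\qquad\chi_{\mathrm{Dol}}(C,\mathcal{E}nd(E,\phi))=\chi(\mathcal{E}nd(E))-\chi(\mathcal{E}nd(E)\otimes K_{C})=-n^{2}\deg(K_{C})=-2n^{2}(g-1).\]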
To start with, we fix a family of projective curves \(\mathcal{X}\) over the spectrum of a discrete valuation ring \(S\) such that the generic fibre is smooth, the closed fibre is an irreducible nodal curve with a single node, and the total space \(\mathcal{X}\) is smooth. We also fix a rank \(n\) and degree \(d\) for the vector bundles in our moduli problem. In Section §2, we recall the construction of the stack \(\mathfrak{M}\) of bounded expanded degenerations (bounded by the integer \(n\)) of the family of curves \(\mathcal{X}/S\) following [14] and [37]. One of the main results in this subsection is Lemma 3.9, which we use to prove the following proposition.
**Proposition 3.15**: _The morphism \(\mathfrak{M}\longrightarrow S\) of ordinary Artin stacks is a log-smooth map. Moreover, the relative log-cotangent complex \(\mathbb{L}^{log}_{\mathfrak{M}/S}=0\)._
In subsection §2.3, we define shifted log-symplectic structures on quasi-smooth derived Artin stacks (Definition 3.18) equipped with a locally free log structure. In subsection §2.4, we recall the definition of relative shifted symplectic forms for a quasi-smooth morphism of derived Artin stacks. We define relative shifted log-symplectic forms for certain logarithmic morphisms of derived Artin stacks equipped with locally free log structures.
In Section §3, we construct a logarithmic version of the relative Dolbeault moduli stack for the universal expanded family of curves \(X_{\mathfrak{M}}\rightarrow\mathfrak{M}\). We show that the relative logarithmic Dolbeault moduli stack has a relative \(0\)-shifted log-symplectic form over \(S\). Moreover, we show that the relative log-symplectic form is an extension of Hitchin's symplectic form on the generic fibre of the moduli stack over \(S\). This was proved for moduli schemes in [9]. The main results of this section are the following.
**Theorem 4.11**: \(\mathcal{X}_{Dol}\) _is \(\mathcal{O}\)-compact and \(\mathcal{O}\)-oriented over \(S\). Hence, \(\mathsf{Map}_{\mathcal{S}}(\mathcal{X}_{Dol},BGL_{n}\times\mathcal{S})\) has a \(0\)-shifted relative symplectic structure over \(S\)._
**Theorem 4.13**: _The derived Artin stack \(\mathsf{Map}_{\mathfrak{M}}(\mathcal{X}_{\mathfrak{M},Dol},BGL_{n}\times \mathfrak{M})\) has a natural relative \(0\)-shifted log-symplectic structure over \(S\)._
In Section §4, we define the Hitchin map on the classical Artin stack of Gieseker-Higgs bundles \(M^{cl}_{Gie}\) (see 8.1.2). We prove that the Hitchin map is complete.
**Theorem 5.3**: _The morphism \(h:M^{cl}_{Gie}\longrightarrow B\) is complete._
This result was proved in [1] for the Hitchin map on the moduli scheme in the case where the rank and degree are coprime. We prove it here for the moduli stack, and the argument does not require us to assume that the rank and degree are coprime.
In Section §5, we study the reduced global nilpotent cone of \(M^{cl}_{Gie}\), which is the reduction of the scheme-theoretic fibre over the point \(0\in B\). We prove that every irreducible component of the reduced nilpotent cone has an open subset (denoted by \(\mathcal{N}ilp^{sm,gen}\)) which is an isotropic substack of \(M\) (the derived stack of Higgs bundles) with respect to its log-symplectic form. We use this to compute the dimension of the reduced nilpotent cone and to show that the Hitchin map is flat. The main theorem in this section is the following.
**Theorem 6.9**:
1. _The Hitchin map_ \(h:M^{cl}_{Gie}\longrightarrow B\) _is surjective._
2. _The substack_ \(\mathcal{N}ilp^{sm,gen}\) _is relatively isotropic over_ \(S\)_._
3. _The Hitchin map_ \(h:M^{cl}_{Gie}\longrightarrow B\) _is flat._
In Section §6, we construct the Gieseker-like derived moduli stack of Higgs bundles \(\mathcal{M}^{Dol}_{g}\) over the moduli stack of stable curves of genus \(g\geq 2\). We prove the following theorem.
**Theorem 7.9**: _There is a \(0\)-shifted relative log-symplectic form on \(\mathcal{M}^{Dol}_{g}\) (relative to the moduli stack of stable curves \(\overline{\mathcal{M}_{g}}\))._
In the Appendix, we construct the relative classical Artin stack of Gieseker-Higgs bundles and study its local properties. The main results of the appendix are Proposition 8.9 and Theorem 8.10. In the first of these we prove that the stack of Gieseker vector bundles is an almost very good stack. We use this in Theorem 8.10 to show that the classical stack of Gieseker-Higgs bundles is an irreducible local complete intersection.
**Proposition 8.9**: _The closed fibre \(N^{cl}_{Gie,0}\) is an irreducible, equidimensional, almost very good stack ([32, Definition 2.1.2]) with normal crossing singularities._
**Theorem 8.10**: _The stack \(M^{cl}_{Gie,0}\) is an irreducible local complete intersection of pure dimension \(2\dim N_{Gie,0}+1\)._
### 1.1 Notations and Conventions
* \(\Bbbk\) be an algebraically closed field of characteristic zero.
* \(S:=\{\eta,\eta_{0}\}\) denotes the spectrum of a complete discrete valuation ring, where \(\eta\) denotes the generic point, and \(\eta_{0}\) denotes the closed point.
* \(\mathcal{X}\to S\) denotes a flat family of curves whose generic fibre is smooth projective, and the closed fibre is a nodal curve with a single node. We denote the nodal curve by \(X_{0}\) and the node by \(x\). We denote its normalisation by \(q:\widetilde{X}_{0}\longrightarrow X_{0}\) and the two pre-images of the node \(x\) by \(\{x^{+},x^{-}\}\).
* \(dg_{\Bbbk}\) is the category of dg-modules over \(\Bbbk\) (i.e. of complexes of \(\Bbbk\)-modules). By convention, the differential of an object in \(dg_{\Bbbk}\)_increases_ degrees.
* \(cdga_{\Bbbk}\) is the category of commutative dg-algebras over \(\Bbbk\), and \(cdga_{\Bbbk}^{\leq 0}\) its full subcategory of non-positively graded commutative dg-algebras.
* \(dg_{\Bbbk}\), \(cdga_{\Bbbk}\) (respectively \(cdga_{\Bbbk}^{\leq 0}\)) are endowed with their natural model structures for which equivalences are quasi-isomorphisms, and fibrations are epimorphisms (respectively epimorphisms in strictly negative degrees).
* \(d\mathit{Aff}_{\Bbbk}:=(cdga_{\Bbbk}^{\leq 0})^{op}\) is the category of derived affine \(\Bbbk\)-schemes.
* The \(\infty\)-categories associated with the model categories \(dg_{\Bbbk},cdga_{\Bbbk}^{\leq 0},d\mathit{Aff}_{\Bbbk}\) are denoted by \(\mathbf{dg}_{\Bbbk},\mathbf{cdga}_{\Bbbk}^{\leq 0},\mathbf{dAff}_{\Bbbk}\).
* The \(\infty\)-category of simplicial sets is denoted by \(\mathbb{S}\). It is also called the \(\infty\)-category of spaces, and space will be used to mean simplicial set.
* The \(\infty\)-category of derived stacks over \(\Bbbk\), for the etale topology, is denoted by \(\mathbf{dSt}_{\Bbbk}\). If \(X\) is a derived stack, the \(\infty\)-category of derived stacks over \(X\) is denoted by \(\mathbf{dSt}_{X}\). The truncation of a derived stack \(X\) is denoted by \(h^{0}(X)\).
* For a family of semistable curves \(\mathcal{C}\) over a scheme \(T\), \(\mathcal{C}_{lDol}\) denotes the relative logarithmic Dolbeault shape of \(\mathcal{C}/T\).
## 2 Acknowledgements
T.P. (University of Pennsylvania) was partially supported by NSF FRG grant DMS-2244978, NSF/BSF grant DMS-2200914, NSF grant DMS-1901876, and Simons Collaboration grant number 347070. O.B. (University of Haifa) and S.D. would like to acknowledge NSF-BSF grant 2021717 for supporting S.D. as a postdoc. S.D. would like to thank Professor Alek Vainshtein of the University of Haifa, Israel for the financial support from his Israel Science Foundation grant number 876/20 during a postdoc position at the University of Haifa. S.D. would also like to thank Professor Vikraman Balaji for the financial support from his SERB Core Research Grant during a postdoc position at Chennai Mathematical Institute.
## 3 Preliminaries
### 3.1 Space of bounded Expanded degenerations
In this subsection, we will recall a construction by Jun Li [14] called the stack of expanded degenerations. We will use this stack to construct our degeneration. The construction of Jun Li starts with a one-parameter family of varieties (of any dimension) such that the total space of the family is smooth, the generic fibre is smooth and the special fibre is a normal crossing divisor in the total space of the family. For our purpose, we only need to consider the expanded degenerations of such a family for curves. We will make use of the standard fact that for any nodal curve one can always construct a smoothing over the spectrum of a discrete valuation ring which also has a smooth total space. The precise setting we consider can be spelled out as follows.
#### 3.1.1 Degeneration of curves.
Start with a flat family of projective curves \(\mathcal{X}\longrightarrow S\), such that
1. the generic fibre \(\mathcal{X}_{\eta}\) is a smooth curve of genus \(g\geq 2\),
2. the closed fibre is a nodal curve \(X_{0}\) with a single node \(x\in X_{0}\), and
3. the total space \(\mathcal{X}\) is regular over \(\operatorname{Spec}\mathbb{k}\).
Let us denote the relative dualising sheaf by \(\omega_{\mathcal{X}/S}\). **From here onwards, we assume that \(S\) is a neighbourhood of the origin in \(\mathbb{A}^{1}:=\operatorname{Spec}\Bbbk[t]\)**.
**Definition 3.1**.: (**Gieseker curve/Expanded degeneration/Modification**) Let \(X_{0}\) be a nodal curve with a single node \(x\in X_{0}\), and let \(x^{\pm}\) label the two preimages of \(x\) in the normalization \(\widetilde{X}_{0}\) of \(X_{0}\). Let \(r\) be a positive integer.
1. A _chain of \(r\) projective lines_ is a scheme \(R[r]\) of the form \(\cup_{i=1}^{r}R[r]_{i}\) such that 1. \(R[r]_{i}\cong\mathbb{P}^{1}\), 2. for any \(i<j\), \(R[r]_{i}\cap R[r]_{j}\) consists of a single point \(p_{j}\) if \(j=i+1\) and is empty otherwise. We call \(r\) the length of the chain \(R[r]\). Let us choose and fix two smooth points \(p_{1}\) and \(p_{r+1}\) on \(R[r]_{1}\) and \(R[r]_{r}\), respectively.
2. The _expanded degeneration_ (or _Gieseker curve_, or _modification_) of length \(r\) is the nodal curve \(X_{r}\) obtained by gluing \(\widetilde{X}_{0}\) and \(R[r]\), identifying \(x^{-}\) with \(p_{1}\) and \(x^{+}\) with \(p_{r+1}\). It comes with a natural morphism \(X_{r}\to X_{0}\) which contracts the chain \(R[r]\) to the node \(x\) and is an isomorphism elsewhere.

**Definition 3.2**.: (**Families of expanded degenerations**) Let \(T\to S\) be a scheme over \(S\), and set \(\mathcal{X}_{T}:=\mathcal{X}\times_{S}T\). A _\(T\)-relative modification_ of \(\mathcal{X}\to S\) is a family of projective curves \(p_{T}:\mathcal{X}_{T}^{mod}\to T\) together with a horizontal morphism \(\mathcal{X}_{T}^{mod}\to\mathcal{X}_{T}\) over \(T\) such that
1. \(p_{T}:\mathcal{X}_{T}^{mod}\longrightarrow T\) is flat;
2. the horizontal morphism is finitely presented and is an isomorphism over smooth fibers \((\mathcal{X}_{T})_{t}\) of \(\mathcal{X}_{T}\to T\);
3. over each closed point \(t\in T\) which maps to \(\eta_{0}\in S\), we have \((\mathcal{X}_{T}^{mod})_{t}\cong X_{r}\) for some integer \(r\) and the horizontal morphism restricts to the morphism \(X_{r}\to X_{0}\) which contracts the \(\mathbb{P}^{1}\)'s in \(X_{r}\) to the node \(x\in X_{0}\).
**Definition 3.3**.: (**Morphisms of families of expanded degenerations:**) Let \(T\) be a scheme over \(S\), and let \(\mathcal{X}_{T}^{mod}\) and \(\mathcal{X^{\prime}}_{T}^{mod}\) be two \(T\)-relative modifications of \(\mathcal{X}\to S\).

We call \(\mathcal{X}_{T}^{mod}\) and \(\mathcal{X}_{T}^{\prime mod}\) _isomorphic_ if there exists an isomorphism \(\sigma_{T}:\mathcal{X}_{T}^{mod}\longrightarrow\mathcal{X}_{T}^{\prime mod}\) such that the following diagram commutes
\[\begin{array}{ccc}\mathcal{X}_{T}^{mod}&\stackrel{\sigma_{T}}{\longrightarrow}&\mathcal{X}_{T}^{\prime mod}\\ \downarrow&&\downarrow\\ \mathcal{X}_{T}&=&\mathcal{X}_{T}\end{array} \tag{3.2}\]
#### 3.1.2 Space of expanded degenerations
As before, we will choose a uniformizer on \(S\) so that \(S\) is identified with a neighborhood of zero in \(\mathbb{A}^{1}\). For any positive integer \(n\), we set
\[S[n]:=S\times_{\mathbb{A}^{1}}\mathbb{A}^{n+1} \tag{3.3}\]
where the map \(\mathbb{A}^{n+1}\longrightarrow\mathbb{A}^{1}\) is given by \((t_{1},\ldots,t_{n+1})\mapsto t_{1}\cdots t_{n+1}\).
Consider the group
\[G[n]:=\mathbb{G}_{m}^{n} \tag{3.4}\]
acting on \(\mathbb{A}^{n+1}\) by
\[(\sigma_{1},\ldots,\sigma_{n})\cdot(t_{1},\ldots,t_{n+1}):=(\sigma_{1}\cdot t _{1},\ldots,\sigma_{i-1}^{-1}\cdot\sigma_{i}\cdot t_{i},\ldots,\sigma_{n}^{-1} \cdot t_{n+1}). \tag{3.5}\]
The map \(\mathbb{A}^{n+1}\to\mathbb{A}^{1}\), \((t_{1},\ldots,t_{n+1})\mapsto t_{1}t_{2}\cdots t_{n+1}\) intertwines this action with the trivial action on \(\mathbb{A}^{1}\) and hence (3.5) induces an action of \(G[n]\) on \(S[n]\).
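Indeed, the product of the coordinates telescopes under the action (3.5):

\[(\sigma_{1}t_{1})\cdot\prod_{i=2}^{n}\big(\sigma_{i-1}^{-1}\sigma_{i}t_{i}\big)\cdot\big(\sigma_{n}^{-1}t_{n+1}\big)=\Big(\prod_{i=1}^{n}\sigma_{i}\sigma_{i}^{-1}\Big)\,t_{1}t_{2}\cdots t_{n+1}=t_{1}t_{2}\cdots t_{n+1}.\]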
In [14, Section 1.1], Jun Li constructed a \(G[n]\)-equivariant family of expanded degenerations \(W[n]\) over \(S[n]\). It is constructed by several birational transformations of the family \(\mathcal{X}\times_{S}S[n]\). The curves occurring in the family are all possible expanded degenerations of the family \(\mathcal{X}/S\) for which the length of rational chains is bounded by \(n\).
#### 3.1.3 Stack of bounded Expanded degenerations
The significance of the family \(W[n]\to S[n]\) is that it can be used as an atlas for the universal family of expanded degenerations with bubbling of length \(\leq n\).
Indeed, recall from [14, Definition 1.9, Proposition 1.10] the following
**Definition 3.4**.: The _stack of expanded degenerations_ of \(\mathcal{X}/S\) is the stack
\[\mathfrak{M}:Sch/S\longrightarrow Groupoids\]
given by the assignment
\[T\mapsto\left\{\begin{array}{l}\text{Pairs }(W_{T},\pi)\text{ where }T\text{ is a scheme, }W_{T}\to T\text{ is a family}\\ \text{of projective curves, and }\pi:W_{T}\to\mathcal{X}\text{ is a morphism over}\\ S\text{ such that there exists an etale cover }\widetilde{T}\to T\text{ and a map}\\ \widetilde{T}\to S[n]\text{ so that }W_{T}\times_{T}\widetilde{T}\cong W[n]\times_{S[n]} \widetilde{T}\end{array}\right\} \tag{3.6}\]
Two such families \(W_{T}\) and \(W_{T}^{\prime}\) are isomorphic if there is a \(T\)-isomorphism \(f:W_{T}\longrightarrow W_{T}^{\prime}\) compatible with the tautological morphisms \(W_{T}\longrightarrow\mathcal{X}\) and \(W_{T}^{\prime}\longrightarrow\mathcal{X}\).
By construction, the stack of expanded degenerations has the following properties
1. \(\mathfrak{M}\) is a smooth Artin stack of finite type.
2. The projection map \(\mathfrak{M}\longrightarrow S\) is generically an isomorphism.
3. The closed fibre of \(\mathfrak{M}\to S\) is a normal crossing divisor in \(\mathfrak{M}\).
Here we use the following
**Definition 3.5**.: Let \(\mathcal{Y}\) be a smooth Artin stack and \(\mathcal{D}\) be a closed sub-stack of co-dimension one. We say that \(\mathcal{D}\) is a _normal crossing divisor_ in \(\mathcal{Y}\) if for any smooth morphism \(f:T\to\mathcal{Y}\) from a smooth scheme \(T\), the pull-back of the divisor \(\mathcal{D}\times_{\mathcal{Y}}T\) is a normal crossing divisor in \(T\).
#### 3.1.4 An alternative construction of the stack \(\mathfrak{M}\)
In [37, Definition 2.22], Zijun Zhou gave an alternative construction of the stack \(\mathfrak{M}\) which will be useful for our purpose. Here we briefly recall his construction.
First let us introduce some useful notation and terminology. For any integer \(n\), we set \([n]:=\{1,\dots,n+1\}\). It will be useful to have short names for the natural maps between the spaces \(S[k]\) for various values of \(k\).
**Definition 3.6**.: Given any subset \(I\subseteq[n]\) of cardinality \(k+1\), we define
* The _Standard Embedding_ corresponding to \(I\) is the embedding \[\gamma_{I}:S[k]\hookrightarrow S[n]\] (3.7) which sends \((t_{1},\dots,t_{k+1})\) to the point of \(S[n]\) whose \(j\)-th coordinate is \(t_{i}\) when \(j\) is the \(i\)-th element of \(I\), and is \(1\) when \(j\notin I\) (see the example after this definition).
* The _Standard Open Embedding_ corresponding to \(I\) is the open embedding \[\tau_{I}:S[k]\times\mathbb{G}_{m}^{n-k}\hookrightarrow S[n]\] (3.8) where \(S[k]\times\mathbb{G}_{m}^{n-k}\) denotes the open subset swept out by \(\gamma_{I}(S[k])\) under the free action of the subgroup \(\mathbb{G}_{m}^{n-k}\subset G[n]\) whose \(j\)-th component is \(1\) if \(j\notin I\).
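For instance (an illustration of our own, spelling out the definition): for \(n=2\), \(k=1\), and \(I=\{1,3\}\subset[2]=\{1,2,3\}\), we have

\[\gamma_{\{1,3\}}(t_{1},t_{2})=(t_{1},1,t_{2}),\qquad\gamma_{\{2,3\}}(t_{1},t_{2})=(1,t_{1},t_{2}),\]

and both points indeed lie in \(S[2]\), since the product of the coordinates is \(t_{1}t_{2}\) in each case.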
**Definition 3.7**.: **(i)**: Given two subsets \(I\) and \(I^{\prime}\) of \([n]\) both of order \(k+1\), define an equivalence relation \(R_{I,I^{\prime}}\) on \(S[n]\) by setting
\[R_{I,I^{\prime}}=S[k]\times\mathbb{G}_{m}^{n-k}\rightrightarrows S[n] \tag{3.9}\]
where the two maps are \(\tau_{I}\) and \(\tau_{I^{\prime}}\).
**(ii)**: For every \(k\leq n\) define a discrete equivalence relation \(R_{k}\rightrightarrows S[n]\) as

\[R_{k}:=\coprod_{|I|=|I^{\prime}|=k+1,\ I,I^{\prime}\subseteq[n]}R_{I,I^{\prime}}\ \rightrightarrows\ S[n]. \tag{3.10}\]
**(iii)**: Define the amalgamated discrete equivalence relation \(R_{dis}\) on \(S[n]\) by setting
\[R_{dis}:=\coprod_{k\leq n}R_{k}\ \rightrightarrows\ S[n] \tag{3.11}\]
**(iv)**: Finally, we define the total smooth groupoid

\[R_{tot}:=\mathbb{G}_{m}^{n}\times R_{dis}\rightrightarrows S[n] \tag{3.12}\]

which is generated by \(R_{dis}\) and the action of \(\mathbb{G}_{m}^{n}\) on \(S[n]\).
**Definition 3.8**.: Let
\[\mathfrak{M}_{n}:=[R_{tot}\rightrightarrows S[n]] \tag{3.13}\]
be the quotient stack of \(S[n]\) by the smooth groupoid \(R_{tot}\).
From [37, Remark 2.23], it follows that the stack \(\mathfrak{M}_{n}\) is the stack of expanded degenerations of the family \(\mathcal{X}/S\) bounded by the integer \(n\).
**Notation:** From here onwards, we will only work with \(\mathfrak{M}_{n}\), and we drop the subscript \(n\) from the notation and denote it simply by \(\mathfrak{M}\).
**Lemma 3.9**.: _The morphism between quotient stacks_
\[[S[n]/G[n]]\longrightarrow\mathfrak{M} \tag{3.14}\]
_is etale._
Proof.: Let us first show that the relation \(R_{dis}\) descends to an equivalence relation on the quotient stack \([S[n]/G[n]]\). Recall that \(R_{dis}:=\coprod_{k\leq n}R_{k}\) and \(R_{k}:=\coprod_{|I|=|I^{\prime}|=k+1,\;I,I^{\prime}\subseteq[n]}R_{I,I^{\prime}}\), where the equivalence relations \(R_{I,I^{\prime}}\) are given by
\[S[k]\times\mathbb{G}_{m}^{n-k}\rightrightarrows S[n] \tag{3.15}\]
Recall that the map \(R_{I}:S[k]\times\mathbb{G}_{m}^{n-k}\longrightarrow S[n]\) is given by the standard open embedding.
For simplicity, let us assume that \(S=\mathbb{A}^{1}\). Then \(S[n]=\mathbb{A}^{n+1}\). Notice that \(G[n]:=\mathbb{G}_{m}^{n}\). In fact, there is an action of a bigger group \(\mathbb{G}_{m}^{n+1}\) on \(\mathbb{A}^{n+1}\), given by coordinatewise multiplication, and \(\mathbb{G}_{m}^{n}\) can be identified with the subgroup of \(\mathbb{G}_{m}^{n+1}\) consisting of elements \((t_{1},\ldots,t_{n+1})\) such that \(\prod_{i=1}^{n+1}t_{i}=1\). Let us consider the action of the bigger group \(G[n+1]:=\mathbb{G}_{m}^{n+1}\).
Let us first define a natural action of \(G[n+1]\) on \(R_{I}:=\mathbb{A}^{k+1}\times\mathbb{G}_{m}^{n-k}\). Notice that the cardinality of the set \(I\) equals \(k+1\); it is an ordered subset of \([n]=\{1,\dots,n+1\}\). Therefore, the complement \(J:=[n]\setminus I\) is also an ordered set, of cardinality \(n-k\), with the order induced from \([n]\). Now we see that \(\mathbb{G}_{m}^{n+1}\cong\mathbb{G}_{m}^{I}\times\mathbb{G}_{m}^{J}\). Finally, we define the action of \(\mathbb{G}_{m}^{n+1}\) on \(R_{I}:=\mathbb{A}^{I}\times\mathbb{G}_{m}^{J}\) by the obvious action of \(\mathbb{G}_{m}^{I}\times\mathbb{G}_{m}^{J}\). It is easy to check that the map \(R_{I}:\mathbb{A}^{k+1}\times\mathbb{G}_{m}^{n-k}\longrightarrow\mathbb{A}^{n+1}\) is \(\mathbb{G}_{m}^{n+1}\)-equivariant; hence it is also equivariant under the action of the smaller subgroup \(\mathbb{G}_{m}^{n}\). Therefore, the equivalence relations \(R_{I,I^{\prime}}\) descend to the quotient stack \([S[n]/G[n]]\).
Now we will show that the relation \(R_{dis}\) is an etale equivalence relation. First of all, notice that the maps \(R_{I}:S[k]\times\mathbb{G}_{m}^{n-k}\longrightarrow S[n]\) are open immersions. Therefore, \(R_{k}:=\coprod_{|I|=|I^{\prime}|=k+1,\;I,I^{\prime}\subseteq[n]}R_{I,I^{\prime}}\rightrightarrows S[n]\) is an etale equivalence relation, because both the projections are Zariski open immersions. Therefore, \(R_{dis}\) on \(S[n]\) and \([R_{dis}/G[n]]\) on \([S[n]/G[n]]\) define etale equivalence relations.
### Construction of the family of curves
In this subsection, we briefly recall from [14, 37] the construction of the family of expanded degenerations (whose rational chains are bounded by length \(n\)). Let us choose an etale neighbourhood \(U_{p}\) of the node \(p\in\mathcal{X}\) such that the nodal curve \(U_{p}\times_{\mathcal{X}}X_{0}\) is a reducible nodal curve with two smooth connected components intersecting transversally at the node \(p\). We choose an etale covering \(\mathcal{X}=U_{p}\cup V\) such that \(p\notin V\). We have the following pushout diagram.
\[\begin{CD}U_{p}\times_{\mathcal{X}}V@>>>V\\@VVV@VVV\\U_{p}@>>>\mathcal{X}\end{CD} \tag{3.16}\]
Then notice that \(U_{p}\longrightarrow S\) is a simple degeneration, as required for the construction of expanded degenerations [14, 37]. We denote by \(U_{p}[n]\longrightarrow S[n]\) the corresponding family of expanded degenerations, and we set \(V[n]:=V\times_{S}S[n]\). Now we construct the family of expanded degenerations \(\mathcal{X}[n]\) over \(S[n]\) by the following push-out diagram.
\[\begin{CD}(U_{p}\times_{\mathcal{X}}V)\times_{S}S[n]@>>>V[n]\\@VVV@VVV\\U_{p}[n]@>>>\mathcal{X}[n]\end{CD} \tag{3.17}\]
Since it is an etale gluing, the total space \(\mathcal{X}[n]\) is, in general, an algebraic space. The family of curves \(\mathcal{X}[n]\longrightarrow S[n]\) is the desired family of expanded degenerations of the original family \(\mathcal{X}\longrightarrow S\). We refer to [37, Proposition 2.13] for the interesting properties of the family of expanded degenerations. We should remark that properties (3) and (4) of [37, Proposition 2.13] imply that the family of expanded degenerations descends to the stack of expanded degenerations \(\mathfrak{M}\). We denote this family of expanded degenerations by \(\mathcal{X}_{\mathfrak{M}}\).
### Log structures on Derived Artin stacks
In this subsection, we define shifted log-symplectic structures on a quasi-smooth derived Artin stack (Definition 3.18) equipped with a locally free log structure. We refer to [22, 23] and [25] for the prerequisite material on log structures and locally free log structures on Artin and derived Artin stacks. We now recall a few necessary definitions.
#### 3.3.1 Log structures on Artin stacks
Let \(\mathcal{L}^{0}\) denote the algebraic stack which classifies fine log structures, and \(\mathcal{L}^{1}\) the algebraic stack which classifies morphisms of fine log structures. We denote by \(\mathcal{L}^{0}_{f}\) and \(\mathcal{L}^{1}_{f}\) the classifying substacks of locally free log structures and of morphisms between locally free log structures, respectively. Given a logarithmic scheme \((S,M,\alpha)\), the stack \(\mathcal{L}^{0}_{S,f}:=\mathcal{L}^{1}_{f}\times_{\mathcal{L}^{0}_{f}}S\) is the classifying stack of logarithmic morphisms from a scheme with locally free log structure to \((S,M,\alpha)\).
**Definition 3.10**.: A locally free log structure on an algebraic stack \(\mathfrak{X}\) is a morphism of stacks \(\mathfrak{X}\longrightarrow\mathcal{L}^{0}_{f}\).
**Remark 3.11**.: By [25, Proposition 1.7], we know that the algebraic stacks \(\mathcal{L}^{0}_{f}\) and \(\mathcal{L}^{1}_{f}\) are smooth.
#### 3.3.2 Locally free Log structures on Derived Artin stacks
**Definition 3.12**.: A locally free log structure on a derived algebraic stack \(\mathfrak{X}\) is a morphism \(\mathfrak{X}\longrightarrow\mathcal{L}^{0}_{f}\) between stacks.
**Definition 3.13**.: A morphism of locally-free derived log stacks \(\mathfrak{X}\longrightarrow\mathfrak{Y}\) is a commutative diagram of stacks
(3.18)
#### 3.3.3 Relative logarithmic cotangent complex
**Definition 3.14**.: Given a morphism between derived stacks with locally free log structures \(f:\mathfrak{X}\longrightarrow\mathfrak{Y}\), we define the relative log-cotangent complex by
\[\mathbb{L}^{\,log}_{\mathfrak{X}/\mathfrak{Y}}:=\mathbb{L}_{\mathfrak{X}/\mathcal{L}^{0}_{\mathfrak{Y},f}} \tag{3.19}\]

Here \(\mathbb{L}_{\mathfrak{X}/\mathcal{L}^{0}_{\mathfrak{Y},f}}\) denotes the relative cotangent complex for the induced morphism of stacks \(\mathfrak{X}\longrightarrow\mathcal{L}^{0}_{\mathfrak{Y},f}\), where \(\mathcal{L}^{0}_{\mathfrak{Y},f}\) is the classifying stack of logarithmic morphisms to \(\mathfrak{Y}\), defined as in Section 3.3.1.
**Proposition 3.15**.: _The morphism \(\mathfrak{M}\longrightarrow S\) of ordinary Artin stacks is a log-smooth map. Moreover, the relative log-cotangent complex \(\mathbb{L}^{\,log}_{\mathfrak{M}/S}=0\)._
Proof.: From Lemma 3.9, it follows that the map \([S[n]/G[n]]\longrightarrow\mathfrak{M}\) is etale. Notice that \(S[n]\) and \(S\) are log smooth varieties, with log structures coming from the closed point of \(S\) and from its preimage under the map \(S[n]\longrightarrow S\), respectively. Since this map is, after identifying \(S\) with \(\mathbb{A}^{1}\), the map \(\mathbb{A}^{n+1}\longrightarrow\mathbb{A}^{1}\) given by \((t_{1},\ldots,t_{n+1})\mapsto t_{1}\cdots t_{n+1}\), it is a log-smooth morphism. Therefore, the map \([S[n]/G[n]]\longrightarrow S\) is also a log-smooth morphism of log-smooth Artin stacks.
The explicit description of the relative log-cotangent complex is the following. The pull-back (to \(S[n]\)) of the log-cotangent complex of the stack \([S[n]/G[n]]\) is given by
\[\big{[}\Omega_{S[n]}(\log\ \partial S[n])\longrightarrow\mathcal{O}^{\oplus n }_{S[n]}\big{]} \tag{3.20}\]
where the rank \((n+1)\) locally free sheaf \(\Omega_{S[n]}(\log\ \partial S[n])\) sits in degree \(0\). Since the action of \(G[n]\) on \(S[n]\) respects the normal crossing divisor of \(S[n]\), the action lifts to an action on the log-cotangent bundle of \(S[n]\) such that the natural log-symplectic structure is \(G[n]\)-equivariant. The morphism (3.20) is given by the moment map for the action of \(G[n]\) on the log-cotangent bundle of \(S[n]\). We recall the action of \(G[n]\) on \(S[n]\):
\[(t_{1},\ldots,t_{n+1})\cdot(x_{1},\ldots,x_{n+1})=(t_{1}x_{1},t_{2}x_{2},\ldots,t_{n+1}x_{n+1}) \tag{3.21}\]
where \(t_{1}\cdots t_{n+1}=1\). Also remember that the map \(S[n]\longrightarrow S\) is given by \((t_{1},\ldots,t_{n+1})\mapsto\prod_{i=1}^{n+1}t_{i}\). Therefore, we see that the composite map

\[\Omega_{S}(\log\ \ \partial S)\hookrightarrow\Omega_{S[n]}(\log\ \ \partial S[n])\longrightarrow\mathcal{O}^{\oplus n}_{S[n]} \tag{3.22}\]

is \(0\): indeed, the generator \(d\log(t_{1}\cdots t_{n+1})=\sum_{i}d\log t_{i}\) pairs with an element \((a_{1},\ldots,a_{n+1})\) of the Lie algebra of \(G[n]\) to give \(\sum_{i}a_{i}=0\). Therefore, we have a complex
\[\big{[}\Omega_{S[n]/S}(\log\ \ \partial S[n])\longrightarrow\mathcal{O}^{\oplus n }_{S[n]}\big{]} \tag{3.23}\]
This is precisely the relative log cotangent complex of the morphism \(\big{[}S[n]/G[n]\big{]}\longrightarrow S\) pulled back to \(S[n]\). But notice that the morphism \(\Omega_{S[n]/S}(\log\ \ \partial S[n])\longrightarrow\mathcal{O}^{\oplus n}_{S[n]}\) is an isomorphism. Therefore, the relative log-cotangent complex of the morphism \(\big{[}S[n]/G[n]\big{]}\longrightarrow S\) is equivalent to \(0\).
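To illustrate the last step in the smallest case \(n=1\): here \(S[1]=\mathbb{A}^{2}\) and \(G[1]=\mathbb{G}_{m}\) acts by \(t\cdot(x_{1},x_{2})=(tx_{1},t^{-1}x_{2})\). The line bundle \(\Omega_{S[1]/S}(\log\ \partial S[1])\) is generated by \(d\log x_{1}\equiv-d\log x_{2}\) (modulo \(d\log(x_{1}x_{2})\)), and the map to \(\mathcal{O}_{S[1]}\) pairs this generator with the vector field \(x_{1}\partial_{x_{1}}-x_{2}\partial_{x_{2}}\) generating the \(\mathbb{G}_{m}\)-action, giving the constant function \(1\); hence the map is an isomorphism, as claimed.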
### Relative shifted log-symplectic forms
In this subsection, we will recall the definition of relative shifted symplectic forms for a quasi-smooth morphism of derived Artin stacks. We will define relative shifted log-symplectic forms for certain logarithmic morphisms of derived Artin stacks equipped with locally-free log structures.
Let \(\pi:M\longrightarrow S\) be a derived Artin stack over a scheme \(S\). Suppose also that the stack \(M\) and the scheme \(S\) are equipped with locally free log structures such that \(\pi\) is a morphism of log structures. This data is equivalent to a map of stacks \(\pi_{log}:M\longrightarrow\mathcal{L}^{0}_{S}\), where \(\mathcal{L}^{0}_{S}\) is the classifying log stack for \(S\); we also denote by \(\mathcal{L}^{0}_{S}\) the associated derived stack. We further assume that the map \(\pi_{log}\) is a quasi-smooth morphism of derived stacks (Definition 3.18 below).
**Definition 3.16**.: (**Locally of finite presentation morphism**)[29, Def. 2.16]
1. (classical) A map \(R\longrightarrow S\) of discrete commutative \(k\)-algebras is finitely presented if \(\mathsf{Hom}_{R/CAlg(k)}(S,-)\) commutes with filtered colimits.
2. (derived) A map \(A\longrightarrow B\) in \(cdga^{\leq 0}\) is derived finitely presented if \(\mathsf{Map}_{A/cdga^{\leq 0}_{k}}(B,-)\) commutes with (homotopy) filtered colimits.
**Remark 3.17**.: A map \(A\longrightarrow B\) in \(cdga^{\leq 0}\) is finitely presented if and only if
1. \(H^{0}(A)\longrightarrow H^{0}(B)\) is classically finitely presented, and
2. \(\mathbb{L}_{B/A}\) is a perfect \(B\)-dg module (i.e., a dualizable object in \((dgmod(B),\otimes)\)). ([30, 2.2.] and [19, Theorem 7.4.3.18])
**Definition 3.18**.: (**Quasi-smooth morphism of Derived stacks**) A map of derived Artin stacks \(f:\mathcal{X}\longrightarrow\mathcal{Y}\) is called a quasi-smooth morphism if it is locally of finite presentation and the relative cotangent complex \(\mathbb{L}_{f}\) has Tor amplitude in \([-1,1]\).
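A basic example to keep in mind: the derived zero locus \(Z\) of functions \(f_{1},\dots,f_{m}\) on a smooth affine scheme \(X\) is quasi-smooth, since its cotangent complex is the two-term complex

\[\mathbb{L}_{Z}\simeq\big[\mathcal{O}_{Z}^{\oplus m}\xrightarrow{(df_{1},\dots,df_{m})}\Omega^{1}_{X}|_{Z}\big]\]

with \(\mathcal{O}_{Z}^{\oplus m}\) sitting in degree \(-1\); quotienting by a smooth group action contributes the degree \(1\) part of the allowed Tor-amplitude \([-1,1]\).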
**Definition 3.19**.: [25, Definition 2.12] We define the relative log cotangent complex \(\mathbb{L}_{M/S}^{\,log}:=\mathbb{L}_{\pi_{log}}\). Here \(\mathbb{L}_{\pi_{log}}\) denotes the relative cotangent complex for the morphism \(\pi_{log}\).
**Remark 3.20**.: Since the map \(\pi_{log}:M\longrightarrow\mathcal{L}^{0}_{S}\) is assumed to be quasi-smooth, therefore \(\mathbb{L}_{M/S}^{\,log}\) is a perfect complex over \(M\) and is of Tor-amplitude \([-1,1]\).
Now we recall the definitions of relative shifted symplectic forms for a finitely presented map of derived Artin stacks \(M\longrightarrow N\) from [28], [8, Definition 1.4.1]. The finitely presented condition implies that the relative cotangent and relative tangent complexes are perfect.
**Definition 3.21**.: For a cdga \(A\), we define the relative de Rham algebra of a finitely presented morphism of derived Artin stacks \(M:=\operatorname{Spec}A\longrightarrow N\) by

\[DR(M/N):=\operatorname{Sym}_{A}(\mathbb{L}_{M/N}[1]), \tag{3.24}\]

where \(\mathbb{L}_{M/N}\) is the relative cotangent complex. Notice that \(\operatorname{Sym}_{A}(\mathbb{L}_{M/N}[1])\cong\oplus_{p=0}^{\infty}\wedge^{p}\mathbb{L}_{M/N}[p]\) as \(A\)-modules, and therefore also as \(\Bbbk\)-vector spaces. A cohomological differential is induced from the internal differential of \(A\), which we denote by \(d\); it makes \(DR(M/N)\) a commutative differential graded algebra (cdga). We also have the de Rham differential \(d_{DR}:\wedge^{p}\mathbb{L}_{M/N}\longrightarrow\wedge^{p+1}\mathbb{L}_{M/N}\), which defines a mixed structure on \(DR(M/N)\) with \(\epsilon:=d_{DR}\). This makes \(DR(M/N)\) a mixed cdga. There are two gradings: the cohomological grading (induced by the internal differential \(d\)) and a "weight grading" defined by \(DR(M/N)(p):=\wedge^{p}\mathbb{L}_{M/N}[p]\). This graded mixed structure on \(DR(M/N)\) is compatible with the multiplicative structure and makes it into a graded mixed cdga over \(\Bbbk\). The degree and weight of \(d\) are \(1\) and \(0\), respectively; the degree and weight of \(\epsilon\) are \(-1\) and \(1\), respectively.
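For orientation: when \(M=\operatorname{Spec}A\) is a smooth affine scheme over \(N=\operatorname{Spec}\Bbbk\), we have \(\mathbb{L}_{M/N}\simeq\Omega^{1}_{A}\), so \(DR(M/N)\cong\oplus_{p\geq 0}\Omega^{p}_{A}[p]\) with internal differential \(d=0\) and mixed structure \(\epsilon=d_{DR}\) the classical de Rham differential.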
The assignment \((\operatorname{Spec}A\longrightarrow N)\mapsto DR(\operatorname{Spec}A/N)\) defines a functor
\[cdga_{N}^{\leq 0}\longrightarrow\epsilon-dg_{\Bbbk}^{gr}\]
We can derive this functor on the left, by precomposing it with a cofibrant replacement functor on \(cdga_{N}^{\leq 0}\), to obtain
\[LDR(-/N):cdga_{N}^{\leq 0}\longrightarrow\epsilon-dg_{\Bbbk}^{gr}\]
which preserves quasi-isomorphisms. Therefore, it induces a well-defined \(\infty\)-functor \(\mathbf{DR}(-/N):\mathbf{cdga}_{N}\longrightarrow\epsilon-\mathbf{dg}^{gr}\).
Given a cdga \(E\in cdga^{\leq 0}\), we get a simplicial set \(|E|\) given by the Dold-Kan correspondence.
**Definition 3.22**.: For \(A\in cdga^{\leq 0}\) with a map \(M:=\operatorname{Spec}A\longrightarrow N\) of derived stacks and two integers \(p\geq 0\) and \(n\in\mathbb{Z}\), we define
1. \(\mathcal{A}_{N}^{p}(M,n):=|\wedge^{p}\mathbb{L}_{M/N}[n]|\), a simplicial set.
2. \(\mathcal{A}_{N}^{p,cl}(M,n):=|NC^{w}(DR(M/N))[n-p](p)|\), where for a graded mixed cdga \(E\) the weighted negative cyclic complex \(NC^{w}(E)\) is defined by \(NC^{w}(E)^{n}(p):=\prod_{i\geq 0}E^{n-2i}(p+i)\).
We have two \(\infty\)-functors \(\boldsymbol{\mathcal{A}_{N}^{p}(-,n)},\ \boldsymbol{\mathcal{A}_{N}^{p,cl}(-,n)}:\boldsymbol{\operatorname{cdga}_{N}}\longrightarrow\mathbb{S}\), where \(\boldsymbol{\operatorname{cdga}_{N}}\) denotes the \(\infty\)-category of cdga's over a fixed derived stack \(N\) and \(\mathbb{S}\) denotes the \(\infty\)-category of simplicial sets.
**Definition 3.23**.: For \(A\in cdga_{N}\) (\(M:=\operatorname{Spec}A\)), the simplicial set \(\mathcal{A}_{N}^{p}(M,n)\) (respectively \(\mathcal{A}_{N}^{p,cl}(M,n)\)) is called the space of \(p\)-forms of degree \(n\) on the derived stack \(M\), relative to \(N\) (respectively the space of closed \(p\)-forms of degree \(n\) on the derived stack \(M\), relative to \(N\)).
The two \(\infty\)-functors \(\boldsymbol{\mathcal{A}_{N}^{p}(-,n)}\) and \(\boldsymbol{\mathcal{A}_{N}^{p,cl}(-,n)}\) can be viewed as derived prestacks \(\boldsymbol{\operatorname{dAff}_{N}^{op}}\longrightarrow\mathbb{S}\). Following the arguments of [28, Proposition 1.11], one can show that the derived pre-stacks \(\boldsymbol{\mathcal{A}_{N}^{p}(-,n)}\) and \(\boldsymbol{\mathcal{A}_{N}^{p,cl}(-,n)}\) are derived stacks for the big etale site over \(N\). For the definitions of an isotropic fibration structure and Lagrangian fibration structure, we refer to [7, Definition 1.13].
Now, in order to define relative shifted log \(p\)-forms for a logarithmic map \(M\longrightarrow S\) of derived Artin stacks equipped with locally free log structures, we put \(N:=\mathcal{L}_{S}^{0}\). We denote the space of closed relative log \(p\)-forms of degree \(n\) over \(S\) by \(\mathcal{A}_{S,log}^{p,cl}(-,n)=\mathcal{A}_{\mathcal{L}_{S}^{0}}^{p,cl}(-,n)\).
**Definition 3.24**.:
1. We define the space of \(S\) relative logarithmic \(n\)-shifted \(p\)-forms on \(M\in\boldsymbol{\operatorname{dSt}_{S,log}}\) by \(\mathcal{A}_{log}^{p}(M/S,n):=\operatorname{\mathsf{Map}}_{\boldsymbol{ \operatorname{dSt}_{S,log}}}(M,\mathcal{A}_{\mathcal{L}_{S}^{0}}^{p}(-,n))\).
2. We define the space of closed relative logarithmic \(n\)-shifted \(p\)-forms \(\mathcal{A}_{log}^{p,cl}(M/S,n):=\operatorname{\mathsf{Map}}_{\boldsymbol{ \operatorname{dSt}_{S,log}}}(M,\mathcal{A}_{\mathcal{L}_{S}^{0}}^{p,cl}(-,n))\).
3. A 2-form \(\omega\in\mathcal{A}_{log}^{2}(M/S,n)\) is non-degenerate if the induced map in \(D_{qcoh}(M)\) \[\Theta_{\omega}:\mathbb{T}_{M/S}^{log}\longrightarrow\mathbb{L}_{M/S}^{log}[n]\] (3.25) is a quasi-isomorphism. We denote by \(\mathcal{A}_{log}^{2}(M/S,n)^{nd}\) the full subspace of \(\mathcal{A}_{log}^{2}(M/S,n)\) which is the union of all the connected components consisting of non-degenerate relative logarithmic 2-forms of degree \(n\) on \(M\).
4. Finally we define the space of \(n\)-shifted relative log symplectic forms as the homotopy fibre product \[Symp_{log}(M/S,n):=\mathcal{A}_{log}^{2}(M/S,n)^{nd}\times_{\mathcal{A}_{log}^{2 }(M/S,n)}^{h}\mathcal{A}_{log}^{2,cl}(M/S,n).\]
## 4 Logarithmic Dolbeault Moduli stack and shifted log-symplectic form
In this section, we will construct a logarithmic version of the relative Dolbeault moduli stack for the family of curves \(\mathcal{X}_{\mathfrak{M}}\rightarrow\mathfrak{M}\). We will show that the relative logarithmic Dolbeault moduli stack has a relative \(0\)-shifted log-symplectic form over the spectrum of a discrete valuation ring \(S\). Moreover, we will show that the relative log-symplectic form is an extension of Hitchin's symplectic form on the generic fibre of the moduli stack over \(S\). This was proved for moduli schemes in [9].
**Definition 4.1**.: Let \(\mathcal{X}\longrightarrow S\) be a morphism of classical Artin stacks of relative dimension \(1\). We call it a semi-stable family of curves if every geometric fibre of \(\mathcal{X}\longrightarrow S\) is a semi-stable curve, i.e., satisfies the following properties
1. every geometric fibre is a connected projective nodal curve,
2. every sub-curve \(E\) isomorphic to \(\mathbb{P}^{1}\) intersects its complement at exactly two smooth points.
**Definition 4.2**.: Let \(C\) be a nodal curve and let \(D\) denote the set of nodes. Let \(q:\tilde{C}\longrightarrow C\) denote the normalisation and \(\tilde{D}\) the pre-image \(q^{-1}(D)\). Then the dualising sheaf \(\omega_{C}\) is the kernel of the map
\[q_{*}\Omega_{\tilde{C}}(\tilde{D})\longrightarrow\oplus_{x\in D}\Bbbk_{x}, \tag{4.1}\]
where
1. \(\Bbbk_{x}\) denotes the sky-scraper sheaf at the point \(x\).
2. the map \(q_{*}\Omega_{\tilde{C}}(x^{+}+x^{-})\longrightarrow\Bbbk_{x}\), where \(q^{-1}(x)=\{x^{+},x^{-}\}\), is given by \[s\mapsto Res(s;x^{+})+Res(s;x^{-})\] (4.2)
Here, \(Res(s;x)\) denotes the residue of a form \(s\) at a point \(x\). It is straightforward to check that \(\omega_{C}\) is a locally free sheaf.
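For example, etale-locally at a node \(\{uv=0\}\), the dualising sheaf is generated by the form which restricts to \(\frac{du}{u}\) on one branch and to \(-\frac{dv}{v}\) on the other: both restrictions have first-order poles at the preimages of the node, with opposite residues, as required by (4.2).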
**Remark 4.3**.: For the definition and existence of the relative dualising sheaf for a family of semi-stable/ pre-stable curves we refer to [34, 109.19 The relative dualizing sheaf].
**Definition 4.4**.: Let \(\mathcal{X}\longrightarrow\mathcal{S}\) be a morphism of derived Artin stacks of relative dimension \(1\). We call it a semi-stable family of curves if the induced map between the underlying classical Artin stacks is a family of semi-stable curves.
**Remark 4.5**.: The above definition makes sense because the space of deformations of a projective curve \(C\) over \(\operatorname{Spec}\,\Bbbk[\epsilon]\) (where \(\Bbbk[\epsilon]\) denotes the free cdga generated by \(\epsilon\) with \(\deg\,\epsilon=-1\)) can be interpreted as \(H^{2}(C,T_{C})\), which is equal to \(0\) because \(\dim\,C=1\). Here \(T_{C}\) denotes the tangent complex of the curve \(C\), which is a perfect complex because \(C\) is a local complete intersection. We refer to [29, Proposition 6.3] for the precise statement.
**Definition 4.6**.: A Higgs bundle over a family of semistable curves \(\mathcal{X}/\mathcal{S}\) is a locally free \(\mathcal{O}_{\mathcal{X}}\) module \(\mathcal{E}\) with a Higgs field \(\phi:\mathcal{E}\longrightarrow\mathcal{E}\otimes\omega_{\mathcal{X}/\mathcal{ S}}\), where \(\omega_{\mathcal{X}/\mathcal{S}}\) denotes the relative dualising sheaf for the family \(\mathcal{X}/\mathcal{S}\).
We define \(Tot(\omega_{\mathcal{X}/\mathcal{S}}^{\vee}):=\underline{Spec}_{\mathcal{X}} Sym_{\mathcal{O}_{\mathcal{X}}}(\omega_{\mathcal{X}/\mathcal{S}})\); it is the total space of the dual of the relative dualising sheaf of the family of curves \(\mathcal{X}\longrightarrow\mathcal{S}\).
**Definition 4.7**.: Let \(\widehat{\operatorname{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\) denote the formal completion of \(\operatorname{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\) along the zero section. Notice that it is a formal group scheme over \(\mathcal{X}\), whereas \(\operatorname{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\) is an abelian group scheme over \(\mathcal{X}\). The logarithmic Dolbeault shape of \(\mathcal{X}/\mathcal{S}\) is defined as the following quotient stack (for the trivial action of the formal group scheme on \(\mathcal{X}\)).
\[\mathcal{X}_{lDol/\mathcal{S}}:=\big[\,\mathcal{X}\,\big/\,\widehat{\operatorname{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\,\big] \tag{4.3}\]

Similarly, the nilpotent logarithmic Dolbeault shape is the quotient stack

\[\mathcal{X}^{nil}_{lDol/\mathcal{S}}:=\big[\,\mathcal{X}\,\big/\operatorname{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\,\big] \tag{4.4}\]

(cf. Remark 4.10). When the family is clear from the context, we drop \(\mathcal{S}\) from the notation and write \(\mathcal{X}_{lDol}\).
**Lemma 4.9**.:
1. \(Qcoh(\mathcal{X}_{lDol})\cong Mod_{Sym_{\mathcal{O}_{\mathcal{X}}}\omega_{\mathcal{X}/\mathcal{S}}^{\vee}}(QCoh(\mathcal{X}))\)

2. \(H_{S}^{*}(\mathcal{X}_{lDol},\mathcal{E})\cong H_{lDol,S}^{*}(\mathcal{X},\mathcal{E})\)
Proof.: The proof follows [27, Proposition 5.1.2] verbatim, after replacing the relative cotangent complex \(\mathbb{L}_{\mathcal{X}/\mathcal{S}}\) with the relative dualising sheaf \(\omega_{\mathcal{X}/\mathcal{S}}\).
**Remark 4.10**.: Notice that \(\mathcal{X}_{lDol/\mathcal{S}}\) is the classifying stack \(B\widehat{\mathrm{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\) of the formal group scheme \(\widehat{\mathrm{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\) over \(\mathcal{X}\). The nilpotent logarithmic Dolbeault shape \(\mathcal{X}_{lDol/\mathcal{S}}^{nil}\) is the classifying stack of the abelian group scheme \(Tot(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\) over \(\mathcal{X}\). Therefore we have the following equivalence of categories.
\[\left\{\begin{aligned} &\text{Quasi-coherent sheaves}\\ &\text{over }B\widehat{\mathrm{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\end{aligned}\right\}\cong\left\{\begin{aligned} &\text{Quasi-coherent sheaves over }\mathcal{X}\text{ with}\\ &\text{an action of the sheaf of algebras}\\ &\text{Sym}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\end{aligned}\right\} \tag{4.5}\]
The quadratic dual algebra \((Sym\ \omega_{\mathcal{X}/\mathcal{S}}^{\vee})^{!}\) is isomorphic to the dg-algebra with zero differential \(\mathrm{Sym}\ (\omega_{\mathcal{X}/\mathcal{S}}[-1])\). Moreover, we have the following equivalence of categories ([26]).
\[\left\{\begin{aligned} &\text{Quasi-coherent sheaves over }\mathcal{X}\text{ with}\\ &\text{an action of the sheaf of algebras}\\ &\text{Sym}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})\end{aligned}\right\} \cong\left\{\begin{aligned} &\text{Quasi-coherent sheaves over}\\ &\text{Spec }\text{ Sym }(\omega_{\mathcal{X}/\mathcal{S}}[-1])\end{aligned}\right\} \tag{4.6}\]
From (4.5) and (4.6), we have the following.
\[B\widehat{\mathrm{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\cong\underline{\operatorname{Spec}}_{\mathcal{X}}\ \operatorname{Sym}\ (\omega_{\mathcal{X}/\mathcal{S}}[-1]). \tag{4.7}\]
Similarly, we have the following equivalence.
\[\left\{\begin{aligned} &\text{Vector bundle}\\ &\text{over }B\mathrm{Tot}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee}) \end{aligned}\right\}\cong\left\{\begin{aligned} &\text{Vector bundle over }\mathcal{X}\text{ with an}\\ &\text{action of the sheaf of algebras}\\ &(\text{Sym}(\omega_{\mathcal{X}/\mathcal{S}}))^{\vee}\cong \widehat{\mathrm{Sym}(\omega_{\mathcal{X}/\mathcal{S}}^{\vee})}\end{aligned} \right\}\cong\left\{\begin{aligned} &\text{nilpotent Higgs}\\ &\text{bundle over }\mathcal{X}\end{aligned}\right\} \tag{4.8}\]
**Theorem 4.11**.: \(\mathcal{X}_{lDol}\) _is \(\mathcal{O}\)-compact and \(\mathcal{O}\)-oriented over \(\mathcal{S}\). Hence, \(\mathsf{Map}_{\mathcal{S}}(\mathcal{X}_{lDol},BGL_{n}\times\mathcal{S})\) has a \(0\)-shifted relative symplectic structure over \(\mathcal{S}\)._
Proof.: Proof of \(\mathcal{O}\)-compactness: Let us begin by recalling the notation for the following maps from Section 1.1.
\[\mathcal{X}_{lDol}\xrightarrow{\ q\ }\mathcal{X}\xrightarrow{\ r\ }\mathcal{S},\qquad p:=r\circ q. \tag{4.9}\]
From (4.7), we have
\[Rq_{*}\mathcal{O}_{\mathcal{X}_{lDol}}\cong\mathcal{O}_{\mathcal{X}}\oplus \omega_{\mathcal{X}/\mathcal{S}}[-1], \tag{4.10}\]
which is clearly a perfect complex. Therefore \(\mathcal{X}_{lDol}\) is \(\mathcal{O}\)-compact.
Proof of \(\mathcal{O}\)-orientation: Using (4.10) we have
\[(Rp_{*}\mathcal{O}_{\mathcal{X}_{lDol}})^{\vee}[-2]\cong(Rr_{*}\mathcal{O}_{ \mathcal{X}}\oplus Rr_{*}\omega_{\mathcal{X}/\mathcal{S}}[-1])^{\vee}[-2] \cong(Rr_{*}\mathcal{O}_{\mathcal{X}})^{\vee}[-2]\oplus(Rr_{*}\omega_{ \mathcal{X}/\mathcal{S}}[-1])^{\vee}[-2]. \tag{4.11}\]
By Serre duality we have
\[Rr_{*}(\omega_{\mathcal{X}/\mathcal{S}}[-1])\cong(Rr_{*}((\omega _{\mathcal{X}/\mathcal{S}}[-1])^{\vee}\otimes\omega_{\mathcal{X}/\mathcal{S}} [1]))^{\vee}\cong(Rr_{*}(\mathcal{O}_{\mathcal{X}}[2]))^{\vee}\cong(Rr_{*} \mathcal{O}_{\mathcal{X}})^{\vee}[-2]\] \[\implies(Rr_{*}(\omega_{\mathcal{X}/\mathcal{S}}[-1]))^{\vee}[-2] \cong Rr_{*}(\mathcal{O}_{\mathcal{X}}[2])[-2]\cong Rr_{*}\mathcal{O}_{ \mathcal{X}}\]
We see that \(H^{0}(Rr_{*}\mathcal{O}_{\mathcal{X}})\cong\mathcal{O}_{\mathcal{S}}\). We choose an isomorphism once and for all and denote it by \(\eta\). This defines an element
\[\eta\in(Rp_{*}\mathcal{O}_{\mathcal{X}_{lDol}})^{\vee}[-2]=\mathsf{Hom}(Rp_{*} \mathcal{O}_{\mathcal{X}_{lDol}},\mathcal{O}_{\mathcal{S}}[-2]). \tag{4.12}\]
We want to show that it is an \(\mathcal{O}\)-orientation.
Let \(\mathcal{E}\) be a perfect complex on \(\mathcal{X}_{lDol}\times_{\mathcal{S}}\mathbf{Spec}A\), for an \(A\in\mathbf{cdga}_{\mathcal{S}}^{\leq 0}\). We have to show that the following map
\[(-\cap\eta_{A}):Rp_{A_{*}}\mathcal{E}\longrightarrow(Rp_{A_{*}}(\mathcal{E}^{ \vee}))^{\vee}[-2]=(Rp_{A_{*}}(\mathcal{E}^{\vee})[2])^{\vee} \tag{4.13}\]
is a quasi-isomorphism of \(A\)-dg-modules, where \(\eta_{A}:Rp_{A*}\mathcal{O}_{\mathcal{X}_{lDol,A}}\longrightarrow A[-2]\) is the derived pullback of \(\eta:Rp_{*}\mathcal{O}_{\mathcal{X}_{lDol}}\longrightarrow\mathcal{O}_{\mathcal{S}}[-2]\) under the map \(\operatorname{Spec}A\longrightarrow\mathcal{S}\), and \((Rp_{A*}(\mathcal{E}^{\vee}))^{\vee}\) is the derived \(A\)-dual of \(Rp_{A*}(\mathcal{E}^{\vee})\). Here \(p_{A}:\mathcal{X}_{lDol}\times_{\mathcal{S}}\operatorname{\mathbf{Spec}}A\longrightarrow\operatorname{\mathbf{Spec}}A\) is the projection. The LHS is isomorphic to \(Rr_{A*}\mathcal{D}_{(E_{A},\phi_{A})}\), where

\[\mathcal{D}_{(E_{A},\phi_{A})}:=[E_{A}\stackrel{{\phi_{A}}}{{\longrightarrow}}E_{A}\otimes\omega_{\mathcal{X}_{A}/\operatorname{Spec}A}]\]

is the complex corresponding to \(Rq_{A*}\mathcal{E}_{A}\), with \(E_{A}\) sitting in degree \(0\) and \(E_{A}\otimes\omega_{\mathcal{X}_{A}/\operatorname{Spec}A}\) in degree \(1\). The RHS is isomorphic to \((Rr_{A*}(\mathcal{D}^{\vee}_{(E_{A},\phi_{A})}))^{\vee}\), where

\[\mathcal{D}^{\vee}_{(E_{A},\phi_{A})}:=[E^{\vee}_{A}\stackrel{{-\phi^{\vee}_{A}}}{{\longrightarrow}}E^{\vee}_{A}\otimes\omega_{\mathcal{X}_{A}/\operatorname{Spec}A}] \tag{4.14}\]

with \(E^{\vee}_{A}\) sitting in degree \(-2\) and \(E^{\vee}_{A}\otimes\omega_{\mathcal{X}_{A}/\operatorname{Spec}A}\) in degree \(-1\), and where \(\phi^{\vee}_{A}\) is the image of \(\phi_{A}\) under the natural isomorphism \(\mathcal{E}nd\ E_{A}\cong\mathcal{E}nd\ E^{\vee}_{A}\). Now, since the Grothendieck-Serre dual of the complex \(\mathcal{D}_{(E_{A},\phi_{A})}\) is the same as \(\mathcal{D}^{\vee}_{(E_{A},\phi_{A})}\), the morphism (4.13) is a quasi-isomorphism. (The notation \(\mathcal{D}^{\vee}_{(E_{A},\phi_{A})}\) is meant to remind the reader that this is not the ordinary dual.)
Therefore, we have shown that \(\mathcal{X}_{lDol}\) is \(\mathcal{O}\)-compact and \(\mathcal{O}\)-oriented over \(\mathcal{S}\) and from [28, Theorem 2.5], it follows that \(\mathsf{Map}_{\mathcal{S}}(\mathcal{X}_{lDol},BGL_{n})\) has a \(0\)-shifted relative symplectic structure over \(\mathcal{S}\).
Let us fix an integer \(n\), which will represent the rank in the moduli problem. Let \(S\) be the spectrum of a discrete valuation ring over \(\Bbbk\). We start with the set-up as in 3.1.1. Let us denote the relative dualising sheaf by \(\omega_{\mathcal{X}/S}\). Let \(\mathfrak{M}\) denote the stack of expanded degenerations bounded by the integer \(n\). By abuse of notation, we denote the universal curve over \(\mathfrak{M}\) by \(\mathcal{X}_{\mathfrak{M}}\). We denote the relative logarithmic Dolbeault shape by \(\mathcal{X}_{\mathfrak{M},lDol}\).
**Proposition 4.12**.: _The morphism \(\mathsf{Map}_{{}_{\mathfrak{M}}}(\mathcal{X}_{\mathfrak{M},lDol},BGL_{n} \times\mathfrak{M})\longrightarrow\mathfrak{M}\) is a quasi-smooth morphism of derived Artin stacks._
Proof.: Let us denote \(\mathsf{Map}_{{}_{\mathfrak{M}}}(\mathcal{X}_{\mathfrak{M},lDol},BGL_{n} \times\mathfrak{M})\) by \(M\), for simplicity of notation. The relative tangent complex of the morphism \(\pi:M\longrightarrow\mathfrak{M}\) is given by
\[\mathbb{T}_{M/\mathfrak{M}}\cong Rp_{*}\mathcal{E}nd\ \mathcal{E}[1]\cong Rr_{*} \mathcal{C}(E,\phi)[1] \tag{4.16}\]
where \(\mathcal{C}(E,\phi)\) denotes the following complex
\[[\mathcal{E}nd\ E\xrightarrow{[-,\phi]}\mathcal{E}nd\ E\otimes\omega_{\mathcal{X}_ {M}/M}] \tag{4.17}\]
with \(\mathcal{E}nd\ E\) sitting in degree \(0\). Here \(\mathcal{E}\) denotes the universal sheaf on \(\mathcal{X}_{M,lDol}\) and \((E,\phi)\) denotes the corresponding universal Higgs complex on \(M\). From [35, Theorem 0.3] it follows that \(\mathbb{L}_{M/\mathfrak{M}}\) is a perfect complex, because the projection map \(\mathcal{X}_{M,lDol}\longrightarrow M\) is a proper locally complete intersection morphism. Also, using Grothendieck-Serre duality, we can see that the complex \(Rr_{*}\mathcal{C}(E,\phi)[1]\) has Tor-amplitude in \([-1,1]\). Therefore we conclude that the morphism \(\mathsf{Map}_{\mathfrak{M}}(\mathcal{X}_{\mathfrak{M},lDol},BGL_{n}\times\mathfrak{M})\longrightarrow\mathfrak{M}\) is a quasi-smooth morphism of derived Artin stacks.
**Theorem 4.13**.: _The derived Artin stack \(\mathsf{Map}_{\mathfrak{M}}(\mathcal{X}_{\mathfrak{M},lDol},BGL_{n}\times \mathfrak{M})\) has a natural relative \(0\)-shifted log-symplectic structure over \(S\)._
Proof.: Consider the composite morphism \(M:=\mathsf{Map}_{\mathfrak{M}}(\mathcal{X}_{\mathfrak{M},lDol},BGL_{n}\times\mathfrak{M})\longrightarrow\mathfrak{M}\longrightarrow\mathcal{L}_{S}^{0}\). The composite morphism induces a log structure on \(M\), which is the pullback of the log structure from \(\mathfrak{M}\). The first map is quasi-smooth and the second map is smooth with \(\mathbb{L}_{\mathfrak{M}/\mathcal{L}_{S}^{0}}\cong 0\). Therefore it follows that \(\mathbb{L}_{M/\mathcal{L}_{S}^{0}}\cong\mathbb{L}_{M/\mathfrak{M}}\cong Rr_{*}\mathcal{C}(E,\phi)[1]\). So the relative log-cotangent complex of the map \(M\longrightarrow S\) is isomorphic to the relative cotangent complex of the map \(M\longrightarrow\mathfrak{M}\). From Theorem 4.11, it follows that the stack \(M\) has a \(0\)-shifted symplectic structure relative to the stack \(\mathfrak{M}\), which translates into the fact that there is a \(0\)-shifted log-symplectic structure on \(M\) relative to \(S\).
The log-symplectic pairing can be described as follows. The stack \(BGL_{n}\) has a \(2\)-shifted symplectic form given by the trace pairing.

\[T_{BGL_{n}}\wedge T_{BGL_{n}}=\mathfrak{gl}_{n}[1]\wedge\mathfrak{gl}_{n}[1]\xrightarrow{(A,B)\mapsto Trace(AB)}\Bbbk[2] \tag{4.18}\]
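For \(n=1\), for instance, this is simply the multiplication pairing \(\Bbbk[1]\wedge\Bbbk[1]\longrightarrow\Bbbk[2]\) on \(T_{B\mathbb{G}_{m}}=\mathfrak{gl}_{1}[1]=\Bbbk[1]\).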
Now given a map \(f_{A}:\mathcal{X}_{lDol,A}\longrightarrow BGL_{n}\times\operatorname{Spec}A\), it corresponds to a principal \(GL_{n}\)-bundle \(P\) on \(\mathcal{X}_{lDol,A}\) and a Higgs bundle \((E\xrightarrow{\phi_{A}}E\otimes\omega_{\mathcal{X}_{A}/A})\). We have the following induced pairing by pulling back (4.18).
\[f_{A}^{*}T_{BGL_{n}}\otimes f_{A}^{*}T_{BGL_{n}}=\operatorname{ad}P[1]\otimes \operatorname{ad}P[1]\xrightarrow{Tr:=Trace}\mathcal{O}_{\mathcal{X}_{lDol,A }}[2] \tag{4.19}\]
Now by pushing it forward by the map \(q_{A}\), we get
\[Rq_{A*}(\operatorname{ad}P[1])\otimes Rq_{A*}(\operatorname{ad}P[1])\xrightarrow{\ Tr\ }Rq_{A*}\mathcal{O}_{\mathcal{X}_{lDol,A}}[2]; \tag{4.20}\]

pushing forward further along \(r_{A}\) and composing with the orientation \(\eta_{A}\) then yields the \(0\)-shifted pairing on the tangent complex of \(M\).

## 5 Completeness of the Hitchin map

**Definition 5.1**.: A morphism \(f:\mathcal{X}\longrightarrow\mathcal{Y}\) of Artin stacks is called _complete_ if for any spectrum of a discrete valuation ring \(\operatorname{Spec}A\) with function field \(K\) and any commutative
diagram with solid arrows of the following type
\[\begin{CD}\text{Spec}\;\;K@>{}>{}>\mathcal{X}\\ @V{}V{}V@V{}V{f}V\\ \text{Spec}\;\;A@>{}>{}>\mathcal{Y}\end{CD} \tag{5.1}\]
there exists a finite field extension \(K\longrightarrow K^{\prime}\), with \(A^{\prime}\) the integral closure of \(A\) in \(K^{\prime}\), and a lift (the dotted arrow) \(\operatorname{Spec}A^{\prime}\longrightarrow\mathcal{X}\) making all the triangles in the diagram commute.
\[\begin{CD}\text{Spec}\;\;K^{\prime}@>{}>{}>\text{Spec}\;\;K@>{}>{}>\mathcal{X} \\ @V{}V{}V@V{}V{f}V\\ \text{Spec}\;\;A^{\prime}@>{}>{}>\text{Spec}\;\;A@>{}>{}>\mathcal{Y}\end{CD} \tag{5.2}\]
A morphism of derived Artin stacks is called complete if the underlying morphism of ordinary Artin stacks is complete.
**Definition 5.2**.: **(Hitchin map)** We recall the following two well-known definitions
1. \(B:=\oplus_{i=1}^{n}R^{0}\pi_{*}(\omega_{\mathcal{X}/S}^{\otimes i})\). It is a vector bundle over the spectrum of the discrete valuation ring \(S\), well known as the Hitchin base.
2. There is a natural map \(h:M\longrightarrow B\), which sends a family of Higgs bundles \((\mathcal{E},\phi)\) to \((-Trace(\phi),Trace(\wedge^{2}\phi),...,(-1)^{i}Trace(\wedge^{i}\phi),...,(-1) ^{n}Trace(\wedge^{n}\phi))\).
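For instance, in rank \(n=2\) the Hitchin map records the characteristic polynomial of the Higgs field: \(h(\mathcal{E},\phi)=(-Trace(\phi),\det\phi)\), where \(Trace(\phi)\) is a section of \(R^{0}\pi_{*}\omega_{\mathcal{X}/S}\), \(\det\phi=Trace(\wedge^{2}\phi)\) is a section of \(R^{0}\pi_{*}(\omega_{\mathcal{X}/S}^{\otimes 2})\), and the characteristic polynomial is \(t^{2}-Trace(\phi)\,t+\det\phi\).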
**Theorem 5.3**.: _The morphism \(h:M^{cl}_{Gie}\longrightarrow B\) is complete._
Proof.: Let \(T:=\operatorname{Spec}A\) be the spectrum of a discrete valuation ring with function field \(K\). Let us denote \(T^{o}:=\operatorname{Spec}K\). Suppose we are given a commutative diagram
\[\begin{CD}T^{o}@>{}>{}>M^{cl}_{Gie}\\ @V{}V{h}V\\ T@>{}>{}>B\end{CD} \tag{5.3}\]
For the proof of completeness, we need to recall the definition and a few facts about the Artin stacks of torsion-free sheaves and torsion-free Higgs pairs on the original family of curves \(\mathcal{X}/S\), together with the construction of their atlases (see Appendix 8.1.1).
There is a morphism \(\theta:M^{cl}_{Gie}\to TFH(\mathcal{X}/S)\) given by pushing forward Gieseker-Higgs bundles under the modification map of Gieseker curves \(\mathcal{X}^{mod}\to\mathcal{X}\). Therefore, from (5.3) we get the following commutative diagram (without the dotted arrow)
\[\begin{CD}T^{o}@>>>M^{cl}_{Gie}\\@VVV@VV{\theta}V\\T@>>>TFH(\mathcal{X}/S)\end{CD} \tag{5.4}\]

Here the bottom horizontal arrow is the dotted arrow, whose existence we need to establish.
Using [12, Proposition 6] and [20, Lemma 6.5], we can get the dotted arrow/extension. First of all, notice that the surface \(\mathcal{X}\times_{S}T\) is either a normal surface with an isolated singularity of type \(\frac{\Bbbk[[x,y,t]]}{xy-t^{d}}\) or isomorphic to the product \(X_{0}\times T\) depending on whether the spectrum of a discrete valuation ring \(T\) is faithfully flat over \(S\) or not.
In the first case, it is routine to check that [12, Proposition 6] holds for the normal surface, i.e., any vector bundle \(\mathcal{E}_{K}\) over the generic fibre \(\mathcal{X}\times_{S}T^{o}\) can be extended to the surface \(\mathcal{X}\times_{S}T\) as a family of torsion-free sheaves; let us denote this extension by \(\mathcal{F}\). Using [20, Lemma 6.5] we also extend the Higgs field over the complement of the node in the surface \(\mathcal{X}\times_{S}T\). Let \(i:\mathcal{X}\times_{S}T^{o}\hookrightarrow\mathcal{X}\times_{S}T\) denote the inclusion.
\[0\longrightarrow\mathcal{F}\longrightarrow i_{*}\mathcal{E}_{K}\longrightarrow\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\longrightarrow 0 \tag{5.5}\]
Consider the composite map \(\mathcal{F}\longrightarrow\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\otimes\omega_{\mathcal{X}_{T}/T}\). Since we have extended the Higgs field on \(\mathcal{F}\) away from the node, this composite map vanishes at the generic point of the closed fibre \(X_{0}\). Now, if we show that the sheaf \(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\) is torsion-free, then it will follow that the Higgs field extends everywhere. To see this, we restrict the exact sequence (5.5) to the closed fibre \(X_{0}\). We get
\[0\longrightarrow Tor^{1}_{\mathcal{X}_{T}}\Big(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}},\mathcal{O}_{X_{0}}\Big)\longrightarrow\mathcal{F}|_{X_{0}}\longrightarrow(i_{*}\mathcal{E}_{K})|_{X_{0}}\longrightarrow\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\Big|_{X_{0}}\longrightarrow 0 \tag{5.6}\]
One can check that \(Tor^{1}_{\mathcal{X}_{T}}(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}},\mathcal{O}_{X_{0}})\cong\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\otimes\mathcal{O}_{\mathcal{X}_{T}}(-X_{0})\). Since \(\mathcal{O}_{\mathcal{X}_{T}}(-X_{0})\) is a locally free sheaf, \(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\) is torsion-free if and only if \(Tor^{1}_{\mathcal{X}_{T}}(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}},\mathcal{O}_{X_{0}})\) is torsion-free. But \(Tor^{1}_{\mathcal{X}_{T}}(\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}},\mathcal{O}_{X_{0}})\) is a subsheaf of \(\mathcal{F}|_{X_{0}}\) by (5.6), hence it is torsion-free.
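One way to see the identification of the \(Tor^{1}\) sheaf: tensoring the short exact sequence \(0\to\mathcal{O}_{\mathcal{X}_{T}}(-X_{0})\to\mathcal{O}_{\mathcal{X}_{T}}\to\mathcal{O}_{X_{0}}\to 0\) with \(Q:=\frac{i_{*}\mathcal{E}_{K}}{\mathcal{F}}\) gives \(Tor^{1}_{\mathcal{X}_{T}}(Q,\mathcal{O}_{X_{0}})=\ker\big(Q\otimes\mathcal{O}_{\mathcal{X}_{T}}(-X_{0})\to Q\big)\); this multiplication map vanishes whenever \(Q\) is scheme-theoretically supported on the closed fibre \(X_{0}\), in which case \(Tor^{1}_{\mathcal{X}_{T}}(Q,\mathcal{O}_{X_{0}})\cong Q\otimes\mathcal{O}_{\mathcal{X}_{T}}(-X_{0})\).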
In the second case, the surface \(\mathcal{X}_{T}\cong X_{0}\times T\), and therefore the normalisation of \(\mathcal{X}_{T}\) is isomorphic to \(\tilde{X}_{0}\times T\). Let us denote the normalisation morphism by \(n\). From [DI, Proposition 3.2], it follows that we can extend the pullback \((n^{*}\mathcal{E}_{K},n^{*}\phi_{K})\) of the generic Higgs bundle to a good generalised parabolic Hitchin pair, which in turn gives us the desired torsion-free Hitchin pair on the surface \(\mathcal{X}_{T}\).
So now since we have extended the generic Higgs bundle as a torsion-free Hitchin pair, we have the following commutative diagram
\[\begin{CD}T^{o}@>>>M^{cl}_{Gie}\\@VVV@VV{\theta}V\\T@>>>TFH(\mathcal{X}/S)\end{CD} \tag{5.7}\]
Let \(R^{\Lambda}_{S}\xrightarrow{smooth}TFH(\mathcal{X}/S)\) denote a suitable Quot scheme (we omit the integer \(m\) from the notation) such that the image of the map \(T\longrightarrow TFH(\mathcal{X}/S)\) lies inside the image of the smooth map \(R^{\Lambda}_{S}\longrightarrow TFH(\mathcal{X}/S)\) (see 8.1.1). Therefore, there exists a finite cover \(T^{\prime}:=\operatorname{Spec}A^{\prime}\) of \(T\) such that the composite map \(T^{\prime}\longrightarrow TFH(\mathcal{X}/S)\) lifts to a map \(T^{\prime}\longrightarrow R^{\Lambda}_{S}\). We denote \((T^{\prime})^{o}:=\operatorname{Spec}K^{\prime}\), where \(K^{\prime}\) is the function field of \(A^{\prime}\) (a finite extension of \(K\)). Let \(\mathcal{Y}^{H}_{S}\) denote the Quot scheme for Gieseker-Hitchin pairs constructed over the Quot scheme \(R^{\Lambda}_{S}\) (see 8.1.4), and let us denote the projection map by \(\tilde{\theta}:\mathcal{Y}^{H}_{S}\longrightarrow R^{\Lambda}_{S}\). Then there exists a lift \((T^{\prime})^{o}\longrightarrow\mathcal{Y}^{H}_{S}\) of the map \((T^{\prime})^{o}\longrightarrow M^{cl}_{Gie}\). Therefore, we get the following commutative diagram
\[\begin{CD}(T^{\prime})^{o}@>>>\mathcal{Y}^{H}_{S}\\@VVV@VV{\tilde{\theta}}V\\T^{\prime}@>>>R^{\Lambda}_{S}\end{CD} \tag{5.8}\]
From [1, Proposition 5.11], it follows that the map \(\mathcal{Y}^{H}_{S}\longrightarrow R^{\Lambda}_{S}\) is proper (see Remarks 5.4 and 5.5 for an outline of the argument). So there exists a lift \(T^{\prime}\longrightarrow\mathcal{Y}^{H}_{S}\).
The composite map \(T^{\prime}\longrightarrow\mathcal{Y}_{S}^{H}\longrightarrow M_{Gie}^{cl}\) is our desired extension.
**Remark 5.4**.: Let \(Y\longrightarrow R\) be a quasi-projective morphism of separated schemes over the spectrum of a discrete valuation ring \(S\), which is an isomorphism over the generic point of \(S\). Suppose any map \(T\longrightarrow R\) flat over \(S\) can be lifted to a map \(T\longrightarrow Y\). Then we claim that the map \(Y\longrightarrow R\) is proper. To see this, take the closure of \(Y\) inside some relative projective space (relative over \(R\)) to get a projective morphism \(\overline{Y}\longrightarrow R\). Take any element \(y\in\overline{Y}\setminus Y\). Then there exists a map \(T\longrightarrow\overline{Y}\) (here \(T\) is the spectrum of a discrete valuation ring) passing through \(y\) and flat over \(S\); composing, we get a map \(T\longrightarrow R\). But then this map can be lifted to a map \(T\longrightarrow Y\). Let \(y^{\prime}\in Y\) be the image of the closed point of \(T\) under this lift. By separatedness of \(\overline{Y}\), we must have \(y=y^{\prime}\); therefore \(\overline{Y}=Y\) and the map \(Y\longrightarrow R\) is proper.
**Remark 5.5**.: Let \(\mathcal{X}\longrightarrow S\) be the original family of curves and let \((\mathcal{F},\phi)\) be a flat family of torsion-free Higgs pairs on \(\mathcal{X}_{T}\longrightarrow T\), where \(T\longrightarrow S\) is a surjective map of spectra of discrete valuation rings. Notice that the surface \(\mathcal{X}_{T}\) may have a singularity of type \(\frac{\Bbbk[[x,y,t]]}{xy-t^{n}}\). Let \(r:\mathcal{X}_{T}^{res}\longrightarrow\mathcal{X}_{T}\) be the minimal resolution of singularities. Then from [15, Proposition 6.5] it follows that the vector bundle \(\mathcal{E}:=\frac{r^{*}\mathcal{F}}{Torsion}\) has the property that \(r_{*}\mathcal{E}\cong\mathcal{F}\). Notice also that, by construction and by the property that \(r_{*}\mathcal{E}\cong\mathcal{F}\), the natural map \(r^{*}r_{*}\mathcal{E}\longrightarrow\mathcal{E}\) is a surjective map of sheaves, which means \(\mathcal{E}|_{R}\) is globally generated. Here \(R\) denotes the chain of rational curves in \(\mathcal{X}_{T}^{res}\). Hence, by definition (Def. 8.1 and Remark 8.2), the vector bundle \(\mathcal{E}\) is a Gieseker vector bundle. The pullback of the Higgs field \(\phi\) defines a Higgs field on \(\mathcal{E}\).
## 6 Flatness of the Hitchin map
In this section, we study the reduced global nilpotent cone of \(M_{Gie}^{cl}\) (see 8.1.2), which is the ordinary scheme-theoretic fibre over the point \(0\in B\) equipped with the reduced induced structure. We prove that every irreducible component of the reduced nilpotent cone has an open subset which is an isotropic substack of \(M\) (the derived stack of Higgs bundles) with respect to its log-symplectic form. We use this to compute the dimension of the reduced nilpotent cone and to show that the Hitchin map is flat.
**Definition 6.1**.:
1. The nilpotent cone is the Hitchin fibre over the zero section \(0_{S}\) of \(B\longrightarrow S\), i.e., the following fibre product (fibre product of classical Artin stacks) \[\begin{CD}\mathcal{N}ilp:=0_{S}\times_{B}M^{cl}_{Gie}@>{}>{}>M^{cl}_{Gie}\\ @V{}V{}V@V{}V{}V\\ 0_{S}@>{}>{}>B\end{CD}\] (6.1) The fibre product is, in general, non-reduced.
2. We can put the reduced induced structure on \(\mathcal{N}ilp\). We refer to the resulting stack as the reduced nilpotent cone and denote it by \(\mathcal{N}ilp^{red}\). It is a stack over the spectrum of a discrete valuation ring \(S\). We denote the closed fibre by \(\mathcal{N}ilp^{red}_{0}\).
**Remark 6.2**.: First, we notice that \(\mathcal{N}ilp^{red}_{0}\) may have several irreducible components. Since it is reduced, the generic points are smooth points. A priori, a generic point is a tuple \((X_{s},\mathcal{E},\phi)\), where \(X_{s}\) is a Gieseker curve with \(s\) many \(\mathbb{P}^{1}\)'s (\(0\leq s\leq n\)), \(\mathcal{E}\) is a Gieseker vector bundle and \(\phi\) is a nilpotent Higgs field, i.e., \(\phi^{n}=0\). Given a nilpotent Higgs field, we get a canonical filtration by saturated torsion-free subsheaves
\[\mathcal{E}^{0}:=0\subsetneq\mathcal{E}^{1}:=Ker\phi\subsetneq\mathcal{E}^{2}: =Ker\phi^{2}\subsetneq\cdots\subsetneq\mathcal{E}^{k}:=Ker\phi^{k}\subsetneq \mathcal{E}^{k+1}:=\mathcal{E} \tag{6.2}\]
for some integer \(k\) such that \(\phi^{k+1}=0\).
**Definition 6.3**.: **(Type of a nilpotent torsion-free Higgs pair on the nodal curve \(X_{0}\))** Let \((\mathcal{F},\psi)\) be a torsion-free Higgs pair on \(X_{0}\), where \(\mathcal{F}\) is a torsion-free coherent sheaf on \(X_{0}\) and \(\psi:\mathcal{F}\longrightarrow\mathcal{F}\otimes\omega_{X_{0}}\) is a map of coherent sheaves. Suppose that the Higgs field is nilpotent i.e., \(\psi^{n}=0\), where \(n\) denotes the rank of the torsion-free sheaf \(\mathcal{F}\). Then as above we get a natural flag of saturated subsheaves of \(\mathcal{F}\).
\[\mathcal{F}^{0}:=0\subsetneq\mathcal{F}^{1}:=Ker\psi\subsetneq\mathcal{F}^{2}: =Ker\psi^{2}\subsetneq\cdots\subsetneq\mathcal{F}^{k}:=Ker\psi^{k}\subsetneq \mathcal{F}^{k+1}:=\mathcal{F} \tag{6.3}\]
For every \(1\leq i\leq k+1\), we define \(n_{i}:=\mathsf{rank}\ (\frac{\mathcal{F}^{i}}{\mathcal{F}^{i-1}})\) and \(d_{i}:=\deg\ (\frac{\mathcal{F}^{i}}{\mathcal{F}^{i-1}})\). We say that the nilpotent torsion-free Higgs pair \((\mathcal{F},\psi)\) is of type \(\{(n_{i},d_{i})\}_{i=1}^{i=k+1}\). We denote it by \(\mathsf{Type}\ \ (\mathcal{F},\psi)\).
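For instance, in rank \(2\), a nonzero nilpotent \(\psi\) has \(k=1\) and flag \(0\subsetneq Ker\,\psi\subsetneq\mathcal{F}\) with both graded pieces of rank \(1\), so \(\mathsf{Type}\ (\mathcal{F},\psi)=\{(1,d_{1}),(1,d_{2})\}\) with \(d_{1}+d_{2}=\deg\ \mathcal{F}\); the type thus records how the degree distributes along the kernel flag.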
**Definition 6.4**.: **(Type of a nilpotent Higgs bundle on a Gieseker curve \(X_{s}\) for some \(s\in[0,n]\))** Let \(s\in[0,n]\) be an integer. Let \(\pi_{s}:X_{s}\longrightarrow X_{0}\) be the Gieseker curve with exactly \(s\) many \(\mathbb{P}^{1}\)'s. Let \((\mathcal{E},\phi)\) be a Gieseker-Higgs bundle on \(X_{s}\). Then, by definition, \(((\pi_{s})_{*}\mathcal{E},(\pi_{s})_{*}\phi)\) is a nilpotent torsion-free Higgs pair on \(X_{0}\). We define \(\mathsf{Type}\ \ (X_{s},\mathcal{E},\phi):=\mathsf{Type}\ \ ((\pi_{s})_{*} \mathcal{E},(\pi_{s})_{*}\phi)\).
**Lemma 6.5**.: _Let \((\pi_{s}:\mathcal{X}_{s}\to X_{0},\mathcal{E},\phi)\) be a nilpotent Gieseker-Higgs bundle. Then we have the following induced filtration as in (6.2)._
\[\mathcal{E}^{0}:=0\subsetneq\mathcal{E}^{1}:=Ker\phi\subsetneq\mathcal{E}^{2 }:=Ker\phi^{2}\subsetneq\cdots\subsetneq\mathcal{E}^{k}:=Ker\phi^{k}\subsetneq \mathcal{E}^{k+1}:=\mathcal{E} \tag{6.4}\]
_The induced nilpotent torsion-free Higgs pairs \((\mathcal{F}:=(\pi_{s})_{*}\mathcal{E},\psi:=(\pi_{s})_{*}\phi)\) also has a natural filtration as in (6.3)._
\[\mathcal{F}^{0}:=0\subsetneq\mathcal{F}^{1}:=Ker\psi\subsetneq\mathcal{F}^{2 }:=Ker\psi^{2}\subsetneq\cdots\subsetneq\mathcal{F}^{k}:=Ker\psi^{k}\subsetneq \mathcal{F}^{k+1}:=\mathcal{F} \tag{6.5}\]
_Then \((\pi_{s})_{*}\mathcal{E}^{i}\cong\mathcal{F}^{i}\) for every \(1\leq i\leq k+1\)._
Proof.: For every \(i\in[1,k+1]\), we have morphisms \(\phi^{i}:\mathcal{E}\longrightarrow\mathcal{E}\otimes\omega_{X_{s}}^{\otimes i}\) and \(\psi^{i}:\mathcal{F}\longrightarrow\mathcal{F}\otimes\omega_{X_{0}}^{\otimes i}\). Moreover, \(\mathsf{Ker}(\phi^{i})=\mathcal{E}^{i}\) and \(\mathsf{Ker}(\psi^{i})=\mathcal{F}^{i}\). Let \(\sigma\) be a local section of \((\pi_{s})_{*}\mathcal{E}^{i}\) on a neighbourhood \(U\) of the node of \(X_{0}\); it is an element of \(\mathcal{E}^{i}((\pi_{s})^{-1}(U))\). Therefore the section \(\phi^{i}(\sigma)\) vanishes on the open set \((\pi_{s})^{-1}(U)\cap\tilde{X_{0}}=U^{o}\), where \(U^{o}\) denotes the complement of the node of \(X_{0}\). Therefore \(\psi^{i}(\sigma)=0\) on \(U^{o}\), because \(\psi=\phi\) on \(U^{o}\). Since \(U^{o}\) is dense in \(U\), we get \(\psi^{i}(\sigma)=0\) on \(U\), so \(\sigma\in\mathcal{F}^{i}\). Therefore, \((\pi_{s})_{*}\mathcal{E}^{i}\subseteq\mathcal{F}^{i}\) for all \(i\in[1,k+1]\).
For the converse, let \(\sigma\in\mathcal{F}^{i}(U)\). Since \(\mathcal{F}^{i}\subseteq\mathcal{F}=(\pi_{s})_{*}\mathcal{E}\), we have \(\sigma\in\mathcal{E}((\pi_{s})^{-1}(U))\). Since \(\psi^{i}(\sigma)=0\), we get \(\phi^{i}(\sigma)=0\) on \((\pi_{s})^{-1}(U^{o})\) and hence, by continuity, on \((\pi_{s})^{-1}(U)\cap\tilde{X_{0}}\). Now notice that \(\phi^{i}(\sigma)\) is a section of \(\mathcal{E}\otimes\omega_{X_{s}}^{\otimes i}\). This bundle is a Gieseker vector bundle because \(\omega_{X_{s}}|_{R}\cong\mathcal{O}_{R}\), where \(R\) is the chain of \(\mathbb{P}^{1}\)'s in \(X_{s}\). Therefore, since the section vanishes at the two points of \(\tilde{X_{0}}\cap R\), it must vanish everywhere. This implies that \(\sigma\in\mathcal{E}^{i}((\pi_{s})^{-1}(U))=((\pi_{s})_{*}\mathcal{E}^{i})(U)\). Therefore, \(\pi_{s*}(\mathcal{E}^{i})=\mathcal{F}^{i}\) for all \(i\in[1,k+1]\).
**Definition 6.6**.: We define \(\mathcal{N}ilp_{0}^{sm,gen}\) to be the open substack of \(\mathcal{N}ilp_{0}^{red}\) consisting of nilpotent Higgs bundles \((X_{s},\mathcal{E},\phi)\) which satisfy the following two conditions.
1. the \(\mathsf{Type}(X_{s},\mathcal{E},\phi)\) is of general type, i.e., it is the same as the \(\mathsf{Type}\) of one of the generic points of the reduced nilpotent cone.
2. \((X_{s},\mathcal{E},\phi)\) is a smooth point of \(\mathcal{N}ilp_{0}^{red}\).
**Lemma 6.7**.: _Let \((\mathcal{X}^{mod},\mathcal{E},\phi)\) denote the restriction of the universal modification, universal vector bundle and the universal Higgs field to \(\mathcal{N}ilp_{0}^{sm,gen}\). Consider the filtration (6.2) of \(\mathcal{E}\) induced by the Higgs field \(\phi\). Then the sheaves \(\mathcal{E}^{i}\) in the filtration are all flat over \(\mathcal{N}ilp_{0}^{sm,gen}\)._
Proof.: To see this consider any connected component \(C\) of \(\mathcal{N}ilp_{0}^{sm,gen}\). Then for any element \(c:\mathrm{Spec}\,\mathbb{k}\longrightarrow C\), we have the following exact sequence.
\[0\longrightarrow\mathsf{K}^{i}_{c}\longrightarrow c^{*}\mathcal{E}\xrightarrow{ (c^{*}\phi)^{i}}c^{*}\mathcal{E}\otimes\omega_{c^{*}\mathcal{X}^{mod}} \longrightarrow\mathsf{CK}^{i}_{c}\longrightarrow 0\]
Here \(\mathsf{K}^{i}_{c}\) and \(\mathsf{CK}^{i}_{c}\) denote the Kernel and Cokernel of the map \((c^{*}\phi)^{i}\), respectively. We also have the following exact sequence.
\[0\longrightarrow\underline{\mathsf{K}}^{i}_{c}\longrightarrow(\pi_{s})_{*}( c^{*}\mathcal{E})\xrightarrow{(\pi_{s})_{*}((c^{*}\phi)^{i})=((\pi_{s})_{*}(c^{*} \phi))^{i}}(\pi_{s})_{*}(c^{*}\mathcal{E}\otimes\omega_{c^{*}\mathcal{X}^{mod} })\longrightarrow\underline{\mathsf{CK}}^{i}_{c}\longrightarrow 0\]
Here \(\underline{\mathsf{K}}^{i}_{c}\) and \(\underline{\mathsf{CK}}^{i}_{c}\) denote the kernel and cokernel, respectively. Notice from Lemma 6.5 that \(\underline{\mathsf{K}}^{i}_{c}=(\pi_{s})_{*}(\mathsf{K}^{i}_{c})\). Since the \(\mathsf{Type}\ (c^{*}\mathcal{X}^{mod},c^{*}\mathcal{E},c^{*}\phi)\) is constant over \(C\), the Hilbert polynomial of \(\underline{\mathsf{K}}^{i}_{c}\) does not depend on \(c\in C\), and hence neither does the Hilbert polynomial of \(\underline{\mathsf{CK}}^{i}_{c}\). Therefore, \(\underline{\mathsf{CK}}^{i}\) is flat over \(C\); consequently \(\underline{\mathsf{K}}^{i}\) is flat over \(C\), and therefore \(\mathsf{K}^{i}\) is flat over \(C\).
**Proposition 6.8**.: _The tangent complex of \(\mathcal{N}ilp_{0}^{red}\) at a point \((X_{s},\mathcal{E},\phi)\in\mathcal{N}ilp_{0}^{sm,gen}\) is given by \(R\Gamma(\mathcal{SC}(\mathcal{E},\phi))\), where \(\mathcal{SC}(\mathcal{E},\phi)\) is the following complex of sheaves on \(X_{s}\)._
\[\mathcal{SC}(\mathcal{E},\phi):=[SC(\mathcal{E},\phi)\xrightarrow{[-,\phi]}SC (\mathcal{E},\phi)\otimes\omega_{X_{s}}] \tag{6.6}\]
_where \(SC(\mathcal{E},\phi)\subseteq\mathcal{E}nd\mathcal{E}\) is the sheaf of local sections \(s\) of \(\mathcal{E}nd\mathcal{E}\) such that \(s(\mathcal{E}^{i})\subseteq\mathcal{E}^{i-1}\) for \(i=1,\ldots,k+1\)._
Proof.: We choose a trivialisation \(X_{s}=\cup_{i\in I}V_{i}\) of the vector bundle \(\mathcal{E}\) and the line bundle \(\omega_{X_{s}}\). Then a first-order infinitesimal deformation (as a Higgs bundle) of \((\mathcal{E},\phi)\) can be described as a pair \((s_{ij},t_{i})\), where \(s_{ij}\in\Gamma(V_{ij},\mathcal{E}nd\mathcal{E})\) and \(t_{i}\in\Gamma(V_{i},\mathcal{E}nd\mathcal{E}\otimes\omega_{X_{s}})\), satisfying \(s_{ij}+s_{jk}=s_{ik}\) and \(t_{i}-t_{j}=[s_{ij},\phi]\) (see [5, Theorem 2.3] and [6, Proposition 3.1.2]).
Let \(\mathrm{Spec}\ \ \mathbb{k}[\epsilon]\longrightarrow\mathcal{N}ilp_{0}^{sm,gen}\) be a map such that the image of the closed point is given by the Higgs bundle \((X_{s},\mathcal{E},\phi)\). Let us denote the corresponding first-order infinitesimal deformation of the nilpotent Higgs bundle \((\mathcal{E},\phi)\) by \((\mathcal{E}[\epsilon],\phi[\epsilon])\). We assume that \((\mathcal{E}[\epsilon],\phi[\epsilon])\) is a nilpotent Higgs bundle. We define \(\mathcal{E}^{i}[\epsilon]:=Ker\ (\phi[\epsilon])^{i}\) for \(i\in[1,k+1]\). Since the induced flag \(\mathcal{E}^{\bullet}\) is flat over \(\mathcal{N}ilp_{0}^{sm,gen}\) (see Lemma 6.7), \(\mathcal{E}^{\bullet}[\epsilon]\) is flat over \(\mathrm{Spec}\ \ \mathbb{k}[\epsilon]\). Since \(\mathcal{E}^{i}[\epsilon]\) is flat over \(\mathrm{Spec}\ \ \mathbb{k}[\epsilon]\), it is an extension of \(\mathcal{E}^{i}\) by \(\mathcal{E}^{i}\).
\[0\longrightarrow\mathcal{E}^{i}\longrightarrow\mathcal{E}^{i}[\epsilon]\longrightarrow\mathcal{E}^{i}\longrightarrow 0 \tag{6.7}\]
Now it is straightforward to check that for \((s_{ij},t_{i})\) to be an infinitesimal deformation of \((\mathcal{E},\phi)\) as a nilpotent Higgs field, it has to satisfy the extra condition that \(s_{ij}(\mathcal{E}^{\bullet})\subseteq\mathcal{E}^{\bullet-1}\), where \(\mathcal{E}^{\bullet}\) is the flag (6.2). This means that \(s_{ij}\in\Gamma(V_{ij},SC(\mathcal{E},\phi))\) for all \(i,j\) and therefore, the Proposition follows.
**Theorem 6.9**.:
1. _The Hitchin map_ \(h:M_{Gie}^{cl}\longrightarrow B\) _is surjective._
2. _The sub-stack_ \(\mathcal{N}ilp^{sm,gen}\) _is relatively isotropic over the spectrum of a discrete valuation ring_ \(S\)_._
3. _The Hitchin map_ \(h:M_{Gie}^{cl}\longrightarrow B\) _is flat._
Proof.: Proof of (1): To prove the surjectivity we use the spectral correspondence [1, Lemma 2.4]. Given any point \(a_{\bullet}:=(a_{1},\ldots,a_{n})\in B\), we consider the function \(s(a_{\bullet}):=t^{n}+a_{1}t^{n-1}+\cdots+a_{n-1}t+a_{n}\) on the total space \(Tot(\omega_{X_{0}})\). Here \(t\) denotes the canonical section of \(f^{*}\omega_{X_{0}}\), where \(f:Tot(\omega_{X_{0}})\longrightarrow X_{0}\) is the projection map. Then the vanishing locus \(V(s(a_{\bullet}))\) defines a closed sub-scheme of \(Tot(\omega_{X_{0}})\). Notice that \(Tot(\omega_{X_{0}})\) is only quasi-projective: it is an open subscheme of \(Z:=\mathbb{P}(\omega_{X_{0}}^{*}\oplus\mathcal{O}_{X_{0}})\). But since \(s(a_{\bullet})\) is a monic polynomial, the closure of \(V(s(a_{\bullet}))\) in \(Z\) is \(V(s(a_{\bullet}))\) itself. In particular, \(V(s(a_{\bullet}))\) is a closed subscheme in \(Z\) such that \(V(s(a_{\bullet}))\cap D_{\infty}=\emptyset\), where \(D_{\infty}:=Z\setminus Tot(\omega_{X_{0}})\). Therefore, by the spectral correspondence, it follows that any rank \(1\) locally free sheaf on \(V(s(a_{\bullet}))\) corresponds to a Higgs bundle \((\mathcal{E},\phi)\) on the nodal curve \(X_{0}\) whose characteristic polynomial is given by \(a_{\bullet}:=(a_{1},\ldots,a_{n})\in B\). Therefore, the Hitchin map is surjective.
Proof of (2): We want to show that \({\cal N}ilp^{sm,gen}\) is relatively isotropic over \(S\). First, we notice that the smooth locus of the generic fibre of \({\cal N}ilp^{red}\longrightarrow S\) is isotropic, and its dimension is equal to \(n^{2}(g-1)\), which is the same as the dimension of the stack of rank \(n\) vector bundles on a smooth projective curve of genus \(g\). Therefore, we have \(\dim\,{\cal N}ilp^{red}_{0}\geq n^{2}(g-1)\). We want to show that \({\cal N}ilp^{red,sm}_{0}\) is an isotropic substack of \(M^{cl}_{Gie,0}\). The tangent complex at a point of \({\cal N}ilp^{red,sm}_{0}\) is given by (6.6). We want to show that the following composite morphism is quasi-isomorphic to 0.
\[R\Gamma({\cal SC}({\cal E},\phi))[1]\otimes R\Gamma({\cal SC}({\cal E},\phi)) [1]\longrightarrow R\Gamma({\cal C}({\cal E},\phi))[1]\otimes R\Gamma({\cal C }({\cal E},\phi))[1]\longrightarrow R\Gamma(\omega_{X_{s}}[1]) \tag{6.8}\]
Notice that the morphism
\[R\Gamma({\cal C}({\cal E},\phi))[1]\otimes R\Gamma({\cal C}({\cal E},\phi))[1 ]\longrightarrow R\Gamma(\omega_{X_{s}}[1])\]
is given by the log-symplectic pairing (Theorem 4.13). If we show that the composite morphism (6.8) is quasi-isomorphic to 0, this will imply that \({\cal N}ilp^{red,sm}_{0}\) is an isotropic substack. To show this, we choose a trivialisation \(X_{s}=\cup_{i\in I}V_{i}\) of the vector bundle \({\cal E}\) and of the line bundle \(\omega_{X_{s}}\). With respect to this cover, the complex \(R\Gamma({\cal SC}({\cal E},\phi))[1]\) is quasi-isomorphic to the following Cech complex
\[\prod SC({\cal E},\phi)(V_{i})\longrightarrow\prod SC({\cal E},\phi)(V_{ij}) \oplus\prod(SC({\cal E},\phi)\otimes\omega)(V_{i})\longrightarrow\prod(SC({ \cal E},\phi)\otimes\omega)(V_{ij})\oplus\prod SC({\cal E},\phi)(V_{ijk}), \tag{6.9}\]
where the term \(\prod SC({\cal E},\phi)(V_{i})\) is in degree \(-1\).
For convenience, let us write (6.8) in the following way.
(6.10)
The arrow in the middle is given by the log-symplectic pairing (4.13).
Notice that the composite map can only be non-zero on three cohomologies, namely in degrees \(-1\), \(0\) and \(1\).
Case 1: the induced map \(H^{0}(\omega^{\flat})\): An element of \(\prod SC(\mathcal{E},\phi)(V_{ij})\oplus\prod(SC(\mathcal{E},\phi)\otimes\omega)(V_{i})\) is given by \(\{(s_{ij},t_{i})\}_{i,j\in I}\), where \(s_{ij}\in\Gamma(V_{ij},\mathcal{E}nd\mathcal{E})\) and \(t_{i}\in\Gamma(V_{i},\mathcal{E}nd\mathcal{E}\otimes\omega_{X_{s}})\) are such that \(s_{ij}(\mathcal{E}^{\bullet})\subseteq\mathcal{E}^{\bullet-1}\), where \(\mathcal{E}^{\bullet}\) is the flag (6.2). Then from (6.8) it follows that the pairing of two elements \(\{(s_{ij},t_{i})\}\) and \(\{(s^{\prime}_{ij},t^{\prime}_{i})\}\) is given by \(\{T_{ij}:=Trace(s_{ij}\circ t^{\prime}_{j}-t_{i}\circ s^{\prime}_{ij})\in\omega_{X_{s}}(V_{ij})\}_{i,j\in I}\).
Claim: The section \(T_{ij}\) vanishes for all \(i,j\in I\).
Proof of the claim: We first observe that, we can choose the trivialisation \(X_{s}=\cup_{i\in I}V_{i}\) of \(\mathcal{E}\) and \(\omega_{X_{s}}\) in such a way that
1. \(V_{i}\) contains at most one node for each \(i\in I\),
2. \(V_{i}\)'s are connected.
We notice that \(V_{ij}\)'s also contain at most one node. Suppose \(V_{ij}\) contains a node \(p\). Then \(V_{ij}=V_{ij}^{1}\coprod V_{ij}^{2}\), the union of two smooth irreducible components of \(V_{ij}\). We denote by \(V_{ij}^{o}:=V_{ij}\setminus p\), \(V_{ij}^{1,o}:=V_{ij}^{1}\setminus p\) and \(V_{ij}^{2,o}:=V_{ij}^{2}\setminus p\). We consider the restriction of the section \(T_{ij}\) to the two open subsets \(V_{ij}^{1,o}\) and \(V_{ij}^{2,o}\). Notice that the restrictions of the flag (6.2) to these two open subsets are all sub-bundles. We notice that \(SC(\mathcal{E}|_{V_{ij}^{1,o}},\phi)\) is the nilpotent part of the parabolic sub-algebra of \(\mathcal{E}nd(E|_{V_{ij}^{1,o}})\) given by the natural flag of sub-bundles induced by \(\phi\). Therefore it is clear that the trace pairing is \(0\), i.e., \((T_{ij})|_{V_{ij}^{1,o}}=0\). Similarly, \((T_{ij})|_{V_{ij}^{2,o}}=0\). Since \(V_{ij}^{o}\) is a dense open subset of \(V_{ij}\), by continuity, \(T_{ij}=0\).
If a \(V_{ij}\) does not contain any node then the proof is similar.
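As a toy illustration of this vanishing (our own example, in rank \(2\) with the one-step flag \(\mathcal{E}^{1}\subset\mathcal{E}^{2}=\mathcal{E}\)): in a local frame adapted to the flag, sections of \(SC(\mathcal{E},\phi)\) are strictly triangular, and the composite of two such sections shifts the flag by two steps, so all of its diagonal entries vanish:

\[s=\begin{pmatrix}0&\alpha\\ 0&0\end{pmatrix},\qquad t^{\prime}=\begin{pmatrix}0&\beta\\ 0&0\end{pmatrix}\quad\Longrightarrow\quad s\circ t^{\prime}=0,\qquad Trace(s\circ t^{\prime})=0.\]

The same block computation gives the vanishing of the trace pairing in any rank.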
Case 2: the induced map \(H^{-1}(\omega^{\flat})\): In this case, we look at the pairing of \(\{t_{i}\in SC(\mathcal{E},\phi)\otimes\omega_{X_{s}}\}\) and \(\{s_{ij}\in SC(\mathcal{E},\phi)\}\). The pairing is given by \(\{Trace(t_{i}\circ s_{ij})\}\). By similar arguments as above, we can show that \(Trace(t_{i}\circ s_{ij})=0\ \ \forall i,j\in I\).
Case 3: the induced map \(H^{1}(\omega^{\flat})\): In this case, we look at the pairing of \(\{t_{ij}\in(SC(\mathcal{E},\phi)\otimes\omega_{X_{s}})(V_{ij})\}\) and \(\{s_{j}\in SC(\mathcal{E},\phi)(V_{j})\}\). The pairing is given by \(\{Trace(t_{ij}\circ s_{j})\}\). By similar arguments as above, we can show that \(Trace(t_{ij}\circ s_{j})=0\ \ \forall i,j\in I\).
Therefore, we conclude that the smooth locus of this particular component is isotropic.
Proof of (3): From (2), it follows that \(\dim\,\mathcal{N}ilp_{0}^{red}=n^{2}(g-1)\). Now from Theorem 8.10, it follows that the dimension of \(M_{Gie}^{cl}\) is equal to \(2n^{2}(g-1)+2\). Once we know the dimension, it is easy to see that \(M_{Gie}^{cl}\) is a local complete intersection (Remark 8.12). Since \(M_{Gie}^{cl}\) is a local complete intersection and all the fibres of the Hitchin map \(h:M_{Gie}^{cl}\longrightarrow B\) are of dimension equal to \(n^{2}(g-1)\), the Hitchin map is flat by the miracle flatness criterion.
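For the reader's convenience, here is the dimension bookkeeping behind the last step (our own summary): the miracle flatness criterion states that a morphism \(f:X\longrightarrow Y\) with regular target and Cohen-Macaulay source, all of whose fibres have dimension \(\dim X-\dim Y\), is flat. In our situation \(M_{Gie}^{cl}\) is Cohen-Macaulay because it is a local complete intersection, \(B\) is smooth, and

\[\dim M_{Gie}^{cl}-\dim B=\big{(}2n^{2}(g-1)+2\big{)}-\big{(}n^{2}(g-1)+2\big{)}=n^{2}(g-1),\]

which is the common dimension of the fibres of \(h\); here \(\dim B=n^{2}(g-1)+2\) follows from \(h^{0}(X_{0},\omega_{X_{0}})=g\), \(h^{0}(X_{0},\omega_{X_{0}}^{i})=(2i-1)(g-1)\) for \(i\geq 2\), and \(\dim S=1\).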
## 7 On the relative logarithmic Dolbeault moduli over \(\overline{\mathcal{M}_{g}}\)
In this section, we will construct the Gieseker-like derived moduli stack of Higgs bundles over the moduli stack of stable curves of genus \(g\geq 2\). We will also show that there is a \(0\)-shifted log-symplectic form on this moduli stack relative to the moduli stack of stable curves. Let us begin with a few definitions.
**Definition 7.1**.: (**The moduli stack of stable curves \(\overline{\mathcal{M}_{g}}\)**) A projective variety \(C\) of dim \(1\) is called a stable curve if it is either smooth or has nodal singularities, and the automorphism group \(Aut(C)\) is a finite group.
The moduli stack of stable curves is a functor.
\[\overline{\mathcal{M}_{g}}:Sch/\Bbbk\longrightarrow Groupoids\]
\[T\mapsto\left\{\begin{array}{l}\mbox{families of stable curves}\\ \mbox{of genus $g$ over $T$}\end{array}\right\} \tag{7.1}\]
Here, morphisms in the groupoid \(\overline{\mathcal{M}_{g}}(T)\) are just isomorphisms of stable curves over \(T\).
**Remark 7.2**.: It is well known that if \(g\geq 2\), the functor \(\overline{\mathcal{M}_{g}}\) is a smooth Deligne-Mumford stack. The locus of nodal curves forms a normal crossing divisor in \(\overline{\mathcal{M}_{g}}\). Therefore, it induces a log structure on \(\overline{\mathcal{M}_{g}}\). From [18, Lemma 4.4], it follows that
the universal curve \(\mathcal{C}_{g}\) over \(\overline{\mathcal{M}}_{g}\) also induces a log structure on it and these two log structures are isomorphic.
**Definition 7.3**.: (**The moduli stack of semi-stable curves \(\mathcal{M}_{g}^{ss}\)**) A connected projective variety \(C\) of dim 1 is called a semi-stable curve if the following properties are satisfied
1. it is either smooth or has nodal singularities,
2. every rational irreducible component \(R\) is smooth, and
3. the intersection \(R\cdot\overline{C\setminus R}=2\).
The moduli stack of semi-stable curves is a functor
\[\mathcal{M}_{g}^{ss}:Sch/\Bbbk\longrightarrow Groupoids\]
\[T\mapsto\begin{Bmatrix}\text{families of semi-stable curves}\\ \text{of genus $g$ over $T$}\end{Bmatrix} \tag{7.2}\]
Morphisms in the groupoid \(\mathcal{M}_{g}^{ss}(T)\) are just isomorphisms of families of semi-stable curves over \(T\).
**Remark 7.4**.: It is well known that if \(g\geq 2\), the functor \(\mathcal{M}_{g}^{ss}\) is a smooth Artin stack [34, 0E72, Lemma 21.5].
### Log structures on \(\mathcal{M}_{g}^{ss}\)
The locus of singular curves in \(\mathcal{M}_{g}^{ss}\) forms a normal crossing divisor, and hence induces a log structure on \(\mathcal{M}_{g}^{ss}\). We denote this divisor by \(\partial\mathcal{M}_{g}^{ss}\). Let us denote the universal curve over \(\mathcal{M}_{g}^{ss}\) by \(\mathcal{D}_{g}\). There is the "stabilization morphism" \(\pi:\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\) such that the induced morphism \(\mathcal{D}_{g}\longrightarrow\pi^{*}\mathcal{C}_{g}\) is the universal modification morphism over \(\mathcal{M}_{g}^{ss}\).
#### 7.1.1 Versal deformation space of \(\mathcal{M}_{g}^{ss}\) and the versal picture of the map \(\pi:\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\)
Let \(\mathcal{X}\) be a stable curve of genus \(g\) and \(\mathcal{X}^{mod}\) be a semi-stable curve whose stable model is \(\mathcal{X}\). Let \(\{c_{i}\}_{i=1}^{l}\) be the nodes of the curve \(\mathcal{X}\). Let \(\{d_{ij}\}_{j=1}^{\iota_{i}}\) be the nodes of
\(\mathcal{X}^{mod}\) over the node \(c_{i}\) for every \(i=1,\ldots,l\). From [30, Proposition 3.3.2], it follows that
1. there exists a versal deformation space of the nodal curve \(\mathcal{X}\) which is formally smooth to \(\Bbbk[|z_{1},\ldots,z_{l}|]\), i.e., the versal space is isomorphic to \(\Bbbk[|z_{1},\ldots,z_{l}|][|z_{l+1},\ldots,z_{N}|]\), where \(N:=3g-3\) (here \(g\) is the genus of the curves) and \(z_{i}\) is the equation of the \(i\)-th node of \(\mathcal{X}\) for \(i=1,\ldots,l\), and
2. there exists a versal deformation space of the nodal curve \(\mathcal{X}^{mod}\) which is isomorphic to \(\Bbbk[|\{\{z_{ij}\}_{i=1}^{l}\}_{j=1}^{\iota_{i}}|][|z_{l+1},\ldots,z_{N}|]\), where \(z_{ij}=0\) is the equation of the node \(d_{ij}\) for \(i=1,\ldots,l\) and \(j=1,\ldots,\iota_{i}\).
3. Moreover, there is a morphism between the versal deformation spaces, which is as follows. \[\Bbbk[|z_{1},\ldots,z_{l}|][|z_{l+1},\ldots,z_{N}|]\longrightarrow \Bbbk[|\{\{z_{ij}\}_{i=1}^{l}\}_{j=1}^{\iota_{i}}|][|z_{l+1},\ldots,z_{N}|]\] \[z_{i}\mapsto z_{i1}\cdots z_{i\iota_{i}} \forall i=1,\ldots,l\quad\text{,and}\] \[z_{j}\mapsto z_{j} \forall j=l+1,\ldots,N\]
**Proposition 7.5**.:
1. _The curves_ \(\mathcal{C}_{g}\) _and_ \(\mathcal{D}_{g}\) _induce locally-free log structures on_ \(\overline{\mathcal{M}_{g}}\) _and_ \(\mathcal{M}_{g}^{ss}\)_, respectively. These log structures coincide with the log structures induced by the boundary divisors_ \(\partial\overline{\mathcal{M}_{g}}\) _and_ \(\partial\mathcal{M}_{g}^{ss}\)_, respectively._
2. _The morphism_ \(\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\) _is a log-smooth morphism of locally free log structures._
Proof.: The proof follows from the following general lemma.
**Lemma 7.6**.: _Let \(T\) be a scheme and \(\mathcal{X}\longrightarrow T\) be a family of stable curves. Let \(\mathcal{X}^{mod}\longrightarrow\mathcal{X}\) be a modification over \(T\). Let us denote the log structures on \(T\) induced by \(\mathcal{X}\) and \(\mathcal{X}^{mod}\) by \((T,P,\alpha:P\longrightarrow\mathcal{O}_{T})\) and \((T,Q,\beta:Q\longrightarrow\mathcal{O}_{T})\), respectively. Here \(P\) and \(Q\) are the etale sheaves of monoids and \(\alpha\) and \(\beta\) are maps of sheaves such that \(\alpha^{-1}(\mathcal{O}_{T}^{\times})\cong P^{\times}\) and \(\beta^{-1}(\mathcal{O}_{T}^{\times})\cong Q^{\times}\). Then there is a natural map of logarithmic schemes \((T,Q,\beta)\longrightarrow(T,P,\alpha)\)._
Proof.: From [18, Lemma 4.4], it follows that the curves \(\mathcal{X}\) and \(\mathcal{X}^{mod}\) induce two log structures on \(T\). We denote these two log structures by \((T,P,\alpha:P\longrightarrow\mathcal{O}_{T})\)
and \((T,Q,\beta:Q\longrightarrow{\cal O}_{T})\), respectively. We want to show that there is a morphism between these two log structures.
Let \(t\in T\) be a closed point. Let us denote the maximal ideal of \({\cal O}_{T,t}^{h}\) by \(m_{T,t}\). Let \(\{c_{i}\}_{i=1}^{l}\) be the nodes of the curve \({\cal X}_{t}\). Let \(\{d_{ij}\}_{j=1}^{\iota_{i}}\) be the nodes of \({\cal X}_{t}^{mod}\) over the node \(c_{i}\) for every \(i=1,\ldots,l\). The Henselian local ring of \({\cal X}\) at the node \(c_{i}\) is isomorphic to the Henselisation of \(\frac{{\cal O}_{T,t}^{h}[|x,y|]}{xy-t_{i}}\) at the ideal \((x,y,m_{T,t})\), for some \(t_{i}\in m_{T,t}\) for every \(i=1,\ldots,l\). Here \((x,y,m_{T,t})\) denotes the ideal generated by \(x\), \(y\) and \(m_{T,t}\). Similarly, the Henselian local ring of \({\cal X}^{mod}\) at the node \(d_{ij}\) is isomorphic to the Henselisation of \(\frac{{\cal O}_{T,t}^{h}[|x,y|]}{xy-t_{ij}}\) at the ideal \((x,y,m_{T,t})\), for some \(t_{ij}\in m_{T,t}\) for every \(i=1,\ldots,l\) and \(j=1,\ldots,\iota_{i}\). From [18, Lemma 2.1 and 2.2], it follows that the elements \(t_{i}\)'s and \(t_{ij}\)'s are uniquely determined in \(\widehat{\cal O}_{T,t}\) and in \({\cal O}_{T,t}^{h}\).
Claim:
\[t_{i}=t_{i1}\cdots t_{i\iota_{i}}\hskip 28.452756pt\forall i=1,\ldots,l \tag{7.3}\]
Proof of the claim: Since \(t_{i}\)'s and \(t_{ij}\)'s are uniquely determined in \(\widehat{\cal O}_{T,t}\) and in \({\cal O}_{T,t}^{h}\), it is enough to prove the above identities in \(\widehat{\cal O}_{T,t}\).
By the versality property of the family (described in subsubsection 7.1.1), we have a commutative diagram
(7.4)
given by the family of curves \({\cal X}\) and \({\cal X}^{mod}\) over \(T\). By the uniqueness of the equations of the nodes in \(\widehat{\cal O}_{T,t}\), we have that \(t_{i}=z_{i}\) and \(t_{ij}=z_{ij}\) for every \(i=1,\ldots,l\) and \(j=1,\ldots,\iota_{i}\). Since \(z_{i}=z_{i1}\cdots z_{i\iota_{i}}\) for every \(i=1,\ldots,l\), we have \(t_{i}=t_{i1}\cdots t_{i\iota_{i}}\ \ \forall i=1,\ldots,l\) in \(\widehat{\cal O}_{T,t}\). This completes the proof of the claim.
Proof of (1): Again since the elements \(t_{i}\)'s and \(t_{ij}\)'s are uniquely determined in \(\widehat{\cal O}_{T,t}\) and in \({\cal O}_{T,t}^{h}\), using the versal maps \(\Bbbk[|\{\{z_{ij}\}_{i=1}^{l}\}_{j=1}^{\iota_{i}}|][|z_{l+1},\ldots,z_{N}|]\longrightarrow\widehat{\cal O}_{T,t}\) and \(\Bbbk[|z_{1},\ldots,z_{l}|][|z_{l+1},\ldots,z_{N}|]\longrightarrow\widehat{\cal O}_{T,t}\) one can easily check (1).
Proof of (2): Now the morphism of log structures is induced by the following
diagram of pre-log structures.
(7.5)
It is straightforward to check that the induced morphism is log-smooth.
#### Relative log-cotangent complex of the map \(\mathcal{M}_{g}^{ss}\longrightarrow\overline{\mathcal{M}_{g}}\)
**Proposition 7.7**.: _The relative logarithmic cotangent complex \(\mathbb{L}_{f}^{\,log}\cong 0\)._
Proof.: We have a commutative diagram (not a cartesian square)
(7.6)
As discussed above, the family of semi-stable curves induces log structures on the base and the family of curves, making the projection morphism a log-smooth morphism. Therefore, the maps \(\tilde{p}\) and \(p\) are log-smooth morphisms with the induced log structures. The space of infinitesimal first-order log-deformations of the curves is isomorphic to \(H^{1}(\mathcal{C}_{g},\omega^{\vee}_{\mathcal{C}_{g}/\overline{\mathcal{M}_{g}}})\) and \(H^{1}(\mathcal{D}_{g},\omega^{\vee}_{\mathcal{D}_{g}/\mathcal{M}_{g}^{ss}})\), respectively. Therefore the log-tangent complex of \(\overline{\mathcal{M}_{g}}\) is isomorphic to \(Rp_{*}(\omega^{\vee}_{\mathcal{C}_{g}/\overline{\mathcal{M}_{g}}})[1]\) and the log-tangent complex of \(\mathcal{M}_{g}^{ss}\) is isomorphic to \(R\tilde{p}_{*}(\omega^{\vee}_{\mathcal{D}_{g}/\mathcal{M}_{g}^{ss}})[1]\). Now notice that
\[Rf_{*}\circ R\tilde{p}_{*}(\omega^{\vee}_{\mathcal{D}_{g}/\mathcal{M}_{g}^{ss} })[1]=Rp_{*}R\tilde{f}_{*}(\omega^{\vee}_{\mathcal{D}_{g}/\mathcal{M}_{g}^{ss} })[1]\cong Rp_{*}(\omega^{\vee}_{\mathcal{C}_{g}/\overline{\mathcal{M}_{g}}}) [1].\]
Therefore, we conclude that the natural map between the two log-tangent complexes (and the map between log-cotangent complexes) is an isomorphism. Using
the distinguished triangle of log-cotangent complexes of a map, we have the following triangles.
\[f^{*}\mathbb{L}_{\overline{\mathcal{M}_{g}}}^{log}\longrightarrow \mathbb{L}_{\mathcal{M}_{g}^{ss}}^{log}\longrightarrow\mathbb{L}_{f}^{log} \longrightarrow f^{*}\mathbb{L}_{\overline{\mathcal{M}_{g}}}^{log}[1] \tag{7.7}\] \[\mathbb{L}_{\mathcal{M}_{g}^{ss}}^{log}\longrightarrow\mathbb{L}_ {f}^{log}\longrightarrow f^{*}\mathbb{L}_{\overline{\mathcal{M}_{g}}}^{log}[1] \longrightarrow\mathbb{L}_{\mathcal{M}_{g}^{ss}}^{log}[1] \tag{7.8}\]
Since the map \(f^{*}\mathbb{L}_{\overline{\mathcal{M}_{g}}}^{log}\longrightarrow\mathbb{L}_{\mathcal{M}_{g}^{ss}}^{log}\) is an equivalence, we conclude that \(\mathbb{L}_{f}^{log}\cong 0\).
### Relative logarithmic Dolbeault shape and shifted symplectic forms
Let \(\mathcal{D}_{g}^{lDol}\) denote the relative logarithmic Dolbeault moduli stack for the family of curves \(\mathcal{D}_{g}\longrightarrow\mathcal{M}_{g}^{ss}\). Then \(\mathcal{M}_{g}^{lDol}:=\mathsf{Map}_{\mathcal{M}_{g}^{ss}}(\mathcal{D}_{g}^ {lDol},BGL_{n}\times\mathcal{M}_{g}^{ss})\) is the relative derived moduli stack of Gieseker-Higgs bundles viewed over \(\overline{\mathcal{M}_{g}}\).
**Proposition 7.8**.: _The morphism \(\mathcal{M}_{g}^{lDol}\longrightarrow\mathcal{M}_{g}^{ss}\) is a quasi-smooth morphism of derived Artin stacks._
Proof.: Same as the proof of Proposition 4.12.
We equip \(\mathcal{M}_{g}^{lDol}\) with the locally free log-structure pulled back from \(\mathcal{M}_{g}^{ss}\) via the morphism \(\mathcal{M}_{g}^{lDol}\longrightarrow\mathcal{M}_{g}^{ss}\).
**Theorem 7.9**.: _There is a \(0\)-shifted relative log-symplectic form on \(\mathcal{M}_{g}^{lDol}\) (relative to the moduli stack of stable curves \(\overline{\mathcal{M}_{g}}\))._
Proof.: Consider the composite morphism \(\mathcal{M}_{g}^{lDol}\xrightarrow{\pi}\mathcal{M}_{g}^{ss}\xrightarrow{f} \overline{\mathcal{M}_{g}}\). We have a distinguished triangle
\[\pi^{*}\mathbb{L}_{\mathcal{M}_{g}^{ss}/\overline{\mathcal{M}_{g}}}^{log}\longrightarrow\mathbb{L}_{\mathcal{M}_{g}^{lDol}/\overline{\mathcal{M}_{g}}}^{log}\longrightarrow\mathbb{L}_{\mathcal{M}_{g}^{lDol}/\mathcal{M}_{g}^{ss}}^{log}\longrightarrow\pi^{*}\mathbb{L}_{\mathcal{M}_{g}^{ss}/\overline{\mathcal{M}_{g}}}^{log}[1]. \tag{7.9}\]
But since \(\mathbb{L}_{\mathcal{M}_{g}^{ss}/\overline{\mathcal{M}_{g}}}^{log}\cong 0\) (Proposition 7.7), the relative log-cotangent complex of the composite morphism is isomorphic to the relative log-cotangent complex of the morphism \(\pi\). But notice that the log structure of \(\mathcal{M}_{g}^{lDol}\) is pulled back from the log structure of \(\mathcal{M}_{g}^{ss}\) via the map \(\pi\). Therefore the relative log-cotangent complex of the morphism \(\pi\) is isomorphic to the relative cotangent complex of the
morphism \(\pi\). Now, since the relative logarithmic Dolbeault stack \(\mathcal{D}_{g}^{lDol}\) over \(\mathcal{M}_{g}^{ss}\) is \(\mathcal{O}\)-compact and \(\mathcal{O}\)-oriented (Theorem 4.11), \(\mathcal{M}_{g}^{lDol}\) has a \(0\)-shifted relative symplectic form over \(\mathcal{M}_{g}^{ss}\), which is a \(0\)-shifted relative log-symplectic form viewed over the moduli stack of stable curves \(\overline{\mathcal{M}_{g}}\).
## 8 Appendix: Classical Artin stack of Gieseker-Higgs bundles and its local properties
In this section we will define and construct the relative classical Artin stack of Gieseker-Higgs bundles and study its local properties. The main results of the appendix are Proposition 8.9 and Theorem 8.10. In the first we prove that the stack of Gieseker vector bundles is an almost very good stack. We use this in Theorem 8.10 to show that the classical stack of Gieseker-Higgs bundles is an irreducible local complete intersection.
### The stacks of Gieseker vector bundles and Gieseker Higgs bundles
**Definition 8.1**.: **(Gieseker vector bundle)** A vector bundle \(\mathcal{E}\) of rank \(n\) on \(X_{r}\) with \(r\geq 1\) is called a Gieseker vector bundle if
1. \(\mathcal{E}|_{R[r]}\) is a strictly standard vector bundle on \(R[r]\subset X_{r}\), i.e., for each \(i=1,\ldots,r\), \(\exists\) non-negative integers \(a_{i}\) and \(b_{i}\) such that \(\mathcal{E}|_{R[r]_{i}}\cong\mathcal{O}^{\oplus a_{i}}\oplus\mathcal{O}(1)^{ \oplus b_{i}}\), and
2. the direct image \(\pi_{r*}(\mathcal{E})\) is a torsion-free \(\mathcal{O}_{X_{0}}\)-module.
Any vector bundle on \(X_{0}\) is called a Gieseker vector bundle. In the literature, a Gieseker vector bundle is also called an admissible vector bundle.
A Gieseker vector bundle \((X_{r},\mathcal{E})\) is called a stable Gieseker vector bundle if \(\pi_{r*}\mathcal{E}\) is a stable torsion-free sheaf on the irreducible nodal curve \(X_{0}\), where \(\pi_{r}:X_{r}\longrightarrow X_{0}\) is the natural contraction map.
A (stable) Gieseker vector bundle on a modification \(\mathcal{X}_{T}^{mod}\) is a vector bundle such that its restriction to each \((\mathcal{X}_{T}^{mod})_{t}\) is a (stable) Gieseker vector bundle.
**Remark 8.2**.: A Gieseker vector bundle can also be defined as a vector bundle \((X_{r},\mathcal{E})\) on a Gieseker curve \(X_{r}\) satisfying the following two conditions
1. \(\mathcal{E}|_{R[r]}\) is a globally generated vector bundle on \(R[r]\),
2. the direct image \(\pi_{r*}(\mathcal{E})\) is a torsion-free \(\mathcal{O}_{X_{0}}\)-module.
**Definition 8.3**.: **(Gieseker-Higgs bundle)** A Gieseker-Higgs bundle on \(\mathcal{X}_{T}^{mod}\) is a pair \((\mathcal{E}_{T},\phi_{T})\), where \(\mathcal{E}_{T}\) is a vector bundle on \(\mathcal{X}_{T}^{mod}\), and \(\phi_{T}:\mathcal{E}_{T}\longrightarrow\mathcal{E}_{T}\otimes\omega_{ \mathcal{X}_{T}^{mod}/T}\) is an \(\mathcal{O}_{\mathcal{X}_{T}^{mod}}\) -module homomorphism satisfying the following
1. \(\mathcal{E}_{T}\) is a Gieseker vector bundle on \(\mathcal{X}_{T}^{mod}\),
2. for each closed point \(t\in T\) over \(\eta_{0}\in S\), the direct image \((\pi_{t})_{*}(\mathcal{E}_{t})\) is a torsion-free sheaf on \(X_{0}\) and \((\pi_{t})_{*}\phi_{t}:(\pi_{t})_{*}(\mathcal{E}_{t})\longrightarrow(\pi_{t})_ {*}(\mathcal{E}_{t})\otimes\omega_{X_{0}}\) is an \(\mathcal{O}_{X_{0}}\)-module homomorphism. We refer to such a pair \(((\pi_{t})_{*}(\mathcal{E}_{t}),(\pi_{t})_{*}\phi_{t})\) as a torsion-free Higgs pair on the nodal curve \(X_{0}\).
**Remark 8.4**.: A Gieseker-Higgs bundle can also be defined as a Higgs bundle \((X_{r},\mathcal{E},\phi)\) on a Gieseker curve \(X_{r}\) satisfying the following two conditions
1. \(\mathcal{E}|_{R[r]}\) is a globally generated vector bundle on \(R[r]\),
2. the direct image \((\pi_{r})_{*}(\mathcal{E})\) is a torsion-free \(\mathcal{O}_{X_{0}}\)-module.
**Remark 8.5**.: From [21, Definition-Notation 1, Lemma 2, and Proposition 5], it follows that for the moduli problem of vector bundles (Higgs bundles) of rank \(n\), we have to consider Gieseker curves \(X_{r}\), where \(r=0,\dots,n\).
We fix the rank and degree to be \(n\) and \(d\), respectively.
#### 8.1.1 Stack of torsion-free Hitchin pairs
We define the moduli stack of torsion-free Hitchin pairs
\[T\mapsto\left\{\begin{aligned} &\text{Families of torsion-free Hitchin pairs}\\ &(\mathcal{F},\phi:\mathcal{F}\longrightarrow\mathcal{F}\otimes\omega_{\mathcal{X}/S})\\ &\text{of rank $n$ and degree $d$ over the original}\\ &\text{family of curves }\ \mathcal{X}/S\end{aligned}\right\} \tag{8.1}\]
It is an Artin stack. For the construction of an atlas for \(TFH({\cal X}/S)\), we refer to [1, 5.0.7. The total family construction.]. Following the notation from [1], we denote it by \(\coprod_{m\geq m_{0}}R_{S}^{\Lambda,m}\). The superscript "\(\Lambda\)" is because for the construction of the Quot scheme of torsion-free Hitchin pairs one views them as \(\Lambda\)-modules, where \(\Lambda=\) Sym \((\omega^{\vee}_{{\cal X}/S})\)[31]. We denote the stack of families of torsion-free sheaves over \({\cal X}/S\) of rank \(n\) and degree \(d\) by \(TF({\cal X}/S)\). It is a reduced and irreducible Artin stack.
#### 8.1.2 Classical Artin Stack of Gieseker-Higgs bundles
Given any derived Artin stack \(F:{\sf cdga}_{\Bbbk}\longrightarrow\mathbb{S}\) one can define a classical Artin stack by considering the composition functor \(F^{cl}:{\sf alg}_{\Bbbk}\longrightarrow{\sf cdga}_{\Bbbk}\longrightarrow\mathbb{S}\). We call \(F^{cl}\) the underlying classical Artin stack of \(F\). Following this notation, we denote the underlying classical Artin stack of the derived stack \(M\) of Higgs bundles over the family of curves \({\cal X}_{\mathfrak{M}}/\mathfrak{M}\) by \(M^{cl}\).
Let us denote by \(Coh({\cal X}/S)\) the Artin stack of coherent \({\cal O}_{\cal X}\)-modules which are flat over \(S\). There is a natural map \(\theta:M^{cl}\longrightarrow Coh({\cal X}/S)\) which is given by the pushforward of the underlying bundle \(\pi_{*}{\cal E}\), where \(\pi:X_{k}\longrightarrow X_{0}\) is the modification morphism. Consider the open sub-stack \(TF({\cal X}/S)\subset Coh({\cal X}/S)\) consisting of torsion-free sheaves. Therefore the stack \(\theta^{-1}(TF({\cal X}/S))\) is open in \(M^{cl}\). Let us consider the natural map \(\pi^{*}\pi_{*}{\cal E}\longrightarrow{\cal E}\) over the universal curve \({\cal X}^{mod}\) over \(M^{cl}\). Consider the sub-stack of \(M^{cl}\) where this map is surjective; it is again an open sub-stack, which we denote by \(M^{gg}\) (the superscript "gg" stands for globally generated). Then we define the open sub-stack \(M^{cl}_{Gie}:=\theta^{-1}(TF({\cal X}/S))\cap M^{gg}\); it consists of Gieseker-Higgs bundles of rank \(n\) and degree \(d\). There is a natural map \(M^{cl}_{Gie}\longrightarrow TFH({\cal X}/S)\) which is given by \(({\cal X}^{mod},{\cal E},\phi)\mapsto(\pi_{*}{\cal E},\pi_{*}\phi)\), where \(\pi:{\cal X}^{mod}\longrightarrow{\cal X}\) is the modification map.
We denote by \(M^{cl}_{Gie}\) the underlying classical stack of Gieseker-Higgs bundles of rank \(n\) and degree \(d\). We denote by \(M^{cl}_{Gie,0}\) the closed fibre of the map \(M^{cl}_{Gie}\longrightarrow S\). Now we recall the construction of an atlas for the stack \(M^{cl}_{Gie}\) as well as the classical stack of Gieseker vector bundles. Let us denote by \(N_{Gie}\) the classical stack of Gieseker vector bundles.
#### 8.1.3 Construction of an atlas for \(N_{Gie}\)
Let \({\cal O}_{{\cal X}/S}(1)\) be a relatively ample line bundle for the family of curves \({\cal X}/S\). The set of all flat families of stable torsion-free sheaves (Higgs pairs) of degree \(d\) and rank \(n\) over \({\cal X}\) forms a bounded family. Therefore we can choose a large integer \(m_{0}\) such that given any family of stable torsion-free Higgs pairs \(({\cal F}_{S},\phi_{S})\), the sheaf \({\cal F}_{s}\otimes{\cal O}_{{\cal X}_{s}}(m_{0})\) is generated by global sections and \(H^{1}({\cal X}_{s},{\cal F}_{s}\otimes{\cal O}_{{\cal X}_{s}}(m_{0}))=0\) for every geometric point \(s\in S\). Set \(N(m_{0}):=\dim H^{0}({\cal X}_{s},{\cal F}_{s}\otimes{\cal O}_{{\cal X}_{s}}(m_{0}))\) for any geometric point \(s\in S\). We denote by \(Grass(N(m_{0}),n)\) the Grassmannian of \(n\)-dimensional quotient vector spaces of \(\mathbb{k}^{N(m_{0})}\).
**Definition 8.6**.: Let \({\cal G}_{S}^{m_{0}}:Sch/S\longrightarrow Sets\) be the functor defined as follows:
\[{\cal G}_{S}^{m_{0}}(T)=\{(\Delta_{T},V_{T})\}, \tag{8.2}\]
where
\[\Delta_{T}\subset{\cal X}\times_{S}T\times Grass(N(m_{0}),n) \tag{8.3}\]
is a closed subscheme, and \(V_{T}\) is a vector bundle on \(\Delta_{T}\) such that
1. the projection \(j:\Delta_{T}\longrightarrow T\times Grass(N(m_{0}),n)\) is a closed immersion,
2. the projection \(\Delta_{T}\longrightarrow{\cal X}\times_{S}T\) is a modification,
3. the projection \(p_{T}:\Delta_{T}\longrightarrow T\) is a flat family of Gieseker curves,
4. Let \({\cal V}\) be the tautological quotient bundle of rank \(n\) on \(Grass(N(m_{0}),n)\) and \({\cal V}_{T}\) its pullback to \(T\times Grass(N(m_{0}),n)\). Then \[V_{T}:=j^{*}({\cal V}_{T})\] (8.4) is a Gieseker vector bundle on the modification \(\Delta_{T}\) of rank \(n\) and degree \(d(m_{0})^{\prime}:=N(m_{0})+n(g-1)\).
5. for each \(t\in T\), the quotient \({\cal O}_{\Delta_{t}}^{N(m_{0})}\longrightarrow V_{t}\) induces an isomorphism \[H^{0}(\Delta_{t},{\cal O}_{\Delta_{t}}^{N(m_{0})})\cong H^{0}(\Delta_{t},V_{t})\] (8.5) and \(H^{1}(\Delta_{t},V_{t})=0\).
We denote by \(P(m_{0})\) the Hilbert polynomial of the closed subscheme \(\Delta_{s}\) of \(\mathcal{X}_{s}\times Grass(N(m_{0}),n)\) for any geometric point \(s\in S\) with respect to the polarisation \(\mathcal{O}_{\mathcal{X}_{s}}(1)\boxtimes\mathcal{O}_{ Grass(N(m_{0}),n)}(1)\), where \(\mathcal{O}_{ Grass(N(m_{0}),n)}(1)\) is the line bundle det \(\mathcal{V}\).
It is shown in [21, Proposition 8] that the functor \(\mathcal{G}_{S}^{m_{0}}\) is represented by a \(PGL(N(m_{0}))\)-invariant open subscheme \(\mathcal{Y}_{S}^{m_{0}}\) of the Hilbert scheme \(\mathcal{H}_{S}:=Hilb^{P(m_{0})}(\mathcal{X}\times Grass(N(m_{0}),n))\).
Finally, the disjoint union \(\mathcal{Y}_{S}:=(\coprod_{{}_{N(m),m\geq m_{0}}}\mathcal{Y}_{S}^{m})\longrightarrow N _{Gie}\) is an atlas. The varieties \(\mathcal{Y}_{S}^{m}\) are all smooth varieties and the projection map to the stack is also smooth.
#### 8.1.4 Construction of an atlas for \(M_{Gie}^{cl}\)
**Definition 8.7**.: We define a functor
\[\mathcal{G}_{S}^{H,m}:Sch/\mathcal{Y}_{S}\longrightarrow Groups \tag{8.6}\]
which maps
\[T\longrightarrow H^{0}(T,(p_{T})_{*}(\mathcal{E}nd\ \ \mathcal{V}_{T}\otimes \omega_{\Delta_{T}/T})),\]
where \(p_{T}:\Delta_{T}:=\Delta_{{}_{\mathcal{Y}_{S}}}\times_{{}_{\mathcal{Y}_{S}}}T \longrightarrow T\) is the projection, and \(\omega_{\Delta_{T}/T}\) denotes the relative dualising sheaf of the family of curves \(p_{T}\).
Since \(\mathcal{Y}_{S}^{m}\) is a reduced scheme, the functor \(\mathcal{G}_{S}^{H,m}\) is representable, i.e., there exists a linear \(\mathcal{Y}_{S}^{m}\)-scheme \(\mathcal{Y}_{S}^{H,m}\) which represents it. For an \(S\)-scheme \(T\), a point in \(\mathcal{G}_{S}^{H,m}(T)\) is given by \((V_{T},\phi_{T})\), where
1. \(V_{T}\in\mathcal{G}_{S}^{m}(T)\), and
2. \((V_{T},\phi_{T})\) is a Gieseker-Higgs bundle.
Finally, the disjoint union \(\mathcal{Y}_{S}^{H}:=(\coprod_{{}_{N(m),m\geq m_{0}}}\mathcal{Y}_{S}^{H,m}) \longrightarrow M_{Gie}^{cl}\) is an atlas. The projection map to the stack is smooth.
### Dimension and local properties of \(M_{Gie,0}^{cl}\)
In this subsection, we will compute the dimension of \(M_{Gie,0}^{cl}\) and show that it is a local complete intersection.
#### 8.2.1 Relative Log-Symplectic reduction
We will do this by a log-symplectic reduction on every atlas of \(N_{Gie,0}\). Let us pick one of the atlases \(\mathcal{Y}_{S}^{m}\) of \(N_{Gie}\). The group \(GL_{N(m)}\) acts on \(\mathcal{Y}_{S}^{m}\) and the quotient stack \([\mathcal{Y}_{S}^{m}/GL_{N(m)}]\) is an open sub-stack of \(N_{Gie}\). We consider the relative log-cotangent bundle \(\Omega^{log}_{\mathcal{Y}_{S}^{m}/S}\). Since the \(GL_{N(m)}\) action preserves the normal crossing divisor \(\mathcal{Y}_{0}^{m}\) (the closed fibre of \(\mathcal{Y}_{S}^{m}\longrightarrow S\)), the action of \(GL_{N(m)}\) lifts to an action on \(\Omega^{log}_{\mathcal{Y}_{S}^{m}/S}\) with a moment map \(\mu_{log}:\Omega^{log}_{\mathcal{Y}_{S}^{m}/S}\longrightarrow\mathfrak{gl}^{*}_{N(m)}\). Notice that the action of \(GL_{N(m)}\) has a generic stabiliser \(\mathbb{G}_{m}\); therefore the map \(\mu_{log}\) actually factors through \(\mathfrak{pgl}^{*}_{N(m)}\subset\mathfrak{gl}^{*}_{N(m)}\). Since there exists an open subset of \(\mathcal{Y}_{S}^{m}\) where the action of \(GL_{N(m)}\) has stabiliser isomorphic to \(\mathbb{G}_{m}\) (namely, the locus of stable vector bundles), the map \(\mu_{log}:\Omega^{log}_{\mathcal{Y}_{S}^{m}/S}\longrightarrow\mathfrak{pgl}^{*}_{N(m)}\) is surjective. Then one can show that \(\mathcal{Y}_{S}^{H,m}=\mu_{log}^{-1}(0)\). To see this, notice that \(\mathcal{Y}_{S}^{m}\) is a principal \(GL_{N(m)}\)-bundle over an open subset of the stack \(N_{Gie}\). Let \(a\) denote the projection \(a:\mathcal{Y}_{S}^{m}\longrightarrow N_{Gie}\). Let us write the cotangent sequence.
\[0\longrightarrow a^{*}\Omega^{log,*}_{N_{Gie}/S}\longrightarrow T^{log,*}_{ \mathcal{Y}_{S}^{m}/S}\longrightarrow\mathfrak{gl}^{*}_{N(m)}\otimes \mathcal{O}_{\mathcal{Y}_{S}^{m}} \tag{8.7}\]
Notice that \([T^{log,*}_{\mathcal{Y}_{S}^{m}/S}\longrightarrow\mathfrak{gl}^{*}_{N(m)}\otimes\mathcal{O}_{\mathcal{Y}_{S}^{m}}]\) is the cotangent complex of the stack \(N_{Gie}\). Therefore the \(0\)-th cohomology of the complex at a point \([\mathcal{O}^{N(m)}\rightarrow\mathcal{E}]\in\mathcal{Y}_{S}^{m}\) is isomorphic to \(H^{1}(\mathcal{E}nd\mathcal{E})^{\vee}\cong\mathsf{Hom}(\mathcal{E},\mathcal{E}\otimes\omega)\). Therefore, by the definition of \(\mathcal{Y}_{S}^{H,m}\) (subsubsection 8.1.4), \(\mathcal{Y}_{S}^{H,m}=\mu_{log}^{-1}(0)\), and \([\mu_{log}^{-1}(0)/GL_{N(m)}]\) is an open subset of \(M_{Gie}^{cl}\). We will therefore compute the dimension of \(\mu_{log}^{-1}(0)\) and show that it is a local complete intersection. We first recall a result on the dimension of the image of a cotangent fibre under the moment map.
**Lemma 8.8**.: _dim \(\mu_{log}(\Omega^{log}_{\mathcal{Y}_{S}^{m}/S,y})\geq\text{ dim }\left(\frac{\mathfrak{gl}_{N(m)}}{\mathfrak{gl}_{N(m),y}}\right)^{*}\), where \(\mathfrak{gl}_{N(m),y}\) is the Lie algebra of the stabilizer of \(y\in\mathcal{Y}_{S}^{m}\) under the action of \(GL_{N(m)}\)._
Proof.: Consider the diagram
(8.8)
Here \(K\) denotes the Kernel of the natural map \(T^{log}_{\mathcal{Y}^{m}_{S}/S,y}\longrightarrow T_{\mathcal{Y}^{m}_{S}/S,y}\). Notice that the map \(\mathfrak{gl}_{N(m)}\longrightarrow T_{\mathcal{Y}^{m}_{S}/S,y}\) is the differential of the orbit map, and it factors through \(T^{log}_{\mathcal{Y}^{m}_{S}/S,y}\) because the action of \(GL_{N(m)}\) preserves the normal crossing divisor. It is well known that the rightmost vertical map is injective [32, Lemma 2.4.1]. Therefore we can complete the diagram.
(8.9)
Therefore we see that \(Kernel\ (\iota_{log})\subset Kernel\ (\iota)=\mathfrak{gl}_{N(m),y}\). Since the moment and log-moment maps are dual to the maps \(\iota\) and \(\iota_{log}\), we conclude that \(\dim\ \mu_{log}(\Omega^{log}_{\mathcal{Y}^{m}_{S}/S,y})\geq\ \dim\ \big{(}\frac{\mathfrak{gl}_{N(m)}}{\mathfrak{gl}_{N(m),y}}\big{)}^{*}\).
**Proposition 8.9**.: _The closed fibre \(N^{cl}_{Gie,0}\) is an irreducible, equidimensional, almost very good stack ([32, Definition 2.1.2]) with normal crossing singularities._
Proof.: From [16, Theorem 9.5], it follows that the normalisation of \(N^{cl}_{Gie,0}\) is a \(KGL_{n}\) bundle over the stack of vector bundles \(Bun(\tilde{X}_{0})\) of rank \(n\) and degree \(d\) over the normalisation \(\tilde{X}_{0}\) of the nodal curve \(X_{0}\). Here \(KGL_{n}\) denotes the compactification of \(GL_{n}\) constructed by Kausz [17]. More precisely, let \(E\) be the universal \(GL_{n}\) bundle over \(\tilde{X}_{0}\times Bun(\tilde{X}_{0})\). Consider the \(GL_{n}\times GL_{n}\) bundle \(E_{x_{1}}\times_{Bun(\tilde{X}_{0})}E_{x_{2}}\) over \(Bun(\tilde{X}_{0})\). Then the associated \(KGL_{n}\) fibration \((E_{x_{1}}\times_{Bun(\tilde{X}_{0})}E_{x_{2}})\times_{GL_{n}\times GL_{n}}KGL_{n}\cong\widetilde{N^{cl}_{Gie,0}}\). This is the stack of Gieseker vector bundle data ([16, Definition 4.7]), which is equivalent ([DI, Lemma 5.6]) to the stack of marked Gieseker vector bundles (i.e., a Gieseker vector bundle with a marked node ([DI, Definition 5.1])). It is obvious that the automorphism group of a marked Gieseker vector bundle is isomorphic to the automorphism group of the corresponding Gieseker vector bundle. Therefore it is enough to show that the stack of Gieseker vector bundle data is smooth, equidimensional, irreducible and almost very good.
Now let us recall that the map \(\tilde{\pi}:\widetilde{N^{cl}_{Gie,0}}\longrightarrow Bun(\tilde{X}_{0})\) is given by \((X^{m,n},s_{1},s_{2},\tilde{\mathcal{E}},\phi)\mapsto h_{*}(\tilde{\mathcal{E}}(-s_{1}-s_{2}))(p_{1}+p_{2})\), where \(h:X^{m,n}\longrightarrow\tilde{X}_{0}\) is the modification map ([16, Lemma 9.3]). Here \((X^{m,n},s_{1},s_{2},\tilde{\mathcal{E}},\phi)\) is a Gieseker vector bundle data and here \(\phi\) is not a Higgs field but an identification between the fibres
\(\tilde{\mathcal{E}}_{s_{1}}\xrightarrow{\cong}\tilde{\mathcal{E}}_{s_{2}}\) ([16, Definition 4.7]). It is straightforward to check that the induced map \(Aut(X^{m,n},s_{1},s_{2},\tilde{\mathcal{E}},\phi)\longrightarrow Aut(h_{*}( \mathcal{E}(-s_{1}-s_{2}))(p_{1}+p_{2}))\) is injective. For notational convenience, let us denote \(Y:=\widetilde{N^{cl}_{Gie,0}}\) and \(Z:=Bun(\tilde{X}_{0})\).
From [4, Proposition 2.1.2] and [32, Definition 2.1.2, Remark 2.1.3], it follows that \(Z\) is a smooth, irreducible, almost very good stack. More precisely, \(\operatorname{codim}_{Z}(Z_{k})>k\ \ \forall k\geq 1\), where \(Z_{k}:=\{z\in Z\ |\ \dim\ Aut\ (z)=1+k\}\). From the fact that \(Aut(X^{m,n},s_{1},s_{2},\tilde{\mathcal{E}},\phi)\subset Aut(h_{*}(\tilde{\mathcal{E}}(-s_{1}-s_{2}))(p_{1}+p_{2}))\), it follows that \(\tilde{\pi}(Y_{k})\subset Z_{k}\). Therefore, \(Y_{k}\subset\tilde{\pi}^{-1}(Z_{k})\) and \(\dim Y_{k}\leq\dim\tilde{\pi}^{-1}(Z_{k})=\dim Z_{k}+\dim KGL_{n}\). Now \(\operatorname{codim}_{Y}(Y_{k})=\dim Y-\dim Y_{k}\geq\dim Y-\dim Z_{k}-\dim KGL_{n}=\dim Z-\dim Z_{k}=\operatorname{codim}_{Z}(Z_{k})>k\ \ \forall k>0\). Therefore, \(\widetilde{N^{cl}_{Gie,0}}\) is a smooth, irreducible, almost very good stack and \(N^{cl}_{Gie,0}\) is an irreducible, almost very good stack.
**Theorem 8.10**.: _The stack \(M^{cl}_{Gie,0}\) is an irreducible local complete intersection of pure dimension \(2\cdot\text{dim}\ \ N_{Gie,0}+1\)._
Proof.: It is enough to show that \(\mu^{-1}_{log,0}(0)=\mathcal{Y}^{H,m}_{0}\) is a local complete intersection of dimension \(2\cdot\dim\ N_{Gie,0}+1+\dim\ GL_{N(m)}\). Here \(\mu_{log,0}\) denotes the restriction of \(\mu_{log}\) to the closed fibre of \(\Omega^{log}_{\mathcal{Y}^{m}_{S}/S}\longrightarrow S\). Since the map \(\mu_{log,0}:\Omega^{log}_{\mathcal{Y}^{m}_{0}}\longrightarrow\mathfrak{pgl}^{*}_{N(m)}\) is surjective, the dimension of the generic fibre is equal to \(\dim\ \Omega^{log}_{\mathcal{Y}^{m}_{0}}-\dim\ \mathfrak{gl}^{*}_{N(m)}+1\). Therefore for every irreducible component \(I\) of \(\mu^{-1}_{log,0}(0)\), we have

\[\dim\ I\geq\dim\ \Omega^{log}_{\mathcal{Y}^{m}_{0}}-\dim\ \mathfrak{gl}^{*}_{N(m)}+1. \tag{8.10}\]
Suppose that there exists a component \(I\subseteq\mu^{-1}_{log,0}(0)\) which does not dominate \(N_{Gie,0}\). In that case \(p(I)\subseteq N_{k}\) for some \(k\geq 1\), where \(N_{k}:=\{x\in N|\dim\ Stab(x)=k+1\}\). A priori, the generic point of \(p(I)\) may belong to the singular locus of \(N\). But, in any case, using Lemma 8.8, we have
\[\dim I-\dim p(I)\leq\dim\mathcal{Y}_{0}^{m}-\dim\Big{(}\frac{\mathfrak{gl}_{N(m)}}{\mathfrak{gl}_{N(m),y}}\Big{)}^{*}\qquad\text{(using Lemma 8.8)}\]

\[\implies\dim I\leq\dim p(I)+\dim\mathcal{Y}_{0}^{m}-\dim\Big{(}\frac{\mathfrak{gl}_{N(m)}}{\mathfrak{gl}_{N(m),y}}\Big{)}^{*}<(\dim\mathcal{Y}_{0}^{m}-k)+\dim\mathcal{Y}_{0}^{m}-\dim\Big{(}\frac{\mathfrak{gl}_{N(m)}}{\mathfrak{gl}_{N(m),y}}\Big{)}^{*}\]
\[=2\dim\mathcal{Y}_{0}^{m}-\dim\mathfrak{gl}^{*}_{N(m)}+1=\dim\Omega^{log}_{\mathcal{Y}^{m}_{0}}-\dim\mathfrak{gl}^{*}_{N(m)}+1,\]

where we use \(\dim\mathfrak{gl}_{N(m),y}=k+1\) and the bound \(\dim p(I)<\dim\mathcal{Y}_{0}^{m}-k\), which follows from \(N_{Gie,0}\) being almost very good (Proposition 8.9). This contradicts (8.10). Therefore every irreducible component of \(\mu^{-1}_{log,0}(0)\) dominates \(N_{Gie,0}\) and has the expected dimension \(2\cdot\dim\ N_{Gie,0}+1+\dim\ GL_{N(m)}\); irreducibility then follows from the irreducibility of \(N_{Gie,0}\) (Proposition 8.9). Since \(\mu^{-1}_{log,0}(0)\) is cut out by \(\dim\mathfrak{pgl}^{*}_{N(m)}=N(m)^{2}-1\) equations in the smooth variety \(\Omega^{log}_{\mathcal{Y}^{m}_{0}}\) and has the expected dimension, it is a local complete intersection. This proves the Theorem.

**Remark 8.11**.: Since \(\dim\ N_{Gie,0}=n^{2}(g-1)\), Theorem 8.10 gives \(\dim\ M^{cl}_{Gie,0}=2n^{2}(g-1)+1\), and therefore \(\dim\ M^{cl}_{Gie}=2n^{2}(g-1)+1+\dim S\).
**Remark 8.12**.: As seen in subsubsection 8.2.1, the stack \(M_{Gie}^{cl}\) can be covered by open substacks of the form \([\mu_{log}^{-1}(0)/GL_{N(m)}]\), where \(\mu_{log}:\Omega_{\mathcal{Y}_{S}^{m}/S}^{log}\longrightarrow\mathfrak{pgl}_{N (m)}^{*}\) is the moment map and \(\coprod_{m\geq m_{0}}\mathcal{Y}_{S}^{m}\) is a suitable atlas of the stack \(N_{Gie}\). Now notice that the dimension of \(\Omega_{\mathcal{Y}_{S}^{m}/S}^{log}\) is equal to \(2n^{2}(g-1)+2N(m)^{2}+\dim S\). Therefore the expected dimension of \(\mu_{log}^{-1}(0)\) is equal to
\[2n^{2}(g-1)+N(m)^{2}+1+\dim S.\]
From the previous remark, we know that the dimension of \(M_{Gie}^{cl}\) is
\[2n^{2}(g-1)+1+\dim S=2n^{2}(g-1)+2\]
and therefore the dimension of \(\mu_{log}^{-1}(0)\) is equal to \(2n^{2}(g-1)+N(m)^{2}+1+\dim S\), which is precisely the expected dimension. Since \(\Omega_{\mathcal{Y}_{S}^{m}/S}^{log}\) is a smooth variety and \(\mu_{log}^{-1}(0)\) has the expected dimension, \(\mu_{log}^{-1}(0)\) is a local complete intersection. Hence, \(M_{Gie}^{cl}\) is also a local complete intersection.
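Explicitly (our own bookkeeping): since \(\dim\mathfrak{pgl}^{*}_{N(m)}=N(m)^{2}-1\), the expected dimension of the zero fibre of the moment map is

\[\dim\Omega^{log}_{\mathcal{Y}_{S}^{m}/S}-\dim\mathfrak{pgl}^{*}_{N(m)}=\big{(}2n^{2}(g-1)+2N(m)^{2}+\dim S\big{)}-\big{(}N(m)^{2}-1\big{)}=2n^{2}(g-1)+N(m)^{2}+1+\dim S,\]

in agreement with the dimension computed above.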
|
2301.09014 | Millimeter Observational Signatures of Flares in Magnetically Arrested
Black Hole Accretion Models | In general relativistic magneto-hydrodynamic (GRMHD) simulations, accreted
magnetic flux on the black hole horizon episodically decays, during which
magnetic reconnection heats up the plasma near the horizon, potentially
powering high-energy flares like those observed in M87* and Sgr A*. We study
the mm observational counterparts of such flaring episodes. The change in 230
GHz flux during the expected high energy flares depends primarily on the
efficiency of accelerating $\gamma \gtrsim 100$ ($T_e \gtrsim 10^{11}$ K)
electrons. For models in which the electrons are heated to $T_e \sim 10^{11}$ K
during flares, the hot plasma produced by reconnection significantly enhances
230 GHz emission and increases the size of the 230 GHz image. By contrast, for
models in which the electrons are heated to higher temperatures (which we argue
are better motivated), the reconnection-heated plasma is too hot to produce
significant 230 GHz synchrotron emission, and the 230 GHz flux decreases during
high energy flares. We do not find a significant change in the mm polarization
during flares as long as the emission is Faraday thin. We also present
expectations for the ring-shaped image as observed by the Event Horizon
Telescope during flares, as well as multi-wavelength synchrotron spectra. Our
results highlight several limitations of standard post-processing prescriptions
for the electron temperature in GRMHD simulations. We also discuss the
implications of our results for current and future observations of flares in
Sgr A*, M87*, and related systems. Appendices contain detailed convergence
studies with respect to resolution and plasma magnetization. | He Jia, Bart Ripperda, Eliot Quataert, Christopher J. White, Koushik Chatterjee, Alexander Philippov, Matthew Liska | 2023-01-21T21:36:44Z | http://arxiv.org/abs/2301.09014v3 | # Millimeter Observational Signatures of Flares in Magnetically Arrested Black Hole Accretion Models
###### Abstract
In general relativistic magneto-hydrodynamic (GRMHD) simulations, accreted magnetic flux on the black hole horizon episodically decays, during which magnetic reconnection heats up the plasma near the horizon, potentially powering high-energy flares like those observed in M87* and Sgr A*. We study the mm observational counterparts of such flaring episodes. The change in 230 GHz flux during the expected high energy flares depends primarily on the efficiency of accelerating \(\gamma\gtrsim 100\) (\(T_{e}\gtrsim 10^{11}\) K) electrons. For models in which the electrons are heated to \(T_{e}\sim 10^{11}\) K during flares, the hot plasma produced by reconnection significantly enhances 230 GHz emission and increases the size of the 230 GHz image. By contrast, for models in which the electrons are heated to higher temperatures (which we argue are better motivated), the reconnection-heated plasma is too hot to produce significant 230 GHz synchrotron emission, and the 230 GHz flux decreases during high energy flares. We do not find a significant change in the mm polarization during flares as long as the emission is Faraday thin. We also present expectations for the ring-shaped image as observed by the Event Horizon Telescope during flares, as well as multi-wavelength synchrotron spectra. Our results highlight several limitations of standard post-processing prescriptions for the electron temperature in GRMHD simulations. We also discuss the implications of our results for current and future observations of flares in Sgr A*, M87*, and related systems. Appendices contain detailed convergence studies with respect to resolution and plasma magnetization.
keywords: black hole physics - accretion, accretion discs - relativistic processes - methods: numerical
## 1 Introduction
Black holes are often surrounded by accretion disks with relativistic jets emitting at a range of wavelengths from radio to \(\gamma\)-ray (e.g. Narayan & Quataert, 2005; Yuan & Narayan, 2014; Davis & Tchekhovskoy, 2020). In addition to quasi-steady emission, bright X-ray and \(\gamma\)-ray flares (e.g. Harris et al., 2011; Abramowski et al., 2012) are observed from Low Luminosity Active Galactic Nuclei such as M87*. Sgr A* exhibits analogous flaring in the infrared (IR) and X-ray (e.g. Yusef-Zadeh et al., 2009, 2010; Trap et al., 2011; Fazio et al., 2018).
The mechanism of such high energy flares is not fully understood. In magnetically arrested disk (MAD) models (Igumenshchev et al., 2003; Narayan et al., 2003; Tchekhovskoy et al., 2011) episodic dissipation of magnetic energy near the horizon is a key dynamical feature of the accretion flow: magnetic flux and magnetic energy build up on the black hole horizon until they become strong enough to suppress accretion. Instabilities (e.g., magnetic Rayleigh-Taylor) and reconnection then set in episodically (in "flux eruptions"), regulating the amount of magnetic flux and energy stored near the black hole. The electromagnetic energy released through this reconnection is a promising source of observed flares from black holes (Dodds-Eden et al., 2010; Dexter et al., 2020; Chatterjee et al., 2021; Porth et al., 2021; Chatterjee & Narayan, 2022; Ripperda et al., 2022; Hakobyan et al., 2022; Seepi et al., 2022).
Using Very Long Baseline Interferometry (VLBI) observations at 230 GHz, the Event Horizon Telescope (EHT) Collaboration presented the first images of the plasma around the supermassive black holes in M87* (Event Horizon Telescope Collaboration et al., 2019) and Sgr A* (Event Horizon Telescope Collaboration et al., 2022). For M87*, the polarization maps have also been released (Event Horizon Telescope Collaboration et al., 2021). For M87* in particular the observations generally favor MAD models. For Sgr A*, the observational situation is less clear, but theoretical models of the fueling of Sgr A* by stellar winds predict that the flow becomes magnetically arrested in the inner accretion region (Ressler et al., 2020). Numerical
models also suggest that the episodic magnetic flux eruptions in MAD models can explain many aspects of the episodic infrared and X-ray flares observed in Sgr A*. In particular, Dexter et al. (2020) and Porth et al. (2021) showed that such models can qualitatively explain the motion of the IR center-of-light and rotation in the linear polarization direction seen by the VLT interferometer GRAVITY during IR flares from Sgr A* (GRAVITY Collaboration et al., 2018).
It is not clear how horizon-scale observables accessible to EHT will change during the magnetic flux eruptions characteristic of MAD models. If the magnetic flux eruptions indeed drive high-energy flares in Sgr A*, M87*, and other systems, connecting the mm observables to higher energy observables will be a key test of theoretical models. In this paper, we aim to bridge this gap and study multi-wavelength observational signatures of flux eruptions, with a focus on the relation between 230 GHz EHT observables and higher energy radiation. Throughout this paper we will refer to the flux eruptions interchangeably as "flares", by which we specifically mean high-energy flares. We explain our motivation for this identification in more detail in Section 3.
The remainder of this paper is organized as follows. The methodology and numerical techniques are presented in Section 2. We present 230 GHz light curves in Section 3, 230 GHz polarized images in Section 4, and synchrotron emission spectra in Section 5. We conclude in Section 6 with a discussion on the appearance of flux eruptions at millimeter wavelengths, under which conditions the millimeter emission brightens or dims during high-energy flares, and how modeling the emission can be further improved. The Appendices contain detailed convergence studies with respect to resolution and plasma magnetization (see SS2 for a brief summary).
## 2 Methodology
Recently, Ripperda et al. (2022) conducted high resolution (dubbed _extreme_ resolution in their paper) general relativistic magnetohydrodynamic (GRMHD) simulations, which for the first time captured plasmoid-mediated reconnection in a 3D magnetically arrested disk, during the episodic magnetic flux eruptions. The simulations employ spherical Kerr-Schild coordinates \(r,\theta,\phi\) describing a Kerr black hole with dimensionless spin \(a=0.9375\) on a numerical grid with resolution \(N_{r}\times N_{\theta}\times N_{\phi}=5376\times 2304\times 2304\). The radial domain is fixed to [1.2, 2000]\(r_{\rm g}\). The GRMHD equations are integrated until \(10000\,r_{\rm g}/c\). A ceiling is enforced to maintain \(\sigma\leq\sigma_{\rm floor}=25\), where the magnetization \(\sigma\equiv b^{2}/(4\pi\rho c^{2})\) is defined using the magnetic field strength \(b\) co-moving with the fluid and the fluid-frame rest-mass density \(\rho\). A pure ionized hydrogen composition is assumed, and the equation of state is that of an ideal gas with an adiabatic index of \(\hat{\gamma}=13/9\). The simulation is initialized to reach a MAD state, showing long periods of accretion where magnetic flux piles up on the horizon and quasi-periodic short flux eruptions where magnetic energy dissipates through magnetic reconnection. This dissipated magnetic energy can heat the plasma and potentially power multi-wavelength flares.
GRMHD simulations do not predict the electron temperature which is required for calculating synchrotron emission. We use the following \(R_{\rm high}-R_{\rm low}\) model motivated by phenomenological considerations (Moscibrodzka et al., 2016) to compute the electron temperature from GRMHD fluid pressure \(p\), density \(\rho\) and plasma \(\beta\),
\[T_{e}=\frac{2\,T_{\rm fluid}}{1+R},\ \ \mbox{where}\ \ T_{\rm fluid} \equiv\frac{m_{p}p}{2\,k_{B}\rho},\] \[R \equiv\frac{T_{p}}{T_{e}}=\frac{\beta^{2}}{1+\beta^{2}}R_{\rm high }+\frac{1}{1+\beta^{2}}R_{\rm low},\] \[\beta \equiv\frac{8\pi p}{B^{2}}\,. \tag{1}\]
Note that larger \(R\) models have smaller \(T_{e}/T_{\rm fluid}\), and vice versa. In our modelling we assume fixed \(R_{\rm high}\) and \(R_{\rm low}\), although in reality the relation between \(T_{e}\) and \(T_{\rm fluid}\) is more complicated and could well be time and/or space-dependent. Indeed, we shall see that our analysis of the simulated mm variability during a magnetic flux eruption highlights that it is sensitive to the possibility of temporal and/or spatial variability of \(T_{e}/T_{\rm fluid}\). This implies that standard \(R_{\rm high}-R_{\rm low}\) post-processing prescriptions are limited in their ability to predict the variability associated with the distinctive magnetic flux eruptions present in MAD models.
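As a concrete reference for Equation (1), here is a minimal Python sketch (the function and constant names, and the CGS unit conventions, are our own illustrative choices rather than the interface of any particular GRMHD or ray tracing code):

```python
import numpy as np

M_P = 1.6726e-24  # proton mass [g]
K_B = 1.3807e-16  # Boltzmann constant [erg/K]

def electron_temperature(p, rho, B, R_high, R_low):
    """R_high-R_low model for the electron temperature (Equation 1).

    p   : fluid pressure          [erg/cm^3]
    rho : rest-mass density       [g/cm^3]
    B   : magnetic field strength [G]
    """
    beta = 8.0 * np.pi * p / B**2                      # plasma beta
    R = (beta**2 * R_high + R_low) / (1.0 + beta**2)   # R = T_p / T_e
    T_fluid = M_P * p / (2.0 * K_B * rho)              # fluid temperature [K]
    return 2.0 * T_fluid / (1.0 + R)                   # electron temperature [K]
```

In the weakly magnetized disk mid-plane (\(\beta\gg 1\)) this reduces to \(R\approx R_{\rm high}\), while in strongly magnetized regions (\(\beta\ll 1\)) it gives \(R\approx R_{\rm low}\).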
Our calculations only include synchrotron emission from thermal electrons; while this is likely reasonable for the 230 GHz modelling in Sections 3-4, at higher frequencies non-thermal electrons and inverse Compton emission become more important, so our spectral modelling results in Section 5 likely represent lower limits to higher frequency emission instead of quantitatively precise predictions.
We generate ray tracing images from the GRMHD data with the blacklight code (White, 2022), which integrates the radiation transfer equations along geodesics to obtain the observed intensity and polarization maps.1 We ignore the plasma outside \(10\,r_{\rm g}\) where the emission is negligible at the wavelengths studied in this paper. The spatial resolution of the GRMHD data is reduced by a factor of \(4\times 4\times 4\), i.e. only one of four successive points along each spatial dimension is kept, to speed up ray tracing computation. We also ignore the region with \(\sigma>\sigma_{\rm cut}\) where the temperature is not reliable since the plasma may be governed by the advection of injected density and pressure due to the sigma ceiling (\(\sigma_{\rm floor}\)=25); we choose \(\sigma_{\rm cut}\) = 1 in Sections 3-4, and \(\sigma_{\rm cut}\) = 10 in Section 5 since at higher frequencies the emission may be dominated by \(\sigma\sim\sigma_{\rm cut}\) regions.2 We explore the convergence of our results with respect to the choice of GRMHD resolution and \(\sigma_{\rm cut}\) in Appendix A. The convergence with respect to both GRMHD resolution and \(\sigma_{\rm cut}\) depends on both the frequency of the radiation and the assumed mapping between electron temperature and GRMHD fluid temperature. Models with \(R\lesssim 100\) are well-converged at 230 GHz while models with \(R\simeq 100\) show some weak dependence on both resolution and \(\sigma_{\rm cut}\). All models show some dependence on \(\sigma_{\rm cut}\) for higher energy radiation because the high \(\sigma\) regions tend to have high temperatures in our models, which mostly emit higher frequency synchrotron radiation.
Footnote 1: We adopt the fast light approximation which assumes that the speed of light is infinite. This should be fine for our purposes as we mainly study the evolution of emission on the timescale of \(O(10^{2})\)\(M\).
Footnote 2: While both \(\sigma_{\rm floor}\) and \(\sigma_{\rm cut}\) represent a ceiling for the plasma magnetization \(\sigma\), in this paper \(\sigma_{\rm floor}\) stands for the numerical floor applied in the GRMHD simulation, while \(\sigma_{\rm cut}\) represents the cutoff applied during ray tracing computation. Note that in reality, \(\sigma\) in the magnetospheric and jet regions is likely much higher than the value \(\sigma_{\rm floor}\) used in GRMHD simulations.
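As an indication of the pre-processing described above, here is a minimal sketch in Python (illustrative array handling only, not the actual blacklight interface; CGS units assumed):

```python
import numpy as np

C = 2.99792458e10  # speed of light [cm/s]

def preprocess(rho, b_sq, sigma_cut=1.0):
    """Downsample the GRMHD grid by 4 in each dimension and flag cells with
    sigma = b^2 / (4 pi rho c^2) > sigma_cut, whose temperatures are
    unreliable because of the numerical ceiling sigma_floor = 25."""
    rho = rho[::4, ::4, ::4]
    b_sq = b_sq[::4, ::4, ::4]
    sigma = b_sq / (4.0 * np.pi * rho * C**2)
    emitting = sigma <= sigma_cut  # only these cells contribute emission
    return rho, b_sq, emitting
```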
We choose M87* parameters for ray tracing, with black hole mass \(M=6.5\times 10^{9}\,M_{\odot}\) and distance \(D=16.8\,\)Mpc (Blakeslee et al., 2009; Bird et al., 2010; Cantiello et al., 2018), since the spatially resolved intensity and polarization in M87* are better constrained by EHT data than Sgr A*. However, our results on the evolution of
the 230 GHz emission during flux eruptions are generic and apply to Sgr A* as well. We will discuss the application to Sgr A* in more detail in Section 6. In our ray tracing calculations the camera is located at \(r_{0}=100\,r_{g}\), \(\theta_{0}=163^{\circ}\) (Mertens et al., 2016) and \(\phi_{0}=0^{\circ}\), while the approaching jet has a position angle of \(288^{\circ}\) (Walker et al., 2018), pointing towards the right and slightly up in the images. Since the disk structure of a MAD during a flux eruption is highly non-axisymmetric, the corresponding ray-traced image depends on the azimuthal position of the camera. However, this dependence is relatively weak for the low inclination case of M87* (see Gelles et al., 2022 for more details), and therefore, our results for \(\phi_{0}=0^{\circ}\) should hold for other \(\phi_{0}\). Since (ideal) GRMHD simulations are dimensionless, the normalization factor between code units and physical units needs to be set using the observed flux of emission. Unless otherwise specified, the overall density in the simulation is normalized such that the averaged 230 GHz flux equals 0.66 Jy (Event Horizon Telescope Collaboration et al., 2019). All the plots in this paper showing the evolution with time are smoothed by a moving average with a window of \(150\,M\) (15 snapshots), so that the general trends are more clearly presented.
In order to quantify the variability of the predicted images between quiescent and flare states, we compute the following statistics for the 230 GHz images blurred with a 20 \(\mu\)as Gaussian kernel, similar to those used in EHT analysis (Chael et al., 2018; Event Horizon Telescope Collaboration et al., 2019, 2021); see Section 2 of Jia et al. (2022) for more details about how these quantities are measured from the images.
1. The ring diameter \(d\), determined by the average of peak intensity along different directions of the ring.
2. The ring width \(w\), defined as the Full Width Half Maximum (FWHM) of the intensity map, averaged over different directions.
3. The ring orientation \(\eta\) and degree of asymmetry \(A\), \[\eta=\left\langle\mathrm{Arg}\left[\int_{0}^{2\pi}I(\theta)e^{i\theta}d\theta \right]\right\rangle_{r\in[r_{\mathrm{in}},r_{\mathrm{out}}]},\] (2) \[A=\left\langle\frac{\left|\int_{0}^{2\pi}I(\theta)e^{i\theta}d \theta\right|}{\int_{0}^{2\pi}I(\theta)d\theta}\right\rangle_{r\in[r_{\mathrm{in }},r_{\mathrm{out}}]},\] (3) where \(r_{\mathrm{in}}\) and \(r_{\mathrm{out}}\) are the radii where the intensity drops to half of the peak value along that direction.
4. The fractional central brightness \(f_{C}\), \[f_{C}=\frac{\langle I(r,\theta)\rangle_{\theta\in[0,2\pi],\,r\in[0,5\mu\mathrm{as }]}}{\langle I(d/2,\theta)\rangle_{\theta\in[0,2\pi]}}.\] (4)
5. The pixel-level image-averaged linear polarization fraction, \[\langle|m|\rangle=\frac{\sum_{i}\sqrt{\mathcal{Q}_{i}^{2}+\mathcal{U}_{i}^{2}}}{\sum_{i}\mathcal{I}_{i}},\] (5)
where the Stokes \(\mathcal{I}\), \(\mathcal{Q}\) and \(\mathcal{U}\) are summed over all the pixels and snapshots.
6. The \(\beta_{m}\) polarization statistics (Palumbo et al., 2020) defined in polar coordinates (\(\rho\), \(\phi\)) of the image plane, \[\beta_{m}=\frac{1}{I_{\mathrm{ann}}}\int_{\rho_{\mathrm{min}}}^{\rho_{\mathrm{max}}}\int_{0}^{2\pi}(\mathcal{Q}+i\mathcal{U})e^{-im\phi}\rho\,d\phi\,d\rho,\] (6) where we take \(m=2\), \(\rho_{\mathrm{min}}=0\), \(\rho_{\mathrm{max}}\rightarrow\infty\), and \(I_{\mathrm{ann}}\) is the total intensity flux between \(\rho_{\mathrm{min}}\) and \(\rho_{\mathrm{max}}\). Note that \(\beta_{2}\) quantifies the orientation of the polarization and is widely used to constrain the magnetic field structure around the black hole (see Equations 8-9 and the discussions therein).
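A minimal sketch of how the polarization statistics 5 and 6 can be evaluated on pixelized Stokes maps is given below; the array layout and the pixel-area cancellation are our assumptions, and for \(\beta_{2}\) the annulus is taken to be the whole image (\(\rho_{\mathrm{min}}=0\), \(\rho_{\mathrm{max}}\rightarrow\infty\)), as in the text.

```python
import numpy as np

def pol_stats(I, Q, U):
    """<|m|> (Eq. 5) and beta_2 (Eq. 6) from 2-D Stokes maps.

    On a uniform Cartesian grid, rho drho dphi equals the pixel area,
    which cancels between the numerator and I_ann, so plain pixel sums
    suffice.
    """
    ny, nx = I.shape
    y, x = np.indices((ny, nx))
    # image-plane azimuth about the image centre (orientation convention
    # is illustrative here)
    phi = np.arctan2(y - (ny - 1) / 2, x - (nx - 1) / 2)

    m_avg = np.sum(np.hypot(Q, U)) / np.sum(I)                    # Equation 5
    beta2 = np.sum((Q + 1j * U) * np.exp(-2j * phi)) / np.sum(I)  # Equation 6
    return m_avg, beta2
```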
## 3 Light curves at 230 GHz
In this section, we study the observational signatures of the flare state in 230 GHz light curves. As argued in Ripperda et al. (2022), the dissipation of the jet's magnetic energy through transient reconnection events near the event horizon is a possible mechanism to power observed flares from black holes. We find three major energetic reconnection events between \(t\) = 5,000\(M\) and \(t\) = 10,000\(M\), indicated by the decay of the magnetic flux \(\tilde{\phi}_{\mathrm{horizon}}\equiv\frac{1}{2}\int_{0}^{2\pi}\int_{0}^{\pi}|F^{rt}|\sqrt{-g}\,\mathrm{d}\theta\,\mathrm{d}\phi\) on the horizon, which are highlighted by the grey bands in the left panel of Figure 1. In the right panel, we confirm that the maximum fluid temperature (defined in Equation 1) does increase when magnetic reconnection happens, due to the electromagnetic energy converted to heat by the reconnection. Note that the plasma was
Figure 1: GRMHD fluid properties as a function of time, smoothed by a moving average with a window of 150 \(M\). _Left_: the magnetic flux on the black hole horizon \(\tilde{\phi}_{\mathrm{horizon}}\equiv\frac{1}{2}\int_{0}^{2\pi}\int_{0}^{\pi}|F^{rt}|\sqrt{-g}\,\mathrm{d}\theta\,\mathrm{d}\phi\). _Right_: the maximum fluid temperature over the whole GRMHD grid, which is used as a proxy for the amount of heated/accelerated plasma. The grey bands indicate the three major magnetic-flux decay states. We use the correlation between \(T_{\mathrm{max}}\) and \(-\,\mathrm{d}\tilde{\phi}_{\mathrm{horizon}}\,/\,\mathrm{d}t\) during flux eruptions as a proxy for the timing of high energy flares in systems like M87* and Sgr A*: the energy released by reconnection heats up the plasma during the magnetic flux decay. Note, however, that in reality, \(\sigma\gg\sigma_{\mathrm{floor}}\) in the jet and the temperature increase due to flux eruptions may be much larger than shown here (and regulated by strong radiative cooling). We also find a similar trend of increasing temperature during magnetic flux decay for the 90% and 99% percentiles of \(T_{\mathrm{fluid}}\).
modelled as a single-temperature thermal fluid in the GRMHD simulation, whereas it is very likely that the electrons around realistic black holes are non-thermal and have a different energy distribution than the protons (e.g. [14]). Therefore, the maximum electron energy is likely larger than that associated with the maximum temperature shown in Figure 1. Indeed, according to [14]; [15], particle acceleration in reconnection at high \(\sigma\) is particularly efficient in that a large fraction of the dissipated energy ends up in high energy particles. In addition, the high energy particles cool rapidly by synchrotron radiation so high energy flares appear likely if reconnection is sourced by highly magnetized plasma, as suggested by GRMHD simulations. Thus we are motivated to refer to magnetic flux eruptions as flares throughout this paper. Here we will use \(T_{\rm max}\) as a proxy for the high energy emission, since direct, self-consistent modelling of the X-ray or \(\gamma\)-ray light curves is beyond the scope of this work. We note that \(T_{\rm max}\sim\sigma_{\rm floor}\) depends directly on the magnetization in the jet, which is set by \(\sigma_{\rm floor}=25\) in our GRMHD simulation. We will discuss the implications of \(\sigma_{\rm floor}\) being much smaller than realistic values in Section 6.
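For concreteness, the horizon flux diagnostic plotted in Figure 1 amounts to a weighted sum over the spherical shell of cells adjacent to the horizon. The sketch below assumes uniform grid spacings and that \(|F^{rt}|\) and \(\sqrt{-g}\) are supplied as \((\theta,\phi)\) arrays at \(r=r_{\rm horizon}\); the names are ours.

```python
import numpy as np

def horizon_flux(F_rt, sqrt_mg, dtheta, dphi):
    """phi_horizon = (1/2) * sum |F^{rt}| * sqrt(-g) * dtheta * dphi,
    a discrete version of the integral defining the horizon magnetic flux."""
    return 0.5 * np.sum(np.abs(F_rt) * sqrt_mg) * dtheta * dphi
```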
Figure 3: The evolution of 230 GHz flux with different electron temperature models. _Left:_ the 230 GHz flux as a function of time, smoothed by a moving average with a window of \(150\,M\), with the legend indicating (\(R_{\rm low}\), \(R_{\rm high}\)). _Right:_ the 230 GHz flux at the beginning, midpoint and end of a flux decay state, as a function of \(R=R_{\rm low}=R_{\rm high}\). Here we adjust the density normalization for each \(R\) such that the post flare flux at \(t=9643M\) is fixed to 0.2 Jy (this is the 230 GHz flux for the \(R=1\) model at this time when the density normalization is chosen so that the time-averaged 230 GHz flux equals 0.66 Jy), to highlight how the flux evolution during the flare depends on the electron temperature model. _Dimming_ at 230 GHz occurs with \(R\equiv T_{p}/T_{e}\lesssim 20\) while we see a _brightening_ and then fading for \(R\gtrsim 20\).
Figure 2: Equatorial slices of density and temperature in the GRMHD simulation for the pre-flare, mid-flare and post-flare snapshots. The \(x^{\prime}-y^{\prime}\) coordinates are rotated to match the ray tracing images in Figure 5. We average over the fluid between \(\pm 15^{\circ}\) from the midplane to capture the structures that are not exactly on the midplane, while for \(T_{\rm fluid}\) the average is weighted by \(n_{e}\). The normalization of \(n_{e}\) is set based on the 230 GHz flux of \(R=1\) model, which is 4.88 (156) times larger if we use \(R=10\) (100) instead of \(R=1\).
Figure 2 visualizes equatorial density and temperature fields for the \(t=9113M\) pre-flare quiescent, \(t=9378M\) mid-flare, and \(t=9643M\) post-flare quiescent states (labeled in Figure 1). In the \(t=9378\,M\) mid-flare state we see reconnection-heated hot \(\lesssim 5\times 10^{12}\,\mathrm{K}\) plasma out to \(\sim 15r_{g}\). How does the 230 GHz emission change during the high energy flares? In the left panel of Figure 3, we plot the 230 GHz light curves for six electron temperature models that are similar to those used in EHT analysis (Event Horizon Telescope Collaboration et al., 2019). We find two different patterns of 230 GHz light curves, depending on the \(T_{e}\) model. For all but the lowest electron temperature model \(R=100\), there is a strong correlation between \(\tilde{\phi}_{\mathrm{horizon}}\) and 230 GHz emission: as \(\tilde{\phi}_{\mathrm{horizon}}\) drops during the flares, the 230 GHz flux also reduces by up to 80%, in contrast to the expected brightening at higher energy bands. With \(R=100\), however, the synchrotron flux at 230 GHz increases simultaneously with \(-\mathrm{d}\tilde{\phi}_{\mathrm{horizon}}/\mathrm{d}t\) (and therefore \(T_{\mathrm{max}}\)), meaning that the high energy flare would be accompanied by a 230 GHz counterpart.
In the right panel of Figure 3, we calculate the 230 GHz flux with different \(R=R_{\mathrm{low}}=R_{\mathrm{high}}\) between 1 and 500 for the three times identified in Figure 1 and shown in Figure 2. Since here we want to compare the relative strength of emission at the three times, we adjust the density normalization such that the 230 GHz flux at \(t=9643M\) is fixed to 0.2 Jy for all the models. This facilitates easy comparison of the pre and mid-flare emission relative to the post-flare emission (note that this normalization choice is such that the time-averaged 230 GHz flux for the \(R=1\) model is the fiducial 0.66 Jy). We find similar results as the left panel of Figure 3: with smaller \(R\lesssim 20\), the 230 GHz flux drops monotonically as \(\tilde{\phi}_{\mathrm{horizon}}\) decays. On the other hand, when \(R\gtrsim 20\), the 230 GHz flux of the \(t=9378M\) mid-flare state exceeds the \(t=9113M\) pre-flare state: the flare _dimming_ at 230 GHz with small \(R\) models eventually turns into flare _brightening_ with large \(R\) models, in accordance with the light curve predictions in the left panel of Figure 3.
The numerical results in Figure 3 can also be understood analytically
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \(t\) / \(M\) & (\(R_{\mathrm{low}}\), \(R_{\mathrm{high}}\)) & (\(n_{\mathrm{e,low}}\), \(n_{\mathrm{e,high}}\)) / cm\({}^{-3}\) & (\(T_{\mathrm{e,low}}\), \(T_{\mathrm{e,high}}\)) / K & (\(\beta_{\mathrm{low}}\), \(\beta_{\mathrm{high}}\)) & (\(B_{\mathrm{low}}\), \(B_{\mathrm{high}}\)) / G & (\(r_{\mathrm{min}}\), \(r_{\mathrm{max}}\)) / \(r_{g}\) & \(|\tilde{\theta}|_{\mathrm{high}}\) / \({}^{\circ}\) \\ \hline \multirow{3}{*}{9113} & (1, 1) & \((2.96\times 10^{3},2.53\times 10^{4})\) & \((1.94\times 10^{11},\,5.87\times 10^{11})\) & (0.29, 3.47) & (2.26, 8.65) & (2.99, 6.58) & 15.3 \\ & (10, 10) & \((2.29\times 10^{4},2.07\times 10^{5})\) & \((6.29\times 10^{10},\,2.24\times 10^{11})\) & (0.32, 4.65) & (8.66, 31.1) & (2.26, 4.60) & 14.3 \\ & (100, 100) & \((4.13\times 10^{5},9.25\times 10^{6})\) & \((9.25\times 10^{9},9.49\times 10^{10})\) & (0.30, 14.8) & (50.2, 258) & (1.81, 4.34) & 15.8 \\ \hline \multirow{3}{*}{9378} & (1, 1) & \((6.50\times 10^{2},1.53\times 10^{4})\) & \((2.31\times 10^{11},\,7.32\times 10^{11})\) & (0.44, 3.88) & (1.21, 5.55) & (3.11, 9.06) & 23.9 \\ & (10, 10) & \((1.49\times 10^{3},8.38\times 10^{4})\) & \((8.18\times 10^{10},\,3.76\times 10^{11})\) & (0.54, 4.39) & (2.80, 17.8) & (2.59, 7.79) & 22.6 \\ & (100, 100) & \((4.08\times 10^{3},3.05\times 10^{5})\) & \((3.38\times 10^{10},\,1.92\times 10^{11})\) & (1.11, 10.1) & (6.29, 51.3) & (3.32, 15.2) & 33.2 \\ \hline \multirow{3}{*}{9643} & (1, 1) & \((9.66\times 10^{2},9.00\times 10^{3})\) & \((1.85\times 10^{11},\,5.74\times 10^{11})\) & (0.32, 3.50) & (1.24, 4.95) & (2.93, 7.63) & 23.7 \\ & (10, 10) & \((9.56\times 10^{3},6.13\times 10^{4})\) & \((7.17\times 10^{10},\,2.02\times 10^{11})\) & (0.41, 3.40) & (5.32, 17.4) & (2.23, 4.36) & 14.2 \\ & (100, 100) & \((2.25\times 10^{5},2.10\times 10^{6})\) & \((1.48\times 10^{10},\,6.40\times 10^{10})\) & (0.53, 5.01) & (35.8, 125) & (1.88, 3.78) & 14.3 \\ \hline \end{tabular}
\end{table}
Table 1: Where does the majority of the emission come from? For each fluid quantity \(x\), we find \(x_{\mathrm{low}}\) such that if we ignore the region with \(x>x_{\mathrm{low}}\), the total 230 GHz flux drops to 15% of the total value, and similarly for \(x_{\mathrm{high}}\). The region with \(x\in(x_{\mathrm{low}},x_{\mathrm{high}})\) thus contributes 70% of the flux, in the limit of negligible absorption. For the latitude \(\tilde{\theta}\) we only report \(|\tilde{\theta}|_{\mathrm{high}}\) since most of the emission comes from the equatorial disk. Note that \(n_{e}\) and \(B\) depend on the density normalization factor between physical and simulation units, which is 4.88 (156) times larger for \(R=10\) (100) compared with \(R=1\).
using the well-understood properties of synchrotron emission; we assume optically thin emission in what follows. Plasma with dimensionless temperature \(\theta_{e}=kT_{e}/m_{e}c^{2}=10\theta_{10}\), i.e., \(T_{e}\simeq 6\times 10^{10}\theta_{10}\) K, in a magnetic field of strength \(B=10B_{10}\) G emits synchrotron radiation most efficiently (i.e., the emissivity \(\nu j_{\nu}\) peaks) at a frequency
\[\nu_{\rm peak}\sim 5\frac{eB}{m_{e}c}\theta_{e}^{2}\simeq 90\,B_{10}\,\theta_{10}^{ 2}\ {\rm GHz}. \tag{7}\]
For \(\nu\ll\nu_{\rm peak}\) the synchrotron emission scales with plasma parameters as \(j_{\nu}\propto nB^{3/4}\theta_{e}^{-1/2}\). This shows that for high electron temperature models in which mm observations are at \(\nu_{\rm obs}=230\) GHz \(\lesssim\nu_{\rm peak}\), the emission at 230 GHz will decrease with the increasing electron temperature during a flux eruption, as seen in the lower \(R\) models in Figure 3. For \(\nu\gg\nu_{\rm peak}\), on the other hand, as is the case in models with lower electron temperatures, the synchrotron emission scales with plasma parameters as \(j_{\nu}\propto n\theta_{e}^{-2}\exp[-5.5(\nu/\nu_{\rm peak})^{1/3}]\). The exponential dependence on \(T_{e}^{-2/3}\) implies that when the electron temperature is low the emission at 230 GHz increases with increasing electron temperature. The mm synchrotron emission will thus increase during a flux eruption, as seen in the higher \(R\) models in Figure 3. More generally, synchrotron emission at 230 GHz is particularly sensitive to electrons with temperatures corresponding to emission at \(\nu_{\rm peak}\sim 230\) GHz, i.e., \(\theta_{e}\simeq 16B_{10}^{-1/2}\). During a flux eruption whether the mm synchrotron flux increases or decreases thus depends on the details of electron heating for plasma with \(T_{e}\simeq 10^{11}\) K (which is much less than the characteristic fluid temperatures reached during the eruption in GRMHD simulations; see Figure 2).
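To make the dichotomy quantitative, one can evaluate Equation 7 directly; the short sketch below (with illustrative numbers of our choosing) recovers the \(\theta_{e}\simeq 16B_{10}^{-1/2}\) estimate quoted above.

```python
def nu_peak_GHz(B_gauss, theta_e):
    """Equation 7: nu_peak ~ 90 * (B / 10 G) * (theta_e / 10)^2 GHz."""
    return 90.0 * (B_gauss / 10.0) * (theta_e / 10.0) ** 2

B = 20.0                                 # illustrative field strength in G
theta_230 = 16.0 * (B / 10.0) ** -0.5    # ~11.3, i.e. T_e ~ 7e10 K
print(nu_peak_GHz(B, theta_230))         # ~230 GHz, as required

# Flare-heated plasma with T_e well above ~1e11 K (theta_e >~ 100) peaks far
# above 230 GHz, where j_nu ~ n B^{3/4} theta_e^{-1/2} falls as theta_e rises
# (mm dimming, low-R models); colder electrons sit below the peak, where the
# exponential factor makes j_nu rise steeply with theta_e (mm brightening).
```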
To better understand the dependence of the 230 GHz light curves on the electron temperature models, we ray-trace with various cuts (i.e. ignore certain regions of plasma based on different fluid quantities) to identify the characteristic fluid quantities in the emission region; the results are listed in Table 1. While the characteristic electron temperature \(T_{e}\) becomes lower for larger \(R\) models, the characteristic fluid temperature \(T_{\rm fluid}\) actually increases with \(R\) (see Equation 1). Roughly speaking, for \(R=1\), 10 and 100, the characteristic \(T_{\rm fluid}\) for 230 GHz emission is approximately \(4\times 10^{11}\) K, \(8\times 10^{11}\) K and \(4\times 10^{12}\) K, respectively, which does not change significantly between the snapshots. Therefore, for different \(R\) models the 230 GHz emission comes from different parts of the accretion flow (which we will specify further next) with different \(T_{\rm fluid}\), and thus may have different time evolution.
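The cut procedure behind Table 1 can be phrased as a cumulative-flux bracket; here is a minimal sketch (naming is ours), assuming per-cell contributions to the 230 GHz flux are available and absorption is negligible.

```python
import numpy as np

def emission_bracket(x, dflux, frac=0.15):
    """Return (x_low, x_high) such that cells with x <= x_low carry `frac`
    of the total flux and cells with x <= x_high carry 1 - frac, so the
    range (x_low, x_high) contributes 70% of the 230 GHz flux."""
    order = np.argsort(x)
    cum = np.cumsum(dflux[order]) / np.sum(dflux)
    x_low = x[order][np.searchsorted(cum, frac)]
    x_high = x[order][np.searchsorted(cum, 1.0 - frac)]
    return x_low, x_high
```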
In the left panel of Figure 4, we plot the relative plasma mass within different \(\log_{10}T_{\rm fluid}\) bins for the three snapshots. The \(t=9378M\) mid-flare state contains the most high-temperature (\(T_{\rm fluid}\gtrsim 2\times 10^{12}\) K) plasma, due to the energy released by reconnection in the equatorial current sheet, which is consistent with the right panel of Figure 1 where we use \(T_{\rm max}\) as a proxy of the mass of high temperature plasma. On the other hand, the mass of intermediate temperature \(7\times 10^{10}\) K \(\lesssim T_{\rm fluid}\lesssim 2\times 10^{12}\) K plasma keeps decreasing during the flare state, which we attribute to the evacuation of the inner accretion disk (see Figure 1 of Ripperda et al., 2022). We also show the characteristic \(T_{\rm fluid}\) for 230 GHz emission for \(R=1\) and 100 with the red and blue bands, defined as the range of \(T_{\rm fluid}\) in which the 230 GHz synchrotron emissivity is larger than half of the peak value over all \(T_{\rm fluid}\), assuming \(B=20\) G and fixed \(n_{e}\). The locations of the red and blue bands move somewhat if we choose e.g. \(B=5\) G or 50 G; this does not, however, change our main conclusions.
The left panel of Figure 4 elucidates the strong connection between the electron temperature model and the correlation or anti-correlation of the mm synchrotron flux with the magnetic flux eruption. For electron temperature models with \(R\sim 1-10\) the high temperature plasma created during the flux eruption does not radiate effectively at 230 GHz which is why the flux eruption is accompanied by a decreasing 230 GHz flux. By contrast, for electron temperature models with \(R\sim 30-100\), the high \(T_{\rm fluid}\) plasma has just the right electron temperature to emit significantly at 230 GHz. This is why the flux eruption is accompanied by an increased 230 GHz flux in higher \(R\) electron models.
The right panel of Figure 4 shows the ratio of the total plasma mass with \(\sigma_{\rm cut}=1\) to the total plasma mass with \(\sigma_{\rm cut}=10\). The regions with trans-relativistic \(\sigma\sim\sigma_{\rm cut}\) usually also have higher \(T_{\rm fluid}\), so one would expect that ray tracing calculations at higher frequencies or with lower electron temperature models (with large \(R\) in Equation 1), for which the emission is from the regions with higher \(T_{\rm fluid}\), are more sensitive to the choice of \(\sigma_{\rm cut}\). This is consistent with our convergence calculations in Appendix A2.
## 4 Images at 230 GHz
We show the intensity and polarization maps for three \(T_{e}\) models and the three typical snapshots in Figures 5 and 6. 3 For simplicity, we present three models with \(R_{\rm low}=R_{\rm high}\), while we find that the images with \(R_{\rm low}<R_{\rm high}\) are generally similar to images with \(R_{\rm low}=R_{\rm high}=R^{*}\) where the effective \(R^{*}\) lies between \(R_{\rm low}\) and \(R_{\rm high}\). Note that the \(R=100\) model mid-flare (\(t=9378M\)) image is approximately 4 times larger than the other panels in terms of area.
Footnote 3: We note that the polarization ticks in Figures 5, 7 and 11 of Jia et al. (2022) are not correctly plotted, although their quantitative results for the \(m\) and \(\beta_{2}\) statistics are not affected by this issue.
As with the light curves, we find two different regimes for 230 GHz images depending on the electron temperature model. For higher \(T_{e}\) models with \(R=1\) or 10, we only see a steady decline of the 230 GHz flux, but the ring morphology does not change much during the flares. As \(R\) increases, the quiescent state emission tends to move inwards, as the 230 GHz emission for larger \(R\) models comes from the region with lower \(T_{e}\) but higher \(T_{\rm fluid}\). With \(R=100\), the flare image changes significantly compared with quiescent states, since the hot electrons produced at larger radii \(r\gtrsim 5r_{g}\) during the flares dominate the 230 GHz emission. We note that not only do the characteristic radii of the emission increase, but the emission also extends to larger \(|\tilde{\theta}|\) (Table 1) and has contributions from both the current sheet and the heated jet sheath from reconnection exhaust (Ripperda et al., 2022), implying that thin disk semi-analytic models may no longer be suitable for modelling the emission for such cases. Comparing with the density and temperature maps in Figure 2, the \(T_{\rm fluid}\gtrsim 5\times 10^{12}\) K hot flow at \(t=9378\,M\) is only visible in the image with \(R=100\), since with lower \(R\lesssim 10\) it will be too hot to contribute significantly to the 230 GHz emission.
In Figure 7, we compute the 230 GHz image statistics introduced in Section 2. As we already concluded from the total intensity images, the colder electron models (larger \(R\)) generally show a smaller and thinner ring, since the emission comes from the inner regions with higher \(T_{\rm fluid}\) (but lower \(T_{e}\)). During the flares, the ring diameter \(d\) and width \(w\) increase while the fractional central brightness \(f_{\rm C}\) decreases, as the ejection of the inner disk moves the luminous plasma farther from the black hole. For higher electron temperature models with \(R\lesssim 10\), the ring orientation \(\eta\) does not change much since the ring asymmetry is mainly due to Doppler beaming of the accretion flow. On the other hand, for \(R=100\), the emission region is more extended and the ring asymmetry mainly comes from the asymmetric
distribution of hot plasma spiraling down into the black hole, which leads to a larger variation of the ring orientation. Larger \(R\) models generally produce less polarized images, as they need a larger fluid density to match the observed 230 GHz flux, which enhances Faraday depolarization. As the fluid density drops during the flare states in the region (the inner \(\sim 10\,r_{g}\)) where the accretion disk is ejected, the 230 GHz emission also becomes more polarized.
While magnetic reconnection changes the topology of the magnetic field during the flare states (Ripperda et al., 2022), the mean direction of the equatorial magnetic field,
\[\eta_{B}(r)\equiv\left\langle\arctan\left(\frac{\sqrt{g_{\phi\phi}}\left|B^{\phi}\right|}{\sqrt{g_{rr}}\,B^{r}\,\mathrm{sign}\left(B^{\phi}\right)}\right)\right\rangle_{\theta\in\left(75^{\circ},105^{\circ}\right),\,\phi\in\left(0^{\circ},360^{\circ}\right)}, \tag{8}\]
does not change substantially with time, as shown in the left panel of Figure 8. Here the range of the arctan function is set to \([0^{\circ},180^{\circ}]\), and \(\eta_{B}\) is invariant under a sign inversion of the magnetic field since the synchrotron emissivity is also unchanged. According to Equation 39 in Narayan et al. (2021), the leading order prediction of \(\mathrm{arg}[\beta_{2}]\) for an optically thin, axisymmetric, equatorial plasma and magnetic field profile is given by
\[\mathrm{arg}[\beta_{2}]\simeq\pi-2\,\eta_{B}, \tag{9}\]
for face-on observers from the south pole direction. This indeed gives a reasonable approximation of the actual \(\mathrm{arg}[\beta_{2}]\), as shown in the right panel of Figure 8. Therefore, for all the low \(R\) electron temperature models, which lead to optically thin 230 GHz synchrotron emission, \(\mathrm{arg}[\beta_{2}]\) does not change significantly with time, nor is it sensitive to the exact values of \(R_{\mathrm{low}}\) and \(R_{\mathrm{high}}\). On the other hand, the 230 GHz synchrotron emission becomes Faraday thick for large \(R\) models, for which Equation 9 no longer holds. In this case, we find strong Faraday depolarization (the intensity-weighted Faraday rotation depth \(\langle\tau_{\rho_{V}}\rangle\sim 5000\) for \(R_{\mathrm{low}}=R_{\mathrm{high}}=100\)) during the quiescent state, such that \(|\beta_{2}|\) is small and \(\mathrm{arg}[\beta_{2}]\) deviates from the predictions of Equation 9. During the flare state, the plasma
Figure 5: The intensity and polarization maps for three \(R=R_{\mathrm{low}}=R_{\mathrm{high}}\) models at three snapshots. The tick direction represents the direction of linear polarization, while the tick length is proportional to \(\sqrt{\mathcal{Q}^{2}+\mathcal{U}^{2}}\). Note the red labels which are different between different panels. For \(R=1\) or \(10\), the 230 GHz flux drops during the flares but the ring morphology does not change significantly. For \(R=100\), however, the emission region becomes much more spatially extended at \(t=9378M\) during the flare.
density drops and so does the Faraday rotation depth. Therefore, \(|\beta_{2}|\) increases and \(\arg[\beta_{2}]\) becomes closer to the optically-thin limit in Equation 9.
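The chain from the simulation field to the predicted polarization pattern (Equations 8-9) is short enough to sketch explicitly. The per-cell pitch angle below assumes the metric and field components are given on the equatorial grid; the names and sign conventions are ours.

```python
import numpy as np

def eta_B_deg(B_r, B_phi, g_rr, g_phph):
    """Equation 8 per cell: equatorial field pitch angle, folded into [0, 180).

    The sign(B_phi) convention makes eta_B invariant under B -> -B, like
    the synchrotron emissivity.
    """
    num = np.sqrt(g_phph) * np.abs(B_phi)
    den = np.sqrt(g_rr) * B_r * np.sign(B_phi)
    ang = np.degrees(np.arctan2(num, den))   # num >= 0, so ang in [0, 180]
    return np.mod(ang, 180.0)

def arg_beta2_deg(eta_B):
    """Equation 9, optically thin limit: arg[beta_2] ~ 180 deg - 2 * eta_B."""
    return np.mod(180.0 - 2.0 * eta_B, 360.0)
```

For instance, a nearly azimuthal equatorial field (\(\eta_{B}\simeq 90^{\circ}\)) predicts \(\arg[\beta_{2}]\simeq 0^{\circ}\), far from the EHT-favoured range quoted below.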
We note that here \(\arg[\beta_{2}]\) is inconsistent with EHT measurements \(197^{\circ}\leq\arg[\beta_{2}]\leq 231^{\circ}\) (Event Horizon Telescope Collaboration et al., 2021) for all the snapshots and electron temperature models, implying that the magnetic field is probably too azimuthal at \(r\lesssim 5r_{\rm g}\) in the large spin MAD simulations (Narayan et al., 2021). This is consistent with previous theoretical work, e.g., Figure 28 of Event Horizon Telescope Collaboration et al. (2021).
## 5 Multi-wavelength synchrotron spectra
In this section, we go beyond 230 GHz and compute the synchrotron emission spectra from \(10^{10}\) to \(10^{15}\) Hz; the results are shown in Figure 9, with each curve interpolated between 11 different frequencies. We use the same density normalization as in the previous sections, namely the time averaged 230 GHz flux should be 0.66 Jy to match EHT observations. Generally, the \(t=9378M\) mid-flare state has the largest flux at higher frequencies (\(\gtrsim 10^{13}\) Hz), since there is more plasma heated up by reconnection in the current sheets. Note that this implies IR and even higher-energy "flares" associated with flux eruptions nearly independent of whether the mm brightens or fades (the exception is the \(R_{\rm low}=1,R_{\rm high}=100\) model). The spectra in Figure 9 drop faster at higher frequencies for lower electron temperature models, as there are not many electrons that are hot enough to emit at such frequencies.
In this calculation we only include thermal electrons and synchrotron emission, whereas a better modelling of non-thermal electrons, pair production, and thermal and non-thermal inverse Compton scattering is required for a quantitative analysis of the spectra at higher frequencies (X-ray and \(\gamma\)-ray, and even optical-IR for M87*); such modelling is beyond the scope of this paper (see, e.g., Ryan et al., 2018; Hakobyan et al., 2022 for work including more of the relevant radiative processes). Another uncertainty comes from the choice of \(\sigma_{\rm cut}\): unlike the 230 GHz computations, we find that the high frequency ray tracing results are sensitive to \(\sigma_{\rm cut}\), since the \(\sigma\sim\sigma_{\rm cut}\) region may have higher temperature and thus dominate the emission at \(\gtrsim 10^{13}\) Hz. Here we use \(\sigma_{\rm cut}=10\) for the spectra, in contrast to \(\sigma_{\rm cut}=1\) for 230 GHz emission in the previous sections. We
Figure 6: Similar to Figure 5, but blurred with a \(20\,\mu\)as FWHM Gaussian kernel, which matches the current EHT resolution. The polarization pattern does not change much for \(R=1\) or \(10\), but for \(R=100\) it becomes noticeably more polarized at \(t=9378M\).
find that with \(\sigma_{\rm cut}=1\) and lower \(T_{e}\) models, the emission at \(\gtrsim 10^{14}\) Hz basically drops to zero for many snapshots, as all the electrons that are hot enough to emit at such high frequencies are removed by \(\sigma_{\rm cut}=1\). Nevertheless, Figure 9 does confirm the qualitative trend that the high frequency flux increases during the flare state, which is the origin of the observed bright _flares_.
## 6 Discussion
A promising source of high energy flares in accreting black holes such as Sgr A*, M87*, and related systems is reconnection in near-horizon current sheets. Such reconnection is particularly prominent and energetically important during magnetic flux eruptions in MAD accretion models. This model is attractive because it qualitatively explains the timescales and duty cycles of the observed flares as well as many aspects of the observed radiation (e.g. Dodds-Eden et al., 2010; Dexter et al., 2020; Porth et al., 2021). Intriguingly, this model also associates the flares with a dynamically critical aspect of the theoretical model, namely the magnetic flux eruptions and the associated magnetic energy dissipation required for accretion to continue in spite of the energetically dominant magnetic energy in the system. The main goal of this paper has been to study the 230 GHz emission associated with the same magnetic flux eruptions posited to produce the high energy flares.
Should the high energy flares in fact be accompanied by a mm counterpart? The short answer is _maybe_, depending on how efficiently electrons with Lorentz factors of \(\sim 100\) (temperatures of \(\sim 10^{11}\) K) are heated during the flare. In the context of GRMHD modeling like that employed in this work, this depends on the relation between \(T_{e}\) and \(T_{\rm fluid}\) (the GRMHD simulation temperature), which unfortunately is still poorly understood. The energy released by reconnection heats up the plasma to \(T_{\rm fluid}\gtrsim 2\times 10^{12}\) K. However, since the accretion flow in low-luminosity AGN very likely has different electron temperatures than proton temperatures, one needs a prescription for the electron temperature that determines the synchrotron emissivity.
Figure 8: _Left:_ the mean direction of the equatorial magnetic field \(\eta_{B}\), as defined in Equation 8. \(\eta_{B}\) does not change significantly with time, nor is it sensitive to the radius within \(\sim 10\,r_{g}\). _Right:_ dashed lines represent the actual image-averaged \(\arg[\beta_{2}]\) from the \(R_{\rm low}=R_{\rm high}=1\) images, which is close to the results of all low \(R\) models (see Figure 7). Solid lines show the semi-analytic computation of \(\arg[\beta_{2}]\) in Equation 9, which is a reasonable approximation of the actual measured \(\arg[\beta_{2}]\) from ray tracing images.
Figure 7: Image statistics as a function of time for five different (\(R_{\rm low}\), \(R_{\rm high}\)) electron temperature models, smoothed by a moving average with a window of \(150\)\(M\); see Section 2 for the exact definition of the statistics. The grey bands indicate the three major flare states. During the flares, we see an increase of ring diameter \(d\), width \(w\) and degree of asymmetry \(A\), and a decrease of fractional central brightness \(f_{C}\), due to the ejection of the inner disk. The other statistics are similar between flare and quiescent states.
With higher \(T_{e}\) models like \(R\equiv T_{p}/T_{e}=R_{\rm low}=R_{\rm high}=1\), we find that the \(T_{\rm fluid}\gtrsim 2\times 10^{12}\,\)K hot plasma contributes negligible emission at 230 GHz, and the 230 GHz emission gradually _dims_ during the flare, although at higher frequencies the synchrotron flux does increase. We have also studied the spatially resolved emission during the flares in the context of future EHT observations. We find that for models in which the 230 GHz emission dims, the diameter of the high surface brightness "ring" of emission slightly increases and the fractional central brightness decreases, due to the ejection of the inner accretion disk. The polarization is similar between quiescent and flare states because the equatorial magnetic field direction does not change much during the flares (Figure 8).
On the other hand, with lower \(T_{e}\) models like \(R\equiv T_{p}/T_{e}=R_{\rm low}=R_{\rm high}=100\), the flare-state hot electrons are just the right temperature to emit 230 GHz synchrotron radiation, which leads to a 230 GHz flare _brightening_, simultaneous with the high energy flares. Such hot electrons typically have a broader spatial distribution out to \(r\lesssim 15\,r_{g}\) (Figure 2), so that the size of the ring also increases significantly. The polarization fraction increases while the orientation of the polarization (\(\arg[\beta_{2}]\)) fluctuates during the flares, which we attribute to the evacuation of the inner accretion disk leading to less Faraday depolarization. It is important to stress that the mm brightening for \(R=100\) models found here may depend on the magnetization ceiling of \(\sigma_{\rm floor}=25\) used in the GRMHD simulation. Higher \(\sigma_{\rm floor}\) implies a higher temperature \(T_{\rm fluid}\) of reconnection-heated plasma (and vice-versa; see Ripperda et al., 2020) and so the appropriate value of \(R\) that corresponds to the transition between mm brightening and dimming during flares likely increases with increasing \(\sigma_{\rm floor}\) (such that \(T_{e}\sim 10^{11}\,\)K). We also note that models with \(R=R_{\rm low}\sim R_{\rm high}\sim 100\) are disfavored for explaining the quiescent emission from M87* and Sgr A* (e.g. Bower et al., 2003; Marrone et al., 2007; Event Horizon Telescope Collaboration et al., 2019, 2021), in part because such models have high densities and thus too little linear polarization due to Faraday depolarization. This does not, however, rule out that _during flares_, models with \(R=R_{\rm low}\sim R_{\rm high}\sim 100\) could be appropriate for describing the electron distribution in the near-horizon environment.
In reality, plasma in near-horizon current sheets (i.e., at the base of a jet or magnetospheric region) and in the reconnection exhaust ejected into the disk and jet boundary, consists of electron-positron pairs (as opposed to floored matter in GRMHD). This plasma is then heated by reconnection, up to temperatures similar to the jet's magnetization (Ripperda et al., 2020, 2022) and limited by radiative cooling, instead of being limited by the numerically enforced \(\sigma_{\rm floor}\) in GRMHD simulations. For highly magnetized plasma feeding the reconnection (e.g., \(\sigma\geq 10^{7}\) for M87's jet, Hakobyan et al., 2022), the reconnection-accelerated particles, heated to \(T\sim\sigma\), are unlikely to emit much radiation at lower photon energies (e.g., in the mm or IR). Therefore, we argue that the high electron temperature models presented in this paper (e.g., \(R=1\)) best capture the real physics of the reconnection heated plasma during magnetic flux eruptions. These models predict a dimming of the millimeter emission during high-energy flares. However, if the magnetization of the jet feeding the reconnection is much smaller (\(\sigma\ll 10^{6}\); Crinquand et al., 2022), there may be enough non-thermal electrons emitting at submillimeter wavelengths to produce a flux comparable to the quiescent emission observed by Event Horizon Telescope Collaboration et al. (2019, 2022). Models with \(R=R_{\rm low}\sim R_{\rm high}\sim 100\) correspond in principle to this scenario of a less magnetized jet that feeds the reconnection layer, resulting in brightening of submillimeter wavelength emission during high-energy flares.
The properties of mm emission during magnetic flux eruptions depend most sensitively on the heating/acceleration of \(\gamma\sim 100\) electrons, since those particles emit most of their synchrotron radiation in the mm. As we have just argued, this is expected to be inefficient for reconnection in strongly magnetized plasmas (Sironi and Spitkovsky, 2014), which would predict mm dimming coincident with high-energy flares. However, the interaction between the reconnecting current sheet and the bulk of the disk at somewhat larger radii is complex, could be sourced by less magnetized plasma, and could dominate the heating of plasma responsible for the mm emission. It
Figure 9: Synchrotron emission spectra from radio to UV for the \(t=9113M\) pre-flare, \(t=9378M\) mid-flare and \(t=9643M\) post-flare states. The mid-flare state is generally the brightest at higher frequencies; however, we note that non-thermal electrons and inverse Compton emission should be properly modeled for a quantitative prediction of the higher frequency spectra.
is also not at all clear that this interaction is well-modeled by existing GRMHD simulations (see Galishnikova et al. 2022 for a comparison of first principles general relativistic particle-in-cell (GRPIC) and GRMHD models of magnetic flux eruptions).
To understand better whether the mm emission during a flare can brighten, it will ultimately be necessary to model the spatial and temporal dependence of the electron distribution, taking into account particle acceleration due to magnetic reconnection (and other processes) in the near-horizon environment. The physics of particle heating and acceleration in the flare and quiescent states could also be significantly different (e.g., because the former is dominated by higher magnetization plasma than the latter). This highlights a significant shortcoming of using a simple time-independent prescription \(T_{e}(T_{\rm fluid})\) to model the emission and variability in systems like M87* and Sgr A*. This is particularly true for MAD models that feature such physically distinct magnetic flux eruptions.
Daily bright and rapid flares have been observed from Sgr A* in X-ray, IR and mm wavelengths. These show, however, different types of multi-band light curves in different flares, implying that they may be powered by different mechanisms. For example, Figure 2 of Fazio et al. (2018) shows that the mm _brightens_ simultaneously with the IR flare, consistent with the low electron temperature regime in this work. On the other hand, Figure 1 of Yusef-Zadeh et al. (2010) shows that the mm _dims_ during the IR flare (see also Wielgus et al. 2022 for mm dimming during an X-ray flare), consistent with the high electron temperature regime in this work. Correlated changes in the image size and polarization as predicted in this paper would clarify whether the difference between these two types of flares is indeed the electron temperature the plasma is heated to during the flare. Figure 24 of Yusef-Zadeh et al. (2009) and Figure 3 of Trap et al. (2011) show a third type of phenomenology: the mm flux does not change much during an IR flare, but increases later after a significant delay. This is not captured by any electron temperature model in this work. This further suggests that simple time-independent \(T_{e}(T_{\rm fluid})\) prescriptions on top of ideal GRMHD simulations are not adequate to comprehensively explain the flare state observational signatures, which still requires a better understanding of particle acceleration around accreting black holes.
## Acknowledgements
We are grateful to Angelo Ricarte for helpful comments on our draft. BR would like to thank Jordy Davelaar for useful discussions. EQ was supported in part by a Simons Investigator grant from the Simons Foundation. This research was enabled by support provided by grant No. NSF PHY-1125915 along with an INCITE program award PHY129, using resources from the Oak Ridge Leadership Computing Facility, Summit, which is a US Department of Energy Office of Science User Facility supported under contract DE-AC05-00OR22725, as well as Calcul Quebec ([http://www.calculquebec.ca](http://www.calculquebec.ca)) and Compute Canada ([http://www.computecanada.ca](http://www.computecanada.ca)). The analysis presented in this article was performed in part on computational resources managed and supported by Princeton Research Computing, a consortium of groups including the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center and Visualization Laboratory at Princeton University. AP acknowledges support by NASA grant 80NSSC22K1054 and NSF grant PHY-2231698. This research was facilitated by the Multimessenger Plasma Physics Center (MPPC), NSF grant PHY-2206607. The computational resources and services used in this work were partially provided by facilities supported by the Scientific Computing Core at the Flatiron Institute, a division of the Simons Foundation. This research is part of the Frontera computing project at the Texas Advanced Computing Center (LRAC-AST21006). Frontera is made possible by National Science Foundation award OAC-1818253. Support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51518.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
## Data Availability
The data underlying this paper will be shared on reasonable request to the corresponding author.
|
2309.02763 | Once-Marking and Always-Marking 1-Limited Automata | Single-tape nondeterministic Turing machines that are allowed to replace the
symbol in each tape cell only when it is scanned for the first time are also
known as 1-limited automata. These devices characterize, exactly as finite
automata, the class of regular languages. However, they can be extremely more
succinct. Indeed, in the worst case the size gap from 1-limited automata to
one-way deterministic finite automata is double exponential.
Here we introduce two restricted versions of 1-limited automata, once-marking
1-limited automata and always-marking 1-limited automata, and study their
descriptional complexity. We prove that once-marking 1-limited automata still
exhibit a double exponential size gap to one-way deterministic finite automata.
However, their deterministic restriction is polynomially related in size to
two-way deterministic finite automata, in contrast to deterministic 1-limited
automata, whose equivalent two-way deterministic finite automata in the worst
case are exponentially larger. For always-marking 1-limited automata, we prove
that the size gap to one-way deterministic finite automata is only a single
exponential. The gap remains exponential even in the case the given machine is
deterministic.
We obtain other size relationships between different variants of these
machines and finite automata and we present some problems that deserve
investigation. | Giovanni Pighizzini, Luca Prigioniero | 2023-09-06T06:20:24Z | http://arxiv.org/abs/2309.02763v1 | # Once-Marking and Always-Marking \(1\)-Limited Automata
###### Abstract
Single-tape nondeterministic Turing machines that are allowed to replace the symbol in each tape cell only when it is scanned for the first time are also known as \(1\)-limited automata. These devices characterize, exactly as finite automata, the class of regular languages. However, they can be extremely more succinct. Indeed, in the worst case the size gap from \(1\)-limited automata to one-way deterministic finite automata is double exponential.
Here we introduce two restricted versions of \(1\)-limited automata, _once-marking \(1\)-limited automata_ and _always-marking \(1\)-limited automata_, and study their descriptional complexity. We prove that once-marking \(1\)-limited automata still exhibit a double exponential size gap to one-way deterministic finite automata. However, their deterministic restriction is polynomially related in size to two-way deterministic finite automata, in contrast to deterministic \(1\)-limited automata, whose equivalent two-way deterministic finite automata in the worst case are exponentially larger. For always-marking \(1\)-limited automata, we prove that the size gap to one-way deterministic finite automata is only a single exponential. The gap remains exponential even in the case the given machine is deterministic.
We obtain other size relationships between different variants of these machines and finite automata and we present some problems that deserve investigation.
## 1 Introduction
In 1967, with the aim of generalizing the concept of determinism for context-free languages, Hibbard introduced _limited automata_, a restricted version of Turing machines [4]. More precisely, for each fixed integer \(d\geq 0\), a _\(d\)-limited automaton_ is a single-tape nondeterministic Turing machine that is allowed to replace the content of each tape cell only in the first \(d\) visits.
Hibbard proved that, for each \(d\geq 2\), \(d\)-limited automata characterize the class of context-free languages. For \(d=0\) these devices cannot modify the input tape, hence they are two-way finite automata, so characterizing regular languages. Furthermore, also \(1\)-limited automata are no more powerful than finite automata. The proof of this fact can be found in [20, Thm. 12.1].
The investigation of these models has been reconsidered in the last decade, mainly from a descriptional point of view. Starting with [9, 10], several works investigating properties of limited automata and their relationships with other computational models appeared in the literature (for a recent survey see [8]).
In this paper we focus on \(1\)-limited automata. We already mentioned that these devices are no more powerful than finite automata, namely they recognize the class of regular languages. However, they can be dramatically more succinct than finite automata. In fact, a double exponential size gap from \(1\)-limited automata to one-way deterministic finite automata has been proved [9]. In other words, every \(n\)-state \(1\)-limited automaton can be simulated by a one-way deterministic automaton with a number of states which is double exponential in \(n\). Furthermore, in the worst case, this cost cannot be reduced.
As pointed out in [9], this double exponential gap is related to a double role of the nondeterminism in 1-limited automata. When the head of a 1-limited automaton reaches a tape cell for the first time, it replaces the symbol in it according to a nondeterministic choice. Furthermore, the set of nondeterministic choices allowed during the next visits to the same cell depends on the symbol written in the first visit, which cannot be further changed; namely, it depends on the nondeterministic choice made during the first visit.
With the aim of better understanding this phenomenon, we started to investigate some restrictions of 1-limited automata. On the one hand, we are interested in finding restrictions that reduce this double exponential gap to a single exponential. We already know that this happens for _deterministic_ 1-limited automata [9]. So the problem is finding some restrictions that, still allowing nondeterministic transitions, avoid the double exponential gap. On the other hand, we are also interested in finding some very restricted forms of 1-limited automata for which a double exponential size gap in the conversion to one-way deterministic automata remains necessary in the worst case.
A first attempt could be requiring deterministic rewritings, according to the current configuration of the machine, every time cells are visited for the first time, still keeping nondeterministic the choice of the next state and head movement. Another attempt could be to allow nondeterministic choices for the symbol to rewrite, but not for the next state and the head movement. In both cases the double exponential gap to one-way deterministic finite automata remains possible. Indeed, in both cases, different computation paths can replace the same input prefix on the tape with different strings, as in the original model. Actually, we noticed that the double exponential gap can be achieved already for 1-limited automata that, in each computation, have the possibility to mark just one tape cell leaving the rest of the tape unchanged. This inspired us to investigate machines with such a restriction, which we call _once-marking \(1\)-limited automata_. We show that the double exponential size gap to one-way deterministic finite automata remains possible even for once-marking 1-limited automata that are _sweeping_ (namely, change the head direction only at the left or right end of the tape) and that are allowed to use nondeterminism only in the first visit to tape cells. Comparing the size of once-marking 1-limited automata with other kinds of finite automata, we prove an exponential gap to two-way nondeterministic automata. The situation changes significantly when nondeterministic transitions are not possible. Indeed, we prove that every deterministic once-marking 1-limited automaton can be converted into an equivalent two-way deterministic finite automaton with only a polynomial size increasing. The costs we obtain concerning once-marking 1-limited automata are summarized in Figure 2.
As mentioned above, the double exponential gap from 1-limited automata to one-way deterministic finite automata is related to the fact that different computation paths can replace the same input prefix on the tape with different strings. This suggested the idea of considering a different restriction, which prevents this possibility, by requiring the replacement of each input symbol \(a\) with a symbol that depends only on \(a\). To this aim, here we introduce _always-marking \(1\)-limited automata_, that in the first visit replace each symbol with a marked version of it. We show that in this case the gap from these devices, in the nondeterministic version, to one-way deterministic finite automata reduces to a single exponential. The same gap holds when converting always-marking 1-limited automata into one-way nondeterministic finite automata, but even when converting _deterministic_ always-marking 1-limited automata into _two-way nondeterministic_ finite automata. The bounds we obtain concerning always-marking 1-limited automata are summarized in Figure 3.
The paper is organized as follows. After presenting in Section 2 the preliminary notions used in the paper and, in particular, the definition of 1-limited automata with the fundamental results on their descriptional complexity, in Section 3 we introduce once-marking and always-marking 1-limited automata,
together with some witness languages that will be useful to obtain our results. Sections 4 and 5 are devoted to the investigation of the descriptional complexity of these models. We conclude the paper presenting some final remarks and possible lines for future investigations.
## 2 Preliminaries
In this section we recall some basic definitions useful in the paper. Given a set \(S\), \(\#S\) denotes its cardinality and \(2^{S}\) the family of all its subsets. Given an alphabet \(\Sigma\), a string \(w\in\Sigma^{*}\), and a symbol \(a\in\Sigma\), \(|w|\) denotes the length of \(w\), \(\Sigma^{k}\) the set of all strings on \(\Sigma\) of length \(k\), \(\hat{a}\) the _marked version_ of \(a\), and \(\hat{\Sigma}=\{\hat{a}\mid a\in\Sigma\}\) the set of the marked versions of the symbols in \(\Sigma\).
We assume the reader is familiar with notions from formal languages and automata theory, in particular with the fundamental variants of finite automata (1dfas, 1nfas, 2dfas, 2nfas, for short, where 1/2 mean _one-way/two-way_ and d/n mean _deterministic/nondeterministic_, respectively). For any unfamiliar terminology see, e.g., [5].
A \(1\)_-limited automaton_ (1-la, for short) is a tuple \(\mathcal{A}=(Q,\Sigma,\Gamma,\delta,q_{I},F)\), where \(Q\) is a finite _set of states_, \(\Sigma\) is a finite _input alphabet_, \(\Gamma\) is a finite _work alphabet_ such that \(\Sigma\cup\{\rhd,\lhd\}\subseteq\Gamma\), where \(\rhd,\lhd\notin\Sigma\) are two special symbols, called the _left_ and the _right end-markers_, and \(\delta:Q\times\Gamma\to 2^{Q\times(\Gamma\setminus\{\rhd,\lhd\})\times\{-1,+1\}}\) is the _transition function_. At the beginning of the computation, the input word \(w\in\Sigma^{*}\) is stored onto the tape surrounded by the two end-markers, the left end-marker being in position zero and the right end-marker being in position \(|w|+1\). The head of the automaton is on cell 1 and the state of the finite control is the _initial state_ \(q_{I}\).
In one move, according to \(\delta\) and the current state, \(\mathcal{A}\) reads a symbol from the tape, changes its state, replaces the symbol just read from the tape by a new symbol, and moves its head to one position forward or backward. Furthermore, the head cannot pass the end-markers, except at the end of computation, to accept the input, as explained below. Replacing symbols is allowed to modify the content of each cell only during the first visit, with the exception of the cells containing the end-markers, which are never modified. Hence, after the first visit, a tape cell is "frozen".1
Footnote 1: More technical details can be found in [9]. However, a syntactical restriction forcing 1-la to replace in the first visit to each tape cell the input symbol in it with another symbol from an alphabet \(\Gamma_{1}\) disjoint from \(\Sigma\), was given. Here we drop this restriction, in order to be able to see once-marking 1-la as a restriction of 1-la. It is always possible to transform a 1-la into an equivalent 1-la satisfying such a syntactical restriction, just extending \(\Gamma\) with a marked copy of \(\Sigma\) and suitably modifying the transition function.
The automaton \(\mathcal{A}\) accepts an input \(w\) if and only if there is a computation path that starts from the initial state \(q_{I}\) with the input tape containing \(w\) surrounded by the two end-markers and the head on the first input cell, and that ends in a _final state_\(q\in F\) after passing the right end-marker. The device \(\mathcal{A}\) is said to be _deterministic_ (d-1-la, for short) whenever \(\#\delta(q,\sigma)\leq 1\), for any \(q\in Q\) and \(\sigma\in\Gamma\).
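To make the "frozen after the first visit" semantics concrete, here is a minimal simulator for the deterministic case. The encoding of \(\delta\) as a dictionary and the crude cycle detection are our choices for the sketch, not part of the formal definition.

```python
def run_d1la(delta, q0, finals, w, lend=">", rend="<"):
    """Run a deterministic 1-limited automaton on input w.

    delta: dict (state, symbol) -> (state, written_symbol, move) with
    move in {-1, +1}.  Accepts iff the head passes the right end-marker
    in a final state.  A cell can be rewritten only on its first visit;
    the end-markers are never rewritten.
    """
    tape = [lend] + list(w) + [rend]
    fresh = [False] + [True] * len(w) + [False]
    q, pos = q0, 1
    seen = set()                       # a repeated configuration means cycling
    while True:
        if pos == len(tape):           # head has passed the right end-marker
            return q in finals
        cfg = (q, pos, tuple(tape))
        if cfg in seen:
            return False               # looping computation cannot accept
        seen.add(cfg)
        step = delta.get((q, tape[pos]))
        if step is None:
            return False               # the device halts without accepting
        q, sym, move = step
        if fresh[pos]:                 # cell is "frozen" after the first visit
            tape[pos] = sym
            fresh[pos] = False
        pos += move                    # a well-formed delta never moves the
                                       # head past the left end-marker
```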
Two-way finite automata are limited automata in which no rewritings are possible. On the other hand, one-way finite automata can scan the input in a one-way fashion only. A finite automaton is, as usual, a tuple \((Q,\Sigma,\delta,q_{I},F)\), where, analogously to 1-las, \(Q\) is the finite set of states, \(\Sigma\) is the finite input alphabet, \(\delta\) is the transition function, \(q_{I}\) is the initial state, and \(F\) is the set of final states. We point out that for two-way finite automata we assume the same accepting conditions as for 1-las.
Two-way machines in which the direction of the head can change only at the end-markers are said to be _sweeping_[19].
In this paper we are interested to compare the size of machines. The _size_ of a model is given by the total number of symbols used to write down its description. Therefore, the size of 1-las is bounded by
a polynomial in the number of states and of work symbols, while, in the case of finite automata, since no writings are allowed, the size is linear in the number of instructions and states, which is bounded by a polynomial in the number of states and in the number of input symbols.
The size costs of the simulations from 1-las to finite automata have been studied in [9] and are summarized in Figure 1.
## 3 Witness Languages and Variants of 1-Limited Automata
As mentioned in the introduction, 1-las can be very succinct. In fact, for some languages the size gap to 1dfa is double exponential. We already observed that this gap is related to nondeterminism. Indeed, if nondeterministic choices are not possible, the gap reduces to a single exponential (see Figure 1). However, we want to understand better on the one hand how much we can restrict the model, still keeping this double exponential gap and, on the other hand, if there is a restriction that, still allowing some kind of nondeterminism, reduces the gap to a single exponential.
In our investigations, the following language, which is defined with respect to an integer parameter \(n>0\), will be useful:
\[K_{n}=\left\{x_{1}\cdots x_{k}\cdot x\mid k>0,\;x_{1},\ldots,x_{k},x\in\{a,b \}^{n},\;\exists j\in\{1,\ldots,k\},\;x_{j}=x\right\}.\]
We point out that each string in the language is a list of blocks of length \(n\). We ask the membership of the last block to the list of previous ones.
**Theorem 1**.: _The language \(K_{n}\) is accepted by a 1-la with \(O\left(n\right)\) states that, in each accepting computation, replaces the content only of one cell._
Proof.: A 1-la \(\mathcal{M}\) can scan the tape from left to right, marking a nondeterministically chosen tape cell. In this scan, \(\mathcal{M}\) can also verify that the input length is a multiple of \(n\). Furthermore, the marking can be done in the last cell of a block of length \(n\). For this phase \(O\left(n\right)\) states are enough.
Figure 1: Size costs of conversions of 1-las and d-1-las into equivalent one-way and two-way deterministic and nondeterministic finite automata. For all the costs upper and matching lower bounds have been proved, with the only exception of (a) and (b), for which the best known lower and upper bounds are, respectively, exponential and double exponential.
Then the machine has to compare the symbols in the last block with the symbols in the chosen one, namely the block which ends with the marked cell. This can be done by moving the head back and forth from the last block to the chosen block, comparing the symbols in the corresponding positions in the two blocks, and rejecting in case of mismatch. Again, this can be implemented, using a counter modulo \(n\), with \(O\left(n\right)\) states.
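The recognition strategy in the proof is easy to mirror in code; the sketch below replaces the nondeterministic marking by an exhaustive loop over the candidate blocks (an illustrative reference check of our own, not a state-level simulation of \(\mathcal{M}\)).

```python
def in_K(w, n):
    """Membership in K_n: w = x_1 ... x_k x with some block x_j equal to x."""
    if n <= 0 or len(w) < 2 * n or len(w) % n != 0:
        return False
    last = w[-n:]                      # the block x to be matched
    k = len(w) // n - 1                # number of candidate blocks x_1 ... x_k
    # each choice of j plays the role of one nondeterministic marking
    return any(w[j * n:(j + 1) * n] == last for j in range(k))
```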
Using standard distinguishability arguments, it can be proved that to accept \(K_{n}\), a 1dfa requires a number of states double exponential in \(n\) (state lower bounds for \(K_{n}\) are summarized in Theorem 2 below).
Hence, the language \(K_{n}\) is a witness of the double exponential gap from 1-las to 1dfas. From Theorem 1, we can notice that this gap is obtained by using the capabilities of 1-las in a very restricted way: during each accepting computation, only the content of one cell is modified. This suggested considering the following restricted version of 1-las:
**Definition 1**.: _A 1-la is said to be once marking if in each computation there is a unique tape cell whose input symbol \(\sigma\) is replaced with its marked version \(\hat{\sigma}\), while all the remaining cells are never changed._
In the following, for brevity, we indicate once-marking 1-las and once-marking d-1-las as om-1-las and d-om-1-las, respectively.
We shall consider another restriction, in which the 1-la marks, in the first visit, every cell reached by the head.
**Definition 2**.: _A 1-la is said to be always marking if, each time the head visits a tape cell for the first time, it replaces the input symbol \(\sigma\) in it with its marked version \(\hat{\sigma}\)._
In the following, for brevity, we indicate always-marking 1-las and always-marking d-1-las as am-1-las and d-am-1-las, respectively.
We point out that om-1-las and am-1-las use the work alphabet \(\Gamma=\Sigma\cup\hat{\Sigma}\cup\{\succ,\prec\}\). Hence, the relevant parameter for evaluating the size of these devices is their number of states, unlike general 1-las, in which the size of the work alphabet is not fixed.
We present another language that will be used in the paper. As \(K_{n}\), it is defined with respect to a fixed integer \(n>0\):
\[J_{n}=\left\{x\cdot x_{1}\cdots x_{k}\mid k>0,\;x_{1},\ldots,x_{k},x\in\{a,b \}^{n},\;\exists j\in\{1,\ldots,k\},\;x_{j}=x\right\}.\]
As for \(K_{n}\), a string in \(J_{n}\) is a list of blocks of length \(n\). Here membership asks whether the first block occurs in the subsequent list. Notice that \(J_{n}\) is the reversal of \(K_{n}\).
We have the following lower bounds:
**Theorem 2**.: _Let \(n>0\) be an integer._
* _To accept_ \(J_{n}\)_, 1dfas and 1nfas need at least_ \(2^{n}\) _states, while 2nfas need at least_ \(2^{\frac{n-1}{2}}\) _states._
* _To accept_ \(K_{n}\)_, 1dfas need_ \(2^{2^{n}}\) _states, 1nfas need at least_ \(2^{n}\) _states, and 2nfas need at least_ \(2^{\frac{n-1}{2}}\) _states._
Proof.: (sketch) The lower bounds for one-way machines can be proved using standard distinguishability arguments and the fooling set technique [2] (see [9, 14] for similar proofs with slightly different languages).
Using a standard conversion, from a \(k\)-state 2nfa accepting \(K_{n}\) we can obtain an equivalent 1dfa with no more than \(2^{k+k^{2}}\) states [15, 17]. Since every 1dfa accepting \(K_{n}\) must have at least \(2^{2^{n}}\) states, we get that \(k+k^{2}\geq 2^{n}\). Hence \(k\) grows exponentially in \(n\); in particular, it can be verified that \(k>2^{\frac{n-1}{2}}\) (see the computation below). Since from each 2nfa accepting a language we can easily obtain a 2nfa with a constant amount of extra states accepting the reversal of that language, we conclude that the number of states of each 2nfa accepting \(J_{n}\) or \(K_{n}\) must be at least exponential in \(n\).
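The step from \(k+k^{2}\geq 2^{n}\) to the bound on \(k\) can be made explicit as follows (a short computation of ours, filling in the omitted arithmetic): for \(k\geq 1\) we have \(k\leq k^{2}\), hence

\[2k^{2}\;\geq\;k+k^{2}\;\geq\;2^{n},\qquad\text{so}\qquad k^{2}\geq 2^{n-1}\quad\text{and}\quad k\geq 2^{\frac{n-1}{2}}.\]

Moreover, for \(n>1\) the value \(k=2^{\frac{n-1}{2}}\) would give \(k+k^{2}=2^{\frac{n-1}{2}}+2^{n-1}<2^{n}\), so the inequality is strict, i.e., \(k>2^{\frac{n-1}{2}}\).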
## 4 Once-Marking 1-Limited Automata
During each computation, _once-marking 1-limited automata_ are able to mark just one input cell.
From Theorem 1, we already know that the language \(K_{n}\) can be accepted by an om-1-la with \(O\left(n\right)\) states. We now show that such a machine can be turned into an even more restricted form:
**Theorem 3**.: _The language \(K_{n}\) is accepted by an om-1-la with \(O\left(n\right)\) states that is sweeping and uses nondeterministic transitions only in the first traversal of the tape._
Proof.: We discuss how to modify the \(O\left(n\right)\)-state om-1-la \(\mathcal{M}\) described in the proof of Theorem 1 in order to obtain a sweeping machine that uses nondeterministic transitions only in the first sweep. \(\mathcal{M}\) makes a first scan of the input, exactly as described in the proof of Theorem 1. In this scan the head direction is never changed. When the right end-marker is reached, \(\mathcal{M}\) makes \(n\) iterations, which in the following description will be counted from \(0\) to \(n-1\).
The purpose of the iteration \(i\), \(i=0,\ldots,n-1\), is to compare the \((n-i)\)th symbols of the last block and of the chosen one. To this aim, the iteration starts with the head on the right end-marker, and uses a counter modulo \(n\), initialized to \((i+1)\) mod \(n\). The counter is decremented while moving to the left. In this way, it contains \(0\) exactly while visiting the \((n-i)\)th cell of each input block. Hence, the automaton can easily locate the \((n-i)\)th symbols of the last block and of the chosen one and check if they are equal. Once the left end-marker is reached, \(\mathcal{M}\) can cross the tape from left to right, remembering the number \(i\) of the iteration. Notice that \(\mathcal{M}\) does not need to keep this number while moving from right to left. Indeed the value of \(i\) can be recovered from the value of the counter when the left end-marker is reached.
Once iteration \(i\) is completed, if the last check was unsuccessful then \(\mathcal{M}\) can stop and reject. Otherwise it can start the next iteration, if \(i<n-1\), or accept.
From the discussion above, it can be easily verified that \(\mathcal{M}\) is sweeping, makes nondeterministic choices only in the first sweep, and has \(O\left(n\right)\) states.
We now study the size relationships between om-1-las and finite automata. First, we observe that om-1-las can be simulated by 1nfas and by 1dfas at the costs of an exponential and a double exponential increase in the number of states, respectively. These upper bounds derive from the costs of the simulations of 1-las by finite automata presented in [9, Thm. 2]. By considering the language \(K_{n}\), we can conclude that these costs cannot be reduced:
**Theorem 4**.: _Let \(\mathcal{M}\) be an \(n\)-state om-1-la. Then \(\mathcal{M}\) can be simulated by a 1nfa and by a 2nfa with a number of states exponential in \(n\), and by a 1dfa with a number of states double exponential in \(n\). In the worst case these costs cannot be reduced._
Proof.: The upper bounds derive from the cost of the simulations of 1-las by 1nfas and 1dfas given in [9, Thm. 2]. For the lower bounds we consider the language \(K_{n}\). As proved in Theorem 3, this language can be accepted by an om-1-la with \(O\left(n\right)\) states. Furthermore, according to Theorem 2, it requires a number of states exponential in \(n\) to be accepted by 1nfas or 2nfas, and a number of states double exponential in \(n\) to be accepted by 1dfas.
From Theorem 4, it follows that the ability of marking only once already gives a huge descriptional power. Furthermore, from Theorem 3, we can observe that this power is achievable even with a sweeping machine that does not use nondeterminism after the first sweep. From the size costs of the simulation of 1-las by finite automata (see Figure 1), we already know that nondeterminism is essential to obtain this huge descriptional power. We now prove that, without nondeterminism, the descriptional power of om-1-las dramatically reduces:
**Theorem 5**.: _For each \(n\)-state d-om-1-la there exists an equivalent 2dfa with \(O\left(n^{3}\right)\) states._
Proof.: Let \(\mathcal{A}=\left(Q,\Sigma,\Gamma,\delta,q_{I},F\right)\) be an \(n\)-state d-om-1-la. We give a construction of an equivalent 2dfa \(\mathcal{A}^{\prime}\). Before doing that, let us introduce, from a high-level perspective, how the simulating machine works.
The 2dfa \(\mathcal{A}^{\prime}\) operates in different modes.
In the first part of the computation, before \(\mathcal{A}\) marks _one_ cell, \(\mathcal{A}^{\prime}\) is in beforeMarking mode, in which it simulates directly each transition of \(\mathcal{A}\).
When \(\mathcal{A}^{\prime}\) has to simulate the transition \(\delta(s,\sigma)=\left(\hat{\sigma},d\right)\) used by \(\mathcal{A}\) for marking a cell, besides changing its state and moving its head according to the transition, \(\mathcal{A}^{\prime}\) switches to afterMarking mode and stores in its finite control the symbol \(\sigma\) that has been marked and the state \(s\) in which \(\mathcal{A}\) was immediately before the marking.
While in afterMarking mode, every time a cell is visited, \(\mathcal{A}^{\prime}\) has to select which transition of \(\mathcal{A}\) to simulate, depending on the symbol scanned by the input head. There are two possibilities: if the scanned symbol is different from the symbol \(\sigma\) that has been marked, then the transition is simulated directly. Otherwise, \(\mathcal{A}^{\prime}\) switches to backwardSimulation mode (described later) to verify whether the current cell is the one that has been marked by \(\mathcal{A}\). If this is the case, then \(\mathcal{A}^{\prime}\) simulates the transition of \(\mathcal{A}\) on the marked symbol \(\hat{\sigma}\), otherwise it simulates the transition on \(\sigma\). In both cases \(\mathcal{A}^{\prime}\) keeps working in afterMarking mode, selecting transitions according to the strategy described above, until there are no more moves to simulate. Therefore \(\mathcal{A}^{\prime}\) accepts if the last simulated transition moves past the right end-marker while a final state of \(\mathcal{A}\) is simulated.
We now give some details on the backwardSimulation mode, which is the core of the simulation. We remind the reader that \(\mathcal{A}^{\prime}\) switches to this mode when, while in afterMarking mode, the input head is on a cell containing the symbol \(\sigma\) that was saved at the end of the beforeMarking mode. Let us denote by \(j\) the current position of the head, namely the position that has to be verified.
The 2dfa \(\mathcal{A}^{\prime}\) has to verify whether \(j\) is the cell that has been marked by \(\mathcal{A}\). To make this check, \(\mathcal{A}^{\prime}\) can verify whether the computation path of \(\mathcal{A}\) on the given input reaches, from the initial configuration, a configuration with state \(s\) and the head on the currently scanned cell \(j\) (we remind the reader that \(s\) and \(\sigma\) were saved in the control of \(\mathcal{A}^{\prime}\) when switching from beforeMarking to afterMarking mode, while the position \(j\) cannot be saved in the control).
To be sure that the machine does not "lose track" of the position \(j\) while performing this search, we use the following strategy:
* \(\mathcal{A}^{\prime}\) simulates a backward computation from the state \(s\) and the current position \(j\).
* If the initial configuration of \(\mathcal{A}\) is reached, then the cell from which the check has started is the one where the marking transition has been executed.
* At that point, the position \(j\) is recovered by "rolling back" the backward computation. This is done by repeating the (forward) computation of \(\mathcal{A}\) from the initial configuration until a marking transition is used. In fact, since \(\mathcal{A}\) is deterministic and once marking, this transition is necessarily
the one that, from the state \(s\), marked \(\sigma\). In other words, the forward computation of \(\mathcal{A}\) that is simulated here is the same simulated in beforeMarking mode.
As we shall explain later, even in the case where the initial configuration of \(\mathcal{A}\) is not reached (namely, the verification is unsuccessful), our technique allows the head position \(j\) from which the backward simulation started to be recovered.
It is important to observe two key points that make this approach work. The first one is that om-1-las mark only one cell during their computation. The second is that the simulated machine is deterministic. Therefore, along every accepting computation path from the initial configuration, it occurs only once that the symbol \(\sigma\) is scanned while \(\mathcal{A}\) is in state \(s\), which is when \(\mathcal{A}\) makes a marking transition.
To make such a verification, and in particular the backward search, we use a technique originally introduced by Sipser [18]. This simulation was later refined by Geffert, Mereghetti, and Pighizzini, who proved that 2dfas can be made halting with a linear increase in the number of states [3]. In the following, we shall refer to the latter simulation as the _original simulation_ and use the notation and terminology of [3], to which we address the interested reader for missing details.
The main difference with the original simulation is that there the simulating machine starts from the final configuration of the simulated device, because the goal is to verify the presence of an accepting computation path. In our case, the machine \(\mathcal{A}^{\prime}\) starts the backward simulation from the state \(s\) and the cell containing \(\sigma\) that has to be checked.
In the following, a _configuration_ is a pair \((q,i)\), where \(q\) is the current state and \(i\) is the position of the tape head.
Consider the graph whose nodes represent configurations and edges computation steps. Since \(\mathcal{A}\) is deterministic, the component of the graph containing \((s,j)\) is a tree rooted at this configuration, with backward paths branching to all possible predecessors of \((s,j)\). In addition, no backward path starting from \((s,j)\) can loop (hence, it is of finite length), because the marking configuration \((s,j)\) cannot be reached by a forward path from a loop (due to the fact that the machine is deterministic).
The simulating machine \(\mathcal{A}^{\prime}\) can perform a depth-first search of this tree in order to detect whether the initial configuration \((q_{I},0)\) belongs to the predecessors of \((s,j)\). If this is the case, then the machine returns to the position \(j\) by performing a forward simulation of \(\mathcal{A}\) from \((q_{I},0)\) until \(s\) is entered while reading the symbol \(\sigma\). We stress that this approach works because the simulated machine is deterministic. After that, the simulation of \(\mathcal{A}\) in afterMarking mode is resumed by performing a move on the symbol \(\hat{\sigma}\). On the other hand, if the whole tree has been examined without reaching \((q_{I},0)\), then the cell in position \(j\) is not the marked one, so the machine simulates a move of \(\mathcal{A}\) on \(\sigma\) from the cell in position \(j\), again switching back to afterMarking mode. Notice that this case occurs when there are no more predecessors of \((s,j)\) to visit, so the machine \(\mathcal{A}^{\prime}\) completes the depth-first search on the cell in position \(j\), while looking for further nodes reachable from the configuration \((s,j)\). Hence, no extra steps are required to retrieve the position \(j\).
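The following Python sketch (our own abstraction, not part of the construction) captures the logic of this depth-first backward search. For readability it stores configurations explicitly; the actual 2dfa \(\mathcal{A}^{\prime}\) cannot store the position \(j\) and instead recovers it by re-running the forward simulation of \(\mathcal{A}\), as described above.

```python
def is_marked_cell(predecessors, initial, s, j):
    """Decide whether configuration (s, j) is backward-reachable from
    `initial`, i.e., whether the forward computation of A reaches (s, j).

    `predecessors(q, i)` yields every configuration (p, i') such that a
    single move of A leads from (p, i') to (q, i). Since A is
    deterministic, the predecessor graph of (s, j) is a tree and no
    backward path loops, so the depth-first search terminates without
    keeping a visited set.
    """
    stack = [(s, j)]
    while stack:
        config = stack.pop()
        if config == initial:
            # The real 2dfa now re-simulates A forward from the initial
            # configuration until the (unique) marking transition fires,
            # which brings the head back to position j.
            return True
        stack.extend(predecessors(*config))
    # Whole tree examined without meeting the initial configuration:
    # cell j is not the marked one; the real 2dfa ends its search with
    # the head back on cell j, so no extra steps are needed.
    return False
```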
In conclusion, \(\mathcal{A}^{\prime}\) has three state components of size \(O\left(n\right)\): one used in beforeMarking and afterMarking for the direct simulation of the transitions of \(\mathcal{A}\), one for storing the state \(s\) and the symbol \(\sigma\), and one used in backwardSimulation mode. So, the total number of states of \(\mathcal{A}^{\prime}\) is \(O\left(n^{3}\right)\).
In Figure 2 the state costs of the conversions involving om-1-las are summarized. In particular, we proved that the size gap from om-1-las to 2nfas is exponential and to 1dfas is double exponential, while d-om-1-las and 2dfas are polynomially related in size.

Figure 2: Size costs of conversions involving om-1-las. The gaps (a) and (b) derive from Theorem 4. For (c) and (d) the lower bound derives from the lower bound of the language \(K_{n}\) on 2nfas (Theorem 2); the best known upper bound derives from (a). The bounds (e) and (f) are from Theorem 5. The upper bound for (g) derives from the conversion from d-1-las and the lower bound from the conversion from 2dfas.
Some questions remain open, in particular about the costs of the simulations of om-1-las by d-om-1-las and by 2dfas. At the moment, from the above-mentioned results, we can derive double exponential upper bounds and exponential lower bounds. The same questions are open for the simulation of 1-las by d-1-las and by 2dfas, namely when the once-marking restriction is dropped. We point out that these questions are related to the problem of the cost of the elimination of nondeterminism from two-way finite automata, proposed by Sakoda and Sipser in 1978 [16], which is still open.
## 5 Always-Marking 1-Limited Automata
Always-marking 1-limited automata replace, when they visit each cell for the first time, the input symbol with its marked version. In this section we study the descriptional complexity of these devices.
First of all, we prove that am-1-las cannot achieve the same succinctness as 1-las. In fact, the size gap to 1dfas reduces from double exponential for 1-las to single exponential.
**Theorem 6**.: _Each \(n\)-state am-1-la can be simulated by a 1nfa with at most \(n\cdot 2^{n^{2}}\) states and by a complete 1dfa with at most \((2^{n}-1)\cdot 2^{n^{2}}+1\) states._
Proof.: Let \(\mathcal{M}=(Q,\Sigma,\Gamma,\delta,q_{0},F)\) be a given \(n\)-state am-1-la. We adapt the argument used in [9] to convert 1-las into 1nfas and 1dfas, which is derived from the technique to convert 2dfas into equivalent 1dfas presented in [17], and based on _transition tables_.
Roughly, transition tables represent the possible behaviors of \(\mathcal{M}\) on frozen tape segments. More precisely, given \(z\in\Gamma^{*}\), the _transition table_ associated with \(z\) is the binary relation \(\tau_{z}\subseteq Q\times Q\), consisting of all pairs \((p,q)\) such that \(\mathcal{M}\) has a computation path that starts in the state \(p\) on the rightmost symbol of the tape segment containing \(\rhd z\), ends entering the state \(q\) by leaving the same tape segment to the right side, i.e., by moving from the rightmost cell of the segment to the right, and does not visit any cell outside the segment.
First, we can apply the conversion presented in [9] from 1-las to 1nfas, in order to obtain from \(\mathcal{M}\) an equivalent 1nfa \(A\), whose computations simulate the computations of \(\mathcal{M}\) by keeping two components in the finite state control:

* The transition table associated with the part of the tape to the left of the head. This part has already been visited and, hence, is frozen.

* The state in which the simulated computation of \(\mathcal{M}\) reaches the current tape position for the first time.
For details we address the reader to [9, Thm. 2]. Since the number of transition tables is at most \(2^{n^{2}}\), the number of states in the resulting 1nfa\(A\) is bounded by \(n\cdot 2^{n^{2}}\).
Applying the subset construction, this automaton can be converted into an equivalent deterministic one, with an exponential increase in the number of states, so obtaining a number of states double exponential in \(n\). In the general case, this increase cannot be avoided. This is due to the fact that different computations of \(A\), after reading the same input, could keep different transition tables in the control, since \(\mathcal{M}\) can replace the same input by different strings.
However, under the restriction we are considering, along different computations, each input string \(x\) is always replaced by the same string \(\vec{x}\), which is obtained by marking every symbol of \(x\). Hence, at each step of the simulation, the transition table stored by \(A\) depends only on the input prefix already inspected. The only part that can change is the state of the simulated computation of \(\mathcal{M}\) after reading \(x\).
This allows us to obtain from \(A\) a 1dfa \(A^{\prime}\), equivalent to \(\mathcal{M}\), that, after reading a string \(x\), keeps in its finite state control the transition table associated with \(\vec{x}\) and the _set_ of states that the computations of \(\mathcal{M}\) can reach after reading \(x\). In other words, the automaton \(A^{\prime}\) is obtained from \(A\) by keeping the first component of the control, which is deterministic, and applying the subset construction to the second one.
By summarizing, the possible values of the first component are \(2^{n^{2}}\), while the values of the second one are \(2^{n}\), namely the possible subsets of the state set of \(\mathcal{M}\). This gives a \(2^{n}\cdot 2^{n^{2}}\) upper bound. We can slightly reduce this number by observing that when the second component is the empty set, i.e., every computation of \(\mathcal{M}\) (or, equivalently, of \(A\)) stops before reaching the current tape position, then the input is rejected, regardless of the first component. Hence, we can replace all the pairs having the empty set as second component with a unique sink state, so reducing the upper bound to \((2^{n}-1)\cdot 2^{n^{2}}+1\).
The asymptotic optimality of the upper bounds in Theorem 6 derives from the optimality of the conversions from 2nfas to 1nfas and to 2dfas [15, 17, 6].
We now show that am-1-las can be more succinct than 2nfas, even in the deterministic case. In particular we prove the following:
**Theorem 7**.: _The language \(J_{n}\) is accepted by a d-am-1-la with \(O\left(n\right)\) states, while it cannot be accepted by any 2nfa with less than \(2^{\frac{n-1}{2}}\) states._
Proof.: The lower bound for 2nfas has been given in Theorem 2. The possibility of marking the already-visited cells makes it possible to reduce this cost, even without making use of nondeterminism, as we now describe. An always-marking d-1-la \(\mathcal{M}\) can first visit and mark the first \(n\) tape cells. Then, it starts to inspect the next block of length \(n\). When the head reaches a cell for the first time, \(\mathcal{M}\) remembers the symbol \(\sigma\) scanned in it and moves the head back to the left end-marker and then to the corresponding cell in the first block (this can be implemented with a counter modulo \(n\)). If the symbol in this cell is not \(\sigma\), then \(\mathcal{M}\) has to skip the remaining symbols in the block under inspection and inspect the next block, if any. This can be done by moving the head to the left end-marker and then, counting modulo \(n\), moving to the right until finding the first symbol of the next block. This symbol can be located using the value of the counter and the fact that it has not been marked yet. Otherwise, if the symbol in the cell coincides with \(\sigma\) and the block is not completely inspected (see below), \(\mathcal{M}\) moves the head to the right to search for the next symbol of the block under inspection, namely the first unmarked symbol.
When locating a symbol, \(\mathcal{M}\) can also check and remember if it is in position \(n\). This is useful to detect whether a block has been completely scanned, which also means that the block has been _successfully scanned_, otherwise the machine would have already rejected. Hence, in this case, \(\mathcal{M}\) can move the head to the right to finally reach the accepting configuration. However, according to the definition of \(J_{n}\), before doing that, \(\mathcal{M}\) needs to verify that the input has length multiple of \(n\). All these steps can be implemented with a fixed number of variables and a counter modulo \(n\). This allows to conclude that \(\mathcal{M}\) can be implemented with \(O\left(n\right)\) states.
In Theorem 7 we proved an exponential gap from d-am-1-las to 2nfas, and hence also to one-way finite automata. This allows us to conclude that the following upper bounds, which are immediate consequences of the corresponding upper bounds for d-1-las [9, Thm. 2], cannot be significantly reduced:
**Theorem 8**.: _Each \(n\)-state d-am-1-la can be simulated by a 1dfa and by a 1nfa with no more than \(n\cdot(n+1)^{n}\) states._
From the discussion above and Theorem 8, we have the same state gap from d-am-1-las and from d-1-las to one-way automata.
The state costs of the conversions involving am-1-las are summarized in Figure 3.

Figure 3: State costs of conversions involving am-1-las. All the exponential upper bounds derive from Theorems 6 and 8, while the lower bounds derive from Theorem 7. For (a) we do not know if in the worst case an exponential size is also necessary.
Even in the case of am-1-las, as well as in the cases of 1-las and om-1-las, we do not know how much the elimination of nondeterminism costs. Here, we have an exponential upper bound for the conversion of am-1-las into d-am-1-las but, at the moment, no matching lower bound. Considering the conversion of am-1-las into 2dfas, unlike the analogous conversions from 1-las and om-1-las, here we have matching exponential upper and lower bounds. As already mentioned at the end of Section 4, these questions are related to the open question of Sakoda and Sipser.
## 6 Conclusion
We studied the costs of the simulations of om-1-las and am-1-las by finite automata. Figures 2 and 3 give a summary of the results we obtained. They can be compared with the costs of the simulations concerning 1-las, in Figure 1.
We observed that am-1-las cannot reach the same succinctness as 1-las and om-1-las (see Theorems 4 and 6). In particular, in Theorem 3 we showed that the language \(K_{n}\) can be accepted by an om-1-la with \(O\left(n\right)\) states. Hence, it requires an exponential number of states on am-1-las, due to the fact that a double exponential number of states on 1dfas is necessary (see Theorem 2). It is not difficult to describe a 2nfa accepting \(K_{n}\) with an exponential number of states; we point out that such a machine is also an am-1-la. Hence, summarizing, the language \(K_{n}\) is accepted by an om-1-la with \(O\left(n\right)\) states, by an am-1-la with a number of states exponential in \(n\), and by a 1dfa with a number of states double exponential in \(n\). None of these costs can be reduced.
Since in the nondeterministic case the gaps from om-1-las to finite automata are the same as from 1-las, a natural question is whether om-1-las are always as succinct as 1-las. Intuitively, the answer to this question is negative. For instance, we do not see how to recognize the language whose strings are concatenations of blocks of length \(n\) in which two blocks are equal with an om-1-la with \(O\left(n\right)\) states, while it is not hard to accept it using a 1-la with such a number of states. We leave the study of this question for future work.
Another candidate for studying this question is the unary language \((a^{2^{n}})^{*}\). We proved that this language can be accepted by a d-1-la with \(O\left(n\right)\) states and a work alphabet of cardinality \(O\left(n\right)\), and by a d-1-la with \(O\left(n^{3}\right)\) states and a work alphabet of size not dependent on \(n\) [10, 12]. As pointed out in [10], each 2nfa accepting it requires at least \(2^{n}\) states. Hence, by Theorem 5, even each d-om-1-la accepting it requires an exponential number of states. We do not see how to reduce this number even by allowing the use of nondeterminism in om-1-las or am-1-las.
More generally, the comparisons between the sizes of these restricted versions of 1-las deserve further investigation, even in the unary case, where the costs of several simulations are still unknown [10]. In a recent paper, we investigated _forgetting_ 1-las, another restriction of 1-las in which a unique symbol \(X\) is used to replace input symbols; therefore, during the first visit to a cell, its original content is always replaced by \(X\) [11].
Finally, we would like to mention once again the problem of the cost of removing nondeterminism from 1-las, om-1-las, and am-1-las (see Sections 4 and 5), which is connected to the main question of the cost of the elimination of nondeterminism from two-way finite automata, raised a long time ago by Sakoda and Sipser and still open [15] (for a survey, see [6]).
|
2303.04349 | Virtual Reality in Metaverse over Wireless Networks with User-centered
Deep Reinforcement Learning | The Metaverse and its promises are fast becoming reality as maturing
technologies are empowering the different facets. One of the highlights of the
Metaverse is that it offers the possibility for highly immersive and
interactive socialization. Virtual reality (VR) technologies are the backbone
for the virtual universe within the Metaverse as they enable a hyper-realistic
and immersive experience, and especially so in the context of socialization. As
the virtual world 3D scenes to be rendered are of high resolution and frame
rate, these scenes will be offloaded to an edge server for computation.
Besides, the metaverse is user-centric by design, and human users are always the
core. In this work, we introduce a multi-user VR computation offloading over
wireless communication scenario. In addition, we devised a novel user-centered
deep reinforcement learning approach to find a near-optimal solution. Extensive
experiments demonstrate that our approach can lead to remarkable results under
various requirements and constraints. | Wenhan Yu, Terence Jie Chua, Jun Zhao | 2023-03-08T03:10:41Z | http://arxiv.org/abs/2303.04349v1 | # Virtual Reality in Metaverse over Wireless Networks with User-centered Deep Reinforcement Learning
###### Abstract
The Metaverse and its promises are fast becoming reality as maturing technologies are empowering the different facets. One of the highlights of the Metaverse is that it offers the possibility for highly immersive and interactive socialization. Virtual reality (VR) technologies are the backbone for the virtual universe within the Metaverse as they enable a hyper-realistic and immersive experience, especially in the context of socialization. As the virtual world 3D scenes to be rendered are of high resolution and frame rate, these scenes will be offloaded to an edge server for computation. Besides, the Metaverse is user-centric by design, and human users are always the core. In this work, we introduce a multi-user VR computation offloading over wireless communication scenario. In addition, we devise a novel user-centered deep reinforcement learning approach to find a near-optimal solution. Extensive experiments demonstrate that our approach can lead to remarkable results under various requirements and constraints.
Metaverse, computation offloading, reinforcement learning, wireless networks
## I Introduction
**Background.** _Maturing technologies in areas such as 6G wireless networks [1] and high-performance extended reality (XR) technology [2] have empowered the development of the Metaverse [3]. One of the key developments of the Metaverse is highly interactive and immersive socialization. Users can interact with one another via full-body avatars, improving the overall socialization experience._
**Motivation.** _Virtual Reality is a key feature of an immersive Metaverse socialization experience. Compared to traditional two-dimensional images, generating \(360^{\circ}\) panoramic images for the VR experience is computationally intensive. The rendering and computation of scenes of high resolution and frame rate are still not feasible on existing VR devices, due to the lack of local computing power. A feasible solution to powering an immersive socialization experience on VR devices is computation offloading [4]. In addition, the Metaverse is a user-centric application by design, and we need to place the user experience at the core of the network design [5]. Therefore, we consider a multi-user socialization scenario in which each user has a different purpose of use and different requirements. This propels us to seek a more user-centered, user-oriented solution._
**Related work.** _In recent years, VR services over wireless communication have been thoroughly studied in many previous works. Chen et al. studied the quality of service of a VR service over wireless communication using an echo state network [6]. However, none of the previous works considered the varying purposes of use and requirements of the users. Although MEC-based VR services have been thoroughly studied, few works consider a sequential scenario over wireless communication. Machine-learning-based approaches have been widely adopted to tackle wireless communication challenges [7, 8], and DRL has been proven to achieve excellent performance. Meng et al. [9] addressed the synchronization between physical objects and digital models in the Metaverse with deep reinforcement learning (DRL). This is due to the ability of DRL agents to explore and exploit in self-defined environments [10]. However, no existing work has designed a user-centered and user-oriented DRL method._
**Approach.** _This paper proposes a novel multi-user VR model in a downlink Non-Orthogonal Multiple Access (NOMA) system [11]. We designed a novel DRL algorithm that considers the varying purpose of use and requirements of the users. We re-design the Proximal Policy Optimization (PPO) algorithm [12] with a reward decomposition structure._
**Contributions.** _Our contributions are as follows:_
* _User-centered Computation Offloading VR Formulation:_ _We study user-centered Metaverse computation offloading over the wireless network, designing a multi-user scenario where a Virtual Service Provider (VSP) assists users in generating reality-assisted virtual environments._
* _HRPPO:_ _We propose a novel DRL algorithm, Hybrid Reward PPO (HRPPO), to tackle the proposed channel allocation problem. HRPPO is imbued with the hybrid reward architecture (HRA), giving it a more user-centered perspective._
* _DRL Scenario Design:_ _The designs of the three core DRL elements (state, action, and reward) are explained in detail. Extensive experiments demonstrate the effectiveness of our method._
The rest of the paper is organized as follows. Section II introduces our system model. Sections III and IV propose our deep reinforcement learning settings and approach. In Section V, extensive experiments are performed, and various methods are compared to show the prowess of our strategy. Section VI concludes the paper.
## II System model
Consider a multi-user wireless downlink transmission in an indoor environment, in which one second is divided into \(T\) time steps. To ensure a smooth experience, we consider a slotted structure with a clock signal sent by the server for synchronization; each slot contains one high-resolution frame transmission, and the duration of each time slot is \(\iota\) (\(\iota=\frac{1}{T}\)). In each time step, a sequence of varying-resolution 3D scenes is generated by the VSP and sent to \(N\) VR device users (VUs) \(\mathcal{N}=\{1,2,...,N\}\) with distinct characteristics (e.g., computation capability) and different requirements (e.g., tolerable delay). Each user is selected for either (1) **Computation offloading**, by offloading the tracking vectors \(\chi_{n}\) [13] to the virtual service provider (VSP) for scene rendering, or (2) **Local computing**, by receiving scenes (tracking vectors) from others and rendering scenes locally with a lower computation capability and at the expense of energy consumption. If a user is selected for computation offloading, the VSP will generate the frame and send it back to them via a set of channels \(\mathcal{M}=\{1,2,...,M\}\).
Each user can accept virtual scene frame rates as low as a minimum tolerable frames per second (FPS) \(\tau_{n,F}\), which is the number of successfully received frames in a second. Considering that the tracking vectors are relatively very small [13], we assume that the vectors are transmitted with dedicated channels between VSP-VU and VU-VU, and neglect the overhead.
We use \(\Gamma^{t}=\{\Gamma^{t}_{1},\Gamma^{t}_{2},...,\Gamma^{t}_{N}\}\) to denote the selection of downlink channel arrangement and inherently, the computing method (VSP or local computation). \(\Gamma^{t}_{n}=m\) indicates VU \(n\) is arranged to channel \(m\) at time step \(t\), and \(\Gamma^{t}_{n}=0\) means user \(n\) needs to generate locally.
Thus, it is imperative to devise a comprehensive algorithm that takes into account the varying satisfaction thresholds and requirements of the users. In the following subsections, we explain the computation offloading and local computing models in detail. The system model is shown in Fig. 1.

Fig. 1: Virtual reality in the Metaverse over wireless networks.
### _Computation offloading model_
We first introduce the computation offloading model based on the wireless cellular network. The server VSP manages the downlink channels \(\mathcal{M}\) of all VUs \(\mathcal{N}\). Furthermore, we denote by \(D^{t}_{n}\) (\(n\in\mathcal{N}\)) the size of the virtual scene frame at time step \(t\) that needs to be transmitted to user \(n\).
We adopt the Non-Orthogonal Multiple Access (NOMA) system as this work's propagation model. In the NOMA system, several users can be multiplexed on one channel by successive interference cancellation (SIC) and superposition coding, and the received signals of the VUs in channel \(m\) are sorted in descending order: \(p_{1}|h^{t}_{1,m}|^{2}>p_{2}|h^{t}_{2,m}|^{2}>...>p_{N}|h^{t}_{N,m}|^{2}\) [11]. In this paper, we assume the decoders of the VUs can recover the signals from each channel through SIC. We denote by \(h^{t}_{n,m}\) the channel gain between the VSP and the \(n^{th}\) user allocated to channel \(m\) at time step (iteration) \(t\). The downlink rate can be expressed as [11]:
\[r^{t}_{n}=W\log\left(1+\frac{p_{n}|h^{t}_{n,m}|^{2}}{\sum_{i=n+1}^{N}p_{i}|h^{ t}_{n,m}|^{2}+W\sigma^{2}}\right). \tag{1}\]
\(P_{d}=\{p_{1},p_{2},...,p_{N}\}\) denotes the transmission power for each VU. Note that the transmission power is not time-dependent in our scenario. \(h^{t}_{n,m}=g^{t}_{n,m}l^{-\alpha}_{n}\) denotes the channel gain between VU \(n\) and the VSP in channel \(m\), with \(g^{t}_{n,m}\), \(l_{n}\), and \(\alpha\) being the Rayleigh fading parameter, the distance between VU \(n\) and the VSP, and the path loss exponent, respectively. \(W\) is the bandwidth of each channel, and \(W\sigma^{2}\) denotes the background noise. Accordingly, the total delay \(d^{t}_{n,o}\) of each frame in time step \(t\) is divided into (1) execution time and (2) downlink transmission time:
\[d^{t}_{n,o}=\frac{D^{t}_{n}\times C^{t}_{n}}{f_{v}}+\frac{D^{t}_{n}}{r^{t}_{n }}. \tag{2}\]
where \(f_{v}\) is the computation capability of VSP, and \(C^{t}_{n}\) is the required number of cycles per bit of this frame [14].
### _Local computing model_
When a VU is not allocated a channel, it needs to generate the virtual world frames locally at the expense of energy consumption. Let \(f_{n}\) be the computation capability of VU \(n\), which varies across VUs. Adopting the model from [15], the energy per cycle can be expressed as \(e_{n,cyc}=\eta f_{n}^{2}\). Therefore, the overhead of local computing in terms of execution delay and energy can be derived as:
\[d^{t}_{n,l}=\frac{D^{t}_{n}\times C^{t}_{n}}{f_{n}}. \tag{3}\]
\[e^{t}_{n,l}=\mu_{n}\times D^{t}_{n}\times C^{t}_{n}\times e_{n,cyc}. \tag{4}\]
where \(\mu_{n}\) is the energy weighting parameter of VU \(n\). The battery state of each VU can be different; we assume that \(\mu_{n}\) is closer to \(0\) when the battery level is higher.
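As a concrete illustration of equations (1)-(4), the following Python sketch evaluates the per-frame overheads of the two options. It assumes a base-2 logarithm in equation (1), and all function and variable names are our own, not from the paper.

```python
import math

def noma_rate(n, powers, gains, W, noise_psd):
    """Downlink NOMA rate of user n in a channel, following eq. (1).

    `powers` and `gains` are ordered so that p_1|h_1|^2 > p_2|h_2|^2 > ...;
    the signals of users decoded later (i > n) interfere with user n.
    Assumes a base-2 logarithm (bits per second).
    """
    interference = sum(powers[i] * gains[n] for i in range(n + 1, len(powers)))
    sinr = powers[n] * gains[n] / (interference + W * noise_psd)
    return W * math.log2(1 + sinr)

def offloading_delay(D, C, f_vsp, rate):
    """VSP execution time plus downlink transmission time, eq. (2)."""
    return D * C / f_vsp + D / rate

def local_cost(D, C, f_n, mu_n, eta):
    """Local execution delay (eq. (3)) and weighted energy (eq. (4)),
    with per-cycle energy e_cyc = eta * f_n ** 2."""
    delay = D * C / f_n
    energy = mu_n * D * C * (eta * f_n ** 2)
    return delay, energy
```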
### _Problem formulation_
With the slotted structure, we set the VUs' maximum tolerable delay to \(\iota\) for every frame as the problem constraint. Different users have different purposes of use (video games, group chat, etc.). Thus, they also have varying expectations of the satisfactory number of frames per second \(\tau_{n,F}\). We set the tolerable frame transmission failure count of VU \(n\) as \(\tau_{n,f}\). Initially, the tolerable frame transmission failure count of VU \(n\) is defined as \(\tau_{n,f}^{0}=T-\tau_{n,F}\). For each successive frame, a delay in excess of the tolerable threshold leads to a decrease in the VU's remaining tolerable count: \(\tau_{n,f}^{t+1}=\tau_{n,f}^{t}-I_{n}^{t}\), where
\[I_{n}^{t}=\begin{cases}1,&if~{}~{}d_{n,o}^{t}>\iota~{}or~{}d_{n,l}^{t}>\iota.\\ 0,&else.\end{cases} \tag{5}\]
Our goal is to find the near-optimal channel arrangement for the transmission of \(T\) frames, to minimize the total frame transmission failure count and VU device energy consumption.
\[\min_{\Gamma^{0},\ldots,\Gamma^{T}}\sum_{n\in\mathcal{N}}\sum_{t=0}^{T}\left[\omega_{1}I_{n}^{t}+\omega_{2}e_{n,l}^{t}\right]. \tag{6}\] \[s.t.\quad C1:\ \tau_{n,f}^{t}\geq 0,\ \forall n\in\mathcal{N},\forall t\in[0,T]. \tag{7}\] \[\phantom{s.t.\quad}C2:\ \Gamma_{n}^{t}\in\{0,1,...,M\},\ \forall n\in\mathcal{N},\forall t\in[0,T]. \tag{8}\]
The \(\omega_{1},\omega_{2}\) are the weighting parameters of delay and energy. Constraint \(C1\) ensures that the frame transmission failure count of each user is within their tolerable limit. Constraint \(C2\) is our integer optimization variable which denotes the computing method and channel assignment for each user at every time step.
This formulated problem is **sequential**: the remaining tolerable frame transmission failure count \(\tau_{n,f}^{t}\) of each user changes over time and influences the following states. Thus, convex optimization methods are unsuitable for our proposed problem due to the huge space of integer variables and daunting computational complexity. Also, as the problem contains too many random variables, model-based RL approaches, which require transition probabilities, are infeasible for our proposed problem. We next introduce our deep RL environment settings according to the formulated problem.
## III Deep reinforcement learning setting
For a reinforcement learning environment (problem), the most important components are (1) State: the key factors for an agent to make a decision; (2) Action: the operation decided by the agent to interact with the environment; (3) Reward: the feedback for the agent to evaluate the action taken in this state. We expound on these three components next.
### _State_
We included the following attributes into the state: (1) Each VU's virtual world frame size: \(D_{n}^{t}\). (2) Each VU's remaining tolerable frame transmission failure count: \(\tau_{n,f}^{t}\). (3) The channel gain of each VU: \(h_{n,m}^{t}\). (4) The remaining number of frames to be transmitted at each time step: \((T-t)\).
### _Action_
The discrete action channel assignment to each VU is:
\[a_{u}^{t}=\Gamma^{t}=\{\Gamma_{1}^{t},\Gamma_{2}^{t},...,\Gamma_{ N}^{t}\}. \tag{9}\] \[s.t. \Gamma_{n}^{t}\in\{0,1,...,M\}. \tag{10}\]
In practice, we use a tuple of \(N\) elements corresponding to the \(N\) users; each element can take \(M+1\) values, corresponding to the \(M\) channels plus one value for a user being assigned to perform local computing. However, we need to encode these tuples as discrete numbers to be evaluated by the neural network. The encoding method is shown in Fig. 2.

Fig. 2: UL action encoding method.
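A natural realization of this encoding is to read the tuple \((\Gamma_{1}^{t},...,\Gamma_{N}^{t})\) as an integer in base \(M+1\). The Python sketch below is our own interpretation of Fig. 2, not code from the paper.

```python
def encode_action(assignment, M):
    """Map a tuple (Gamma_1, ..., Gamma_N), each in {0, ..., M},
    to a single integer in {0, ..., (M+1)**N - 1} (base M+1 digits)."""
    index = 0
    for gamma in assignment:
        index = index * (M + 1) + gamma
    return index

def decode_action(index, M, N):
    """Inverse mapping: recover the channel assignment tuple."""
    digits = []
    for _ in range(N):
        digits.append(index % (M + 1))
        index //= (M + 1)
    return tuple(reversed(digits))

# Example: 3 users, 2 channels; user 1 -> channel 2, user 2 -> local
# computing (0), user 3 -> channel 1.
assert decode_action(encode_action((2, 0, 1), M=2), M=2, N=3) == (2, 0, 1)
```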
### _Reward_
As the main objective is to minimize the frame transmission failure counts and energy consumption, the overall reward \(R_{n}^{t}\) for each VU contains: (1) a penalty \(R_{n,f}^{t}\) for every frame transmission failure and (2) a weighted reward \(R_{n,e}^{t}\) for energy consumption corresponding to VU's battery life. To implement the tolerance constraint \(C1\), we give (3) a huge penalty \(R_{n,end}^{t}\) corresponding to the number of frames left to be transmitted when any VU's remaining tolerable frame transmission failure count is \(0\). In the circumstance of (3), the **episode ends immediately**.
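One possible concretization of this reward design is sketched below; the specific weights are illustrative assumptions of ours, not values from the paper.

```python
def step_reward(frame_failed, local_energy, frames_left, tolerance_exhausted,
                w_fail=1.0, w_energy=0.5, w_end=10.0):
    """Per-VU reward: (1) a penalty for a frame transmission failure,
    (2) a weighted penalty for local-computing energy, and (3) a large
    terminal penalty proportional to the frames left to transmit when
    any VU's remaining tolerable failure count reaches 0 (the episode
    then ends immediately)."""
    reward = -w_fail * float(frame_failed) - w_energy * local_energy
    if tolerance_exhausted:
        reward -= w_end * frames_left
    return reward
```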
## IV Deep Reinforcement learning approach
Our proposed Hybrid Reward Proximal Policy Optimization (HRPPO) is based on the Proximal Policy Optimization (PPO) algorithm, which is considered a state-of-the-art RL algorithm [12]. HRPPO is inspired by the Hybrid Reward Architecture (HRA) [16]. Thus, PPO and HRA preliminaries will first be introduced; we will then explain HRPPO.
### _Preliminary_
#### IV-A1 Proximal Policy Optimization (PPO)
As we emphasize developing a _user-centered_ model which considers VUs' varying purposes of use and requirements, policy stability is essential. Proximal Policy Optimization (PPO) by OpenAI [12] is an enhancement of the traditional policy gradient algorithm. PPO has better sample efficiency by using a separate policy for sampling, and is more stable thanks to an embedded policy constraint.
In summary, PPO has two main characteristics in its policy network (Actor): (1) _Increased sample efficiency._ PPO uses a separate policy for sampling trajectories during training, which increases sample efficiency. Here we use \(\pi_{\theta}\) as the evaluated policy and \(\pi_{\theta_{s}}\) as the data sampling policy. As we use \(\pi_{\theta_{s}}\) to sample data for training, the expectation can be rewritten as:
\[\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta}}[\pi_{\theta}(a^{t}|s^{t})A^{t}]= \mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta_{s}}}[\frac{\pi_{\theta}(a^{t}|s^{t} )}{\pi_{\theta_{s}}(a^{t}|s^{t})}A^{t}]. \tag{11}\]
(2) _Policy constraint._ After switching the data sampling policy from \(\pi_{\theta}\) to \(\pi_{\theta_{s}}\), an issue still remains. Although in equation (11) the two objective functions have the same expectation, their variances are starkly distinct. Therefore, a KL-divergence penalty can be added as a constraint to the reward formulation to limit the distance between the policies. However, the KL divergence is impractical to calculate in practice, as this constraint is imposed on every observation. Thus, we rewrite the objective function as \(\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta_{s}}}[f^{t}(\theta)A^{t}]\) [12], where
\[f^{t}(\theta)=\min\left\{r^{t}(\theta),\,\mathrm{clip}\left(r^{t}(\theta),1-\epsilon,1+\epsilon\right)\right\}. \tag{12}\]
Here \(r^{t}(\theta)=\frac{\pi_{\theta}(a^{t}|s^{t})}{\pi_{\theta_{s}}(a^{t}|s^{t})}\). The problem is solved by gradient ascent; therefore, the gradient can be written as:
\[\Delta\theta=\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta}}\left[\triangledown f^ {t}(\theta)A^{t}\right]. \tag{13}\]
In terms of the value network (Critic), PPO uses a Critic identical to those of other Actor-Critic algorithms, and the loss function can be formulated as in [12]:
\[L(\phi)=[V_{\phi}(s^{t})-(A^{t}+V_{\phi^{\prime}}(s^{t}))]^{2}. \tag{14}\]
\(V(s)\) is the widely used state-value function [17], estimated by a learned critic network with parameter \(\phi\). We update \(\phi\) by minimizing \(L(\phi)\), and periodically update the parameter \(\phi^{\prime}\) of the target state-value function with \(\phi\). Using a target value network is a prevailing trick in RL, which has been used in many algorithms [17].
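The clipped surrogate objective of equations (12)-(14) can be sketched in PyTorch-style Python as follows. We use the standard form \(\min(rA,\mathrm{clip}(r)A)\), which coincides with the paper's expression when \(A^{t}\geq 0\); tensor names and the value of \(\epsilon\) are our own choices.

```python
import torch

def ppo_losses(log_probs, old_log_probs, advantages, values,
               target_values, eps=0.2):
    """Clipped surrogate actor loss (eqs. (12)-(13)) and critic loss
    (eq. (14)) over a batch of transitions (1-D tensors).

    `old_log_probs`, `advantages`, and `target_values` are treated as
    constants (computed with the sampling policy and target critic)."""
    ratio = torch.exp(log_probs - old_log_probs)          # r^t(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    # Negated because optimizers minimize; this realizes gradient
    # ascent on E[min(r * A, clip(r) * A)].
    actor_loss = -torch.min(ratio * advantages,
                            clipped * advantages).mean()
    critic_loss = ((values - target_values) ** 2).mean()  # eq. (14)
    return actor_loss, critic_loss
```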
#### IV-A2 Hybrid Reward Architecture (HRA)
High-dimensional objective functions are common in communication problems, especially in multi-user scenarios, since we usually need to consider multiple factors and the distinct requirements of different users. The issue of using RL to optimize a high-dimensional objective function was first studied in [16]. In that work, the authors proposed the HRA structure for Deep Q-learning (DQN), which aims to decompose a high-dimensional objective function into several simpler ones. HRA has remarkable performance in handling high-dimensional objectives, which serves as the inspiration for our work.
### _Hrppo_
In contrast to decomposing the overall reward into separate sub-goal rewards as done in [16], we build a user-centered reward decomposition architecture as an extension to PPO, Hybrid Reward PPO (HRPPO), which takes in the rewards of different users and calculates their action-values separately. In other words, we give the network a view of the state-value of each user, instead of merely evaluating the overall value of an action based on an overall state-value.
**Function process:** In each episode, when the current transmission is accomplished with the selected action \(a^{t}\), the environment issues the rewards \(R_{1}^{t},R_{2}^{t},...,R_{N}^{t}\) as feedback for the different VUs. These rewards, along with their corresponding states and the next-iteration state, are sent to the Critic to generate the state-values \(V_{1}^{t},V_{2}^{t},...,V_{N}^{t}\), representing the state-value of each VU. The state-values are then used to calculate the advantages and losses for each VU. The above-mentioned process is illustrated in Fig. 3.

Fig. 3: Hybrid Reward PPO.
**Update function:** In equation (13), we established the policy gradient for PPO Actor, and in HRPPO we have the gradient \(\Delta\theta\) as:
\[\Delta\theta=\mathbb{E}_{(s^{t},a^{t})\sim\pi_{\theta_{s}}}\left[\nabla f^{t}(\theta)\sum_{n=1}^{N}A_{n}^{t}\right]. \tag{15}\]
where \(A_{n}^{t}\) denotes the advantages of different VUs. The generalized advantage estimation (GAE) [18] is chosen as the advantage function:
\[A_{n}^{t}=\delta_{n}^{t}+(\gamma\lambda)\delta_{n}^{t+1}+\cdots+(\gamma\lambda)^{\bar{T}-1}\delta_{n}^{t+\bar{T}-1}, \tag{16}\] \[\text{where}\quad\delta_{n}^{t}=R_{n}^{t}+\gamma V_{\phi^{\prime},n}(s^{t+1})-V_{\phi^{\prime},n}(s^{t}). \tag{17}\]
\(\bar{T}\) specifies the length of the given trajectory segment, \(\gamma\) specifies the discount factor, and \(\lambda\) denotes the GAE parameter. In terms of Critic loss, the equation (14) is formatted into:
\[L(\phi)=\sum_{n=1}^{N}\left(V_{\phi,n}(s^{t})-(A_{n}^{t}+V_{\phi^{\prime},n}(s ^{t}))\right)^{2}. \tag{18}\]
Similar to the renowned centralized training, decentralized execution (CTDE) framework [19], the per-user critics \(V_{\phi,n}\) are also trained centrally with equation (18). Therefore, the training time does not scale with the number of users.
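A minimal sketch of the hybrid-reward computation of equations (16)-(18), assuming a critic that outputs one value per VU; shapes and names are our own.

```python
import torch

def per_user_gae(rewards, values, next_values, gamma=0.99, lam=0.95):
    """Per-user GAE advantages, eqs. (16)-(17).

    `rewards`, `values`, `next_values` have shape (T_seg, N): one
    column per VU, over a trajectory segment of length T_seg."""
    deltas = rewards + gamma * next_values - values        # eq. (17)
    advantages = torch.zeros_like(deltas)
    running = torch.zeros(deltas.shape[1])
    for t in reversed(range(deltas.shape[0])):             # eq. (16)
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages

def hybrid_critic_loss(values, advantages, target_values):
    """Sum over VUs of per-user critic losses, eq. (18); `advantages`
    and `target_values` are treated as constants."""
    return ((values - (advantages + target_values)) ** 2).sum(dim=1).mean()
```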
#### IV-B1 **Baselines**
We also implement some of the most renowned RL algorithms that are capable of tackling problems with a discrete action space.
* **HRDQN**. We implemented the hybrid reward DQN following the structure of HRA [16].
* **PPO**. The traditional PPO is used as a baseline. The sum of all users' rewards is selected as the global reward.
* **Random**. The random Agent selects actions randomly, which represents the system performance if no channel resource allocation is performed.
### _Metrics_
We introduce a set of metrics (apart from RL rewards) to evaluate the effectiveness of our proposed methods.
* **Successful frames**. The number of successful frames among the total \(T\) frames determines the frames per second (FPS) of the virtual world scenes, and hence the fluidity of the Metaverse VR experience.
* **Energy consumption**. We illustrate the total energy consumption in each episode. Lower energy consumption signifies a more effective use of channel resources.
* **Average rate**. The average downlink transmission rate of all VUs and frames in each episode is shown to evaluate the trained policy. A higher average rate indicates better allocation of channel resources.
## V Experiment results
### _Numerical Setting_
Consider a \(30\times 30\) m\({}^{2}\) indoor space where multiple VUs are distributed uniformly across the space. We set the number of channels to \(3\) in each experiment configuration, and the number of VUs ranges from \(5\) to \(8\) across the different experiment configurations. The maximum resolution of one frame is 2K (\(2048\times 1080\)) and the minimum is 1080p (\(1920\times 1080\)). Each pixel is stored in 16 bits [20] and the compression factor is 150 [13]. We randomize the data size of one frame over a uniform distribution, with \(D_{n}^{t}\in[\frac{1920\times 1080\times 16}{150},\frac{2048\times 1080\times 16}{150}]\). The refresh rate, i.e., the number of frames \(T\) in one second, is taken to be 90, which is considered the best rate for VR applications [13]. The bandwidth of each channel is set to \(10\times 180\) kHz. The required successful frame transmission count \(\tau_{n,F}\) is uniformly selected from \([75,80]\), which is higher than the acceptable minimum of \(60\) [13]. In terms of channel gain, the small-scale fading follows the Rayleigh distribution and \(\alpha=2\) is the path loss exponent. For all experiments, we use \(2\times 10^{5}\) steps for training, and the evaluation interval is set to \(50\) training steps. As there are several random variables in our environment, all experiments are conducted under global **random seeds from 0-10**, and error bands are drawn to better illustrate the model performances.
### _Result analysis_
We first illustrate the performances of the different models against different metrics in two experimental configurations (shown in Fig. 4): one with 6 VUs and the other with 8 VUs. We then show the overall results for each experimental configuration in Table I. Results in Table I are taken as the average of the final \(200\) steps.
The training reward, successful frame transmission counts, and average downlink transmission rate show an overall upward trend as training progresses. When pitted against these metrics, HRPPO performed the best out of the tested baseline algorithms. In the experimental setting with \(6\) VUs, although PPO and HRPPO are able to attain similar peak rewards towards later training stages, HRPPO converges in half the number of training steps taken for PPO to achieve convergence. In the experimental setting with \(8\) VUs, HRPPO obtains a much higher final reward when compared to PPO. Both HRPPO and PPO achieved higher rewards than HRDQN and performed better in each metric. The performance superiority of HRPPO and PPO can be attributed to PPO's policy KL penalty and higher sample efficiency. However, HRDQN and PPO fail to find a good solution in more complicated scenarios. In the \(6\) VU experimental setting, both HRPPO and PPO are able to allocate VUs to a VSP channel for computation offloading in each round, and this is reflected in zero energy spent on local device computation. However, in the experimental setting with \(8\) VUs, there are insufficient channel resources, and all three algorithms learn strategies to increase the transmission rate and avoid frame rate decrement by allocating some VUs to perform local computation, which increases energy consumption.
The complete results in Table I show that HRPPO obtains the best performance for almost every metric under every scenario. This demonstrates that decomposing the reward and using summed losses, which provides a user-centered view to the RL agent, is a good approach to tackling a multi-user computation offloading problem.
## VI Conclusion
In this paper, we study a multi-user VR in the Metaverse mobile edge computing over wireless networks scenario. Multiple users with varying requirements are considered, and a novel user-centered RL algorithm _HRPPO_ is designed to tackle it. Extensive experiment results show that _HRPPO_ has the quickest convergence and achieves the highest reward, which is \(45\%\) higher than that of the traditional _PPO_. In the future, we will continue to optimize the power allocation to seek more optimal solutions to our proposed problems.
## Acknowledgement
This research is partly supported by Singapore Ministry of Education Academic Research Fund under Grant Tier 1 RG90/22, RG97/20, Grant Tier 1 RG24/20 and Grant Tier 2 MOE2019-T2-1-176; partly by the NTU-Wallenberg AI, Autonomous Systems and Software Program (WASP) Project.
|
2306.16193 | Deterministic End-to-End Transmission to Optimize the Network Efficiency
and Quality of Service: A Paradigm Shift in 6G | Toward end-to-end mobile service provision with optimized network efficiency
and quality of service, tremendous efforts have been devoted in upgrading
mobile applications, transport and internet networks, and wireless
communication networks for many years. However, the inherent loose coordination
between different layers in the end-to-end communication networks leads to
unreliable data transmission with uncontrollable packet delay and packet error
rate, and a terrible waste of network resources incurred for data
re-transmission. In an attempt to shed some lights on how to tackle these
challenges, design methodologies and some solutions for deterministic
end-to-end transmission for 6G and beyond are presented, which will bring a
paradigm shift to the end-to-end wireless communication networks. | Xiaoyun Wang, Shuangfeng Han, Zhiming Liu, Qixing Wang | 2023-06-28T13:18:28Z | http://arxiv.org/abs/2306.16193v2 | Deterministic End-to-End Transmission to Optimize the Network Efficiency and Quality of Service: A Paradigm Shift in 6G
###### Abstract
**Abstract: Toward end-to-end mobile service provision with optimized network efficiency and quality of service, tremendous efforts have been devoted in upgrading mobile applications, transport and internet networks, and wireless communication networks for many years. However, the inherent loose coordination between different layers in the end-to-end communication networks leads to unreliable data transmission with uncontrollable packet delay and packet error rate, and a terrible waste of network resources incurred for data re-transmission. In an attempt to shed some lights on how to tackle these challenges, design methodologies and some solutions for deterministic end-to-end transmission for 6G and beyond are presented, which will bring a paradigm shift to the wireless communication networks.**
_Keywords--_ 6G, application layer, transport layer, quality of service, deterministic transmission
## I Efforts toward better End-to-end Wireless Service Provision
**End-to-end wireless service provision process:** In wireless communication networks, the provision of mobile services requires data processing in different layers of the Open System Interconnection (OSI) reference model. The data packets of diversified mobile services will first be source-coded at the application layer and transmitted via transport layer protocols like the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP). Then the data packet goes through the Internet Protocol (IP) network layer and arrives at the core network (CN), where the quality of service (QoS) [1] parameters will be configured to ensure that the mobile services are provisioned with sufficiently satisfactory quality. The CN estimates the QoS requirements of each data flow, decides whether the data flow type is guaranteed bit rate (GBR), delay-critical GBR, or non-GBR, and estimates the packet delay budget, packet error rate, and so on. The radio access network (RAN) then schedules the transmission of these data packets via the air interface according to the QoS requirements.
**Efforts toward better wireless communication networks:** To satisfy the ever-increasing requirements of mobile services, the wireless communication industry has developed from the first-generation analog communication networks to the current fifth-generation (5G) networks [2,3]. Now, global attention has been shifting from 5G to the future 5.5G and 6G. Toward 5.5G, standards bodies like 3GPP have started comprehensive studies in Release 18 to adopt new features, including full duplex, wireless artificial intelligence and machine learning, network energy saving, XR service enhancement [4], etc. Release 19 may well continue the studies for 5.5G. Extensive studies have also been conducted in both industry and academia on 6G scenarios, use cases, and key technologies [5-7]. For example, several research directions for 6G have been identified, including Terahertz communications, reflective intelligent surfaces, and artificial intelligence and deep learning driven network architectures and transmission technologies in the physical and higher layers. In the recent ITU-R recommendation [8], the framework and overall objectives of the future development of IMT for 2030 and beyond have been outlined. IMT-2030 is expected to support enriched and immersive experiences, provide enhanced ubiquitous coverage, and enable new forms of collaboration.
**Efforts toward better application layer and TCP/IP layer:** Toward a better end-to-end performance, other parts in the OSI model have also been evolving continuously.
1) _Application layer optimization:_ Take the video coding and decoding for example. Video codecs for multimedia services have been upgraded for many years. For example, the Moving Picture Expert Group (MPEG) immersive video standard [9], the latest addition to the MPEG-I suite of standards, is designed to support virtual and extended reality applications that require six degrees of freedom visual interaction with the rendered scene. The MPEG-5 low complexity enhancement video coding [10] works in combination with other codecs, to produce a more efficiently compressed video. In recent years, artificial intelligence has been adopted in the codec design to more accurately capture the key message in the information source to further increase the compression ratio.
2) _Transport layer optimization:_ In the transport layer, applications using TCP in wireless or wired communication are often bottlenecked by the handshake mechanism, which may undesirably incur delay, particularly in time-sensitive communication scenarios (e.g. streaming live video). Many congestion control algorithms have been developed. For example, TCP Westwood [11] uses the packet loss rate as a key indicator of network congestion to adjust the TCP transmission window, and is particularly useful when unexpected losses due to radio channels are misinterpreted as congestion, resulting in window reduction. Artificial intelligence/machine learning technologies have recently been introduced to enhance the TCP performance [12, 13].
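As a concrete illustration of the Westwood-style idea just mentioned, here is a toy sketch, not the full algorithm of [11]: a smoothed bandwidth estimate is maintained from ACKs, and on a loss the window is reset from the estimated bandwidth-delay product rather than blindly halved. All constants and the state layout are illustrative.

```python
def on_ack(state, acked_bytes, rtt_s, alpha=0.9):
    sample = acked_bytes / rtt_s                     # instantaneous rate (B/s)
    state["bwe"] = alpha * state["bwe"] + (1 - alpha) * sample
    state["rtt_min"] = min(state["rtt_min"], rtt_s)
    state["cwnd"] += state["mss"]                    # simplistic growth

def on_loss(state):
    # Size the window to the estimated bandwidth-delay product instead
    # of treating every loss as congestion and halving cwnd.
    state["cwnd"] = max(state["mss"], state["bwe"] * state["rtt_min"])

state = {"bwe": 0.0, "rtt_min": 0.1, "cwnd": 14600, "mss": 1460}
for _ in range(50):
    on_ack(state, acked_bytes=state["cwnd"], rtt_s=0.1)
on_loss(state)
print(f"cwnd after loss: {state['cwnd']:.0f} bytes")
```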
3) _Internet protocol layer optimization:_ Traditional IP protocols support reliable service data delivery, without providing strict QoS guarantees. To frame the development of new technologies for deterministic IP networks, the Internet Engineering Task Force (IETF) and Institute of Electrical and Electronics Engineers (IEEE) Time Sensitive Network (TSN) have specified target requirements and potential solutions for guaranteed bandwidth, bounded end-to-end latency and bounded jitter [14].
## II Fundamental Challenges to the Wireless Communication Networks
Despite all the above efforts toward better end-to-end service performance, there are still several fundamental limitations of the current wireless communication networks. As shown in Fig.1, the traditional QoS management is confined to the wireless networks (i.e. the part below transport/IP layer, as specified in [15]). There is a clear lack of efficient QoS management in the application layer, transport layer and IP layer, which has been and will be hampering end-to-end mobile service optimization.
_Lack of determinism in the application layer:_ Significant progress has been made in mobile applications to better encode and decode the source information, e.g. to achieve a higher compression ratio while maintaining satisfactory quality. However, dynamic variations in the space, time, and frequency domains are often observed in the wireless channels, especially in high-mobility scenarios. These dynamic variations often lead to re-transmission in the media access control (MAC), radio link control (RLC) and transport layers. Currently, the applications do not have sufficient visibility into the TCP and IP performance, the wireless channels, and the QoS guarantee capability at the RAN side. This means that the data format of the application layer, if not configured properly, may not be satisfactorily supported by the subsequent layers.
_Lack of determinism in the transport layer:_ Wireless TCP has become the bottleneck of wireless communications, mainly for the following reasons.
1) _Difficulty in identification of the congestion reasons:_ The traditional TCP schemes work well for traditional Internet traffic since packet loss is mainly caused by congestion. However, in wireless communication networks, the packet loss is most likely caused by variations of wireless channels, rather than true congestion. The erroneously triggered congestion control can result in lower bandwidth utilization.
2) _Difficulty in adjusting the transmitted data packet size:_ TCP protocols use a probing mechanism to change the packet transmission size, with inherent blindness towards the buffer status at the RAN, which is determined by the current buffer occupancy and the data volume scheduled for transmission during the next TCP window. Because the TCP layer does not know the scheduling algorithms at the RAN, it has no clue how much data will be transmitted for each mobile user, and it is thus unable to adjust the packet transmission speed accordingly.
3) _Difficulty in QoS guarantee in the TCP layer:_ The transport layer manages data flows uniformly and maintains best-effort fairness for all the applications. Consequently, the TCP layer faces fundamental challenges in satisfying various QoS requirements like high data rate, constant bit rate, delay tolerance, or high reliability. It is obvious that poor or no QoS guarantee at the TCP layer will inevitably degrade the end-to-end service quality.
_Lack of determinism in the wireless QoS management:_ For information security and privacy purposes, mobile over-the-top (OTT) providers will not allow further exposure of application data to the operators. Consequently, it is challenging for the operators to obtain service characteristics and to make a proper prediction and guarantee of QoS with optimized resource allocation, thus degrading network efficiency. To make things worse, even if the CN identifies QoS requirements accurately, the configured QoS parameters may not be achievable in the RAN. This is because the temporal and geographical distribution of the mobile services may fluctuate, and the wireless channels are dynamically changing almost all the time in mobile communications. No matter how smart and powerful the resource scheduler at the RAN is, there is a considerable possibility that the mobile service may not be supported due to a lack of radio resources. This is the essential reason for the lack of determinism in the wireless QoS management.
Fig. 1: Limitations of the end-to-end wireless networks
_Lack of determinism in the IP layer:_ There exist several challenges to providing end-to-end latency and bounded jitter guarantees in the IP layer. Firstly, due to the multiplexing nature of the data bursts, the varying elasticity of mobile traffic needs to be controlled at the ingress, thus incurring an extra latency depending on the burst size. Secondly, generally a very large number of devices and users will use the IP networks simultaneously. It would be difficult to manage the delay and jitter of each mobile traffic after being routed many times.
Based on the above analysis, due to the inherent loose coordination between different layers in the OSI service model, the end-to-end transmission is inevitably uncertain and unpredictable, thus rendering the end-to-end QoS guarantee unreliable and challenging. This also results in very low network efficiency. Unfortunately, this issue has not received due attention for various reasons in the wireless communication industry.
## III New Paradigm of Wireless Communications with Deterministic End-to-End Transmission
Deterministic networking is especially important for emerging time-critical applications, such as industrial automation, smart grids, and telesurgery. In order to tackle the fundamental challenges facing the wireless communication networks and to achieve maximized network efficiency and quality of service, it is highly desirable to introduce a certain determinism into the end-to-end data transmission. The following design methodologies for deterministic end-to-end transmission need to be taken into consideration in the design of 6G and beyond.
_Deterministic QoS jointly determined for each layer:_ In the current 5G networks, the QoS parameters are determined in the CN, which has no knowledge of whether these parameters can be supported in the RAN, especially when the mobile resources are limited in the peak wireless traffic hours. Therefore, coordination between the CN and RAN is necessary, and the final QoS parameters (determined in the CN, RAN or other management platforms) should be sufficiently supported by the RAN scheduling algorithms. For a better end-to-end QoS guarantee, the QoS parameters for each layer need to be jointly determined. One design example is given to illustrate how to jointly determine the QoS parameters; a toy sketch of the admission step in 3) follows the list:
1) The CN or other management entity in the network estimates the QoS requirements of each data flow. Traditionally, these QoS requirements are service targets of the wireless networks. For the future design, the QoS requirements should be for the end-to-end communication.
2) The CN or other management entity in the network will coordinate the end-to-end QoS requirements for all the layers, including the application layer, transport layer, IP layer, CN, RAN and the user equipment (UE). For example, the delay budget will be partitioned into different layers to jointly fulfill the end-to-end delay requirements.
3) The RAN calculates whether the wireless QoS requirements of all users can be supported in the RAN. If yes, the QoS parameters will be used at the RAN as the scheduling targets. If not all the users' QoS requirements can be supported, even with the smartest scheduling algorithm, the RAN will suggest QoS degradation to the CN based on some fairness metric, e.g. one that ensures the same ratio of scheduled data packet size to required data packet size for all users or data flows. Upon receiving the QoS adjustment suggestions from the RAN, the CN or other management entity will re-calculate the QoS for each layer and communicate the QoS parameters to the other layers.
4) For each user or data flow, the RAN calculates how many data need to be transmitted during a certain time window to guarantee the QoS requirements and schedules the radio resource accordingly. The buffer status and scheduled data packet size within a certain time window will be used at the TCP layer to adjust the TCP transmit packet size.
5) Based on the coordination between the CN and RAN, or even between all the layers, the QoS requirements for all the layers will be finally determined before the data transmission. Alternatively, a powerful management center in the network is responsible for joint optimization of the end-to-end QoS configuration and configures QoS for each layer. This will facilitate deterministic end-to-end QoS provision.
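A minimal sketch of the admission check in step 3), assuming a hypothetical per-window byte budget as the RAN capacity model: if the aggregate requirement exceeds the radio capacity, every user is degraded by the same scheduled-to-required ratio.

```python
def ran_admission(required_bytes, capacity_bytes):
    """Return (granted allocation, common admission ratio)."""
    total = sum(required_bytes.values())
    if total <= capacity_bytes:
        return dict(required_bytes), 1.0     # all QoS targets admitted
    ratio = capacity_bytes / total           # fairness: one ratio for all
    return {u: r * ratio for u, r in required_bytes.items()}, ratio

required = {"UE1": 6e6, "UE2": 3e6, "UE3": 3e6}      # bytes per window
granted, ratio = ran_admission(required, capacity_bytes=9e6)
print(f"admission ratio = {ratio:.2f}, granted = {granted}")
```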
Note that during data transmission, the UE feeds back the end-to-end QoS status to the wireless network, which can serve as an important reference for the adjustment of the end-to-end and per-layer QoS parameters.
_To introduce QoS management in the application layer:_ Traditionally the wireless QoS management is based on the requirements of the mobile services, with proper RAN resource allocation and air interface transmission schemes like channel coding, modulation, waveform, multiple access, and multiple antenna technologies. However, in the mobile communications, the mobile services also need to match the conditions of wireless networks. The application data format need to be adjusted according to the jointly determined QoS parameters at the application layer. For example, if the wireless channel does not support 1080p video for a video stream service, video packet with 720p may achieve better QoS or QoE performance. QoS flows will be generated for mobile service (e.g. FTP, video call, streaming) with different QoS requirements. This leads to more reliable service data format selection and deterministic QoS guarantee at the application layer. The transmission efficiency will be significantly improved.
_To introduce QoS management in the transport layer:_ This requires that the TCP layer has sufficient knowledge of the QoS requirements of each data flow from the application layer, and makes optimal data packet scheduling decisions to meet the QoS requirements and the fairness metric. A context-oriented TCP design is proposed in [16], which understands the application context and adapts to varying network conditions with flow-based QoS control. This scheme introduces QoS provision before the mobile network (as traditionally implemented in the core network). However, the QoS parameters may not be accurate and, more importantly, may not be well supported by the mobile network. With the jointly determined QoS parameters available at the TCP layer, satisfactory QoS management will be more conveniently achieved, thus laying a solid foundation for the end-to-end QoS optimization.
_RAN information assisted deterministic TCP:_ The TCP layer should be able to adjust its transmit strategy according to the wireless channel variations and the scheduling results of the RAN. This mandates feedback of RAN information from the RAN to the TCP layer for each TCP transmit window. The information may include the scheduling results, such as how much data will be scheduled for each user in the next time window, the current buffer status, and the channel statistics (note that it would be extremely challenging to feed back the real-time channel information due to the transmit latency from the RAN to TCP). Based on this RAN information, the TCP layer knows exactly how much data to transmit in the next transmit window to satisfy the QoS requirements; a minimal sizing sketch is given below. Hence, deterministic TCP transmission is achieved. In contrast to traditional TCP schemes, transmission efficiency is significantly improved, with maximum transmit data packet size, lowest congestion ratio, and lowest latency.
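The sizing rule itself is elementary once the RAN feedback is available; the following sketch (hypothetical names, a single aggregate buffer) computes the exact amount of new data to inject in the next TCP window.

```python
def next_window_bytes(scheduled_bytes, ran_buffer_bytes, target_buffer_bytes=0):
    # New data = what the RAN will drain in the next window, minus what
    # is already queued, plus any headroom we want to keep in the buffer.
    return max(0, scheduled_bytes - ran_buffer_bytes + target_buffer_bytes)

# RAN reports 500 kB scheduled for the next window and a 120 kB backlog.
print(next_window_bytes(scheduled_bytes=500_000, ran_buffer_bytes=120_000))
```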
_Deterministic IP layer:_ Technologies that provide determinism in the IP network have been studied in recent years, in an effort to achieve better guarantee of minimum and maximum end-to-end latency from source to destination, and bounded packet delay variation, to minimize packet loss ratios, and to minimize out-of-order packet delivery. A deterministic IP network requires a completely redesigned architecture that includes network resource layer management, deterministic IP routing layer management, and deterministic service layer management. Based on the jointly determined QoS parameters, deterministic IP technologies will further ensure QoS performance of the data packets [17].
_Prediction of the wireless channels to facilitate reliable pre-scheduling:_ The TCP transmit period is much larger than the resource scheduling time interval at the RAN (usually at the millisecond level). This mandates reliable resource pre-scheduling in the RAN scheduler during the next TCP transmit period. Toward this end, accurate estimation and prediction of the dynamic wireless channels in the temporal and frequency domains will play an essential role. Recently, artificial intelligence and deep learning technologies have been introduced in wireless channel prediction, which has exhibited large performance improvements [18]. Alternatively, resource scheduling can be implemented in transform domains, like the delay, Doppler, and angular domains, where the channels' temporal correlations are relatively much higher than in the temporal and frequency domains [19]. This enables more reliable pre-scheduling, which facilitates a more deterministic and efficient end-to-end QoS guarantee.
_Network architecture to support deterministic end-to-end transmission:_ Following the above design methodologies, deterministic and jointly determined QoS parameters from the application layer down to the wireless communication network will be achieved. Toward this end, tight coordination or even convergence between different layers is highly motivated. A network architecture which facilitates smooth information exchange between different layers and joint optimization will be one important design direction. The new network architecture will also mandate significant changes to the related standardization, including new entities, functions, interfaces, and signaling. This will bring a paradigm shift to mobile communications, because all the players in the end-to-end communication industry need to collaborate, and the required standardization efforts will span traditionally separate standardization bodies like 3GPP, IETF, ITU, and ISO.
A brief summary of the end-to-end design is illustrated in Fig. 2.
## IV Conclusion
In this comment article, the fundamental challenges to the end-to-end QoS guarantee in wireless communication networks like 5G are analyzed. The inherent loose coordination between different layers leads to unreliable data transmission with uncontrollable packet delay and packet error rate, and a terrible waste of network resources incurred for data re-transmission. To tackle these challenges, design methodologies for deterministic end-to-end 6G and future wireless communication networks are given, which will bring a paradigm shift to the converged design of different layers in the OSI service model.
Fig. 2: End-to-end deterministic transmission for 6G and beyond
|
2308.11350 | The intuitionistic-like logic based on a poset | The aim of the present paper is to show that the concept of intuitionistic
logic based on a Heyting algebra can be generalized in such a way that it is
formalized by means of a bounded poset. In this case it is not assumed that the
poset is relatively pseudocomplemented. The considered logical connectives
negation, implication or even conjunction are not operations in this poset but
so-called operators since they assign to given entries not necessarily an
element of the poset as a result but a subset of mutually incomparable elements
with maximal possible truth values. We show that these operators for negation
and implication can be characterized by several simple conditions formulated in
the language of posets together with the operator of taking the lower cone.
Moreover, our implication and conjunction form an adjoint pair. We call these
connectives "unsharp" or "inexact" in accordance with the existing literature.
We also introduce the concept of a deductive system of a bounded poset with
implication and prove that it induces an equivalence relation satisfying a
certain substitution property with respect to implication. Moreover, the
restriction of this equivalence on the base set is uniquely determined by its
kernel, i.e. the class containing the top element. | Ivan Chajda, Helmut Länger | 2023-08-22T10:58:54Z | http://arxiv.org/abs/2308.11350v1 | # The intuitionistic-like logic based on a poset
###### Abstract
The aim of the present paper is to show that the concept of intuitionistic logic based on a Heyting algebra can be generalized in such a way that it is formalized by means of a bounded poset. In this case it is not assumed that the poset is relatively pseudocomplemented. The considered logical connectives negation, implication or even conjunction are not operations in this poset but so-called operators since they assign to given entries not necessarily an element of the poset as a result but a subset of mutually incomparable elements with maximal possible truth values. We show that these operators for negation and implication can be characterized by several simple conditions formulated in the language of posets together with the operator of taking the lower cone. Moreover, our implication and conjunction form an adjoint pair. We call these connectives "unsharp" or "inexact" in accordance with the existing literature. We also introduce the concept of a deductive system of a bounded poset with implication and prove that it induces an equivalence relation satisfying a certain substitution property with respect to implication. Moreover, the restriction of this equivalence on the base set is uniquely determined by its kernel, i.e. the class containing the top element.
**AMS Subject Classification:** 06A11, 06D15, 06D20, 03G25, 03B22
**Keywords:** Bounded poset, logic based on a poset, unsharp negation, unsharp implication, adjoint operators, Modus Ponens, deductive system, equivalence relation induced by a deductive system
## 1 Introduction
Intuitionistic logic was algebraically formalized by means of relatively pseudocomplemented semilattices, see [1], [2] and [14] - [17]. If such a semilattice is even a lattice, it is called a Heyting algebra, see [14] and [17]. It is well-known that every relatively pseudocomplemented lattice is distributive. The concept of relatively pseudocomplemented lattices was extended by the first author to non-distributive lattices in [3] under the name sectional pseudocomplementation. It was further extended also for posets in [10]. Thus a kind of intuitionistic logic based on sectionally pseudocomplemented lattices was realized.
However, there exist logics based on bounded posets, e.g. the logic of quantum mechanics, see e.g. [7], [8], [13] and [18]. In particular, orthomodular posets on which the logic of quantum mechanics is based are thoroughly studied in [7], [8], [11] and [18].
In order to formalize a logic based on a poset, there are two possible ways how to solve the problem that the operations meet and join need not be defined everywhere. One method is to consider these operations as partial only and then the logical connectives formalized by them and by partial operations derived from them are also only partial. The disadvantage of this approach is that in some cases, such a logic cannot answer the question what is a logical conclusion of some reasoning (or sequential derivation). Hence we prefer another approach, namely we consider so-called operators instead of operations which assign to given entries not necessarily an element as the result, but a certain subset of elements with maximal values. We work with such results of logical connectives which are subsets, but their elements are mutually incomparable. It means that one cannot prefer one element of this set with respect to the other elements. Hence such a logic based on a poset gets an answer concerning logical derivation in each case, but the result may be "inexact" or "unsharp", see e.g. [9] or [13]. We suppose that if an exact logical derivation is impossible from the reasons mentioned above, it is better to have an unsharp result than none. In our opinion unsharp reasoning is an alternative to multiple-valued reasoning which is now generally accepted though it was not accepted by all specialists at first.
In their recent paper [9] the authors showed that a certain intuitionistic-like logic can be derived based on arbitrary meet-semilattices satisfying the Ascending Chain Condition (ACC) regardless whether there exist (relative) pseudocomplements or not. We can ask if similar logics may be derived also by means of arbitrary posets with \(0\) satisfying the ACC. Within meet-semilattices the logical connective conjunction is usually formalized by the meet operation. Then the unsharp implication as introduced in [9] forms an adjoint operator to conjunction. In the case of a poset we must find another operator formalizing conjunction. In [13] so-called unsharp properties of formal logics were used. In [9] we considered unsharpness of implication as well as of negation. This means that for given propositions \(p\) and \(q\) the results of the implication \(p\to q\) and of the negation \(\neg p\), respectively, need not be elements of the corresponding meet-semilattice \(\mathbf{L}\), but may be non-empty subsets of it consisting of mutually incomparable maximal elements. When proceeding from meet-semilattices to posets, we will apply this principle again for the connective conjunction in such a way that implication and conjunction will still be connected by a certain kind of adjointness, see e.g. [6]. Due to this fact, in such logics we still have the derivation rule Modus Ponens.
## 2 Preliminaries
In the following we identify singletons with their unique element, i.e. we will write \(x\) instead of \(\{x\}\). Moreover, all posets considered in the sequel are assumed to satisfy the Ascending Chain Condition which we will abbreviate by ACC. This implies that every element lies under a maximal one. Of course, every finite poset satisfies the ACC.
In the sequel we will use the following notation: Let \(\mathbf{P}=(P,\leq)\) be a poset, \(a,b,c\in P\)
and \(A,B\) be non-empty subsets of \(P\).
\[\begin{aligned}\operatorname{Max}A&:=\text{the set of all maximal elements of }A,\\ L(a,b)&:=\{x\in P\mid x\leq a,b\},\\ \Lambda(A,B)&:=\bigcup_{x\in A,\,y\in B}L(x,y),\\ A\leq B&\text{ if }x\leq y\text{ for all }x\in A\text{ and all }y\in B,\\ A\leq_{1}B&\text{ if for every }x\in A\text{ there exists some }y\in B\text{ with }x\leq y,\\ A=_{1}B&\text{ if both }A\leq_{1}B\text{ and }B\leq_{1}A.\end{aligned}\]
The relations \(\leq_{1}\) and \(=_{1}\) are a quasiorder relation on \(2^{P}\) and an equivalence relation on \(2^{P}\), respectively. It is easy to see that \(A\leq_{1}\operatorname{Max}B\) provided \(A\subseteq B\) and that \(A\leq_{1}b\) is equivalent to \(A\leq b\).
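For finite posets all of these notions are directly computable; the following Python sketch (a hypothetical encoding with the order given as a set of pairs) implements \(\operatorname{Max}\), \(L(a,b)\), \(\Lambda(A,B)\) and \(\leq_{1}\), and evaluates them on the four-element diamond \(0<a,b<1\).

```python
from itertools import product

def maximal(A, leq):                       # Max A
    return {x for x in A if not any(x != y and (x, y) in leq for y in A)}

def lower_cone(a, b, P, leq):              # L(a, b)
    return {x for x in P if (x, a) in leq and (x, b) in leq}

def Lambda(A, B, P, leq):                  # union of L(x, y) over A x B
    return set().union(*(lower_cone(x, y, P, leq) for x, y in product(A, B)))

def leq1(A, B, leq):                       # A <=_1 B
    return all(any((x, y) in leq for y in B) for x in A)

# The diamond: 0 < a, b < 1 with a and b incomparable.
P = {"0", "a", "b", "1"}
leq = {(x, x) for x in P} | {("0", x) for x in P} | {(x, "1") for x in P}
print(maximal({"0", "a", "b"}, leq))       # the antichain {a, b}
print(Lambda({"a"}, {"b"}, P, leq))        # {'0'}
print(leq1({"0", "a"}, {"1"}, leq))        # True
```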
If a poset \(\mathbf{P}\) has a bottom and a top element, we will denote them by \(0\) and \(1\), respectively, and we will express this fact by writing \(\mathbf{P}=(P,\leq,0,1)\). Such a poset is called _bounded_. The element \(c\) is called the _relative pseudocomplement_ of \(a\) with respect to \(b\), formally \(c=a*b\), if \(c\) is the greatest element \(x\) of \(P\) satisfying \(L(a,x)\leq b\). If \(\mathbf{P}=(P,\leq,0)\) then the relative pseudocomplement \(a*0\) of \(a\) with respect to \(0\) is denoted by \(a^{*}\) and called the _pseudocomplement_ of \(a\), see e.g. [4] or [12]. Especially, we have \(L(a,a^{*})=0\).
## 3 Negation derived in posets
Consider a bounded poset \(\mathbf{P}=(P,\leq,0)\) satisfying the ACC. For \(a\in P\) we define
\[a^{0}:=\operatorname{Max}\{x\in P\mid L(a,x)=0\}.\]
Clearly, \(a^{0}\) need not be an element of \(P\), but it is a non-empty subset of \(P\) since \(L(a,0)=0\). From now on, we will call \(a^{0}\) the _unsharp negation_ of \(a\). Of course, if the pseudocomplement \(a^{*}\) of \(a\) exists then \(a^{0}=a^{*}\).
We extend this concept to subsets of \(P\) as follows: If \(A\) is a non-empty subset of \(P\) then
\[A^{0}:=\operatorname{Max}\{x\in P\mid L(x,y)=0\text{ for all }y\in A\}.\]
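On a finite poset the unsharp negation can be computed by brute force; a self-contained sketch on the same hypothetical diamond poset as above (helper definitions repeated so the snippet runs on its own):

```python
P = {"0", "a", "b", "1"}                   # diamond: 0 < a, b < 1
leq = {(x, x) for x in P} | {("0", x) for x in P} | {(x, "1") for x in P}

def maximal(A):
    return {x for x in A if not any(x != y and (x, y) in leq for y in A)}

def L(a, b):
    return {x for x in P if (x, a) in leq and (x, b) in leq}

def neg(A):                                # A^0; use neg({a}) for a^0
    return maximal({x for x in P if all(L(x, y) == {"0"} for y in A)})

for x in sorted(P):
    print(x, neg({x}), neg(neg({x})))      # x, x^0, x^00
# One finds a^0 = {b}, 0^0 = {1}, 1^0 = {0}, and x^000 = x^0 throughout,
# in line with the properties proved below.
```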
We are going to show that the negation \({}^{0}\) defined in this way shares several properties with the negation in intuitionistic logic.
**Theorem 3.1**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC, \(a,b\in P\) and \(A,B\) non-empty subsets of \(P\). Then the following holds:_
* (i) \(A^{0}\) _is an antichain, in particular_ \(a^{0}\) _is an antichain,_
* (ii) \(0^{0}=1\) _and_ \(1^{0}=0\)_,_
* (iii) \(L(x,y)=0\) _for all_ \(x\in A\) _and all_ \(y\in A^{0}\)_, in particular_ \(L(a,y)=0\) _for all_ \(y\in a^{0}\)_,_
* (iv) \(A\leq_{1}B\) _implies_ \(B^{0}\leq_{1}A^{0}\)_, in particular_ \(a\leq b\) _implies_ \(b^{0}\leq_{1}a^{0}\)_,_
* (v) \(A^{0}=_{1}B^{0}\) _implies_ \(A^{0}=B^{0}\)_, in particular_ \(a^{0}=_{1}b^{0}\) _implies_ \(a^{0}=b^{0}\)_;_ \(A^{0}=_{1}b\) _implies_ \(A^{0}=b\)_, in particular_ \(a^{0}=_{1}b\) _implies_ \(a^{0}=b\)_,_
* (vi) \(A\leq_{1}A^{00}\)_, in particular_ \(a\leq_{1}a^{00}\)_;_ \(A^{000}=A^{0}\)_, in particular_ \(a^{000}=a^{0}\)_,_
* (vii) \(\Lambda\Big{(}a,\big{(}L(a,b)\big{)}^{0}\Big{)}=_{1}\Lambda(a,b^{0})\)_._
Proof.:
* (i)-(iii) follow directly from the definition of \({}^{0}\).
* (iv) If \(A\leq_{1}B\) then \[\{x\in P\mid L(x,y)=0\text{ for all }y\in B\}\subseteq\{x\in P\mid L(x,y)=0 \text{ for all }y\in A\}.\]
* (v) Assume \(a\in A^{0}=_{1}B^{0}\). Because of \(A^{0}\leq_{1}B^{0}\) there exists some \(b\in B^{0}\) with \(a\leq b\), and because of \(B^{0}\leq_{1}A^{0}\) there exists some \(c\in A^{0}\) with \(b\leq c\). Together we obtain \(a\leq b\leq c\) and hence \(a\leq c\). Since \(a\) and \(c\) belong to the antichain \(A^{0}\) we conclude \(a=c\) and therefore \(a=b\in B^{0}\). This shows \(A^{0}\subseteq B^{0}\). Interchanging the roles of \(A^{0}\) and \(B^{0}\) yields \(B^{0}\subseteq A^{0}\). Together we obtain \(A^{0}=B^{0}\). Now assume \(a\in A^{0}=_{1}b\). Because of \(A^{0}\leq_{1}b\) we have \(a\leq b\), and because of \(b\leq_{1}A^{0}\) there exists some \(c\in A^{0}\) with \(b\leq c\). Together we obtain \(a\leq b\leq c\) and hence \(a\leq c\). Since \(a\) and \(c\) belong to the antichain \(A^{0}\) we conclude \(a=c\) and therefore \(a=b\). This shows \(A^{0}=b\).
* (vi) We have \(A\subseteq\{x\in P\mid L(x,y)=0\text{ for all }y\in A^{0}\}\) and hence \(A\leq_{1}A^{00}\). Replacing \(A\) by \(A^{0}\) we get \(A^{0}\leq_{1}A^{000}\). From \(A\leq_{1}A^{00}\) and (iv) we obtain \(A^{000}\leq_{1}A^{0}\). Together we have \(A^{000}=_{1}A^{0}\) whence \(A^{000}=A^{0}\) according to (v).
* (vii) Any of the following statements implies the next one:
* (1) \(L(x,y)=0\) for all \(x\in\big{(}L(a,b)\big{)}^{0}\) and all \(y\in L(a,b)\),
* (2) \(L(b,y)=0\) for all \(x\in\big{(}L(a,b)\big{)}^{0}\) and all \(y\in L(a,x)\),
* (3) \(\Lambda\Big{(}a,\big{(}L(a,b)\big{)}^{0}\Big{)}\leq_{1}\Lambda(a,b^{0})\). That (1) implies (2) can be seen as follows: If \(x\in\big{(}L(a,b)\big{)}^{0}\), \(y\in L(a,x)\) and \(c\in L(b,y)\) then \(c\in L(a,b)\) and \(c\leq x\) and therefore \(L(x,c)=0\) by (1) whence \(L(c)=L(x,c)=0\), i.e. \(c=0\). From \(L(a,b)\leq b\) we conclude \(b^{0}\leq_{1}\big{(}L(a,b)\big{)}^{0}\) according to (iv) and hence \[\Lambda(a,b^{0})\leq_{1}\Lambda\Big{(}a,\big{(}L(a,b)\big{)}^{0}\Big{)}.\]
**Example 3.2**.: _Consider the bounded poset \(\mathbf{P}=(P,\leq,0,1)\) visualized in Fig. 1:_
_We have_
\[\begin{array}{c|c|c|c|c|c|c|c|c}x&0&a&b&c&d&e&f&1\\ \hline x^{0}&1&f&ac&d&c&0&a&0\\ \hline x^{00}&0&a&b&c&d&1&f&1\end{array}\]
(_Here and in the following we write \(a_{1}\ldots a_{n}\) instead of \(\{a_{1},\ldots,a_{n}\}\)._) _One can see that \(x^{00}=x\) for all \(x\in P\setminus\{e\}\) and \(e^{00}=1\neq e\). But \(e^{000}=1^{0}=0=e^{0}\) and hence \({\bf P}\) satisfies the identity \(x^{000}\approx x^{0}\) in accordance with_ (vi) _of Theorem 3.1._
The following example shows that (iv) of Theorem 3.1 does not hold for \(\leq\) instead of \(\leq_{1}\).
**Example 3.3**.: _Consider the bounded poset \({\bf P}=(P,\leq,0,1)\) depicted in Fig. 2:_
**Theorem 3.4**.: _Let \((P,\leq,0,1)\) be a bounded poset satisfying the ACC and \({}^{0}\) a unary operator on \(P\). Then the following conditions \((1)\) and \((2)\) are equivalent:_
1. \(x^{0}=\mathrm{Max}\{y\in P\mid L(x,y)=0\}\) _for all_ \(x\in P\)_,_
2. _the operator_ \({}^{0}\colon 2^{P}\to 2^{P}\) (_restricted to_ \(P\)) _satisfies the following conditions for all_ \(x,y\in P\)_:_
* (P1) \(x^{0}\) _is an antichain,_
* (P2) \(L(x,y)=0\) _for all_ \(y\in x^{0}\)_,_
* (P3) \(\Lambda\Big{(}x,\big{(}L(x,y)\big{)}^{0}\Big{)}=_{1}\Lambda(x,y^{0})\)_._
Proof.:
\((1)\)\(\Rightarrow\)\((2)\):
This follows from Theorem 3.1.
\((2)\)\(\Rightarrow\)\((1)\):
If \(L(x,y)=0\) then according to (P3) we have
\[y=_{1}L(y,1)=\Lambda\Big{(}y,\big{(}L(x,y)\big{)}^{0}\Big{)}=\Lambda\Big{(}y, \big{(}L(y,x)\big{)}^{0}\Big{)}=_{1}\Lambda(y,x^{0})\leq_{1}x^{0}\]
and hence \(y\leq_{1}x^{0}\). Conversely, if \(y\leq_{1}x^{0}\) then according to (P2) we get
\[L(x,y)\subseteq\Lambda(x,x^{0})=0\]
which implies \(L(x,y)=0\). This shows that \(L(x,y)=0\) is equivalent to \(y\leq_{1}x^{0}\). We conclude
\[\mathrm{Max}\{y\in P\mid L(x,y)=0\}=\mathrm{Max}\{y\in P\mid y\leq_{1}x^{0}\} =x^{0}.\]
The last equality can be seen as follows. Let \(z\in\mathrm{Max}\{y\in P\mid y\leq_{1}x^{0}\}\). Then \(z\leq_{1}x^{0}\), i.e. there exists some \(u\in x^{0}\) with \(z\leq u\). We have \(u\leq_{1}x^{0}\). Then \(z<u\) would imply \(z\notin\mathrm{Max}\{y\in P\mid y\leq_{1}x^{0}\}\), a contradiction. This shows \(z=u\in x^{0}\). Conversely, assume \(z\in x^{0}\). Then \(z\leq_{1}x^{0}\). If \(z\notin\mathrm{Max}\{y\in P\mid y\leq_{1}x^{0}\}\) then there would exist some \(u\in P\) with \(z<u\leq_{1}x^{0}\) and hence there would exist some \(w\in x^{0}\) with \(z<u\leq w\) contradicting (P1). This shows \(z\in\mathrm{Max}\{y\in P\mid y\leq_{1}x^{0}\}\).
## 4 Unsharp implication and conjunction
Let us recall that Brouwerian semilattices are relatively pseudocomplemented meet-semilattices, see e.g. [15] and [16]. It is well known that in logics based on Brouwerian semilattices or on Heyting algebras the relative pseudocomplement is considered as the logical connective implication, i.e.
\[x\to y=x*y=\mathrm{Max}\{z\in S\mid x\wedge z\leq y\}.\]
Of course, \(\mathrm{Max}\{z\in S\mid x\wedge z\leq y\}\) is a singleton, thus \(x*y\) is an element of \(S\). In our case we will use formally the same definition, but the result \(x\to y\) need not be an element of the poset \(\mathbf{P}\) in question, but may be a subset of \(P\) in general. However, the elements of \(x\to y\) are mutually incomparable and of a maximal possible value. Hence
one cannot pick up one of them to be the preferable element. Now we are going to define our main concept, i.e. the operator \(\rightarrow\) which formalizes the logical connective implication. As mentioned above, for given entries \(x\) and \(y\) the result of \(x\to y\) may be a subset of \(P\). Due to this, if we combine this operator in several formulas, we must define its value also for entries which are subsets. We define:
Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC, \(a,b\in P\) and \(A,B\) non-empty subsets of \(P\). Then
* (I) \(a\to b:=\operatorname{Max}\{x\in P\mid L(a,x)\leq b\}\), \(A\to B:=\operatorname{Max}\{y\in P\mid L(x,y)\leq_{1}B\text{ for all }x\in A\}\).
* \(a\odot b:=\operatorname{Max}L(a,b)\), \(A\odot B:=\operatorname{Max}\Lambda(A,B)\).
Of course, if \(\mathbf{P}\) is a lattice then \(a\odot b=a\wedge b\).
It is evident that \(a\to 0=a^{0}\) as usual in intuitionistic logic.
In order to estimate how reasonable our definition of implication is we can verify its relationship with conjunction. These two unsharp logical connectives are related as follows:
* (AD) \(x\odot y\leq z\) if and only if \(x\leq_{1}y\to z\)
or, even more general,
\[A\odot y\leq z\text{ if and only if }A\leq_{1}y\to z\]
for every non-empty subset \(A\) of \(P\). This is a variant of adjointness of the operators \(\odot\) and \(\rightarrow\). Hence the connectives introduced before seem to be sound. We list some of their properties. It is evident that \(\odot\) is commutative and \(x\odot 1\approx 1\odot x\approx x\).
Moreover, since trivially \(x\to y\leq_{1}x\to y\), we infer by (AD) (applied with \(A=x\to y\))
\[(x\to y)\odot x\leq_{1}y\]
which is a kind of the Modus Ponens derivation rule in intuitionistic logic. Namely, it says that the truth value of the proposition \(y\) cannot be less than the truth values of the propositions \(x\) and \(x\to y\) despite the fact that \(x\to y\) may consist of a number of propositions.
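These connectives can be checked mechanically on small posets; the self-contained sketch below (the same hypothetical diamond poset as in the earlier snippets) brute-forces the adjointness (AD) together with the Modus Ponens inequality \((x\to y)\odot x\leq_{1}y\).

```python
from itertools import product

P = {"0", "a", "b", "1"}                  # diamond: 0 < a, b < 1
leq = {(x, x) for x in P} | {("0", x) for x in P} | {(x, "1") for x in P}

def Max(A):
    return {x for x in A if not any(x != y and (x, y) in leq for y in A)}

def L(a, b):
    return {x for x in P if (x, a) in leq and (x, b) in leq}

def below(A, b):                          # A <= b
    return all((x, b) in leq for x in A)

def imp(a, b):                            # a -> b
    return Max({x for x in P if below(L(a, x), b)})

def odot(A, B):                           # A (.) B = Max Lambda(A, B)
    return Max(set().union(*(L(x, y) for x, y in product(A, B))))

def leq1(A, B):                           # A <=_1 B
    return all(any((x, y) in leq for y in B) for x in A)

# (AD): x (.) y <= z  iff  x <=_1 y -> z, for all x, y, z.
assert all(below(odot({x}, {y}), z) == leq1({x}, imp(y, z))
           for x, y, z in product(P, repeat=3))
# Modus Ponens: (x -> y) (.) x <=_1 y.
assert all(leq1(odot(imp(x, y), {x}), {y}) for x, y in product(P, repeat=2))
print("(AD) and Modus Ponens verified on the diamond")
```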
For the operator \(\rightarrow\) we prove the following result.
**Theorem 4.1**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC, \(a,b,c\in P\) and \(A,B\) non-empty subsets of \(P\). Then the following holds:_
* (i) \(a\to b\) _is an antichain,_
* (ii) \(b\leq_{1}a\to b\)_,_
* (iii) \(a\leq_{1}(a\to b)\to b\)_,_
* (iv) \(a\leq b\) _implies_ \(c\to a\leq_{1}c\to b\) _and_ \(b\to c\leq_{1}a\to c\)_,_
* (v) \(\Lambda(a,a\to b)=L(a,b)\)_,_
* (vi) \(\Lambda(a\to b,b)=_{1}b\)_,_
* (vii) \(1\to A=A\)_, especially_ \(1\to a=a\)_,_
* (viii) \(A\to B=1\) _if and only if_ \(A\leq_{1}B\)_, especially_ \(a\to b=1\) _if and only if_ \(a\leq b\)_,_
* (ix) \(A\odot b\leq c\) _if and only if_ \(A\leq_{1}b\to c\)_,_
* (x) \(a\odot(a\to b)=a\odot b\)_._
Proof.:
* (i), (ii), (vii) and (ix) follow directly from the definition of \(\to\).
* (iii) follows from \(L(x,a)=L(a,x)\leq b\) for all \(x\in a\to b\).
* (iv) follows since \(a\leq b\) implies \[\{x\in P\mid L(c,x)\leq a\}\subseteq\{x\in P\mid L(c,x)\leq b\},\] \[\{x\in P\mid L(b,x)\leq c\}\subseteq\{x\in P\mid L(a,x)\leq c\}.\]
* (v) If \(c\in\Lambda(a,a\to b)\) then there exists some \(d\in a\to b\) with \(c\in L(a,d)\). Since \(L(a,d)\leq b\) we have \(c\in L(a,b)\). Conversely, assume \(c\in L(a,b)\). Since \(L(a,b)\leq b\) there exists some \(d\in a\to b\) with \(b\leq d\). Now \(c\in L(a,d)\subseteq\Lambda(a,a\to b)\).
* (vi) Since \(L(a,b)\leq b\) there exists some \(c\in a\to b\) with \(b\leq c\). Now \(b\in L(c,b)\subseteq\Lambda(a\to b,b)\) showing \(b\leq_{1}\Lambda(a\to b,b)\). Of course, \(\Lambda(a\to b,b)\leq_{1}b\).
* (viii) The following are equivalent: \(A\to B=1\), \(L(x,1)\leq_{1}B\) for all \(x\in A\), \(A\leq_{1}B\).
* (x) According to (v) we have \[a\odot(a\to b)=\operatorname{Max}\Lambda(a,a\to b)=\operatorname{Max}L(a,b)=a \odot b.\]
We can see that our operator \(\to\) shares a lot of properties with the connective implication in intuitionistic logic based on a Heyting algebra. In particular, our \(\to\) is antitone in the first and monotone in the second entry.
**Example 4.2**.: _Consider the bounded poset \((\{0,a,b,c,d,e,1\},\leq,0,1)\) visualized in Fig. 3:_
_The operator table of \(\rightarrow\) is as follows:_
\[\begin{array}{c|c|c|c|c|c|c|c}\rightarrow&0&a&b&c&d&e&1\\ \hline 0&1&1&1&1&1&1&1\\ \hline a&bc&1&bc&bc&1&1&1\\ \hline b&ac&ac&1&ac&1&1&1\\ \hline c&de&de&de&1&de&de&1\\ \hline d&c&ac&bc&c&1&ce&1\\ \hline e&c&ac&bc&c&cd&1&1\\ \hline 1&0&a&b&c&d&e&1\\ \end{array}\]
Now we characterize the binary operator \(\rightarrow\) in a similar way as it was done for the unary operator \({}^{0}\) in Theorem 3.4.
**Theorem 4.3**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the \(\mathrm{ACC}\) and \(\rightarrow\) a binary operator on \(P\). Then the following conditions \((1)\) and \((2)\) are equivalent:_
1. \(x\to y=\mathrm{Max}\{z\in P\mid L(x,z)\leq y\}\) _for all_ \(x,y\in P\)_,_
2. _the operator_ \(\rightarrow\colon 2^{P}\times 2^{P}\to 2^{P}\) (_restricted to_ \(P\times P\)) _satisfies the following conditions for all_ \(x,y,z\in P\)_:_
* (R1) \(x\to y\) _is an antichain,_
* (R2) \(L(x,z)\leq y\) _implies_ \(z\leq_{1}x\to y\)_,_
* (R3) \(\Lambda(x,x\to y)=L(x,y)\)_._
Proof.:
\((1)\)\(\Rightarrow\)\((2)\):
This follows from Theorem 4.1.
\((2)\)\(\Rightarrow\)\((1)\):
According to (R2) we have that \(L(x,z)\leq y\) implies \(z\leq_{1}x\to y\). Conversely, if \(z\leq_{1}x\to y\) then by (R3) we obtain
\[L(x,z)\subseteq\Lambda(x,x\to y)=L(x,y)\leq y\]
and hence \(L(x,z)\leq y\). This shows that \(L(x,z)\leq y\) is equivalent to \(z\leq_{1}x\to y\). We conclude
\[\mathrm{Max}\{z\in P\mid L(x,z)\leq y\}=\mathrm{Max}\{z\in P\mid z\leq_{1}x \to y\}=x\to y.\]
The last equality can be seen as follows. Let \(u\in\mathrm{Max}\{z\in P\mid z\leq_{1}x\to y\}\). Then \(u\leq_{1}x\to y\), hence there exists some \(v\in x\to y\) with \(u\leq v\). We have \(v\leq_{1}x\to y\). Then \(u<v\) would imply \(u\notin\mathrm{Max}\{z\in P\mid z\leq_{1}x\to y\}\), a contradiction. This shows \(u=v\in x\to y\). Conversely, assume \(u\in x\to y\). Then \(u\leq_{1}x\to y\). If \(u\notin\mathrm{Max}\{z\in P\mid z\leq_{1}x\to y\}\) then there would exist some \(v\in P\) with \(u<v\leq_{1}x\to y\) and hence there would exist some \(w\in x\to y\) with \(u<v\leq w\) contradicting (R1). This shows \(u\in\mathrm{Max}\{z\in P\mid z\leq_{1}x\to y\}\).
Let us note that using both (R2) and (R3) we can prove the adjointness (AD) mentioned above.
**Remark 4.4**.: _It is of some interest that the unsharp operator \(\to\) can be characterized by three exact and simple conditions in the language of posets equipped with the operator \(L\) of the lower cone._
## 5 Deductive systems
As mentioned in Section 4, we can involve in our logic the derivation rule Modus Ponens. This rule is in fact closely related to the concept of a deductive system. Hence we define:
**Definition 5.1**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC and \(\to\) defined by_ (I)_. A deductive system of \(\mathbf{P}\) is a subset \(D\) of \(2^{P}\setminus\{\emptyset\}\) satisfying the following conditions:_
* (i) \(1\in D\)_,_
* (ii) _if_ \(x,y,z,u\in P\)_,_ \(x\to y\in D\) _and_ \((x\to y)\to(z\to u)\in D\) _then_ \(z\to u\in D\)_,_
* (iii) _if_ \(x,y,z\in P\) _and_ \(x\to y,y\to x\in D\) _then_ \((z\to x)\to(z\to y)\in D\) _and_ \((x\to z)\to(y\to z)\in D\)_._
In the following we make use of the identities \(x\odot 1\approx 1\odot x\approx x\) and \(1\to x\approx x\) (see Section 4).
**Remark 5.2**.: _If \(x\in D\), \(y\in P\) and \(x\to y\in D\) then according to_ (vii) _of Theorem 4.1 we have \(1\to x=x\in D\) and \((1\to x)\to(1\to y)=x\to y\in D\) and hence because of_ (ii) _of Definition 5.1 we get \(y=1\to y\in D\). Therefore, \(x\in D\) and \(x\to y\in D\) imply \(y\in D\), which justifies the name "deductive system" and illuminates the connection of such systems with the derivation rule Modus Ponens._
**Example 5.3**.: _Consider the bounded poset \(\mathbf{P}=(P,\leq,0,1)\) depicted in Fig. 4:_
_The operator table for \(\rightarrow\) is as follows:_
\[\begin{array}{c|c|c|c|c|c|c|c}\rightarrow&0&a&b&c&d&e&1\\ \hline 0&1&1&1&1&1&1&1\\ \hline a&b&1&b&1&1&1&1\\ \hline b&c&c&1&c&1&1&1\\ \hline c&b&e&b&1&1&e&1\\ \hline d&0&a&b&c&1&e&1\\ \hline e&0&c&b&c&d&1&1\\ \hline 1&0&a&b&c&d&e&1\\ \end{array}\]
_We want to show that \(D:=\{d,e,1\}\) is a deductive system of \(\mathbf{P}\)._ (_The following considerations make heavy use of the operation table for \(\rightarrow\)._) _In order to make the proof of this statement short and clear we define two subsets \(A\) and \(B\) of \(P\) as follows:_
\[A :=\{(c,a),(c,e)\}\cup D^{2}\cup\{(x,y)\in P^{2}\mid x\leq y\},\] \[B :=\{a,c\}^{2}\cup D^{2}\cup\{(x,x)\mid x\in P\}.\]
_Now let \(x,y,z\in P\). Then we have_
\[x\to y\in D\text{ if and only if }(x,y)\in A,\] \[x\to y,y\to x\in D\text{ if and only if }(x,y)\in B,\] \[1\in D\text{ according to the definition of }D,\] \[\text{if }x\in D\text{ and }(x,y)\in A\text{ then }y\in D,\] \[\text{if }(x,y)\in B\text{ then }(z\to x,z\to y),(x \to z,y\to z)\in B\]
_completing the proof that \(D\) is a deductive system of \(\mathbf{P}\)._
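Since the table above determines \(\to\) completely (in this example every \(x\to y\) is a singleton), the three conditions of Definition 5.1 can also be verified by brute force; a sketch:

```python
from itertools import product

E = "0abcde1"                             # elements, in the table's order
rows = {"0": "1111111", "a": "b1b1111", "b": "cc1c111",
        "c": "beb11e1", "d": "0abc1e1", "e": "0cbcd11",
        "1": "0abcde1"}                   # transcription of the table
imp = {(x, E[j]): rows[x][j] for x in E for j in range(7)}
D = set("de1")

assert "1" in D                                                   # (i)
assert all(imp[z, u] in D
           for x, y, z, u in product(E, repeat=4)
           if imp[x, y] in D and imp[imp[x, y], imp[z, u]] in D)  # (ii)
assert all(imp[imp[z, x], imp[z, y]] in D and
           imp[imp[x, z], imp[y, z]] in D
           for x, y, z in product(E, repeat=3)
           if imp[x, y] in D and imp[y, x] in D)                  # (iii)
print("D = {d, e, 1} is a deductive system")
```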
Since in posets we do not have everywhere defined lattice operations join and meet, we formulate a certain kind of compatibility in the following way.
**Definition 5.4**.: _Let \((P,\leq,0,1)\) be a bounded poset satisfying the ACC and \(\Phi\) be an equivalence relation on \(2^{P}\setminus\{\emptyset\}\). We say that \(\Phi\) satisfies the substitution property with respect to \(\odot\) if for all \(x,y\in P\) we have_
\[1\;\Phi\;x\to y\text{\ \ implies \ \ }x\odot 1\;\Phi\;x\odot(x\to y).\]
_We say that \(\Phi\) satisfies the substitution property with respect to \(\rightarrow\) if for all \(x,y,z,u\in P\) we have_
\[x\;\Phi\;y\text{\ implies }x\to x\;\Phi\;x\to y,\] \[x\;\Phi\;y\text{\ implies }(z\to x)\rightarrow(z\to x)\;\Phi\;(z \to x)\rightarrow(z\to y),\] \[x\;\Phi\;y\text{\ implies }(x\to z)\rightarrow(x \to z)\;\Phi\;(x\to z)\rightarrow(y\to z),\] \[1\;\Phi\;x\to y\text{\ implies }1\rightarrow(z\to u)\;\Phi\;(x\to y) \rightarrow(z\to u).\]
If an equivalence relation on \(2^{P}\setminus\{\emptyset\}\) satisfies the substitution property with respect to \(\odot\) and \(\rightarrow\) then we can easily describe the relationship to its kernel.
**Lemma 5.5**.: _Let \((P,\leq,0,1)\) be a bounded poset satisfying the ACC, \(\Phi\) an equivalence relation on \(2^{P}\setminus\{\emptyset\}\) satisfying the substitution property with respect to \(\odot\) and \(\to\) and \(a,b\in P\). Then \((a,b)\in\Phi\) if and only if \(a\to b,b\to a\in[1]\Phi\)._
Proof.: If \((a,b)\in\Phi\) then
\[a\to b\in[a\to a]\Phi=[1]\Phi,\] \[b\to a\in[b\to b]\Phi=[1]\Phi.\]
If, conversely, \(a\to b,b\to a\in[1]\Phi\) then according to (x) of Theorem 4.1 we have
\[a=a\odot 1\ \Phi\ a\odot(a\to b)=a\odot b=b\odot a=b\odot(b\to a)\ \Phi\ b\odot 1=b.\]
When studying congruences in varieties of algebras, an important congruence property is the so-called weak regularity. It means that if an algebra in question has a constant \(1\) and if for two of its congruences \(\Phi\) and \(\Psi\) we have \([1]\Phi=[1]\Psi\) then \(\Phi=\Psi\), see e.g. [5]. Surprisingly, we obtain a similar result for posets and equivalence relations having the substitution property with respect to \(\odot\) and \(\to\). In fact, this is a consequence of Lemma 5.5.
**Corollary 5.6**.: _If \((P,\leq,0,1)\) is a bounded poset satisfying the ACC, \(\Phi,\Psi\) are equivalence relations on \(2^{P}\setminus\{\emptyset\}\) satisfying the substitution property with respect to \(\odot\) and \(\to\) and \([1]\Phi=[1]\Psi\) then \(\Phi\cap P^{2}=\Psi\cap P^{2}\)._
For varieties of algebras, the mentioned weak regularity was characterized by B. Csakany, see e.g. [5], by means of certain binary terms satisfying a simple condition. It is of some interest that such a binary term can be derived also for posets.
**Proposition 5.7**.: _Let \((P,\leq,0,1)\) be a bounded poset satisfying the ACC, put \(t(x,y):=(x\to y)\odot(y\to x)\) for all \(x,y\in P\) and let \(a,b\in P\). Then \(t(a,b)=1\) if and only if \(a=b\)._
Proof.: According to (viii) of Theorem 4.1 the following are equivalent:
\[t(a,b) =1,\] \[(a\to b)\odot(b\to a) =1,\] \[\operatorname{Max}\Lambda(a\to b,b\to a) =1,\] \[1 \in\Lambda(a\to b,b\to a),\] there exists some \(x\in a\to b\) and some \(y\in b\to a\) with \(1\in L(x,y)\), there exists some \(x\in a\to b\) and some \(y\in b\to a\) with \(x=y=1\), \[1 \in(a\to b)\cap(b\to a),\] \[a\to b =b\to a=1,\] \[a \leq b\leq a,\] \[a =b.\]
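As a sanity check, \(t\) can be evaluated on the poset of Example 5.3, whose order is recovered from the operator table there via (viii) of Theorem 4.1 (\(x\leq y\) iff \(x\to y=1\)); in that example all values of \(\to\) are singletons, so \(t(x,y)\) is just \(\operatorname{Max}L(x\to y,y\to x)\).

```python
E = "0abcde1"
rows = {"0": "1111111", "a": "b1b1111", "b": "cc1c111",
        "c": "beb11e1", "d": "0abc1e1", "e": "0cbcd11",
        "1": "0abcde1"}                   # the table of Example 5.3
imp = {(x, E[j]): rows[x][j] for x in E for j in range(7)}
le = {(x, y) for x in E for y in E if imp[x, y] == "1"}

def Max(A):
    return {x for x in A if not any(x != y and (x, y) in le for y in A)}

def t(x, y):                              # (x -> y) (.) (y -> x)
    a, b = imp[x, y], imp[y, x]
    return Max({z for z in E if (z, a) in le and (z, b) in le})

assert all((t(x, y) == {"1"}) == (x == y) for x in E for y in E)
print("t(x, y) = 1 exactly when x = y")
```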
We are going to show that the kernel of an equivalence relation satisfying the substitution property with respect to \(\odot\) and \(\rightarrow\) is just a deductive system.
**Theorem 5.8**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC and \(\Phi\) an equivalence relation on \(2^{P}\setminus\{\emptyset\}\) satisfying the substitution property with respect to \(\odot\) and \(\rightarrow\). Then \([1]\Phi\) is a deductive system of \(\mathbf{P}\)._
Proof.: Let \(a,b,c,d\in P\).
1. This is clear.
2. If \(a\to b,(a\to b)\rightarrow(c\to d)\in[1]\Phi\) then according to (vii) of Theorem 4.1 we have \[c\to d=1\rightarrow(c\to d)\in[(a\to b)\rightarrow(c\to d)]\Phi=[1]\Phi.\]
3. If \(a\to b,b\to a\in[1]\Phi\) then \((a,b)\in\Phi\) according to Lemma 5.5 and hence \[(c\to a)\rightarrow(c\to b)\in[(c\to a) \rightarrow(c\to a)]\Phi=[1]\Phi,\] \[(a\to c)\rightarrow(b\to c)\in[(a\to c) \rightarrow(a\to c)]\Phi=[1]\Phi.\]
The question if a given deductive system \(D\) on a bounded poset \((P,\leq,0,1)\) satisfying the ACC induces an equivalence relation on \(2^{P}\setminus\{\emptyset\}\) with kernel \(D\cap P\) having a property similar to the substitution property with respect to \(\rightarrow\) is answered in the next result. At first, we define the relation induced by \(D\).
**Definition 5.9**.: _For every bounded poset \((P,\leq,0,1)\) satisfying the ACC and every subset \(E\) of \(2^{P}\setminus\{\emptyset\}\) define a binary relation \(\Theta(E)\) on \(2^{P}\setminus\{\emptyset\}\) as follows:_
\[(A,B)\in\Theta(E)\text{ if and only if }A\to B,B\to A\in E\]
\((A,B\text{ non-empty subsets of }P)\)_. We call \(\Theta(E)\) the relation induced by \(E\)._
In Theorem 5.8 we proved that for every equivalence relation \(\Phi\) on \(2^{P}\setminus\{\emptyset\}\) satisfying the substitution property with respect to \(\odot\) and \(\rightarrow\), its kernel \([1]\Phi\) is a deductive system of \(\mathbf{P}\). The next theorem shows that there holds some converse version of this result if we consider the restriction of the relation induced by a deductive system of \(\mathbf{P}\) to the base set \(P\).
**Theorem 5.10**.: _Let \(\mathbf{P}=(P,\leq,0,1)\) be a bounded poset satisfying the ACC and \(D\) a deductive system of \(\mathbf{P}\). Then the following hold:_
1. \(\Theta(D)\cap P^{2}\) _is an equivalence relation on_ \(P\)_,_
2. \([1]\big{(}\Theta(D)\cap P^{2}\big{)}=D\cap P\)_,_
3. _if_ \(x,y,z\in P\) _and_ \((x,y)\in\Theta(D)\) _then_ \((z\to x,z\to y)\in\Theta(D)\) _and_ \((x\to z,y\to z)\in\Theta(D)\)_._
Proof.: Let \(a,b,c\in P\).
1. Evidently, \(\Theta(D)\) is reflexive and symmetric. If \((a,b),(b,c)\in\Theta(D)\) then \[b\to c,(b\to c)\rightarrow(a\to c),c\to b,(c\to b) \rightarrow(c\to a)\in D\] and hence \(a\to c,c\to a\in D\), i.e. \((a,c)\in\Theta(D)\). This shows that \(\Theta(D)\) is transitive and therefore an equivalence relation on \(P\).
2. The following are equivalent: \(a\in[1]\big{(}\Theta(D)\cap P^{2}\big{)}\); \(a\to 1,1\to a\in D\); \(1,a\in D\); \(a\in D\).
3. follows from Definition 5.1.
**Example 5.11**.: _For the deductive system \(D\) from Example 5.3 we have_
\[\Theta(D)=\{a,c\}^{2}\cup D^{2}\cup\{(x,x)\mid x\in P\}.\]
|
2304.09812 | Signatures of Cooper pair dynamics and quantum-critical
superconductivity in tunable carrier bands | Different superconducting pairing mechanisms are markedly distinct in the
underlying Cooper pair kinematics. Pairing interactions mediated by
quantum-critical soft modes are dominated by highly collinear processes,
falling into two classes: forward scattering and backscattering. In contrast,
phonon mechanisms have a generic non-collinear character. We show that the type
of kinematics can be identified by examining the evolution of superconductivity
when tuning the Fermi surface geometry. We illustrate our approach using
recently measured phase diagrams of various graphene systems. Our analysis
unambiguously connects the emergence of superconductivity at ``ghost
crossings'' of Fermi surfaces in distinct valleys to the pair kinematics of a
backscattering type. Together with the observed non-monotonic behavior of
superconductivity near its onset (sharp rise followed by a drop), it provides
strong support for a particular quantum-critical superconductivity scenario.
These findings conclusively settle the long-standing debate on the origin of
superconductivity in this system and demonstrate the essential role of
quantum-critical modes in superconducting pairing. Moreover, our work
highlights the potential of tuning bands via ghost crossings as a promising
means of boosting superconductivity. | Zhiyu Dong, Patrick A. Lee, Leonid S. Levitov | 2023-04-19T16:57:16Z | http://arxiv.org/abs/2304.09812v1 | # Signatures of Cooper pair dynamics and quantum-critical superconductivity in tunable carrier bands
###### Abstract
Different superconducting pairing mechanisms are markedly distinct in the underlying Cooper pair kinematics. Pairing interactions mediated by quantum-critical soft modes are dominated by highly collinear processes, falling into two classes: forward scattering and backscattering. In contrast, phonon mechanisms have a generic non-collinear character. We show that the type of kinematics can be identified by examining the evolution of superconductivity when tuning the Fermi surface geometry. We illustrate our approach using recently measured phase diagrams of various graphene systems. Our analysis unambiguously connects the emergence of superconductivity at "ghost crossings" of Fermi surfaces in distinct valleys to the pair kinematics of a backscattering type. Together with the observed non-monotonic behavior of superconductivity near its onset (sharp rise followed by a drop), it provides strong support for a particular quantum-critical superconductivity scenario. These findings conclusively settle the long-standing debate on the origin of superconductivity in this system and demonstrate the essential role of quantum-critical modes in superconducting pairing. Moreover, our work highlights the potential of tuning bands via ghost crossings as a promising means of boosting superconductivity.
Superconducting phases occurring in various strongly interacting systems [1; 2; 3; 4; 5; 6] are often interpreted by theoretical frameworks that involve quantum-critical pairing [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. Yet, delineating these experimentally from the more conventional scenarios has not always been easy. Superconductivity (SC) observed in moire and non-moire graphene at the onset of electronic orders, where soft spin and valley collective modes can mediate pairing[14; 15; 16; 17; 18; 19], is an appealing setting for understanding the telltale signatures of different pairing mechanisms. Pairing with nonzero angular momentum can often be identified from the dependence on the applied magnetic field. In this vein, are there easily identifiable signatures of superconductivity driven by quantum-critical soft modes?
Tuning the band parameters in correlated electron systems through the quantum-critical point (QCP) in order to gain insight into the nature of superconductivity has been a subject of wide interest. In most cases, modifying the band structure beyond subtle perturbations is extremely difficult to achieve experimentally. Nevertheless, the dependence on an applied strain has been used to reveal the impact of the van Hove points on the superconductivity in Sr\({}_{2}\)RuO\({}_{4}\)[20; 21; 22; 23; 24], and the competition between nematic order and superconductivity in iron-based superconductors[25; 26; 27]. In the \(\kappa\)-phase organic superconductors[28] and heavy fermion systems such as CeCoIn\({}_{5}\)[29] and UPt\({}_{3}\)[30; 31], the role of interaction and correlations is probed by pressure dependence of the superconductivity. These findings have triggered considerable theoretical interest [32; 33; 34; 35; 36; 37].
Unlike previously studied systems, in graphene-based superconductors the Fermi surfaces are widely tunable[1; 2; 3; 4; 5]. This tunablity, as we will see, opens new avenues for probing the nature of pairing through linking it to the Cooper pair scattering kinematics. The latter are known to be highly collinear for superconductivity (SC) assisted by incipient electronic orders and driven by soft quantum-critical modes[14; 17; 18]. Depending on the mechanism type it falls into two main classes: collinear backscattering and forward scattering. The method we introduce below can differentiate between kinematic types by identifying unique features in the evolution of superconducting phases upon adjusting the Fermi surface geometry.
Here, through a detailed quantitative comparison to experimental data obtained by tuning SC in several graphene systems, we demonstrate the occurrence of the collinear backscattering kinematics. Specifically, we find direct evidence linking the onset of superconductivity and the abrupt appearance of "ghost valley crossings" between Fermi surfaces in different valleys. This is distinct from conventional ways to stimulate superconductivity by tuning the Fermi level through van Hove points. Identification of an abrupt onset of SC with such crossings limits the possible soft modes that can serve as pairing glue, excluding many of the previously considered scenarios and pinpointing SC driven by the isospin inter-valley-coherent (IVC) mode pictured in Fig.1 and discussed below as the most likely mechanism. Further evidence for this scenario is provided by a significant enhancement of superconducting \(T_{\rm c}\) and a characteristic nonmonotonic behavior at SC onset near ghost valley crossing (see (3) and accompanying discussion), which is in good agreement with experimental observations (see Fig.3).
A salient feature of graphene superconductivity that will be important for our analysis is that the two electrons forming a Cooper pair are located in valleys \(K\) and \(\bar{K}\) which are related by time reversal symmetry. Accordingly, Cooper pair kinematics involves valley-conserving scattering of pair states \((\mathbf{K}+\mathbf{k},\mathbf{\bar{K}}-\mathbf{k})\rightarrow(\mathbf{K}+\mathbf{k}^{\prime},\mathbf{\bar{K }}-\mathbf{k}^{\prime})\) with \(\mathbf{k},\mathbf{k}^{\prime}\ll K\). Because of this property, the only type of pairing mechanism that can generate collinear backscattering kinematics with \(\mathbf{k}^{\prime}\approx-\mathbf{k}\) is the pairing mediated by isospin fluctuations that are softened and
activated at quantum criticality [14; 16; 17; 18; 19]. Here, isospin refers to spin and valley. This isospin mode arises from the fluctuations of valley order \(\langle\psi^{\dagger}_{\mathbf{K}}\psi_{\bar{\mathbf{K}}}\rangle\), the quantity describing the inter-valley coherence (IVC)[38]. To clarify the backscattering nature of the IVC pairing mechanism, we write down the pairing interaction shown diagrammatically in Fig.1. This is directly analogous to the paramagnon pairing mechanism near a ferromagnetic quantum critical point [7; 8; 9; 10]. Standard analysis [see [14; 17; 18] and [39], Sec.B] yields
\[\Gamma_{\omega\mathbf{k},\omega^{\prime}\mathbf{k}^{\prime}}=\frac{U}{\kappa|\omega+ \omega^{\prime}|+l_{0}^{2}(\mathbf{k}+\mathbf{k}^{\prime})^{2}+\delta^{2}}, \tag{1}\]
where \(U\), \(\kappa\), \(l_{0}\) are model-specific parameters and \(\delta\) denotes the distance to the QCP [39]. Crucially, the two electrons in a Cooper pair are predominantly scattered from the initial momenta of \((\mathbf{K}+\mathbf{k},\bar{\mathbf{K}}-\mathbf{k})\) to the final momenta of \((\mathbf{K}+\mathbf{k}^{\prime},\bar{\mathbf{K}}-\mathbf{k}^{\prime})\) where \(\mathbf{k}^{\prime}\approx-\mathbf{k}\), namely, backscattering dominates. Indeed, the soft mode describing the IVC instability, which mediates pairing, is the particle-hole ladder shown in Fig.1 for which the momentum transfer is \((\mathbf{K}+\mathbf{k})-(\bar{\mathbf{K}}-\mathbf{k}^{\prime})\). Expanding about the ordering vector \(2\mathbf{K}\) yields a singularity at small \((\mathbf{k}+\mathbf{k}^{\prime})^{2}\) in (1). This behavior is distinct from the QCP scenarios where pairing mainly benefits from forward-scattering processes, wherein electrons are scattered by a small angle on the Fermi surface, as, e.g., the pairing mediated by nematic fluctuations in iron-based superconductors[40; 41; 42] or pairing through interaction renormalized by valley-polarization fluctuations in graphene bilayer[15]. In these cases the pairing interaction can be modeled by an expression similar to that in (1) with frequencies and momenta entering as \(\omega-\omega^{\prime}\) and \(\mathbf{k}-\mathbf{k}^{\prime}\). In this case the interaction peaks at \(\mathbf{k}^{\prime}\approx\mathbf{k}\) and \(\omega^{\prime}\approx\omega\). Therefore, establishing the backscattering pair kinematics strongly supports the IVC pairing mechanism. Since the Fermi surface ghost crossing signature arises generally in the presence of multiple Fermi pockets and tunable bands, this method can be tested in many superconducting systems such as those found in transition metal dichalcogenides and graphene multilayers[43; 44; 45; 46; 3; 4].
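For concreteness, the minimal numerical sketch below (ours, not part of the original analysis) contrasts the backscattering kernel (1) with its forward-scattering variant; the values of \(U\), \(\kappa\), \(l_{0}\), and \(\delta\) are arbitrary placeholders, since these are model-specific quantities.

```python
import numpy as np

# Placeholder parameter values; U, kappa, l0 and the QCP distance delta are
# model-specific quantities that we make no attempt to fix here.
U, kappa, l0, delta = 1.0, 1.0, 1.0, 0.1

def gamma_ivc(w, wp, k, kp):
    """Backscattering (IVC) kernel of Eq. (1): peaks at wp = -w, kp = -k."""
    return U / (kappa*abs(w + wp) + l0**2*np.sum((k + kp)**2) + delta**2)

def gamma_fwd(w, wp, k, kp):
    """Forward-scattering variant: same form with w - wp and k - kp."""
    return U / (kappa*abs(w - wp) + l0**2*np.sum((k - kp)**2) + delta**2)

k = np.array([0.3, 0.0])
# The IVC kernel is large for k' = -k, w' = -w and small for k' = k, w' = w;
# the forward-scattering kernel behaves in the opposite way.
print(gamma_ivc(0.1, -0.1, k, -k), gamma_ivc(0.1, 0.1, k, k))
print(gamma_fwd(0.1, 0.1, k, k), gamma_fwd(0.1, -0.1, k, -k))
```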
Parenthetically, other scenarios may be considered, such as pairing mediated by antiferromagnetic (AFM) fluctuations, where electrons are predominantly scattered between different parts of the Fermi surface by a large AFM ordering momentum. This mechanism is actively studied in iron pnictides[12; 13], yet it does not appear relevant for graphene.
We will demonstrate the fundamental idea using the setting of Bernal bilayer graphene (BBG) biased by a transverse electric field, a strongly interacting system with a tunable band hosting a superconducting phase[4]. A key experimental finding that points to QCP physics is that the SC phase is a sliver that tracks the phase boundary between a partially-isospin-polarized phase and an unpolarized phase, labeled PIP\({}_{2}\) and Sym\({}_{12}\) in Fig.2 following Ref.[4]. The two QCP scenario types introduced above, involving forward scattering and backscattering,
Figure 1: a) A diagrammatic representation of the pairing interaction mediated by isospin mode, a soft mode associated with the intervalley phase coherence (IVC). Identifying valleys \(\mathbf{K}\) and \(\bar{\mathbf{K}}\) with spin up and down maps this interaction to the paramagnon pairing mechanism mediated by ferromagnetic spin fluctuations[7; 8; 9; 10]. At an isospin quantum criticality, this interaction peaks at \(\omega+\omega^{\prime}=0\) and \(\mathbf{k}+\mathbf{k}^{\prime}=0\) (see (1)), resulting in a backscattering-type Cooper pair dynamics. b) Fermi sea pockets located near valleys \(\mathbf{K}\) and \(\bar{\mathbf{K}}\) in BBG bandstructure, which host fermions forming \(\mathbf{K}\)-\(\bar{\mathbf{K}}\) Cooper pairs. c) Because of backscattering, pairing develops near “ghost” crossings of Fermi surfaces \(\pm Q_{i}\) (\(i=1,2,3\)) found by superimposing the pockets in valleys \(\mathbf{K}\) (red contours) and \(\bar{\mathbf{K}}\) (blue contours) by a \(\mathbf{K}\rightarrow\bar{\mathbf{K}}\) translation. A single Fermi sea gives non-removable crossings (i), whereas multi-pocket Fermi seas give removable crossings (ii)-(iii), where \(\theta\) denotes the angle at the crossing. Transitions between the intersecting and non-intersecting Fermi surfaces (ii) and (iii) induced by tuning the bandstructure terminate the superconducting phases.
Figure 2: a) Phase diagram of BBG measured at a finite in-plane magnetic field \(B_{\parallel}=0.165\)T, adapted from Ref.[4]. The SC phase occurs along the phase boundary between the partially isospin-polarized phase (PIP\({}_{2}\)) and an isospin-unpolarized phase (Sym\({}_{12}\)). b) A zoom-in of the region near SC onset in a). Red and black curves mark the phase boundaries. The solid yellow line, obtained from the free-particle bands in phase Sym\({}_{12}\), marks the transition at which the “ghost” \(\mathbf{K}-\bar{\mathbf{K}}\) Fermi surface crossings abruptly disappear (see insets in the top row). The dashed yellow line, drawn in the region where the free-particle description does not apply, is a guide to the eye. The measured emergence point of SC coincides with the appearance of the Fermi-surface crossings. The inset on the left, adapted from Ref.[4], shows the Fermi surfaces for four isospins in PIP\({}_{2}\) phase.
are both viable candidates for this system. The former involves valley-polarization order due to a Stoner valley imbalance instability in BBG[15], whereas the latter involves IVC order. The IVC scenario has been considered in rhombohedral trilayer graphene (RTG) [17] and is straightforward to generalize to BBG, as will be shown below. Experiments also indicate a peculiar \(B\) dependence of SC, which persists in a high in-plane magnetic field \(B_{\parallel}\) and is activated only above a threshold \(B_{\parallel}\). Yet these observations cannot directly distinguish the two QCP scenarios.
However, there is one observation that so far has escaped attention: The SC sliver only exists on a segment of the PIP\({}_{2}\)-Sym\({}_{12}\) phase boundary - it emerges abruptly upon increasing carrier density along this boundary. The same behavior was recently found in the SC\({}_{1}\) phase of BBG/WSe\({}_{2}\) but not in RTG. As we shall show, this behavior favors a pairing mechanism that involves backscattering, as opposed to the Stoner instability type proposed earlier, which involves forward scattering[15].
Next, we consider the backscattering mechanism for superconductivity and its relation to ghost crossings. As we will see, the pairing gap predominantly opens near the crossings of Fermi surfaces in valleys \(\mathbf{K}\) and \(\mathbf{\bar{K}}\) superimposed by a \(\mathbf{K}\rightarrow\mathbf{\bar{K}}\) translation. These points, below referred to as "ghost" crossings, are illustrated in Fig.1 c), where they are labeled \(\pm Q_{i}\) (\(i=1,2,3\)). This result directly follows from the back-scattering nature of the pairing interaction (1), which requires that both momenta \(\mathbf{k}\) and \(-\mathbf{k}\) are found near the Fermi surface in the same valley.
Crucially, these crossings can be switched on and off by varying the transverse electric field, an experimental knob tuning the BBG band structure. We anticipate that this change in the bandstructure, illustrated schematically in Fig.1 c) (ii) and c) (iii), leads to an abrupt emergence of the SC phase, a notable feature observed in BBG (see Fig.2) and WSe\({}_{2}\)-supported BBG[47] (see Fig.3). This leads to the conjecture that the superconductivity in both systems is dominated by a backscattering pairing mechanism. Below we present a microscopic analysis for the IVC QCP that allows us to verify this conjecture quantitatively by a direct band structure calculation. Though for the WSe\({}_{2}\)-supported BBG the IVC phase is not believed to be stabilized[48; 49; 47], we assume that it may be a competing phase so that IVC fluctuations co-exist with nematic fluctuations, with the IVC pairing channel enhanced by nesting at the "ghost" Fermi surface crossing (see below). Our approach reproduces the measured SC emergence points with high accuracy, providing strong evidence for pair backscattering. Further, since this behavior cannot be explained by other existing scenarios, such as [15; 50; 51; 52; 53; 54; 55], the IVC pairing scenario stands out as the most probable occurrence in a realistic system.
It is also interesting to mention RTG, where the Fermi sea is an annulus with both its inner and outer Fermi surfaces looking like the one in Fig.1 c) (i)[43; 3]. In this case the ghost crossings remain robust under variation of the electric field and, therefore, we do not expect an abrupt emergence or termination of the SC phase similar to that seen in BBG. This conclusion is in agreement with the observed phase diagrams [3].
Next, we present the essential points of the microscopic analysis. Since the pairing gap predominantly opens at the points \(\pm Q_{i}\) shown in Fig.1, it is convenient to describe pairing in terms of the electron dispersion within the patches around the \(\pm Q_{i}\)'s, treating \(\pm Q_{i}\) as a patch index. Namely, we define a gap function near \(\pm Q_{i}\) as \(\Delta_{\mathbf{K}\mathbf{\bar{K}};\pm Q_{i}}(\mathbf{k})\), where \(\mathbf{k}\) is measured from \(Q_{i}\), and \(\mathbf{k}^{\prime}\) is measured from \(-Q_{i}\). Here, \(\mathbf{k},\mathbf{k}^{\prime}\ll k_{F}\). Accordingly, we model the electron energy near \(\pm Q_{i}\) as:
\[\epsilon_{\pm i,\mathbf{k}}=v_{F}\mathbf{n}_{\pm i}\cdot\mathbf{k}+|\mathbf{n}_{\pm i}\times \mathbf{k}|^{2}/2m_{\perp} \tag{2}\]
where \(\mathbf{n}_{\pm i}\) are the unit vectors normal to the Fermi surface at \(\pm Q_{i}\). Applying this model to describe pairing and keeping only the scattering processes in which an electron is scattered from a patch near \(Q_{i}\) to a patch near \(-Q_{i}\), which is the most singular contribution, we find
\[\Delta_{\mathbf{K}\bar{\mathbf{K}};Q_{i}}(\mathbf{k},\omega)=-\sum_{\mathbf{k}^{\prime}\omega^{\prime}}\frac{\Gamma_{\omega+\omega^{\prime},\mathbf{k}+\mathbf{k}^{\prime}}\,\Delta_{\mathbf{K}\bar{\mathbf{K}};-Q_{i}}(\mathbf{k}^{\prime},\omega^{\prime})}{\omega^{\prime 2}+\epsilon_{-i,\mathbf{k}^{\prime}}^{2}},\]
The analysis of this equation is detailed in [39]. Below we describe the main predictions.
Since \(\Gamma_{\omega+\omega^{\prime},\mathbf{k}+\mathbf{k}^{\prime}}\) is positive, the gap equation predicts a sign-changing solution \(\Delta_{\mathbf{K}\mathbf{\bar{K}};Q_{i}}=-\Delta_{\mathbf{K}\mathbf{\bar{K}};-Q_{i}}\)(see Sec.D in Ref.[39]). This yields two degenerate pairing channels that respect the symmetry group (see Sec.C in Ref.[39]). These are the p-wave \(p_{x}\pm ip_{y}\) channels identical to the ones identified for RTG [17] and moire graphene [19]. A linear superposition of these two channels gives
Figure 3: a) Phase diagram of WSe\({}_{2}\)-supported BBG, adapted from Ref.[47]. Here, distinct from BBG, the part below the red and black phase boundaries is the “vanilla” phase where no symmetry is broken. The phase above the black phase boundary and SC phase is an ordered phase conjectured in Ref.[47] to be a nematic phase. b) The agreement between theoretically predicted onset of \(\mathbf{K}\)-\(\mathbf{\bar{K}}\) Fermi surface crossings (solid yellow line) and the measured superconductivity emergence point for WSe\({}_{2}\)-supported BBG. The dashed yellow line, as in Fig.2, is a guide to the eye. The insets in b) illustrate the overlapping and non-overlapping Fermi surfaces. The red and blue curves represent Fermi surfaces in minority isospin species \(\mathbf{K}\downarrow\) and \(\mathbf{\bar{K}}\uparrow\), respectively.
rise to a \(p\)-wave channel that breaks the rotational symmetry and has a \(T_{\rm c}\) degenerate with that of the \(p_{x}\pm ip_{y}\) channels. In a recent experiment[47] in WSe\({}_{2}\)-supported BBG samples the SC phases are found to emerge on top of, or next to, a nematic phase where three-fold rotation symmetry is spontaneously broken. This suggests that the \(p\)-wave superconductivity wins in these systems.
An interesting behavior of SC that is unique to the three-pocket Fermi sea is an increase in \(T_{\rm c}\) near the termination of SC phase. Indeed, upon the appearance of ghost crossings the Fermi pockets in two valleys become nearly tangential at the crossing point. In this case, an approximate nesting at the \(\mathbf{K}\)-\(\mathbf{\bar{K}}\) pocket crossings by a vector \(2\mathbf{K}\) allows pairing to occur on a larger Fermi surface segment near the crossing points, which leads to an enhancement in superconducting \(T_{\rm c}\). This behavior is manifest in the expression for \(T_{\rm c}\) derived in [39], Sec.E:
\[T_{\rm c}=2\omega_{0}e^{-\frac{1}{\lambda}},\quad\lambda=\frac{U}{8v_{F}l_{0}\delta\sin(\theta/2)},\quad\omega_{0}=\frac{\delta v_{F}}{l_{0}}. \tag{3}\]
with \(\theta\) the angle between the \(\mathbf{K}\) and \(\mathbf{\bar{K}}\) Fermi surfaces at the crossing points (see Fig.1 c)). The increase in \(T_{\rm c}\) occurs because \(\theta\) vanishes when the Fermi surfaces become tangential. We expect the divergence of \(\lambda\) at \(\theta\to 0\) to be cut off by the dispersion curvature described by the quantity \(m_{\perp}\) in (2); this effect is not manifest in (3) as it is subleading for finite \(\theta\). However, \(m_{\perp}\) will limit the phase volume in \(\mathbf{k}\)-space where pairing can occur when \(\theta\to 0\), thereby cutting off the divergence of \(\lambda\) and \(T_{\rm c}\). The enhancement in \(T_{\rm c}\) near the termination point probably explains why IVC fluctuations dominate the pairing despite the possible presence of other fluctuations, e.g. due to the nematic order conjectured in Ref.[47].
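The trend in (3) is easy to tabulate numerically. The short sketch below evaluates \(\lambda(\theta)\) and \(T_{\rm c}(\theta)\) with placeholder values for the model-specific constants \(U\), \(v_{F}\), \(l_{0}\), and \(\delta\) (these are not the fitted parameters of the paper):

```python
import numpy as np

# Placeholder constants; only the theta dependence of Eq. (3) is meaningful here.
U, vF, l0, delta = 1.0, 1.0, 1.0, 0.1
w0 = delta * vF / l0

for theta in np.deg2rad([60, 30, 10, 3]):
    lam = U / (8 * vF * l0 * delta * np.sin(theta / 2))   # coupling of Eq. (3)
    Tc = 2 * w0 * np.exp(-1 / lam)
    print(f"theta = {np.rad2deg(theta):4.0f} deg  ->  T_c = {Tc:.3e}")
# T_c rises as the Fermi surfaces become tangential (theta -> 0); in practice
# the divergence of lam is cut off by the curvature m_perp of Eq. (2).
```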
Unfortunately, the existing data are insufficient to map out this interesting behavior, though it is somewhat consistent with the superconducting phase in Fig.3 widening near the termination point. Verifying the predicted non-monotonic behavior in \(T_{\rm c}\) near the termination point is an interesting direction for future experiments.
Next, we use a realistic bandstructure to obtain a condition for the ghost Fermi surface crossings to exist and demonstrate an agreement with the observed onset of superconductivity. We first present the analysis for WSe\({}_{2}\)-supported BBG. In Ref.[47] two superconducting phases were found. Here, we focus on the SC\({}_{1}\) phase, which emerges from an isospin-unpolarized parent state. The SC\({}_{2}\) phase emerges from a parent state with a pocket polarization[47]; analyzing it requires accounting for interaction effects, which we leave to future work.
We predict the onset of valley-crossings by numerically calculating the single-particle band dispersion in the isospin-unpolarized phase. We model the single-particle bands in WSe\({}_{2}\)-supported BBG using the Hamiltonian
\[H=H_{\rm BBG}+H_{\rm SOI}. \tag{4}\]
The first term \(H_{\rm BBG}\) is the four-band tight-binding model given in the basis \(\left\{c_{A,1}^{\eta,s},c_{B,1}^{\eta,s},c_{A,2}^{\eta,s},c_{B,2}^{\eta,s}\right\}\) (with A and B the sublattice indices, 1 and 2 the layer indices, \(\eta=\pm 1\) a valley label and \(s\) the spin index) [56; 57] and [39] Sec.A:
\[H_{\rm BBG}=\begin{pmatrix}u/2&v\pi^{\dagger}&-v_{4}\pi^{\dagger}&v_{3}\pi\\ v\pi&u/2+\Delta^{\prime}&t_{1}&-v_{4}\pi^{\dagger}\\ -v_{4}\pi&t_{1}&-u/2+\Delta^{\prime}&v\pi^{\dagger}\\ v_{3}\pi^{\dagger}&-v_{4}\pi&v\pi&-u/2\end{pmatrix} \tag{5}\]
where \(\pi=\hbar\left(\eta k_{x}+ik_{y}\right)\), and \(k_{x}\) and \(k_{y}\) are the \(x\) and \(y\) components of momentum measured from \(\mathbf{K}\) or \(\mathbf{\bar{K}}\). The quantity \(u\) is the interlayer bias, \(t_{1}\) is the interlayer hopping parameter, and \(v\), \(v_{3}\), \(v_{4}\) are velocities associated with the microscopic hopping amplitudes, with values given in [57]. The second term in (4), \(H_{\rm SOI}\), represents an Ising spin-orbit interaction (SOI) induced by the proximate WSe\({}_{2}\) layer, which takes the valley Ising form[46]:
\[H_{\rm SOI}=\lambda_{\rm I}\eta\sigma_{z} \tag{6}\]
where \(\eta=\pm 1\) for valley \(\mathbf{K}\) and \(\mathbf{\bar{K}}\), \(\sigma_{z}\) is the Pauli matrix for spin in the out-of-plane direction.
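To make the band calculation concrete, here is a minimal NumPy sketch of Eqs. (5)-(6) with a crude grid test for ghost crossings. The hopping values are representative literature numbers rather than those of Ref. [57], and the SOI strength, Fermi-level offset, and grid settings are illustrative assumptions, not the fitted parameters used in the text.

```python
import numpy as np

# Representative BBG hopping parameters (order-of-magnitude literature values;
# the paper uses the numbers of its Ref. [57], which we do not reproduce here).
hv, hv3, hv4 = 6.73, 0.81, 0.30   # hbar*v, hbar*v_3, hbar*v_4 in eV*Angstrom
t1, Dp = 0.381, 0.022             # t_1 and Delta' in eV
lam_I = 0.007                     # Ising SOI lambda_I in eV (assumed value)

def h_bbg(kx, ky, u, eta, s):
    """4x4 Hamiltonian of Eq. (5) plus the Ising SOI of Eq. (6);
    eta = +/-1 labels valley K / K-bar and s = +/-1 the out-of-plane spin."""
    pi = eta*kx + 1j*ky           # pi/hbar; hbar is folded into hv, hv3, hv4
    H = np.array([
        [u/2,              hv*np.conj(pi), -hv4*np.conj(pi),  hv3*pi],
        [hv*pi,            u/2 + Dp,        t1,              -hv4*np.conj(pi)],
        [-hv4*pi,          t1,             -u/2 + Dp,         hv*np.conj(pi)],
        [hv3*np.conj(pi), -hv4*pi,          hv*pi,           -u/2],
    ], dtype=complex)
    return H + lam_I*eta*s*np.eye(4)

def valence_band(u, eta, s, kmax=0.04, n=181):
    """Upper valence band on a k-grid around the valley center (k in 1/Angstrom)."""
    ks = np.linspace(-kmax, kmax, n)
    return np.array([[np.linalg.eigvalsh(h_bbg(kx, ky, u, eta, s))[1]
                      for kx in ks] for ky in ks])

def ghost_crossings(u, dE=0.003):
    """Crude grid test: superimpose the minority-species pockets (K-down and
    K-bar-up, i.e. a K -> K-bar translation) and count grid cells crossed by
    both Fermi contours, for a Fermi level dE below the valence band top."""
    f = valence_band(u, +1, -1)
    g = valence_band(u, -1, +1)
    E_F = f.max() - dE
    cf = (f[:-1, :-1] - E_F) * (f[1:, 1:] - E_F) < 0   # contour crosses cell
    cg = (g[:-1, :-1] - E_F) * (g[1:, 1:] - E_F) < 0
    return np.count_nonzero(cf & cg)

# Ghost crossings switch on and off as the interlayer bias u tunes the bands:
for u in (0.04, 0.06, 0.08):
    print(f"u = {u*1e3:.0f} meV -> {ghost_crossings(u)} crossing cells")
```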
To determine how the onset of Fermi surface crossings compares with measured SC phases, two parameters in model (4) must be obtained by careful analysis of existing data. One is the interlayer bias \(u\), which is proportional to the transverse electric field \(D\), yet the ratio between \(u\) and \(D\) in general is not exactly known (see discussion below). Another is the spin-orbit coupling \(\lambda_{\rm I}\).
We determine these two quantities using the quantum oscillations measured in Ref.[47]. This measurement accurately gives the carrier densities at which two distinct transitions of Fermi surface topology occur in the minority isospin species \(\mathbf{K}\downarrow\) and \(\mathbf{\bar{K}}\uparrow\). One is the transition from a single Fermi sea to an annular Fermi sea, which occurs at \(n=9.9\times 10^{11}\)cm\({}^{-2}\). The other is the transition from the annular Fermi sea to a three-pocket Fermi sea, which occurs at \(n=9.7\times 10^{11}\)cm\({}^{-2}\). Using these two data points as constraints, we are able to determine the numerical values of the two unknown parameters:
\[\frac{u}{\rm meV}=\frac{(0.047\pm 0.001)eD}{\rm mV/nm},\quad\lambda_{\rm I}=(7 \pm 1)\rm meV.\]
Using these values, we study the evolution of Fermi seas within the symmetry-unbroken "vanilla" phase. We focus on the Fermi seas of the minority isospin species, as the majority isospin species feature a single Fermi sea that does not experience any qualitative change in this regime. In the regime where the SC\({}_{1}\) phase occurs, the majority species feature valley crossings of the \(\mathbf{K}\) and \(\mathbf{\bar{K}}\) Fermi seas with an order-one angle \(\theta\sim O(1)\) at the crossing and, consequently, none of the small-\(\theta\) enhancement of superconductivity exhibited by the minority species. We therefore focus on the minority species, deferring the analysis of the majority species until later. We determine the transition from overlapping to non-overlapping Fermi seas in the minority species, finding a transition marked by the yellow lines in Fig.3. The solid yellow line lies inside the symmetry-unbroken "vanilla"
phase where a single-particle band calculation can be trusted. The dashed yellow line, drawn across the ordered phase where a single-particle band calculation is invalid, merely provides a guide to the eye. Notably, the point where the yellow line crosses the phase boundary agrees well with the onset of the SC phase. This provides strong evidence for pairing governed by a back-scattering mechanism, such as the IVC QCP scenario.
We note that the value of \(\lambda_{\rm I}\) extracted and used above is a few times greater than the value \(\lambda_{\rm I}^{(0)}=1.6\,\mathrm{meV}\) inferred from measurements in a strong out-of-plane \(B\) field [47]. We believe that this discrepancy is reasonable. Indeed, the value of \(\lambda_{\rm I}\) that should be plugged into our simulation is not the bare SOI strength, but rather the effective interaction renormalized by strong interactions in a flat-bottom BBG band. The vertex corrections that govern this renormalization are expected to be large since the system is in a regime close to all kinds of spin and valley Stoner instabilities. In contrast, \(\lambda_{\rm I}^{(0)}\) measured in a large \(B\) field[47] is largely insensitive to this physics, which explains the above disparity in \(\lambda_{\rm I}\) values.
Next, we turn to the BBG/hBN case[4]. We continue to use the four-band model given in (5). Here, however, unlike the case of WSe\({}_{2}\)-supported BBG, the relation between the interlayer bias parameter \(u\) and the experimental displacement field \(D\) is not accurately known. For BBG/hBN, quantum oscillation measurements[4] do not provide sufficient information to extract the ratio between the interlayer bias \(u\) and the displacement field \(D\). However, they do give useful upper and lower bounds for \(u/D\).
Namely, quantum oscillation measurements [4] reveal that: 1) the isospin-unpolarized Sym\({}_{4}\) phase below the isospin-polarized PIP\({}_{2}\) phase (shown in Fig. 2 b) has a single Fermi surface per isospin; 2) the isospin-unpolarized Sym\({}_{12}\) phase above PIP\({}_{2}\) features three distinct pockets per isospin. We find that in order to reproduce these two observations, the value \(u/D\) should fall in the range: \(0.057<\frac{u}{eD\cdot\mathrm{nm}}<0.072\). Accordingly, as a best guess, we pick a value in the middle of this window, \(\frac{u}{eD\cdot\mathrm{nm}}=0.065\). With this value, we derive the transition line between overlapping and non-overlapping Fermi seas, indicated as the yellow line in Fig.2, which closely matches the emergence point of the SC phase.
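The pocket-counting constraint used here (one Fermi surface per isospin in Sym\({}_{4}\) versus three distinct pockets in Sym\({}_{12}\)) can be prototyped on the same grid as the h_bbg() sketch above. In the snippet below the Fermi level is again parameterized by an offset below the band top rather than fixed by the measured carrier density, so the output is purely illustrative; sea_topology is our hypothetical helper, not part of the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

# Classify the hole Fermi-sea topology (single vs. annular vs. three-pocket)
# for one isospin, reusing h_bbg() and valence_band() from the sketch above.
def sea_topology(u, dE, kmax=0.04, n=181):
    E = valence_band(u, +1, -1, kmax=kmax, n=n)
    E_F = E.max() - dE
    n_seas = ndimage.label(E > E_F)[1]        # connected hole pockets
    n_voids = ndimage.label(E < E_F)[1] - 1   # >0 flags an annular sea
    return n_seas, n_voids

for dE in (0.0005, 0.002, 0.006):             # deepening E_F changes the topology
    print(f"dE = {dE*1e3:.1f} meV -> (pockets, voids) =", sea_topology(0.06, dE))
```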
It is worth noting that several experiments have attempted to measure the \(u/D\) ratio for BBG, yielding vastly different values that do not fall within the range inferred from our fermiology analysis. Namely, Refs.[58] and [5] find \(\frac{u}{eD\cdot\mathrm{nm}}=0.1\) and \(0.033\), respectively. Given the significant variation in these values, likely caused by electrostatic differences between devices, we refrained from using them directly. Instead, we selected \(u/D\) as described above to predict the fermiology that best matches the measurements.
To restate the main result of our analysis, in both BBG and BBG/WSe\({}_{2}\) the lines that mark the emergence of ghost Fermi surface crossings perfectly match the onset points of superconductivity. As discussed above, the emergence of valley crossings strongly impacts the pair scattering kinematics, favoring backscattering. By contrast, it has little impact on the density of states at the Fermi level or the e-e interaction strength. Therefore, the observed behavior is difficult to understand within a conventional BCS superconductivity framework, but it is naturally explained by the IVC superconductivity mechanism.
This conclusion is further supported by the nonmonotonic behavior of superconductivity near the onset (a rise followed by a drop) observed in BBG/WSe\({}_{2}\). This observation is explained by the enhancement of pairing at small angles \(\theta\) between Fermi surfaces at the ghost crossing discussed above. It is interesting to compare the nonmonotonic behavior of superconductivity in BBG/WSe\({}_{2}\) with the monotonic behavior observed at superconductivity onset in BBG/hBN samples. We believe that this difference can be attributed to the constraints imposed by isospin orders and Fermiology. Specifically, the SC phase in BBG/hBN must lie outside the PIP\({}_{2}\) phase which is not necessarily compatible with the SC order, and below the boundary where valley crossings occur (yellow line in Fig. 2). These constraints limit the SC phase to a narrow wedge in the phase diagram (Fig. 2), preventing the enhancement of the SC phase at the onset of valley crossings. In comparison, in the BBG/WSe\({}_{2}\) these constraints are lifted. Since the "vanilla" phase lies below the phase boundary, the onset of valley crossings (solid yellow line in Fig. 3) extends downwards and therefore does not constrain the SC phase.
Lastly, we believe that the IVC pairing revealed by our analysis is generally applicable to other observed SC phases, such as the SC\({}_{2}\) phase in BBG/WSe\({}_{2}\). Here a conclusive analysis would require more knowledge of the isospin phase diagram, which is currently being investigated by several groups [48; 49]. Nonetheless, the SC\({}_{2}\) phase, which is a wedge embedded between different isospin orders, shows an abrupt onset which is likely related to a ghost valley crossing (see Fig. 3).
In conclusion, the sudden appearance of SC phases coincides with the appearance of the \(\mathbf{K}\)-\(\mathbf{\bar{K}}\) ghost Fermi surface crossings in both BBG and WSe\({}_{2}\)-supported BBG. This behavior suggests that quantum-critical fluctuations drive the pairing in both systems, favoring a backscattering-type pairing interaction due to the IVC order as the glue for superconductivity over other candidates like valley-polarization order [15]. Overall, it is not compatible with conventional phonon mechanisms[50; 51], nor with the conventional Kohn-Luttinger mechanisms[53; 54], pointing to a mechanism that involves a soft quantum-critical mode as a pairing glue. Last but not least, it highlights tuning bands through ghost crossings as an attractive pathway to enhance superconductivity.
We thank A. F. Young and S. Nadj-Perge for sharing unpublished data, and A. V. Chubukov and J. G. Analytis for fruitful discussions. This work was supported by the Science and Technology Center for Integrated Quantum Materials, National Science Foundation Grant No. DMR1231319, and Army Research Office
Grant No. W911NF-18-1-0116. P. L. acknowledges the support by DOE office of Basic Sciences Grant No. DE-FG02-03ER46076.
|
2308.07532 | SOLES VII: The Spin-Orbit Alignment of WASP-106 b, a Warm Jupiter Along
the Kraft Break | Although close-orbiting, massive exoplanets -- known as hot and warm Jupiters
-- are among the most observationally accessible known planets, their formation
pathways are still not universally agreed upon. One method to constrain the
possible dynamical histories of such planets is to measure the systems'
sky-projected spin-orbit angles using the Rossiter-McLaughlin effect. By
demonstrating whether planets orbit around the stellar equator or on offset
orbits, Rossiter-McLaughlin observations offer clues as to whether the planet
had a quiescent or violent formation history. Such measurements are, however,
only a reliable window into the history of the system if the planet in question
orbits sufficiently far from its host star; otherwise, tidal interactions with
the host star can erase evidence of past dynamical upheavals. We present a
WIYN/NEID Rossiter-McLaughlin measurement of the tidally detached ($a/R_* =
13.18^{+0.35}_{-0.37}$) warm Jupiter WASP-106 b, which orbits a star along the
Kraft break ($T_{\mathrm{eff}}=6002\pm164$ K). We find that WASP-106 b is
consistent with a low spin-orbit angle ($\lambda=6^{+17}_{-16}\,^{\circ}$ and
$\psi = 26^{+12}_{-17}\,^{\circ}$), suggesting a relatively quiescent formation
history for the system. We conclude by comparing the stellar obliquities of hot
and warm Jupiter systems, with the WASP-106 system included, to gain insight
into the possible formation routes of these populations of exoplanets. | Josette Wright, Malena Rice, Xian-Yu Wang, Kyle Hixenbaugh, Songhu Wang | 2023-08-15T02:16:58Z | http://arxiv.org/abs/2308.07532v2 | # SOLES VII: The Spin-Orbit Alignment of WASP-106 b, a Warm Jupiter Along the Kraft Break
###### Abstract
Although close-orbiting, massive exoplanets - known as hot and warm Jupiters - are among the most observationally accessible known planets, their formation pathways are still not universally agreed upon. One method to constrain the possible dynamical histories of such planets is to measure the systems' sky-projected spin-orbit angles using the Rossiter-McLaughlin effect. By demonstrating whether planets orbit around the stellar equator or on offset orbits, Rossiter-McLaughlin observations offer clues as to whether the planet had a quiescent or violent formation history. Such measurements are, however, only a reliable window into the history of the system if the planet in question orbits sufficiently far from its host star; otherwise, tidal interactions with the host star can erase evidence of past dynamical upheavals. We present a WIYN/NEID Rossiter-McLaughlin measurement of the tidally detached (\(a/R_{*}=13.18^{+0.35}_{-0.37}\)) warm Jupiter WASP-106 b, which orbits a star along the Kraft break (\(T_{\rm eff}=6002\pm 164\) K). We find that WASP-106 b is consistent with a low spin-orbit angle (\(\lambda=6^{+17}_{-16}\,^{\circ}\) and \(\psi=26^{+12}_{-17}\,^{\circ}\)), suggesting a relatively quiescent formation history for the system. We conclude by comparing the stellar obliquities of hot and warm Jupiter systems, with the WASP-106 system included, to gain insight into the possible formation routes of these populations of exoplanets.
planetary alignment (1243), exoplanet dynamics (490), star-planet interactions (2177), exoplanets (498), planetary theory (1258), exoplanet systems (484)

Josette Wright, Malena Rice, Xian-Yu Wang, Kyle Hixenbaugh, Songhu Wang
## 1 Introduction
Despite being among the most readily detectable exoplanets, short-period Jovian planets still have contested formation histories. The formation pathways available to this population have been the subject of much debate in the last several decades, as reviewed in Dawson & Johnson (2018). One major category of formation routes is violent formation, in which the Jupiter originally forms farther out from the star than its final, close-in orbit and then moves inward through high-eccentricity migration, causing disruptions to the orbits of inner planets on its way (Weidenschilling & Marzari, 1996; Wu & Murray, 2003; Fabrycky & Tremaine, 2007; Ida et al., 2013). Quiescent formation pathways, on the other hand, may better preserve nearby companions in hot Jupiter systems (Lee & Peale, 2002). In situ formation, for example, can occur when a less massive super-Earth in the inner region of a planetary system undergoes runaway gas accretion until it becomes a Jupiter (Batygin et al., 2016). Another possible quiescent formation route involves disk migration of the planet while already at its Jupiter-range mass (Goldreich & Tremaine, 1979; Lin et al., 1996; Baruteau et al., 2014). Information about the current properties of known hot Jupiter systems may be used to distinguish between these possible origins.
One parameter that offers a window into the dynamical histories of these systems is the sky-projected spin-orbit alignment angle, \(\lambda\), which is a proxy for the stellar obliquity. The stellar obliquity (\(\psi\)) of a planetary system is defined as the angle between the net orbital angular momentum axis of the planetary system and the spin axis of its host star. A large angle \(\lambda\) typically corresponds to a misalignment between these two vectors and may indicate a violent past. Conversely, an aligned system with a smaller \(\lambda\) would be consistent with a more quiescent history.
To measure \(\lambda\), we utilize the Rossiter-McLaughlin effect (Rossiter, 1924; McLaughlin, 1924), which describes
the way in which a body transiting in front of a rotating star blocks out the blue- and red-shifted portions of the occulted star's light at different points during the transit. The proportions of blue- and red- shifted light that are blocked can be traced through high-precision radial velocity (RV) observations, and the shape of the observed RV profile encodes information about the sky-projected spin-orbit angle \(\lambda\) at which the transiting body crosses in front of the star.
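For a rough feel of the signal size, a standard back-of-the-envelope estimate (e.g., Winn 2010) scales the RM anomaly with the transit depth and the projected stellar rotation speed. The sketch below evaluates it with the values derived later in this paper (Table 2); it is an order-of-magnitude illustration, not the modeling actually used.

```python
# Back-of-the-envelope RM anomaly amplitude: depth * v*sin(i_*) * sqrt(1 - b^2).
depth = 0.07559**2      # (R_p/R_*)^2, from Table 2
vsini = 7.0e3           # projected rotation speed in m/s, from Table 2
b = 0.387               # impact parameter, from Table 2

dv_rm = depth * vsini * (1 - b**2)**0.5
print(f"expected RM amplitude ~ {dv_rm:.0f} m/s")   # roughly 40 m/s
```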
The majority of spin-orbit angle measurements to date have been made for hot Jupiter systems due to the planets' relatively deep and frequent transits, which makes them observationally accessible (Albrecht et al., 2022). However, because hot Jupiters orbit so close to their host star, tidal effects may, in some cases, erase the remnant signatures of a chaotic dynamical past (Winn et al., 2010). Previous results have demonstrated that hot Jupiters orbiting hot stars above the Kraft break (Kraft, 1967) - a rotational discontinuity that divides stars with convective envelopes (cool stars) and those with radiative envelopes (hot stars) - are misaligned significantly more often than hot Jupiters orbiting cooler stars, below the Kraft break (Winn et al., 2010; Schlaufman, 2010). The Kraft break is located in the range \(6000\leq T_{\rm eff}\leq 6250\,\rm K\), with some stellar-metallicity-dependent variation in the exact transition point (Spalding and Winn, 2022).
Because cooler stars have a substantial convective envelope, a popular explanation for the observed trend in stellar obliquities is that hot Jupiters orbiting stars below the Kraft break are tidally realigned, while their counterparts orbiting hotter stars remain misaligned due to their weaker star-planet tidal interactions (e.g. Albrecht et al., 2012; Wang et al., 2021). This scenario suggests that hot Jupiters may form violently in both hot and cool star systems, producing regular spin-orbit misalignments across stellar types. In this framework, the signatures of misalignment are rapidly erased from cool star systems, while they often persist for hot star systems.
Previous work has demonstrated that this violent formation mechanism, combined with tidal dissipation, can well reproduce the observed set of hot Jupiter spin-orbit angles (Rice et al., 2022). However, it remains unclear whether hot Jupiters around cool stars did indeed begin with larger initial misalignments. This leads to major degeneracies when determining the system's history, as reviewed in Section 4 of Albrecht et al. (2022).
One way to break this degeneracy is to obtain spin-orbit measurements for wider-orbiting, "tidally detached" systems - that is, those that have projected tidal realignment timescales longer than the age of the system (Rice et al., 2021). This is the goal of the Stellar Obliquities in Long-period Exoplanet Systems (SOLES) survey (Rice et al., 2021; Wang et al., 2022; Rice et al., 2022, 2023; Hixenbaugh et al., 2023; Dong et al., 2023), which is collecting stellar obliquity measurements for systems with tidally detached planets to more robustly constrain the origins of spin-orbit misalignments in exoplanet systems. By expanding the set of spin-orbit constraints for tidally detached warm Jupiters - which we define as short-period (\(P<100\) days) giant planets (\(M_{p}\geq 0.4M_{J}\)) with scaled orbital semimajor axes \(a/R_{*}\geq 11\) - we can place new constraints on the origins of spin-orbit misalignments more generally.
In this work, we present a measurement of the Rossiter-McLaughlin effect across one transit of the tidally detached (\(a/R_{*}=13.18^{+0.35}_{-0.37}\)) warm Jupiter WASP-106 b, first confirmed by Smith et al. (2014). This observation was taken with the NEID spectrograph (Schwab et al., 2016) mounted on the WIYN 3.5-meter telescope at Kitt Peak National Observatory in Arizona. This is the seventh result from the ongoing SOLES survey and one of the first warm Jupiter spin-orbit angles measured in a system with a relatively hot, high-mass host star along the Kraft break. WASP-106 b is a \(1.93\pm 0.15\,\rm M_{J}\) planet, orbiting a \(M_{*}=1.175^{+0.082}_{-0.074}\,\rm M_{\odot}\), \(T_{\rm eff}=6002\pm 164\) K star at a period of \(P=9.29\) days (see Section 4). We find that WASP-106 b is consistent with near-alignment, with \(\lambda=6^{+17}_{-16}\,^{\circ}\) and \(\psi=26^{+12}_{-17}\,^{\circ}\).
## 2 Observations
On March 2nd, 2022, from 4:34 to 11:57 UT, we collected 22 RV measurements of WASP-106 using the high-resolution (\(R\sim 110,000\)) WIYN/NEID spectrograph which covers a wavelength range 380-930 nm (Schwab et al., 2016). Seeing ranged from 1.0'' to 1.7'', with a median of 1.1'', and the median RV uncertainty was 9.3 m/s. The airmass \(z\) varied within the range \(1.26\leq z\leq 1.78\), starting at \(z=1.58\) at the beginning of the night before the target rose to \(z=1.26\) and then reached \(z=1.78\) at the end of the observing sequence. The signal-to-noise ratio (S/N) ranged from 18 to 30 pixel\({}^{-1}\) at 5500 A.
The spectra from this observing sequence were reduced using the NEID Data Reduction Pipeline1, and reduced spectra were retrieved from the NExScI NEID Archive2. The NEID RV measurements and uncertainties are provided in Table 1 and are shown in the rightmost panel of Figure 1.
Footnote 1: See [https://neid.ipac.caltech.edu/docs/NEID-DRP/](https://neid.ipac.caltech.edu/docs/NEID-DRP/) for more information
Footnote 2: [https://neid.ipac.caltech.edu/](https://neid.ipac.caltech.edu/)
## 3 Stellar Parameters
We derived the stellar parameters for WASP-106 by analyzing the NEID spectra obtained during the RM sequence. To enhance the final spectrum S/N, we co-added all spectra after correcting for their RV shifts caused by the planetary reflex motion. The stellar parameters \(T_{\rm eff}\), \(\log{\rm g}\), [Fe/H], and \(v\sin i_{*}\) were derived using the iSpec Python package (Blanco-Cuaresma et al., 2014; Blanco-Cuaresma, 2019).
During the fitting process, we employed the SPECTRUM radiative transfer code (Gray & Corbally, 1994), MARCS atmosphere models (Gustafsson et al., 2008), and the sixth version of the GES atomic line list (Heiter et al., 2021). Using constraints from these sources, iSpec minimizes the difference between the synthetic and input spectra by applying the nonlinear least-squares Levenberg-Marquardt fitting algorithm (Moré, 2006).
To expedite the fitting process, we selected specific spectral regions from 476 to 678 nm that are sensitive to our parameters of interest. These regions include the wing segments of the H\(\alpha\), H\(\beta\), and Mg I triplet lines, which are sensitive to \(T_{\rm eff}\) and \(\log g\). Additionally, we included the Fe I and Fe II lines, which enable precise constraints on [Fe/H] and \(v\sin i_{*}\).
To determine the stellar mass (\(M_{*}\)) and radius (\(R_{*}\)), we employed MESA Isochrones & Stellar Tracks (MIST; Choi et al., 2016; Dotter, 2016) models in conjunction with a spectral energy distribution (SED) fit using the EXOFASTv2 package (Eastman et al., 2019). The SED was constructed using photometry from 2MASS (Cutri et al., 2003), WISE (Cutri et al., 2013), TESS (Ricker et al., 2015), and _Gaia_ DR3 (Gaia Collaboration et al., 2023).
During the fitting process, we adopted Gaussian priors based on the effective temperature (\(T_{\rm eff}\)) and metallicity ([Fe/H]) derived from our spectral fit, as well as the parallax (\(\varpi\)) drawn from _Gaia_ DR3 (Gaia Collaboration et al., 2023). We also applied an upper limit on the V-band extinction (\(A_{v}\)) as given by Schlafly & Finkbeiner (2011).
The fitting process utilized the Differential Evolution Markov Chain Monte Carlo (DEMCMC; Ter Braak, 2006) method. The fit was considered converged when the Gelman-Rubin statistic (\(\hat{R}\), Gelman & Rubin, 1992) fell below 1.01 and the effective number of independent samples exceeded 1000. The resulting stellar parameters are listed in Table 2.3
Footnote 3: This method provided an intermediate value of \(v\sin i_{*}=5.83\pm 3.61\) km/s that was used to inform our priors for allesfitter, from which we derived the final \(v\sin i_{*}\) value in Table 2.
## 4 Stellar Obliquity Modeling
To find the sky-projected spin-orbit angle \(\lambda\) for WASP-106 b, we used the Python package allesfitter (Gunther & Daylan, 2020) to jointly fit radial velocity data from the NEID (Schwab et al., 2016), CORALIE (Queloz et al., 2000), and SOPHIE (Perruchot et al., 2008) spectrographs, as well as photometric data from TESS Sectors 9, 36, 45, and 46 (Ricker et al., 2015). The RV data from CORALIE and SOPHIE were sourced from Smith et al. (2014).
We drew initial guesses for \(P\), \(T_{0}\), \(\cos i\), \(R_{\rm p}/R_{\star}\), \((R_{\star}+R_{\rm p})/a\), and \(K\) (definitions given in Table 2) from the values derived in Smith et al. (2014). All fitted parameters were allowed to vary and were initialized with uniform priors, as listed in Table 2. The two eccentricity parameters \(\sqrt{e}\sin\omega\) and \(\sqrt{e}\cos\omega\) were each initialized with a value of 0, and the two transformed quadratic limb-darkening coefficients \(q_{1}\) and \(q_{2}\) for each of the TESS and NEID datasets (4 total coefficients) were initialized with a value of 0.5. We fit baseline RV offsets for the CORALIE, SOPHIE, and NEID datasets,
Table 1: NEID radial velocity measurements across the transit of WASP-106 b, with columns Time (BJD\({}_{\rm TDB}\)), RV - 17200 (m/s), and \(\sigma_{\rm RV}\) (m/s). For readability, values are reported as RV - 17200 m/s. (The individual data rows are missing from the source.)
allowing each to vary between \(\pm 20\) km/s. To support convergence, the three offsets were each initialized at 17.2 km/s after manually examining the data. The sky-projected spin-orbit angle \(\lambda\) was initialized with a value of \(0^{\circ}\) and allowed to vary between \(\pm 180^{\circ}\). The projected stellar rotational velocity \(v\sin i_{*}\) was initialized with a value of 5.83 km/s from our spectral fit and was allowed to vary between 0 and 20 km/s.
We ran an affine-invariant Markov Chain Monte Carlo (MCMC) analysis with 100 walkers to sample the posterior distributions of all model parameters. The best-fit model parameters and their associated \(1\sigma\) uncertainties, listed in Table 2, were extracted after obtaining 200,000 accepted steps per walker, where the first 10,000 steps were discarded as burn-in. Our results are in good agreement (that is, within \(2\sigma\)) with the associated values obtained by Smith et al. (2014).
The best-fit joint model is shown in Figure 1 together with each dataset included in the analysis, as well as the residuals of each fit. The fitted and derived parameters corresponding to this model are provided in Table 2. The WASP-106 system is consistent with a low spin-orbit angle, with \(\lambda=6^{+17}_{-16}\,^{\circ}\).
Next, we leveraged TESS light curve data, in combination with our derived \(\lambda\) constraint, to measure the 3D spin-orbit angle \(\psi\). Our analysis incorporated two-minute cadence TESS light curve data from Sectors 9, 36, 45, and 46, spanning February 28th, 2019 to December 30th, 2021. Based on a Generalized Lomb-Scargle Periodogram (GLS, Zechmeister and Kurster, 2009) analysis, we found \(P_{\rm rot}=9.766\pm 0.005\) days with a False Alarm Probability (FAP) of less than 0.1%, as shown in Figure 2. However, because latitudinal differential rotation enforces a lower limit of 10% to the measurement precision (Epstein and Pinsonneault, 2014; Aigrain et al., 2015), we ultimately adopted \(P_{\rm rot}=9.77\pm 0.98\) days.
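A minimal periodogram sketch using astropy's LombScargle illustrates the rotation-period search (the paper's GLS analysis is analogous). The light curve here is a synthetic stand-in with a 9.77-day modulation, since the transit-masked TESS photometry itself is not reproduced in this document.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic stand-in for the transit-masked TESS light curve:
# time in days, normalized flux with a 9.77-day rotational modulation.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 120, 4000))
flux = 1 + 1e-3*np.sin(2*np.pi*t/9.77) + 5e-4*rng.standard_normal(t.size)

ls = LombScargle(t, flux)
freq, power = ls.autopower(minimum_frequency=1/30, maximum_frequency=1/1.0)
p_rot = 1 / freq[np.argmax(power)]           # period of the strongest peak
fap = ls.false_alarm_probability(power.max())
print(f"P_rot = {p_rot:.3f} d, FAP = {fap:.1e}")
```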
Combining this value with \(v\sin i_{*}=7.0^{+1.1}_{-1.0}\) km/s from our global fit, the stellar equatorial rotation velocity was derived as \(v=\frac{2\pi R_{*}}{P_{\rm rot}}=7.61\pm 0.77\) km/s. The Bayesian inference method from Masuda and Winn (2020) and Hjorth et al. (2021) was then applied to \(R_{*}\), \(P_{\rm rot}\), and \(\cos i_{*}\) to accommodate the interdependent parameters \(v\) and \(v\sin i_{*}\). The fitted parameters were \(R_{*}\), \(P_{\rm rot}\), and \(\cos i_{*}\), and uniform priors were applied to them. To achieve a conservative result, we adopted the suggested systematic uncertainties of \(\sigma_{R_{*}}\approx 4.2\%\), which equates to 0.06 \(R_{\odot}\). The final likelihood function is given as
\[\begin{split}\mathcal{L}&=\left(\frac{R_{*}/R_{ \odot}-1.47}{0.06}\right)^{2}+\left(\frac{P_{\rm rot}-9.77\ {\rm d}}{0.98\ {\rm d}}\right)^{2}\\ &\quad+\left(\frac{v\sqrt{(1-\cos^{2}i_{*})}-7.0\ {\rm km/s}}{1.1\ {\rm km/s}} \right)^{2},\end{split} \tag{1}\]
where \(v=2\pi R_{*}/P_{\rm rot}\). Note that we adopt \(\sigma_{R_{*}}=0.06R_{\odot}\) based on the systematic stellar parameter uncertainties suggested by Tayar et al. (2020).
We implemented the likelihood function using PyMC3(Salvatier et al., 2016) and ran MCMC sampling until the Gelman-Rubin statistic (\(\hat{R}\)) for each fitted parameter was less than 1.01. We derived a stellar inclination posterior \(\sin i_{*}=0.91\pm 0.09\), or \(i_{*}=89.72\pm 25.15^{\circ}\). Subsequently, the true stellar obliquity (\(\psi\)) was calculated using (Fabrycky and Winn, 2009)
\[\cos\psi=\cos i_{*}\cos i+\sin i_{*}\sin i\cos\lambda, \tag{2}\]
where \(i_{*}\) is the stellar inclination and \(i\) is the planet's orbital inclination. We ultimately obtained \(\psi=26^{+12}_{-17}\,^{\circ}\),
Figure 1: Joint fit to photometry (left), out-of-transit RV data (center), and the in-transit Rossiter-McLaughlin RV data (right) obtained for WASP-106 b. The model is shown in gray, while data is provided in color with modeled constant offsets and jitter terms included. The associated residuals are provided below each panel.
which is consistent with near-alignment for WASP-106 b.
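For illustration, the propagation through Eq. (2) can be approximated by a simple Monte Carlo. The sketch below treats the quoted posteriors of \(i_{*}\), \(i\), and \(\lambda\) as Gaussians with symmetrized widths, whereas the actual analysis sampled the full posteriors with PyMC3, so it only roughly reproduces the quoted \(\psi\).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
# Gaussian approximations to the quoted posteriors (symmetrized widths).
i_s = np.deg2rad(rng.normal(89.7, 25.2, N))   # stellar inclination i_*
i_o = np.deg2rad(rng.normal(88.30, 0.5, N))   # orbital inclination i
lam = np.deg2rad(rng.normal(6.0, 16.5, N))    # sky-projected angle lambda

# Eq. (2): cos(psi) = cos(i_*)cos(i) + sin(i_*)sin(i)cos(lambda)
cos_psi = np.cos(i_s)*np.cos(i_o) + np.sin(i_s)*np.sin(i_o)*np.cos(lam)
psi = np.rad2deg(np.arccos(np.clip(cos_psi, -1.0, 1.0)))

lo, med, hi = np.percentile(psi, [16, 50, 84])
print(f"psi = {med:.0f} +{hi - med:.0f} -{med - lo:.0f} deg")
```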
## 5 Tidal Realignment Timescales
Next, we verified the expected tidal realignment timescales for WASP-106 b to demonstrate whether the system could have been realigned from a misaligned state within its lifetime. For cooler stars (below the Kraft break) with significant convective envelopes, the convective tidal realignment timescale \(\tau_{\rm CE}\) is given by (Zahn, 1977; Albrecht et al., 2012)
\[\frac{1}{\tau_{\rm CE}}=\frac{1}{10\times 10^{9}\,\rm yr}\bigg{(}\frac{M_{ \rm p}}{M_{*}}\bigg{)}^{2}\bigg{(}\frac{a/R_{*}}{40}\bigg{)}^{-6}. \tag{3}\]
For hotter stars (above the Kraft break) with much less appreciable convective envelopes, the radiative realignment timescale \(\tau_{\rm RA}\) is given by (Zahn, 1977; Albrecht et al., 2012)
\[\frac{1}{\tau_{\rm RA}}=\frac{1}{1.25\times 5\times 10^{9}\, \rm yr}\bigg{(}\frac{M_{\rm p}}{M_{*}}\bigg{)}^{2}\\ \times\bigg{(}1+\frac{M_{\rm p}}{M_{*}}\bigg{)}^{5/6}\bigg{(} \frac{a/R_{*}}{6}\bigg{)}^{-17/2}. \tag{4}\]
These equations have been empirically calibrated using observations of binary star systems (Zahn, 1977). This means that their application in this context includes an implicit assumption that planet-star systems follow a similar tidal realignment mechanism to that of binary star systems. Therefore, we warn that their use is not intended to provide a precise value for the expected tidal realignment timescale, but, rather, an order-of-magnitude estimate.
Given our measured stellar metallicity \([\rm Fe/H]=-0.02\pm 0.10\) (Table 2), we anticipate that the temperature of WASP-106 likely falls just below the metallicity-dependent Kraft break, which is expected to lie in the range \(6100\,{\rm K}\leq T_{\rm eff}\leq 6200\,{\rm K}\) based on Figure 9 in Spalding and Winn (2022). Nevertheless, because WASP-106 lies directly along the border delineating the Kraft break, with \(T_{\rm eff}=6002\pm 164\) K, we compute both \(\tau_{\rm CE}\) and \(\tau_{\rm RA}\) for thoroughness.
Using the values from Table 2, we find that \(\tau_{\rm CE}=5.26\times 10^{12}\) yr and \(\tau_{\rm RA}=2.06\times 10^{18}\) yr. This result suggests that the WASP-106 system was likely never significantly misaligned, as realigning WASP-106 b, regardless of whether the host star had a significant convective envelope or not, would have taken several orders of magnitude more years than the age of the Universe. Our result strengthens the hypothesis that warm Jupiters commonly form in aligned configurations (Rice et al., 2022) even in systems along the Kraft break, suggesting that warm Jupiters may generally form more quiescently than hot Jupiters.
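As a quick numerical check, the short sketch below evaluates Eqs. (3) and (4) with the Table 2 values; every input is taken directly from this paper, and the outputs reproduce the order of magnitude of the quoted timescales.

```python
# Order-of-magnitude check of the tidal realignment timescales, Eqs. (3)-(4),
# using the WASP-106 values from Table 2.
M_J = 9.543e-4                    # Jupiter mass in solar masses
q = 1.93 * M_J / 1.175            # mass ratio M_p / M_*
aR = 13.18                        # scaled semimajor axis a / R_*

tau_CE = 10e9 * q**-2 * (aR / 40)**6                                # yr, Eq. (3)
tau_RA = 1.25 * 5e9 * q**-2 * (1 + q)**(-5/6) * (aR / 6)**(17/2)    # yr, Eq. (4)
print(f"tau_CE ~ {tau_CE:.2e} yr, tau_RA ~ {tau_RA:.2e} yr")
# ~5e12 yr and ~2e18 yr: both exceed the age of the Universe by orders of
# magnitude, so tidal realignment could not have erased a past misalignment.
```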
## 6 Discussion
On a broad scale, the SOLES survey aims to delineate the origins of spin-orbit misalignments by examining the spin-orbit distribution of wide-orbiting exoplanets. Because most spin-orbit observations to date - including the one presented in this work for WASP-106 b - have been made for transiting giant planets, we focus on hot and warm Jupiters within this section.
Figure 3 shows a comparison of the sky-projected spin-orbit distribution for hot (top panel) and warm (bottom panel) Jupiters, distinguishing between planets as a function of stellar \(T_{\rm eff}\) and multiplicity of the host star system.5 Systems with one or more stellar companions were identified by (1) searching for systems with sy_snum\(>1\) in the NASA Exoplanet Archive and (2) applying the criteria outlined in El-Badry et al. (2021) to check for any bound companions within \(10^{\prime}\) of the primary resolved by the _Gaia_ DR3 catalogue. We found no stellar companions bound to the WASP-106 system.
Footnote 5: This figure includes all systems with \(\lambda\) measurements in the TEP-cat catalogue (Southworth, 2011) as of 7/20/2023.
As established in previous studies (Winn et al., 2010; Schlaufman, 2010), we recover the known trend that hot Jupiters around hot stars are misaligned at a relatively high rate, whereas those around cool stars are typically aligned. We also find that, with the most updated
Figure 2: Lomb-Scargle periodogram of the TESS light curve data for WASP-106, with transit data masked out. A False Alarm Probability (FAP) level at 0.1% is marked with a dashed red line. The highest-power peak corresponds to a period \(P=9.766\) days marked with an arrow, indicating the most likely stellar rotation period.
sample, all warm Jupiters in single-star systems with spin-orbit measurements to date remain at or near spin-orbit alignment, as initially found in Rice et al. (2022b). WASP-106 b is consistent with this pattern.
The host star's position along the lower edge of the Kraft break makes this measurement particularly interesting: the system's alignment, in combination with other aligned warm Jupiter systems around the Kraft break, may suggest the absence of a stellar obliquity transition between hot and cool stars hosting warm Jupiters. However, further observations in this crucial parameter space will be necessary to definitively demonstrate the presence or absence of this transition. We suggest three potential scenarios that are each consistent with the most updated stellar obliquity distribution shown in Figure 3:
1. Hot and warm Jupiters form through distinct channels, with warm Jupiters forming quiescently and being initially aligned, while hot Jupiters form violently and are initially misaligned. Only hot Jupiters orbiting cool stars would be tidally re-aligned, producing the current stellar obliquity distribution for hot Jupiters (e.g. Albrecht et al., 2012; Rice et al., 2022a).
| Parameter | Description | Priors | Value | +1\(\sigma\) | -1\(\sigma\) |
|---|---|---|---|---|---|
| **Stellar Parameters\({}^{\dagger}\)** | | | | | |
| \(M_{*}\) | Stellar mass (\(M_{\odot}\)) | - | 1.175 | 0.082 | 0.074 |
| \(R_{*}\) | Stellar radius (\(R_{\odot}\)) | - | 1.47 | 0.016 | 0.017 |
| \(\log g\) | Surface gravity (cm/s\({}^{2}\)) | - | 4.49 | 0.16 | 0.16 |
| [Fe/H] | Metallicity (dex) | - | -0.02 | 0.10 | 0.10 |
| \(T_{\rm eff}\) | Effective temperature (K) | - | 6002 | 164 | 164 |
| \(v\sin i_{*}\) | Projected stellar rotational velocity (km/s) | \(\mathcal{U}(5.83;0.0;20.0)\) | 7.0 | 1.1 | 1.0 |
| **Planetary Parameters** | | | | | |
| \(R_{p}/R_{*}\) | Planet-to-star radius ratio | \(\mathcal{U}(0.07582376;0;1)\)\({}^{*}\) | 0.07559 | 0.00072 | 0.00087 |
| \((R_{*}+R_{p})/a\) | Sum of radii divided by the orbital semimajor axis | \(\mathcal{U}(0.0762043;0;1)\) | 0.0816 | 0.0024 | 0.0021 |
| \(\cos i\) | Cosine of the orbital inclination | \(\mathcal{U}(0.0089;0;1)\) | 0.0297 | 0.0063 | 0.0103 |
| \(T_{0}\) | Mid-transit epoch (BJD \(-2457000\)) | \(\mathcal{U}(977;644;998)\) | 977.976 | 0.0014 | 0.0014 |
| \(P\) | Orbital period (days) | \(\mathcal{U}(9.289715;8.28;10.28)\) | 9.289699 | 1e-05 | 1e-05 |
| \(K\) | Radial velocity semi-amplitude (m/s) | \(\mathcal{U}(165.3;0;1000)\) | 164.7 | 4.4 | 4.4 |
| \(\sqrt{e}\cos\omega\) | Eccentricity parameter 1 | \(\mathcal{U}(0;-1.0;1.0)\) | -0.063 | 0.084 | 0.067 |
| \(\sqrt{e}\sin\omega\) | Eccentricity parameter 2 | \(\mathcal{U}(0;-1.0;1.0)\) | 0.091 | 0.134 | 0.137 |
| \(q_{1,\rm TESS}\) | Quadratic limb darkening coefficient 1, TESS | \(\mathcal{U}(0.5;0.0;1.0)\) | 0.095 | 0.092 | 0.041 |
| \(q_{2,\rm TESS}\) | Quadratic limb darkening coefficient 2, TESS | \(\mathcal{U}(0.5;0.0;1.0)\) | 0.50 | 0.33 | 0.31 |
| \(q_{1,\rm NEID}\) | Quadratic limb darkening coefficient 1, NEID | \(\mathcal{U}(0.5;0.0;1.0)\) | 0.49 | 0.31 | 0.29 |
| \(q_{2,\rm NEID}\) | Quadratic limb darkening coefficient 2, NEID | \(\mathcal{U}(0.5;0.0;1.0)\) | 0.56 | 0.30 | 0.36 |
| \(\Delta_{\rm RV,CORALIE}\) | RV offset, CORALIE (km/s) | \(\mathcal{U}(17.2;-20.0;20.0)\) | 17.248 | 0.004 | 0.004 |
| \(\Delta_{\rm RV,SOPHIE}\) | RV offset, SOPHIE (km/s) | \(\mathcal{U}(17.2;-20.0;20.0)\) | 17.189 | 0.006 | 0.006 |
| \(\Delta_{\rm RV,NEID}\) | RV offset, NEID (km/s) | \(\mathcal{U}(17.2;-20.0;20.0)\) | 17.189 | 0.004 | 0.004 |
| \(\lambda\) | Sky-projected spin-orbit angle (\({}^{\circ}\)) | \(\mathcal{U}(0;-180.0;180.0)\) | 6 | 17 | 16 |
| **Derived Parameters** | | | | | |
| \(R_{p}\) | Planetary radius (R\({}_{\rm J}\)) | - | 1.080 | 0.016 | 0.017 |
| \(M_{p}\) | Planetary mass (M\({}_{\rm J}\)) | - | 1.93 | 0.15 | 0.15 |
| \(b\) | Impact parameter | - | 0.387 | 0.074 | 0.136 |
| \(T_{14}\) | Transit duration (h) | - | 5.334 | 0.040 | 0.038 |
| \(\delta\) | Transit depth | - | 6.204 | 0.086 | 0.071 |
| \(a\) | Semimajor axis (au) | - | 0.0901 | 0.0026 | 0.0027 |
| \(i\) | Inclination (\({}^{\circ}\)) | - | 88.30 | 0.59 | 0.36 |
| \(e\) | Eccentricity | - | 0.023 | 0.027 | 0.016 |
| \(\omega\) | Argument of periastron (\({}^{\circ}\)) | - | 128 | 93 | 37 |
| \(u_{1,\rm TESS}\) | Limb darkening parameter 1, TESS | - | 0.30 | 0.12 | 0.15 |
| \(u_{2,\rm TESS}\) | Limb darkening parameter 2, TESS | - | 0.00 | 0.25 | 0.16 |
| \(u_{1,\rm NEID}\) | Limb darkening parameter 1, NEID | - | 0.69 | 0.53 | 0.46 |
| \(u_{2,\rm NEID}\) | Limb darkening parameter 2, NEID | - | -0.07 | 0.45 | 0.40 |

\({}^{\dagger}\) The resulting uncertainties of the stellar parameters do not account for systematic errors (Tayar et al., 2020).
\({}^{*}\)\(\mathcal{U}(x;a;b)\) is a uniform prior with initial guess \(x\) and lower and upper limits \(a\) and \(b\), respectively.

Table 2: System properties derived for WASP-106.
In this scenario, we would expect to observe relatively low stellar obliquities for warm Jupiters around both cool and hot stars.
2. Both hot and warm Jupiters orbiting cool stars have undergone quiescent formation histories and are therefore initially aligned, while those orbiting hot stars have experienced more violent formation processes and are initially misaligned. Because more massive, hotter stars tend to have a higher rate of stellar multiplicity (Duchene and Kraus, 2013; Yang et al., 2020), they are more likely to encounter the perturbing influence of a stellar companion, which can produce a primordial disk misalignment (Batygin, 2012). These hotter stars are also more likely to host more massive protoplanetary disks (Andrews et al., 2013), which may be more likely to form multiple Jupiters capable of inducing misalignment through post-disk dynamical sculpting (Wu et al., 2023). In this case, the current spin-orbit distribution would directly reflect the planet formation process, with tides playing a lesser role in altering stellar obliquities over time (Hixenbaugh et al., 2023). Accordingly, we would expect that the population of warm Jupiters orbiting hot stars would follow a comparable spin-orbit distribution to that of the hot Jupiters orbiting hot stars. Because there are only a few warm Jupiters orbiting hot stars with measured obliquities, the currently observed alignment in these systems may simply reflect small-number statistics.
3. Hot Jupiters orbiting cool stars form and evolve in a similar, quiescent manner to warm Jupiters, and they are therefore initially aligned (Wu et al., 2023; Hixenbaugh et al., 2023). Only hot Jupiters orbiting hot stars have undergone violent formation histories, resulting in initial misalignment. Hot Jupiters around hot stars would then represent a subset of planets that initially formed as longer period Jupiters but that were dynamically excited and that tidally circularized within the system lifetime (as in, e.g., the framework proposed in Wu et al. (2023)). In this case, as in Scenario 1, we would expect to observe relatively small spin-orbit angles for warm Jupiters around both hot and cool stars.
Figure 3: Stellar obliquity distribution for hot Jupiters (top) and warm Jupiters (bottom). The range used for the Kraft break, marked in pink, is \(6000\leq T_{\rm eff}\leq 6250\,\)K, to account for the potential variation of the Kraft break among individual host stars (Spalding and Winn, 2022). We define a hot Jupiter as any planet with \(a/R_{\star}<11\) and \(M\geq 0.4\,\)M\({}_{\rm J}\), and a warm Jupiter as any planet in the same mass range with \(a/R_{\star}\geq 11\). Single-star systems are marked with a circle, and multi-star systems with a triangle. Values were obtained from the TEPCat catalog (Southworth, 2011) on 7/20/23.
To distinguish between these three scenarios, it is necessary to expand the sample of warm Jupiter spin-orbit angles to include more measurements in systems with host stars along and above the Kraft break (e.g. Sedaghati et al., 2023). The presented measurement of WASP-106 b supports this goal and builds toward future population studies that will demonstrate how warm Jupiters fit into a broader context. Ultimately, the stellar obliquity distribution for warm Jupiter systems holds great promise to provide insights into whether the current geometries of hot and warm Jupiter systems are primarily an outcome of formation processes, or whether misalignments are instead a consequence of subsequent dynamical evolution.
## 7 Acknowledgements
M.R. and S.W. thank the Heising-Simons Foundation for their generous support. M.R. acknowledges support from Heising-Simons Foundation Grant #2023-4478. S.W. acknowledges support from Heising-Simons Foundation Grant #2023-4050. This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute.
While this paper was in the review process, it was brought to our attention that another team conducted a separate Rossiter-McLaughlin measurement of WASP-106 b (Harre et al., 2023). Our analysis was conducted fully independently of this result, and both spin-orbit measurements are consistent with each other.
Software: numpy (Oliphant, 2006; Walt et al., 2011; Harris et al., 2020), matplotlib (Hunter, 2007), pandas (McKinney et al., 2010), scipy (Virtanen et al., 2020), allesfitter (Gunther & Daylan, 2020), emcee (Foreman-Mackey et al., 2013), iSpec (Blanco-Cuaresma et al., 2014; Blanco-Cuaresma, 2019), EXOFASTv2 (Eastman et al., 2019).
Facilities: NEID/WIYN, TESS, SOPHIE, CORALIE, NASA Exoplanet Archive, Extrasolar Planets Encyclopaedia.
This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
|
2305.08825 | A Poincaré disk model for taxicab hyperbolic geometry | The taxicab plane is a modification of the Euclidean plane that uses an
alternative notion of distance. Similarly, the Poincaré disk is a model of
hyperbolic geometry that consists of a subset of the Euclidean plane with an
alternative notion of distance. In this paper, we merge these two variations to
create a taxicab version of the Poincaré disk in an attempt to create a
type of taxicab hyperbolic space. | Aaron Fish, Dylan Helliwell | 2023-05-15T17:33:41Z | http://arxiv.org/abs/2305.08825v1 | # A Poincare disk model for taxicab hyperbolic geometry
###### Abstract.
The taxicab plane is a modification of the Euclidean plane that uses an alternative notion of distance. Similarly, the Poincare disk is a model of hyperbolic geometry that consists of a subset of the Euclidean plane with an alternative notion of distance. In this paper, we merge these two variations to create a taxicab version of the Poincare disk in an attempt to create a type of taxicab hyperbolic space.
*Aaron Fish was supported by the Seattle University Mathematics Early Research REU in 2017.
## 1. Introduction
The taxicab plane is the set \(\mathbb{R}^{2}\) with the taxicab distance \(d_{T}\) defined as follows: for two points \(x=(x_{1},x_{2})\) and \(y=(y_{1},y_{2})\),
\[d_{T}(x,y)=|x_{1}-y_{1}|+|x_{2}-y_{2}|.\]
This distance function arises from the taxicab norm
\[||v||_{T}=|v_{1}|+|v_{2}|.\]
This space was introduced by Hermann Minkowski in the late 19th century as an alternative to Euclidean geometry and has since enjoyed a fair amount of interest. See [11, 12, 13] for details about this space along with various constructions and objects that are developed within it.
The Poincare disk is a model of hyperbolic geometry consisting of the set \(D=\{x\in\mathbb{R}^{2}:x_{1}^{2}+x_{2}^{2}<1\}\) equipped with the Riemannian metric
\[g_{P}(v,w)=4\frac{v\cdot w}{(1-||x||_{E}^{2})^{2}}\]
where \(v\) and \(w\) are tangent vectors at the point \(x\in D\) and \(||\cdot||_{E}\) is the Euclidean norm. This gives rise to a norm on each tangent space
\[||v||_{P}=2\frac{||v||_{E}}{1-||x||_{E}^{2}}\]
and from this, the length of a piecewise smooth curve \(\gamma:[0,1]\to D\) is given by
\[\mathcal{L}_{P}(\gamma)=\int_{0}^{1}2\frac{||\gamma^{\prime}(t)||_{E}}{1-|| \gamma(t)||_{E}^{2}}\,dt.\]
See for example [1] for more detail about hyperbolic geometry, including the Poincare disk as a model.
In this paper, we create a taxicab Poincare disk by following this development using taxicab norm wherever possible. Specifically, we define the taxicab Poincare disk to be the set
\[D_{T}=\{x\in\mathbb{R}^{2}:||x||_{T}<1\}=\{x\in\mathbb{R}^{2}:|x_{1}|+|x_{2}|<1\}\]
equipped with the norm on each tangent space given by
\[||v||_{D_{T}}=\frac{||v||_{T}}{1-||x||_{T}^{2}}\]
where \(v\) is a vector based at \(x\in D_{T}\) and the coordinate basis is used to define the norm on the tangent space. We do not start with a Riemannian metric because the taxicab norm does not arise from an inner product. We also do not include a factor of \(2\). In the case of the standard Poincare disk, this factor is included to ensure that the resulting curvature is uniformly equal to \(-1\), and we will not be attempting to compute curvature in \(D_{T}\). That being said, see the comment in Section 5.2 after Theorem D.
This norm allows us to measure the length of curves that admit absolutely continuous parameterizations. The set of such curves from \(p\) to \(q\) is denoted \(\Gamma(p,q)\).
We denote by \(m(p,q)\) a certain minimal point associated to two given points \(p\) and \(q\). This point, defined in Section 2.1 and illustrated in Figure 1, gives rise to a particular curve \(\lambda_{p,q}\), which is just the concatenation of the segments from \(p\) to \(m(p,q)\) and from \(m(p,q)\) to \(q\).
With this, we have:
**Theorem A**.: _For \(p,q\in D_{T}\) and \(\gamma\in\Gamma(p,q)\), \(\mathcal{L}(\gamma)\geq\mathcal{L}(\lambda_{p,q})\) with equality if and only if \(\gamma\) is doubly monotonic and passes through \(m(p,q)\)._
Using this, a distance function on \(D_{T}\) is established:
**Theorem B**.: _The distance function on \(D_{T}\) arising from the length functional \(\mathcal{L}\) is_
\[d(p,q)=\tanh^{-1}(|p_{1}|+|p_{2}|)+\tanh^{-1}(|q_{1}|+|q_{2}|)-2\tanh^{-1}(|m_ {1}|+|m_{2}|)\]
_where \(m=m(p,q)\)._
This allows us to characterize circles and determine the isometry group:
**Theorem C**.: _The isometry group for \(D_{T}\) is isomorphic to \(D_{4}\)._
Since the isometry group does not act transitively, \(D_{T}\) is not homogeneous. Despite this, hyperbolicity can still be explored, and from the perspective of Gromov hyperbolicity, we prove
**Theorem D**.: \(D_{T}\) _is \(\ln(3)\)-hyperbolic._
This paper is organized as follows: In Section 2 we introduce some preliminary constructions and results that support the remainder of the paper. In Section 3 we prove Theorem A, and with this, in Section 4 we prove Theorem B, and use this to characterize circles and prove Theorem C. In Section 5 we explore the extent to which \(D_{T}\) is actually hyperbolic, proving Theorem D. Finally, we share some concluding remarks in Section 6.
## 2. Preliminaries
In this section, we introduce some terminology and discuss the analytical tools used to prove our results.
### Minimal points
We define here a certain point associated to two given points that proves to be important in identifying length minimizing curves and establishing an explicit formula for the distance function on \(D_{T}\).
Let
\[\ell:\mathbb{R}^{2}\longrightarrow\mathbb{R}\] \[\ell(x,y)=\begin{cases}\operatorname{sgn}(x)\min\{|x|,|y|\}& \text{if }\operatorname{sgn}(x)=\operatorname{sgn}(y)\\ 0&\text{if }\operatorname{sgn}(x)\neq\operatorname{sgn}(y),\end{cases}\]
where \(\operatorname{sgn}(x)\) is the sign of \(x\):
\[\operatorname{sgn}(x)=\begin{cases}1&\text{if }x>0\\ -1&\text{if }x<0\\ 0&\text{if }x=0.\end{cases}\]
With this, given \(p,q\in D_{T}\) let
\[m(p,q)=\big{(}\ell(p_{1},q_{1}),\ell(p_{2},q_{2})\big{)}.\]
We call this point the minimal point associated to \(p\) and \(q\). For example, note that if \(p\) and \(q\) lie in opposite quadrants, \(m(p,q)\) is the origin which we denote with \(\theta\). See Figure 1.
Given two points \(p\) and \(q\) in the same quadrant, we say \(p\) lies beyond \(q\) if \(m(p,q)=q\). If, moreover, \(p\) and \(q\) do not share a coordinate line, we say \(p\) lies strictly beyond \(q\).
Figure 1. Some points in \(D_{T}\), the minimal points associated to various pairs of points, and the corresponding L-shaped curves.
One way to think about the minimal point for \(p\) and \(q\) is that it is the point furthest from the origin with the property that both \(p\) and \(q\) lie beyond it.
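Since the minimal point is used throughout the paper, it may help to see it computed. The following Python sketch is our own illustration, not part of the original text; the names `sgn`, `ell`, and `minimal_point` are ours. It implements \(\operatorname{sgn}\), \(\ell\), and \(m\) exactly as defined above:

```python
def sgn(x):
    # Sign of x: 1 if x > 0, -1 if x < 0, and 0 if x == 0.
    return (x > 0) - (x < 0)

def ell(x, y):
    # ell(x, y) = sgn(x) * min(|x|, |y|) when x and y share a sign, else 0.
    if sgn(x) == sgn(y):
        return sgn(x) * min(abs(x), abs(y))
    return 0.0

def minimal_point(p, q):
    # The minimal point m(p, q) = (ell(p1, q1), ell(p2, q2)).
    return (ell(p[0], q[0]), ell(p[1], q[1]))

# Opposite quadrants: the minimal point is the origin.
print(minimal_point((0.3, 0.4), (-0.2, -0.1)))  # (0.0, 0.0)
# p lies beyond q exactly when m(p, q) = q.
print(minimal_point((0.5, 0.3), (0.2, 0.1)))    # (0.2, 0.1)
```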
### Isometries
Isometries of \(D_{T}\) will be discussed in more detail in Section 4.3. We mention here only that rotations about the origin by integer multiples of \(\frac{\pi}{2}\) and reflections about the coordinate axes and the lines of slope \(\pm 1\) through the origin are isometries. We leave it to the reader to confirm this. These isometries will be used regularly and without explicit mention to simplify various arguments.
### Curves
Unless otherwise stated, the domain for curves will be \([0,1]\) and the codomain will be \(D_{T}\). When we say "let \(\gamma\) be a curve from \(p\) to \(q\)," we mean both components of \(\gamma\) are absolutely continuous, \(\gamma(0)=p\), and \(\gamma(1)=q\). The set of such curves is denoted \(\Gamma(p,q)\). If both components of a curve \(\gamma\) are monotonic, we say \(\gamma\) is doubly monotonic.
Given two curves \(\gamma\) and \(\phi\), with \(\gamma(1)=\phi(0)\), we define the concatenation \(\gamma\cup\phi\) as follows:
\[\gamma\cup\phi:[0,1]\longrightarrow D_{T}\] \[\gamma\cup\phi(t)=\begin{cases}\gamma(2t)&\text{ if }0\leq t \leq\frac{1}{2}\\ \phi(2t-1)&\text{ if }\frac{1}{2}<t\leq 1.\end{cases}\]
Given two points \(p\) and \(q\), we define the segment from \(p\) to \(q\) to be
\[\sigma_{p,q}:[0,1]\longrightarrow D_{T}\] \[\sigma_{p,q}(t)=\big{(}p_{1}+t(q_{1}-p_{1}),p_{2}+t(q_{2}-p_{2}) \big{)}.\]
Given two points \(p\) and \(q\) and \(m=m(p,q)\), we say the L-shaped curve from \(p\) to \(q\) is the curve
\[\lambda_{p,q}:[0,1]\longrightarrow D_{T}\] \[\lambda_{p,q}=\sigma_{p,m}\cup\sigma_{m,q}.\]
See Figure 1 for examples of these curves.
We define the length of a curve \(\gamma:[a,b]\to D_{T}\) as follows:
\[\mathcal{L}(\gamma)=\int_{a}^{b}\|\gamma^{\prime}(t)\|_{D_{T}}\,dt=\int_{a}^{ b}\frac{|\gamma^{\prime}_{1}(t)|+|\gamma^{\prime}_{2}(t)|}{1-(|\gamma_{1}(t)|+| \gamma_{2}(t)|)^{2}}\,dt.\]
Note that \(\mathcal{L}(\gamma\cup\phi)=\mathcal{L}(\gamma)+\mathcal{L}(\phi)\).
We say two curves \(\gamma:[a_{1},b_{1}]\to D_{T}\) and \(\eta:[a_{2},b_{2}]\to D_{T}\) are equivalent, and write \(\gamma\sim\eta\), if they trace the same set in \(D_{T}\) and \(\mathcal{L}(\gamma)=\mathcal{L}(\eta)\). Note that if there exists an absolutely continuous monotonic function \(f:[a_{1},b_{1}]\to[a_{2},b_{2}]\) such that \(\gamma=\eta\circ f\), then \(\gamma\sim\eta\). Because of this, the specific domains and exact parameterizations for various curves are usually not important. Moreover, through the equivalence, concatenation is associative: \(\gamma\cup(\phi\cup\eta)\sim(\gamma\cup\phi)\cup\eta\). Also note that if \(p\) lies beyond \(q\) then \(\lambda_{p,q}\sim\sigma_{p,q}\).
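Although the length formula is simple, it is convenient to see it evaluated. The following numerical sketch is ours, added for illustration; the function `curve_length` is not from the paper, and it only approximates the integral by summing weighted taxicab lengths of short secant segments:

```python
def curve_length(gamma, n=20000):
    # Approximate L(gamma) for gamma: [0, 1] -> D_T by a Riemann-type sum
    # over short secant segments, weighted by the taxicab Poincare factor.
    total = 0.0
    for k in range(n):
        a, b = gamma(k / n), gamma((k + 1) / n)
        mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        step = abs(b[0] - a[0]) + abs(b[1] - a[1])  # taxicab length of the step
        total += step / (1 - (abs(mid[0]) + abs(mid[1])) ** 2)
    return total

# The segment sigma_{p,q} from p = (0.5, 0.3) to q = (0.2, 0.1):
p, q = (0.5, 0.3), (0.2, 0.1)
sigma = lambda t: (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
print(curve_length(sigma))  # ~0.789
```

The value \(\approx 0.789\) reappears in closed form in Lemma 3.2 below, since this \(p\) lies beyond this \(q\) and the segment is doubly monotonic.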
### Absolutely Continuous Functions
As mentioned above, the curves under consideration will be parameterized using absolutely continuous functions. We work at this level of regularity for a number of reasons. First, absolutely continuous functions are differentiable almost everywhere, so the formula for the length of a curve makes sense. Second, concatenation preserves absolute continuity. Third, absolute continuity allows for substitution (change of variables) as an integration
technique. Fourth, if a function \(f\) is absolutely continuous, then so is \(|f|\). Finally, in some instances a given curve is altered using a minimization process, and absolute continuity is preserved under this change, as shown below.
Given \(f:[a,b]\to\mathbb{R}\), we define the cumulative minimum function \(\underline{f}:[a,b]\to\mathbb{R}\) and residual minimum function \(\overline{f}:[a,b]\to\mathbb{R}\) as follows:
\[\underline{f}(x)=\min_{t\in[a,x]}\{f(t)\}\]
and
\[\overline{f}(x)=\min_{t\in[x,b]}\{f(t)\}.\]
See Figure 2 for examples of such functions.
**Proposition 2.1**.: _Let \(f:[a,b]\to\mathbb{R}\) be absolutely continuous. Then the cumulative minimum function \(\underline{f}\) and residual minimum function \(\overline{f}\) have the following properties:_
* _Both_ \(\underline{f}\) _and_ \(\overline{f}\) _are also absolutely continuous;_
* \(\underline{f}\) _is monotonically decreasing,_ \(\overline{f}\) _is monotonically increasing;_
* \(\underline{f}(a)=f(a)\) _and_ \(\overline{f}(b)=f(b)\)_;_
* _for all_ \(x\in[a,b]\)_,_ \(\underline{f}(x)\leq f(x)\) _and_ \(\overline{f}(x)\leq f(x)\)_;_
* _for all_ \(x\in[a,b]\)_,_ \(|\underline{f}^{\prime}(x)|\leq|f^{\prime}(x)|\) _whenever both are defined and_ \(|\overline{f}^{\prime}(x)|\leq|f^{\prime}(x)|\) _whenever both are defined._
Proof.: The proofs of all but the first property are left to the reader.
For the first property, focussing first on \(\underline{f}\), since \(f\) is absolutely continuous, for any \(\varepsilon>0\) there exists a \(\delta>0\) such that for any finite collection of disjoint subsets \((a_{i},b_{i})\subset[a,b]\), \(i=1,\ldots,n\) with
\[\sum_{i=1}^{n}(b_{i}-a_{i})<\delta\]
it follows that
\[\sum_{i=1}^{n}|f(b_{i})-f(a_{i})|<\varepsilon.\]
We will show that the same \(\delta\) works for \(\underline{f}\). Given a collection of disjoint sets as above, let \(c_{i}\in[a_{i},b_{i}]\) be a point such that \(f(x)\geq f(c_{i})\) for all \(x\in[a_{i},b_{i}]\). Such a point exists since \(f\) is continuous. Now consider two possibilities. If \(\underline{f}(a_{i})\leq f(c_{i})\), then in fact \(\underline{f}(x)=\underline{f}(a_{i})\) for all \(x\in[a_{i},b_{i}]\) and so \(|\underline{f}(b_{i})-\underline{f}(a_{i})|=0\). Indicate these sets with the new index \(j=1,\ldots,n_{0}\). On the other hand, if \(\underline{f}(a_{i})>f(c_{i})\) then \(\underline{f}(x)=f(c_{i})\) for all \(x\in[c_{i},b_{i}]\) and \(f(a_{i})\geq\underline{f}(a_{i})>f(c_{i})=\underline{f}(c_{i})\). In this case, indicate these sets with the new index \(k=1,\ldots,n_{1}\) so that \(n_{0}+n_{1}=n\), and subdivide \((a_{i},b_{i})\) into \((a_{i},c_{i})\) and \((c_{i},b_{i})\). This new family of disjoint subsets will still have total length less than \(\delta\):
\[\sum_{j=1}^{n_{0}}(b_{j}-a_{j})+\sum_{k=1}^{n_{1}}\bigl{[}(c_{k}-a_{k})+(b_{k }-c_{k})\bigr{]}=\sum_{i=1}^{n}(b_{i}-a_{i})<\delta\]
so by the absolute continuity of \(f\),
\[\sum_{j=1}^{n_{0}}|f(b_{j})-f(a_{j})|+\sum_{k=1}^{n_{1}}\bigl{(}|f(c_{k})-f(a _{k})|+|f(b_{k})-f(c_{k})|\bigr{)}<\varepsilon. \tag{1}\]
Then,
\[\sum_{i=1}^{n}|\underline{f}(b_{i})-\underline{f}(a_{i})| =\sum_{j=1}^{n_{0}}|\underline{f}(b_{j})-\underline{f}(a_{j})|+ \sum_{k=1}^{n_{1}}|\underline{f}(b_{k})-\underline{f}(a_{k})|\] \[=0+\sum_{k=1}^{n_{1}}|\underline{f}(c_{k})-\underline{f}(a_{k})|\] \[=0+\sum_{k=1}^{n_{1}}\bigl{(}|\underline{f}(c_{k})-\underline{f}( a_{k})|+|\underline{f}(b_{k})-\underline{f}(c_{k})|\bigr{)}\] \[\leq\sum_{j=1}^{n_{0}}|f(b_{j})-f(a_{j})|+\sum_{k=1}^{n_{1}} \bigl{(}|f(c_{k})-\underline{f}(a_{k})|+|f(b_{k})-f(c_{k})|\bigr{)}\] \[\leq\sum_{j=1}^{n_{0}}|f(b_{j})-f(a_{j})|+\sum_{k=1}^{n_{1}} \bigl{(}|f(c_{k})-f(a_{k})|+|f(b_{k})-f(c_{k})|\bigr{)}\] \[<\varepsilon\]
where the last line follows from Equation (1). Therefore, \(\underline{f}\) is also absolutely continuous.
While a completely analogous proof works for \(\overline{f}\), it is perhaps simpler to note that the residual minimum function is a horizontal reflection of the cumulative minimum function for a horizontal reflection of \(f\).
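On sampled data the two constructions reduce to running minima. The following sketch is our own illustration (the names `cumulative_min` and `residual_min` are ours), not part of the paper:

```python
def cumulative_min(samples):
    # Running minimum over [a, x]: a sampled version of underline{f}.
    out, cur = [], float("inf")
    for v in samples:
        cur = min(cur, v)
        out.append(cur)
    return out

def residual_min(samples):
    # Minimum over [x, b]: a sampled version of overline{f},
    # computed as a reversed running minimum.
    return cumulative_min(samples[::-1])[::-1]

f = [3, 1, 2, 0, 4, 2]
print(cumulative_min(f))  # [3, 1, 1, 0, 0, 0]
print(residual_min(f))    # [0, 0, 0, 0, 2, 2]
```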
## 3. Length minimizers
In the Poincare disk, length minimizers are arcs of circles perpendicular to the circle at infinity, and these curves provide a great deal of information about the geometry in the space. We find that length minimizers in \(D_{T}\) are not as consistent as for the Poincare disk, but they do provide the desired insight into the geometry of the taxicab Poincare disk. We have:
**Theorem A**.: _For \(p,q\in D_{T}\) and \(\gamma\in\Gamma(p,q)\), \(\mathcal{L}(\gamma)\geq\mathcal{L}(\lambda_{p,q})\) with equality if and only if \(\gamma\) is doubly monotonic and passes through \(m(p,q)\)._
Figure 2. An absolutely continuous function \(f\) in gray, its cumulative minimum function \(\underline{f}\) in solid black, and its residual minimum function \(\overline{f}\) in dashed black.
In general, the length minimizer will not be unique, even up to equivalence, but \(\lambda_{p,q}\) is always a length minimizer, and there are some cases where the minimizer must be equivalent to \(\lambda_{p,q}\). Specifically, if \(p\) and \(q\) lie in the same quadrant, and neither point lies strictly beyond the other, or if \(p\) and \(q\) are in adjacent quadrants and share a coordinate line, then the only doubly monotonic curves from \(p\) to \(q\) are those that are equivalent to \(\lambda_{p,q}\).
To prove Theorem A, we identify the following four cases:
* Case 1: \(p\) and \(q\) lie in the same quadrant and one point lies beyond the other;
* Case 2: \(p\) and \(q\) lie in the same quadrant, and neither point lies beyond the other;
* Case 3: \(p\) and \(q\) lie in opposite quadrants;
* Case 4: \(p\) and \(q\) lie in adjacent quadrants.
We prove these cases sequentially: the proof of Case 2 relies on Case 1 and the proofs of Cases 3 and 4 each rely on Case 1 and Case 2. Note in particular that Case 1 includes the situation where \(p\) and \(q\) lie in the same quadrant and share a coordinate line. This special case will be important for the proof of Case 2.
While the two points under consideration may lie on the same side of a coordinate axis, a curve connecting them need not. In a number of places, our analysis is simplified if we know the curve connecting two points lies on the same side of a coordinate axis that the points do. The following lemma shows that we can alter the curve without changing its length so that the new curve lies on the same side of a coordinate axis as \(p\) and \(q\).
**Lemma 3.1**.: _Let \(p\) and \(q\) lie on one side of a coordinate axis. Then for \(\gamma\in\Gamma(p,q)\), there exists a curve \(\widehat{\gamma}\in\Gamma(p,q)\) that lies on the same side of the coordinate axis and such that \(\mathcal{L}(\widehat{\gamma})=\mathcal{L}(\gamma)\)._
Proof.: Without loss of generality, suppose \(p_{2}\) and \(q_{2}\) are both nonnegative. Then, define \(\widehat{\gamma}=(\gamma_{1},|\gamma_{2}|)\). By Proposition 2.1, this curve is an element of \(\Gamma(p,q)\). The fact that \(\mathcal{L}(\widehat{\gamma})=\mathcal{L}(\gamma)\) is left to the reader.
Note that if \(p\) and \(q\) lie in the same quadrant, we can apply Lemma 3.1 twice to produce a curve that is also restricted to that quadrant. See Figure 3 for illustrations of these results.
### Length minimizers when \(p\) lies beyond \(q\)
We start with an important computation. Then, after a special case is resolved, Theorem A, Case 1 is proved.
**Lemma 3.2**.: _Let \(p\) lie beyond \(q\) and let \(\gamma\in\Gamma(p,q)\) be doubly monotonic. Then_
\[\mathcal{L}(\gamma)=\tanh^{-1}(|p_{1}|+|p_{2}|)-\tanh^{-1}(|q_{1}|+|q_{2}|).\]
See Figure 4.
Proof.: Let \(\widetilde{\gamma}=(\widetilde{\gamma}_{1},\widetilde{\gamma}_{2})=\big{(}\big{|} \gamma_{1}\big{|},\big{|}\gamma_{2}\big{|}\big{)}\). Note that this is a curve from \(\big{(}|p_{1}|,|p_{2}|\big{)}\) to \(\big{(}|q_{1}|,|q_{2}|\big{)}\) and that \(\mathcal{L}(\widetilde{\gamma})=\mathcal{L}(\gamma)\). Then
\[\mathcal{L}(\gamma) =\mathcal{L}(\widetilde{\gamma})\] \[=\int_{0}^{1}\frac{|\widetilde{\gamma}_{1}^{\prime}(t)|+| \widetilde{\gamma}_{2}^{\prime}(t)|}{1-(|\widetilde{\gamma}_{1}(t)|+| \widetilde{\gamma}_{2}(t)|)^{2}}\,dt\] \[=-\int_{0}^{1}\frac{\widetilde{\gamma}_{1}^{\prime}(t)+\widetilde {\gamma}_{2}^{\prime}(t)}{1-(\widetilde{\gamma}_{1}(t)+\widetilde{\gamma}_{2} (t))^{2}}\,dt\] \[=-\int_{\widetilde{\gamma}_{1}(0)+\widetilde{\gamma}_{2}(0)}^{ \widetilde{\gamma}_{1}(1)+\widetilde{\gamma}_{2}(1)}\frac{1}{1-u^{2}}\,du\] \[=-\tanh^{-1}(u)\,\Big{|}_{\widetilde{\gamma}_{1}(0)+\widetilde {\gamma}_{2}(0)}^{\widetilde{\gamma}_{1}(1)+\widetilde{\gamma}_{2}(1)}\] \[=-\tanh^{-1}(u)\,\Big{|}_{|p_{1}|+|p_{2}|}^{|q_{1}|+|q_{2}|}\] \[=\tanh^{-1}(|p_{1}|+|p_{2}|)-\tanh^{-1}(|q_{1}|+|q_{2}|).\]
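For example, if \(p=(0.5,0.3)\) and \(q=(0.2,0.1)\), then \(p\) lies beyond \(q\), and every doubly monotonic curve from \(p\) to \(q\) (the straight segment, the L-shaped curve, or any other) has length

\[\tanh^{-1}(0.8)-\tanh^{-1}(0.3)=\tfrac{1}{2}\ln 9-\tfrac{1}{2}\ln\tfrac{13}{7}\approx 0.789.\]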
**Lemma 3.3**.: _Let \(p\) lie beyond \(q\) and let \(\gamma\in\Gamma(p,q)\) have the property that \(\gamma(t)\) lies beyond \(q\) for all \(t\in[0,1]\). Let \(\psi\in\Gamma(p,q)\) be doubly monotonic. Then \(\mathcal{L}(\gamma)\geq\mathcal{L}(\psi)\) with equality if and only if \(\gamma\) is doubly monotonic._
Proof.: Since, by Lemma 3.2, all doubly monotonic curves have the same length \(L\), it is sufficient to show that \(\mathcal{L}(\gamma)>L\) if \(\gamma\) is not doubly monotonic.
Figure 3. (a) When \(p\) and \(q\) lie on the same side of a coordinate axis, we can restrict a curve connecting them to the same side without changing its length. (b) When \(p\) and \(q\) lie in the same quadrant, we can restrict a curve connecting them to the same quadrant without changing its length.
Without loss of generality, suppose \(p\) and \(q\) lie in the first quadrant. Define the doubly monotonic shadow of \(\gamma\) to be
\[\underline{\gamma}:[0,1]\longrightarrow D_{T}\] \[\underline{\gamma}(t)=\left(\underline{\gamma_{1}}(t),\underline{ \gamma_{2}}(t)\right).\]
Note that by Proposition 2.1, \(\underline{\gamma}\) is absolutely continuous, doubly monotonic, and starts at \(p\). The fact that it ends at \(q\) follows from the fact that \(\gamma(t)\) lies beyond \(q\) for all \(t\in[0,1]\). Hence, by Lemma 3.2, \(\mathcal{L}(\underline{\gamma})=L\). By Proposition 2.1,
\[\underline{\gamma_{i}}(t)\leq\gamma_{i}(t)\]
for all \(t\in[0,1]\), so that
\[\frac{1}{1-(|\underline{\gamma_{1}}(t)|+|\underline{\gamma_{2}}(t)|)^{2}}\leq \frac{1}{1-(|\gamma_{1}(t)|+|\gamma_{2}(t)|)^{2}}.\]
Also by Proposition 2.1,
\[|\underline{\gamma_{i}}^{\prime}(t)|\leq|\gamma_{i}^{\prime}(t)|\]
whenever both are defined. Moreover, if \(\gamma\) is not doubly monotonic, there will exist an interval where at least one of these inequalities is strict.
These two estimates imply
\[\mathcal{L}(\underline{\gamma}) =\int_{0}^{1}\frac{|\underline{\gamma_{1}}^{\prime}(t)|+| \underline{\gamma_{2}}^{\prime}(t)|}{1-(|\underline{\gamma_{1}}(t)|+| \underline{\gamma_{2}}(t)|)^{2}}\,dt\] \[\leq\int_{0}^{1}\frac{|\underline{\gamma_{1}^{\prime}}(t)|+| \underline{\gamma_{2}^{\prime}}(t)|}{1-(|\gamma_{1}(t)|+|\gamma_{2}(t)|)^{2}} \,dt\] \[=\mathcal{L}(\gamma),\]
and if \(\gamma\) is not doubly monotonic, the inequality on the second line will be strict.
See Figure 5 for an illustration of the doubly monotonic shadow of a curve \(\gamma\).
We now prove Theorem A in the case when one point lies beyond the other.
Figure 4. Given \(p\) and \(q\) in the same quadrant, with \(p\) lying beyond \(q\), doubly monotonic curves from \(p\) to \(q\) all have the same length.
Proof of Theorem A, Case 1.: Suppose, without loss of generality, \(p\) and \(q\) lie in the first quadrant and, in light of Lemma 3.1, suppose \(\gamma\) lies completely in the first quadrant as well. Let
\[r=\left(\min_{t\in[0,1]}\gamma_{1}(t),\min_{t\in[0,1]}\gamma_{2}(t)\right)\]
and consider the curve \(\widetilde{\gamma}=\gamma\cup\sigma_{q,r}\). By Lemma 3.3 the doubly monotonic shadow \(\underline{\widetilde{\gamma}}\) is no longer than \(\widetilde{\gamma}\) and has the same length as the doubly monotonic curve \(\sigma_{p,q}\cup\sigma_{q,r}\). Putting these facts together, we have
\[\mathcal{L}(\gamma)+\mathcal{L}(\sigma_{q,r}) =\mathcal{L}(\widetilde{\gamma})\] \[\geq\mathcal{L}(\underline{\widetilde{\gamma}})\] \[=\mathcal{L}(\sigma_{p,q}\cup\sigma_{q,r})\] \[=\mathcal{L}(\sigma_{p,q})+\mathcal{L}(\sigma_{q,r}).\]
Cancelling the \(\mathcal{L}(\sigma_{q,r})\) on both sides yields the result. See Figure 6 for an illustration of this construction.
### Length minimizers when \(p\) and \(q\) lie in the same quadrant, and neither point lies beyond the other
Next we explore the situation where two points lie in the same quadrant, but neither lies beyond the other. As in the previous case, our first lemma identifies the length minimizing curves, and then later results establish that they are in fact the minimizers.
**Lemma 3.4**.: _Suppose \(p\) and \(q\) lie in the same quadrant and neither point lies beyond the other, and let \(m=m(p,q)\). Let \(\gamma\in\Gamma(p,q)\) have the property that for all \(t\in[0,1]\), \(\gamma(t)\) lies beyond \(m\). Then \(\mathcal{L}(\gamma)\geq\mathcal{L}(\lambda_{p,q})\) with equality if and only if \(\gamma\sim\lambda_{p,q}\)._
See Figure 7.
Figure 5. A curve \(\gamma\) from \(p\) to \(q\) with the property that \(\gamma(t)\) lies beyond \(q\) for all \(t\in[0,1]\), in gray, and its doubly monotonic shadow \(\underline{\gamma}\), in black. Piecewise, \(\underline{\gamma}\) is either equal to \(\gamma\), a segment with \(\gamma\) lying beyond it, or stationary, waiting for \(\gamma\) to “catch up.” In the first case, the lengths coincide. In the second and third cases, \(\underline{\gamma}\) is strictly shorter.
Proof.: Suppose, without loss of generality, that \(p\) and \(q\) lie in the first quadrant, and that \(p_{1}<q_{1}\) and \(p_{2}>q_{2}\). Define the curves
\[\phi^{1}:[0,1]\longrightarrow D_{T}\] \[\phi^{1}(t)=\big{(}m_{1},\underline{\gamma_{2}}(t)\big{)}\] \[\phi^{2}:[0,1]\longrightarrow D_{T}\] \[\phi^{2}(t)=\big{(}\overline{\gamma_{1}}(t),m_{2}\big{)},\]
Note that \(\phi^{1}\sim\sigma_{p,m}\) and \(\phi^{2}\sim\sigma_{m,q}\), so \(\phi^{1}\cup\phi^{2}\sim\lambda_{p,q}\). Then using the properties established by Proposition 2.1,
\[\mathcal{L}(\lambda_{p,q}) =\mathcal{L}(\phi^{1})+\mathcal{L}(\phi^{2})\] \[=\int_{0}^{1}\frac{0+|\underline{\gamma_{2}}^{\prime}(t)|}{1-\big{(}|p_{1}|+|\underline{\gamma_{2}}(t)|\big{)}^{2}}\,dt+\int_{0}^{1}\frac{|\overline{\gamma_{1}}^{\prime}(t)|+0}{1-\big{(}|\overline{\gamma_{1}}(t)|+|q_{2}|\big{)}^{2}}\,dt\] \[\leq\int_{0}^{1}\frac{0+|\gamma_{2}^{\prime}(t)|}{1-\big{(}|p_{1}|+|\gamma_{2}(t)|\big{)}^{2}}\,dt+\int_{0}^{1}\frac{|\gamma_{1}^{\prime}(t)|+0}{1-\big{(}|\gamma_{1}(t)|+|q_{2}|\big{)}^{2}}\,dt\] \[\leq\int_{0}^{1}\frac{0+|\gamma_{2}^{\prime}(t)|}{1-\big{(}|\gamma_{1}(t)|+|\gamma_{2}(t)|\big{)}^{2}}+\frac{|\gamma_{1}^{\prime}(t)|+0}{1-\big{(}|\gamma_{1}(t)|+|\gamma_{2}(t)|\big{)}^{2}}\,dt\] \[=\mathcal{L}(\gamma)\]
and at least one of the inequalities will be strict unless \(\gamma\sim\lambda_{p,q}\).
With this, we are ready to handle Case 2.
Figure 6. When \(p\) lies beyond \(q\), the curve \(\gamma\) connecting \(p\) to \(q\) on the left, and the augmented curve \(\gamma\cup\sigma_{q,r}\) on the right used in the proof of Theorem A, Case 1.
Proof of Theorem A, Case 2.: Suppose, without loss of generality, that \(p\) and \(q\) lie in the first quadrant, and that \(p_{1}<q_{1}\) and \(p_{2}>q_{2}\). In light of Lemma 3.1, suppose also that \(\gamma\) lies in the first quadrant. Let \(t_{1}\) be the time such that \(\gamma_{1}(t_{1})=p_{1}\) and \(\gamma_{1}(t)>p_{1}\) for all \(t\in(t_{1},1]\). Let \(\gamma^{1}\) be the restriction of \(\gamma\) to \([0,t_{1}]\), let \(\gamma^{2}\) be the restriction to \([t_{1},1]\), and let \(r=\gamma(t_{1})\).
Now consider two scenarios:
* Scenario 1: \(r_{2}\leq m_{2}\). Then by Theorem A, Case 1, \(\mathcal{L}(\gamma^{1})\geq\mathcal{L}(\sigma_{p,r})\geq\mathcal{L}(\sigma_{p, m})\) and the inequality is strict unless \(\gamma^{1}\sim\sigma_{p,m}\). Similarly, \(\mathcal{L}(\gamma^{2})\geq\mathcal{L}(\sigma_{r,m}\cup\sigma_{m,q})\geq \mathcal{L}(\sigma_{m,q})\) and the inequality is strict unless \(\gamma^{2}\sim\sigma_{m,q}\). Combining these, we have \[\mathcal{L}(\gamma) =\mathcal{L}(\gamma^{1}\cup\gamma^{2})\] \[\geq\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,q})\] \[=\mathcal{L}(\lambda_{p,q}),\] and the inequality is strict unless \(\gamma\sim\lambda_{p,q}\). See Figure 8 (a).
* Scenario 2: \(r_{2}>m_{2}\). Let \(t_{2}\) be the time such that \(\gamma_{2}(t_{2})=q_{2}\) and \(\gamma_{2}(t)\neq q_{2}\) for all \(t\in[t_{1},t_{2})\). Let \(\gamma^{2a}\) be the restriction of \(\gamma^{2}\) to \([t_{1},t_{2}]\), let \(\gamma^{2b}\) be the restriction to \([t_{2},1]\), and let \(s=\gamma(t_{2})\). By the Intermediate Value Theorem, \(\gamma^{2a}(t)\) lies beyond \(m\) for all \(t\in[t_{1},t_{2}]\). Hence by Lemma 3.4, \(\mathcal{L}(\gamma^{2a})>\mathcal{L}(\lambda_{r,s})\). Moreover, by Theorem A, Case 1, \(\mathcal{L}(\gamma^{1})\geq\mathcal{L}(\sigma_{p,r})\) and \(\mathcal{L}(\gamma^{2b})\geq\mathcal{L}(\sigma_{s,q})\), and so \[\mathcal{L}(\gamma) =\mathcal{L}(\gamma^{1}\cup\gamma^{2a}\cup\gamma^{2b})\] \[>\mathcal{L}(\sigma_{p,r}\cup\lambda_{r,s}\cup\sigma_{s,q})\] \[=\mathcal{L}(\lambda_{p,q}).\] See Figure 8 (b).
### Length minimizers when \(p\) and \(q\) lie in opposite quadrants
Together, Cases 1 and 2 of Theorem A completely determine the length minimizing curves connecting two points that are both in a single quadrant. Handling points in different quadrants does not require any new technical results.
Proof of Theorem A, Case 3.: First note that every point lies beyond \(\theta=(0,0)\) so if \(\gamma\) passes through the origin, then by Theorem A, Case 1 it can be made shorter by replacing with a concatenation of doubly monotonic curves from \(p\) to \(\theta\) and \(\theta\) to \(q\).
If \(\gamma\) does not pass through the origin, it must cross the \(x_{1}\)-axis at a point \(a=(a_{1},0)\), and it must cross the \(x_{2}\)-axis at a point \(b=(0,b_{2})\), where both \(a_{1}\) and \(b_{2}\) are non-zero. Then by Theorem A, Case 2, the part of \(\gamma\) connecting \(a\) to \(b\) can be shortened by replacing it with \(\lambda_{a,b}\). This new shorter curve passes through the origin and then, as above, can be shortened by replacing with a doubly monotonic curve from \(p\) through \(\theta\) to \(q\). See Figure 9 for an illustration of these steps.
Figure 8. When neither point lies beyond the other, the curve \(\gamma\) from \(p\) to \(q\) may cross \(\{x_{1}=p_{1}\}\) for the last time at or below \(m\) as on the left so that \(\gamma=\gamma^{1}\cup\gamma^{2}\), or it may cross above \(m\) as on the right so that \(\gamma=\gamma^{1}\cup\gamma^{2a}\cup\gamma^{2b}\). These scenarios require slightly different analysis, as seen in the proof of Theorem A, Case 2.
Figure 9. The steps in the proof of Theorem A, Case 3 showing that a length minimizing curve connecting points in opposite quadrants must be doubly monotonic and pass through the origin.
Proof of Theorem A, Case 4.: Suppose, without loss of generality, that \(p\) lies in the first quadrant, \(q\) lies in the second quadrant, \(p_{2}\leq q_{2}\), and, noting that \(p_{1}=0\) is covered by Case 1, suppose \(p_{1}>0\). In light of Lemma 3.1, suppose also that \(\gamma_{2}(t)\geq 0\) for all \(t\in[0,1]\). By the Intermediate Value Theorem, there exists a time \(t_{1}\) such that \(\gamma_{1}(t_{1})=0\). Let \(r=\gamma(t_{1})=(0,\gamma_{2}(t_{1}))\), let \(\gamma^{1}\) be the part of \(\gamma\) from \(p\) to \(r\), and let \(\gamma^{2}\) be the part of \(\gamma\) from \(r\) to \(q\). Finally, let \(m=m(p,q)=(0,p_{2})\), let \(n=(0,q_{2})\), and consider three scenarios:
* Scenario 1: \(r_{2}\leq m_{2}\). Then \(p\) and \(q\) both lie beyond \(r\), so by Theorem A, Case 1, \[\mathcal{L}(\gamma^{1})\geq\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,r})\] and \[\mathcal{L}(\gamma^{2})\geq\mathcal{L}(\sigma_{r,m}\cup\sigma_{m,q})\] with equality if and only if \(\gamma^{1}\) and \(\gamma^{2}\) are each doubly monotonic. Combining these inequalities, we have \[\mathcal{L}(\gamma) =\mathcal{L}(\gamma^{1})+\mathcal{L}(\gamma^{2})\] \[\geq\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,r})+\mathcal{L}(\sigma _{r,m}\cup\sigma_{m,q})\] \[=\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,q})+2\mathcal{L}(\sigma_{ m,r})\] \[\geq\mathcal{L}(\lambda_{p,q})\] and equality is achieved if and only if, as above \(\gamma^{1}\) and \(\gamma^{2}\) are both doubly monotonic, and in addition, \(r=m\).
* Scenario 2: \(m_{2}<r_{2}\leq n_{2}\). Then neither \(p\) nor \(r\) lies beyond the other so by Theorem A, Case 2, \[\mathcal{L}(\gamma^{1})\geq\mathcal{L}(\lambda_{p,r})\] with equality if and only if \(\gamma^{1}\) is doubly monotonic and passes through \(m\). Also, since \(q\) lies beyond \(r\), by Theorem A, Case 1, \[\mathcal{L}(\gamma^{2})\geq\mathcal{L}(\sigma_{r,q})\] and equality is achieved if and only if \(\gamma^{2}\) is doubly monotonic. Combining these inequalities, and noting that \(\mathcal{L}(\sigma_{m,r}\cup\sigma_{r,q})=\mathcal{L}(\sigma_{m,q})\), we have \[\mathcal{L}(\gamma) =\mathcal{L}(\gamma^{1}\cup\gamma^{2})\] \[\geq\mathcal{L}(\lambda_{p,r}\cup\sigma_{r,q})\] \[=\mathcal{L}(\lambda_{p,q})\] and equality is achieved if and only if \(\gamma\) is both doubly monotonic and passes through \(m\).
* Scenario 3: \(r_{2}>n_{2}\). Then neither \(p\) nor \(r\) lies beyond the other and neither \(r\) nor \(q\) lies beyond the other, so by Theorem A, Case 2, \[\mathcal{L}(\gamma^{1})\geq\mathcal{L}(\lambda_{p,r})\] and \[\mathcal{L}(\gamma^{2})\geq\mathcal{L}(\lambda_{r,q}).\] Also note that, since \(r_{2}>n_{2}\), \[\mathcal{L}(\lambda_{p,r}) =\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,r})\] \[>\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,n})\]
and
\[\mathcal{L}(\lambda_{r,q}) =\mathcal{L}(\sigma_{r,n}\cup\sigma_{n,q})\] \[>\mathcal{L}(\sigma_{n,q}).\]
Combining these inequalities, we have
\[\mathcal{L}(\gamma) =\mathcal{L}(\gamma^{1}\cup\gamma^{2})\] \[\geq\mathcal{L}(\lambda_{p,r}\cup\lambda_{r,q})\] \[>\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,n}\cup\sigma_{n,q})\] \[=\mathcal{L}(\sigma_{p,m}\cup\sigma_{m,q})\] \[=\mathcal{L}(\lambda_{p,q}).\]
See Figure 10 for an illustration of these three scenarios.
## 4. Distance, Circles, Isometries
In this section, we use the length minimizers to define a distance function on the taxicab Poincare disk, use this distance to characterize circles, and determine the isometries of \(D_{T}\).
### Distance function for \(D_{T}\)
In the usual Poincare disk \(D\), the curves that minimize the length functional \(\mathcal{L}_{P}\) are arcs of circles that are perpendicular to \(\partial D\). These curves are used to define the distance between points \(p,q\in D\), resulting in
\[d_{P}(p,q)=\cosh^{-1}\left(1+2\frac{||p-q||_{E}^{2}}{(1-||p||_{E}^{2})(1-||q|| _{E}^{2})}\right).\]
See [1] for the development of length and distance in the Poincare disk.
Similarly, in \(D_{T}\), the curves that minimize the length functional \(\mathcal{L}\) are given by Theorem A and these curves are used to define a distance function on \(D_{T}\):
**Theorem B**.: _The distance function on \(D_{T}\) arising from the length functional \(\mathcal{L}\) is_
\[d(p,q)=\tanh^{-1}(|p_{1}|+|p_{2}|)+\tanh^{-1}(|q_{1}|+|q_{2}|)-2\tanh^{-1}(|m_ {1}|+|m_{2}|) \tag{2}\]
_where \(m=m(p,q)\)._
Figure 10. The three scenarios in the proof of Theorem A, Case 4 showing that a length minimizing curve connecting points in adjacent quadrants must be the concatenation of a coordinate segment and a doubly monotonic curve.
Proof.: Since \(\lambda_{p,q}=\sigma_{p,m}\cup\sigma_{m,q}\in\Gamma(p,q)\) is always a minimizing curve from \(p\) to \(q\), \(\sigma_{p,m}\) and \(\sigma_{m,q}\) each lie in a single quadrant, and \(p\) and \(q\) both lie beyond \(m\), by Lemma 3.2,
\[d(p,q) =\min_{\gamma\in\Gamma(p,q)}\mathcal{L}(\gamma)\] \[=\mathcal{L}(\lambda_{p,q})\] \[=\mathcal{L}(\sigma_{p,m})+\mathcal{L}(\sigma_{m,q})\] \[=\tanh^{-1}(|p_{1}|+|p_{2}|)+\tanh^{-1}(|q_{1}|+|q_{2}|)\] \[\qquad\qquad\qquad-2\tanh^{-1}(|m_{1}|+|m_{2}|).\]
Note that \(d(p,\theta)=\tanh^{-1}(|p_{1}|+|p_{2}|)\), so Equation (2) can be rewritten as follows:
\[d(p,q)=d(p,\theta)+d(q,\theta)-2d(m,\theta). \tag{3}\]
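Theorem B translates directly into code. The following Python sketch is our own illustration (the names `ell` and `dist` are ours); it evaluates Equation (2) using the minimal point from Section 2.1:

```python
from math import atanh

def ell(x, y):
    # ell from Section 2.1: sign-matched minimum of |x| and |y|.
    s = (x > 0) - (x < 0)
    return s * min(abs(x), abs(y)) if s == ((y > 0) - (y < 0)) else 0.0

def dist(p, q):
    # Equation (2): d(p, q) = artanh(|p|_T) + artanh(|q|_T) - 2 artanh(|m|_T),
    # where |.|_T is the taxicab norm and m = m(p, q).
    m = (ell(p[0], q[0]), ell(p[1], q[1]))
    r = lambda x: atanh(abs(x[0]) + abs(x[1]))
    return r(p) + r(q) - 2 * r(m)

print(dist((0.5, 0.3), (0.2, 0.1)))    # ~0.789 (p lies beyond q)
print(dist((0.3, 0.4), (-0.2, -0.1)))  # ~1.177 (opposite quadrants, m = theta)
```

The first value matches the worked example after Lemma 3.2.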
### Circles
With a distance function established for \(D_{T}\), our next goal is to characterize circles in this space. Circles in the usual Poincare disk are Euclidean circles, although the hyperbolic center does not coincide with the Euclidean center. We find here that, unfortunately, most circles in \(D_{T}\) are not taxicab circles, although there are a number of interesting facts about them that help us better understand the space.
Let \(C_{r}(p)\) be the circle of radius \(r\) centered at \(p\). Also, let \(r_{p}\) be the radius of the circle centered at \((0,0)\) that contains \(p\). Note that, using this notation, Equation (3) becomes
\[d(p,q)=r_{p}+r_{q}-2r_{m}. \tag{4}\]
**Lemma 4.1**.: _The circles centered at the origin are taxicab circles._
Proof.: The formula for the circle \(C_{r}(\theta)\) is \(d(p,\theta)=\tanh^{-1}(|p_{1}|+|p_{2}|)=r\) which can be rewritten
\[|p_{1}|+|p_{2}|=\tanh(r),\]
which in turn is the formula of the taxicab circle of radius \(\tanh(r)\) centered at \(\theta\).
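For example, the circle of radius \(\ln 3\) centered at the origin is the taxicab circle \(|p_{1}|+|p_{2}|=\tanh(\ln 3)=\frac{4}{5}\): in Euclidean terms, the square with vertices \((\pm\frac{4}{5},0)\) and \((0,\pm\frac{4}{5})\).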
Given the point \(p\), let \(c^{1}(p)=(p_{1},0)\) and \(c^{2}(p)=(0,p_{2})\). The coordinate lines through \(p\) and the coordinate lines through the origin intersect at \(c^{i}=c^{i}(p)\) and subdivide \(D_{T}\) into nine open regions, some of which will be empty if \(p\) lies on a coordinate axis. These regions comprise four quadrants \(Q_{\theta}\), \(Q_{p}\), \(Q_{c^{1}}\), \(Q_{c^{2}}\), a central rectangle \(R\), and four strips \(S_{\theta,c^{1}}\), \(S_{\theta,c^{2}}\), \(S_{p,c^{1}}\), \(S_{p,c^{2}}\). See Figure 11.
Note that each of the quadrants intersects exactly one edge of the circle at infinity. We say a segment in one of these quadrants is admissible if its endpoints lie on the coordinate lines defining the quadrant and it is parallel to the corresponding edge at infinity. The edge at infinity corresponding to \(Q_{p}\) can also be associated to the rectangle \(R\) and we say a segment in \(R\) is admissible if its endpoints lie on the boundary of \(R\) and it is parallel to the corresponding edge at infinity.
**Theorem 4.2**.: _If a circle \(C_{r}(p)\) intersects a quadrant \(Q\) or the rectangle \(R\), it does so along an admissible segment. If it intersects a strip \(S\), it does so along the following curves:_
* _in_ \(S_{p,c^{1}}\)_,_ \[|x_{1}|=\frac{\kappa[1+(|p_{1}|+|x_{2}|)^{2}]+2(|p_{1}|+|x_{2}|)}{1+(|p_{1}|+|x_{ 2}|)^{2}+2\kappa(|p_{1}|+|x_{2}|)}-|x_{2}|;\]
* _in_ \(S_{p,c^{2}}\)_,_ \[|x_{2}|=\frac{\kappa[1+(|x_{1}|+|p_{2}|)^{2}]+2(|x_{1}|+|p_{2}|)}{1+(|x_{1}|+| p_{2}|)^{2}+2\kappa(|x_{1}|+|p_{2}|)}-|x_{1}|;\]
* _in_ \(S_{\theta,c^{1}}\)_,_ \[|x_{2}|=\frac{\kappa(1+|x_{1}|^{2})+2|x_{1}|}{1+|x_{1}|^{2}+2\kappa|x_{1}|}-|x_{1}|;\]
* _in_ \(S_{\theta,c^{2}}\)_,_ \[|x_{1}|=\frac{\kappa(1+|x_{2}|^{2})+2|x_{2}|}{1+|x_{2}|^{2}+2\kappa|x_{2}|}-|x_{2}|;\]
_where \(\kappa=\tanh(r-r_{p})\)._
_The circle \(C_{r}(p)\) always intersects \(Q_{p}\), \(S_{p,c^{1}}\), and \(S_{p,c^{2}}\). It intersects \(R\) if and only if \(r<r_{p}\) and it intersects \(Q_{\theta}\) if and only if \(r>r_{p}\). It intersects \(Q_{c^{1}}\) and \(S_{\theta,c^{1}}\) if and only if \(r>d(p,c^{1})\) and it intersects \(Q_{c^{2}}\) and \(S_{\theta,c^{2}}\) if and only if \(r>d(p,c^{2})\)._
Proof.: Let \(x\in C_{r}(p)\cap Q\). Note that the minimal point \(m=m(p,x)\) is the nearest vertex of \(R\), which is independent of \(x\). Hence the equation for the circle \(d(p,x)=r\) can be rewritten as:
\[r_{x}=r+2r_{m}-r_{p}\]
and since \(m\) is independent of \(x\), this implies that all such points lie on some fixed circle centered at \(\theta\). Similarly, if \(x\) lies in \(R\), then \(m=x\) and the equation for the circle can be rewritten as
\[r_{x}=r_{p}-r\]
and this implies that all such points lie on some fixed circle centered at \(\theta\). In either case, by Lemma 4.1 this intersection will be an admissible segment.
Figure 11. The points \(c^{i}=c^{i}(p)\), \(i=1,2\), and various regions associated to \(p\).
For points in the strips, \(m\) depends on both \(p\) and \(x\), so the formulas for the circle in these regions do not reduce to that of a circle centered at the origin and the resulting curve is not a straight line.
The equation for \(x\in C_{r}(p)\) is
\[r_{x}+r_{p}-2r_{m}=r\]
which can be rewritten as
\[|x_{1}|+|x_{2}|=\tanh(r-r_{p}+2r_{m})=\frac{\tanh(r-r_{p})[1+(|m_{1}|+|m_{2}|) ^{2}]+2(|m_{1}|+|m_{2}|)}{1+(|m_{1}|+|m_{2}|)^{2}+2\tanh(r-r_{p})(|m_{1}|+|m_{2 }|)}.\]
The last equality uses the addition formula \(\tanh(A+2B)=\frac{\tanh A+\tanh(2B)}{1+\tanh A\tanh(2B)}\) with \(A=r-r_{p}\) and \(\tanh B=|m_{1}|+|m_{2}|\), so that \(\tanh(2B)=\frac{2(|m_{1}|+|m_{2}|)}{1+(|m_{1}|+|m_{2}|)^{2}}\). Note that \(\tanh(r-r_{p})\) is constant for a given circle and, within each strip, \(m\) can be determined explicitly in terms of the coordinates of \(x\) and \(p\). Substituting and rearranging yields the desired formulas.
Finally, the regions \(Q_{p}\), \(S_{p,c^{1}}\), and \(S_{p,c^{2}}\) each contain points of all distances from \(p\). On the other hand, the points in \(R\) are all less than \(r_{p}\) away from \(p\) while the points in \(Q_{\theta}\) are all greater than \(r_{p}\) away from \(p\). Similarly, for all points \(q\in Q_{c^{1}}\cup S_{\theta,c^{1}}\), \(d(p,q)>d(p,c^{1})\) and for all points \(q\in Q_{c^{2}}\cup S_{\theta,c^{2}}\), \(d(p,q)>d(p,c^{2})\).
While Theorem 4.2 provides explicit formulas for circles in \(D_{T}\), it only provides partial information about what such circles actually look like. Figures 12, 13, and 14 illustrate various circles and here, we present some observations that help to fill out our understanding. These observations are about their "extrinsic" geometry as viewed from the perspective of the Euclidean plane. These results are somewhat removed from our main goal and the full proofs are a bit technical, so, while we indicate the main elements of the proofs, the details are left to the interested reader.
First, Theorem 4.2 shows that in each strip, one coordinate variable can be determined as a function of the other, and a computation shows that the second derivative has constant sign resulting in the fact that these circles are all boundaries of extrinsically convex sets. On the other hand, many circles are not boundaries of intrinsically geodesically convex sets (for any reasonable definition). For example, if a circle \(C\) has nonempty intersection with \(R\), then for \(p\) and \(q\) inside \(C\) and near \(C\cap R\), \(\lambda_{p,q}\) will often lie partially outside \(C\).
Second, circles lying in a single quadrant possess extrinsic reflective symmetry relative to the line through the circle's center and perpendicular to the corresponding edge at infinity. Somewhat more generally, even if a circle does not lie in a single quadrant, those points that do lie in a single quadrant enjoy the indicated symmetry. See Figure 12.
Third, suppose \(C_{r}(p)\) and \(C_{r}(q)\) each lie in a single quadrant and that \(r_{p}=r_{q}\). Then these two circles have the same extrinsic shape. See Figure 13. This can be checked algebraically, but thinking in terms of the radii involved leads to more insight. Rewrite the equation for the circle \(C_{r}(p)\) as
\[(r_{x}-r_{m})+(r_{p}-r_{m})=r\]
where \(m=m(x,p)\). Since this equation is written completely in terms of distances from the origin, the locations of the points in \(C_{r}(p)\) are not important as long as they posses the correct radii.
Finally, let \(p\) and \(q\) lie in the same quadrant, and share a coordinate line \(\ell\), with \(p\) lying beyond \(q\). Let \(\tilde{\ell}\) be the coordinate line through \(q\) and perpendicular to \(\ell\)
and let \(H\) be the half-plane with boundary \(\tilde{\ell}\) and not containing \(p\). Let \(r>d(p,q)\) and let \(r^{\prime}=r-d(p,q)\). Then \(C_{r^{\prime}}(q)\cap H=C_{r}(p)\cap H\). See Figure 14. To justify this, suppose, without loss of generality, that \(p\) lies in the first quadrant and that \(0\leq q_{1}\leq p_{1}\) and \(q_{2}=p_{2}\). Then, for any point \(s\) such that \(s_{1}\leq q_{1}\), \(m^{\prime}=m(s,q)=m(s,p)\). Note that \(r^{\prime}=r-r_{p}+r_{q}\), so
\[r_{s}+r_{q}-2r_{m^{\prime}}=r^{\prime}\]
becomes
\[r_{s}-2r_{m^{\prime}} =r-r_{p}+r_{q}-r_{q}\] \[=r-r_{p}\]
which implies that \(s\) solves the equation for \(C_{r^{\prime}}(q)\) if and only if it solves the equation for \(C_{r}(p)\).
Figure 12. Three circles centered at the same point. The inner two circles lie completely in the first quadrant and have reflective symmetry about the line \(\gamma\). The outer circle does not lie in a single quadrant and does not have reflective symmetry, but if a point and its reflection both lie in the first quadrant, then that point lies in the circle if and only if its reflection lies in the circle.
Figure 13. Two circles with the same radii and with centers the same distance from the origin.
### Isometries
The isometry group for \((\mathbb{R}^{2},d_{T})\) is isomorphic to \(\mathbb{R}^{2}\rtimes D_{4}\) where the transformations associated to \(\mathbb{R}^{2}\) are translations and the transformations associated to \(D_{4}\) are reflections about coordinate lines and lines with slope \(\pm 1\), and rotations by integer multiples of \(\frac{\pi}{2}\)[12, 13]. Meanwhile, the isometry group for the Poincare disk is \(SU(1,1)\) which comprises the set of Mobius transformations in the complex plane that map the unit disk to itself, the reflections across lines through the origin, and their compositions [14].
**Theorem C**.: _The isometry group for \(D_{T}\) is isomorphic to \(D_{4}\)._
To prove that this is all there is, we proceed in four steps, showing successively that an isometry must be progressively more constrained. In all four steps, we use the fact that any isometry must map circles of a given radius bijectively to other circles of the same radius.
Proof.: The fact that rotations by multiples of \(\frac{\pi}{2}\) about the origin and reflections across the coordinate axes and the lines \(x_{2}=\pm x_{1}\) are isometries is left to the reader.
In the other direction, let \(\Psi:D_{T}\to D_{T}\) be an isometry. Suppose first that \(\Psi\) does not fix the origin. Let \(\Psi(p)=\theta\) and suppose, without loss of generality, that \(p_{1}>0\) and \(p_{2}\geq 0\). Choose \(r\) small enough that \(C_{r}(p)\) lies to the right of the \(x_{2}\)-axis. Then a portion \(\widetilde{\gamma}\) of the circle lies in \(S_{p,c^{2}}\), with endpoints \(\widetilde{q}\) to the left of \(p\) and \(\widetilde{s}\) above \(p\). Note that \(\widetilde{s}\) lies beyond \(\widetilde{q}\) so by the mean value theorem, there exists a point, and hence an arc on \(\widetilde{\gamma}\) where its slope is positive. Mapping this part of \(\widetilde{\gamma}\) by \(\Psi\) may not result in an edge in a single quadrant, so if necessary, restrict
the arc further to produce a curve \(\gamma\) from \(q\) to \(s\) such that \(\Psi(\gamma)\) lies in a single quadrant. See Figure 15.
Since \(\gamma\) is doubly monotonic, by Theorem A, Case 1, \(d(q,s)=\mathcal{L}(\gamma)\), and since \(\Psi\) is an isometry, \(\mathcal{L}(\gamma)=\mathcal{L}(\Psi(\gamma))\). But neither \(\Psi(q)\) nor \(\Psi(s)\) can lie beyond the other, so by Theorem A, Case 2, \(\mathcal{L}\big{(}\Psi(\gamma)\big{)}>d\big{(}\Psi(q),\Psi(s)\big{)}\), arriving at a contradiction.
Next, suppose \(\Psi\) does not map a ray of a coordinate axis to another ray of a coordinate axis. Let \(p=(p_{1},0)\) with \(p_{1}>0\) and, without loss of generality, suppose \(\Psi(p)=q\) where \(q_{1}\geq q_{2}>0\). Let \(r>0\) be small enough that \(C_{r}(q)\) lies only in the first quadrant. Let \(s\) be the unique point on the \(x_{1}\)-axis that lies in \(C_{r}(p)\) and lies between \(\theta\) and \(p\). Since \(\Psi\) fixes the origin, there is a bijective correspondence between \(\{x\in C_{r}(p):r_{x}=r_{s}\}\) and \(\{x\in C_{r}(q):r_{x}=r_{s}\}\), but the first set is the singleton \(\{s\}\) and the second set is a segment. See Figure 16.
The next step is to show that a coordinate axis must map into a single coordinate axis; that is, a coordinate axis cannot be bent at the origin. Since the origin is fixed, by continuity, each coordinate ray based at the origin must be preserved. Suppose, without loss of generality, that \(\Psi\) maps the positive \(x_{1}\)-axis to itself and maps the positive \(x_{2}\)-axis to the negative \(x_{1}\)-axis.
Let \(p\) lie on the positive \(x_{1}\)-axis and let \(q\) lie on the positive \(x_{2}\)-axis with \(r_{p}=r_{q}\). Let \(s=\Psi(q)\) so that \(s\) lies on the negative \(x_{1}\)-axis and \(r_{s}=r_{p}\). Also, \(\Psi(\sigma_{p,q})\) is a curve that crosses the \(x_{2}\)-axis, and in fact it must be an arc of a circle centered at the origin since \(\sigma_{p,q}\) is an arc on a circle centered at the origin. This implies there is a time \(t\in(0,1)\) such that \(\Psi(\sigma_{p,q})_{1}(t)=0\), but this is a contradiction by the previous step since there is no such time when \((\sigma_{p,q})_{1}(t)=0\). See Figure 17.
Figure 15. Arcs of circles in bijective correspondence under a proposed isometry \(\Psi\) in the first part of the proof of Theorem C.
Figure 16. The point \(s\) and the segment \(\gamma\) would need to be in bijective correspondence under a proposed isometry \(\Psi\) in the second part of the proof of Theorem C.
Finally since \(\Psi\) sends each coordinate axis to a coordinate axis, suppose, without loss of generality, that \(\Psi\) is the identity map on the axes. Let \(p\) be any point not on the axes and, without loss of generality, suppose \(p\) is in the first quadrant. Let \(q\) and \(s\) lie on the positive \(x_{1}\)-axis and positive \(x_{2}\)-axis respectively such that \(r_{s}=r_{q}=r_{p}\). Note that \(\Psi(q)=q\) and \(\Psi(s)=s\). Let \(r=d(q,p)\) and let \(\tilde{r}=d(s,p)\). The fact that \(p\) is the unique point that lies on the three circles \(C_{r}(q)\), \(C_{\tilde{r}}(s)\), and \(C_{r_{p}}(\theta)\) is left as an exercise for the reader. Since \(\Psi\) maps these three circles bijectively to themselves, \(p\) must also be preserved. Therefore, \(\Psi\) must be the identity. See Figure 18.
Since, throughout this process, the only transformations used to constrain \(\Psi\) are elements of \(D_{4}\), \(\Psi\) must be an element of \(D_{4}\).
## 5. Hyperbolicity
It is somewhat disappointing that the isometry group for \(D_{T}\) is so small. The fact that this space is not homogeneous indicates that it is perhaps not the best candidate as a representative for taxicab hyperbolic geometry. Here, we explore the extent to which \(D_{T}\) is nonetheless hyperbolic. Since we are not working with curvature, we consider alternatives that are more accessible.
### Playfair's Axiom
In Euclidean geometry, Playfair's Axiom states that given a line \(\ell\) and a point \(p\) not on \(\ell\), there exists at most one line passing through \(p\) and not intersecting \(\ell\). In \(D_{T}\), we find that, using a suitable definition of "line," given a line \(\ell\) and a point \(p\), there exist infinitely many lines passing through \(p\) and not intersecting \(\ell\).
In the taxicab plane \((\mathbb{R}^{2},d)\), since length minimizing curves are not unique, Playfair's axiom fails unless an additional constraint is imposed. Since Euclidean segments are minimizers, we can define a line in \((\mathbb{R}^{2},d)\) to be a Euclidean line, and then Playfair's axiom holds.
Figure 17. Arcs of circles in bijective correspondence under a proposed isometry \(\Psi\) in the third part of the proof of Theorem C.
Similarly, since \(\lambda_{p,q}\) is always a viable minimizer in \(D_{T}\), we could define lines to be extensions of \(\lambda\)'s such that no new corners or bends are introduced. Then two points \(p\) and \(q\) define a unique line in the sense that
\[\Lambda:(D_{T}\times D_{T})\backslash\Delta\to\{\text{lines}\}\]
is a well defined function. Here, \(\Delta=\{(p,p)\}\subset D_{T}\times D_{T}\). However \(s\in\Lambda(p,q)\) does not imply \(p\in\Lambda(s,q)\) so we do not have uniqueness in the Euclidean sense. See Figure 19 for an illustration of the set of lines through a point \(p\) using this definition.
With this definition, we have the following:
**Theorem 5.1**.: _Given a line \(\ell\) and a point \(p\) not on \(\ell\), there are infinitely many lines through \(p\) that do not intersect \(\ell\)._
Since \(D_{T}\) lacks homogeneity, a complete proof would require considering a number of cases. See Figure 20 for some examples. The reader is encouraged to consider other scenarios as well.
### Gromov hyperbolicity
For a somewhat more sophisticated perspective, we consider Gromov hyperbolicity. For a given pair of points \(x\) and \(y\), and base point \(z\), let \(G(x,y;z)\) be the Gromov product
\[G(x,y;z)=\frac{1}{2}\left[d(x,z)+d(y,z)-d(x,y)\right].\]
Introduced by Gromov (see for example [1]), a metric space is said to be \(\delta\)-hyperbolic if there exists \(\delta\geq 0\) such that for all \(x,y,z,w\), the following inequality holds:
\[G(x,y;w)\geq\min\left\{G(x,z;w),G(y,z;w)\right\}-\delta. \tag{5}\]
The usual hyperbolic plane, with sectional curvature -1, is 2-hyperbolic. Meanwhile, trees are 0-hyperbolic. The Gromov hyperbolicity of \(D_{T}\) is determined here, giving us a new concrete example in this area.
Figure 18. Since \(\Psi\) preserves \(q\), \(s\), and the origin, it must also preserve the circles centered at these points and containing \(p\), which in turn implies that \(\Psi\) must preserve \(p\).
Figure 19. The set of lines through a point \(p\). The thick segments and rays are used by multiple lines.
Figure 20. Various scenarios of a line \(\ell\), a point \(p\) not on \(\ell\), and two lines passing through \(p\) and not intersecting \(\ell\). The reader is encouraged to find other lines to justify the fact that there are infinitely many such lines through \(p\).
**Theorem D**.: \(D_{T}\) _is \(\ln(3)\)-hyperbolic._
As mentioned in the introduction, the norm being used for \(D_{T}\) differs from that of the Poincare disk by a factor of \(2\). If we were to rescale the norm for \(D_{T}\) to coincide, this would also double the \(\delta\) in Theorem D, so in this sense, \(D_{T}\) is slightly less hyperbolic than usual hyperbolic space.
Proof.: In [1], it is shown that if Inequality (5) holds for a single base point \(w_{0}\) and a constant \(\delta\), then it holds for all \(w\) with \(\delta\) replaced by \(2\delta\). With this in mind, let \(w_{0}=\theta\) and note that by Equation (4)
\[G(p,q;\theta) =\frac{1}{2}\left[d(p,\theta)+d(q,\theta)-d(p,q)\right]\] \[=\frac{1}{2}\left[r_{p}+r_{q}-\left(r_{p}+r_{q}-2r_{m(p,q)}\right)\right]\] \[=r_{m(p,q)}.\]
Our goal is to show that with \(\delta=\frac{1}{2}\ln(3)\), for any three points \(p\), \(q\), and \(s\)
\[G(p,q;\theta)\geq\min\left\{G(p,s;\theta),G(q,s;\theta)\right\}-\delta\]
which, using the formula above, can be written
\[r_{m(p,q)}\geq\min\left\{r_{m(p,s)},r_{m(q,s)}\right\}-\delta. \tag{6}\]
In general, if \(m(p,q)\) lies beyond \(m(p,s)\) or \(m(q,s)\), then \(\delta=0\) will suffice. This configuration occurs often and as the following four cases are considered, depending on the relative location of the points \(p\), \(q\), and \(s\), it is left to the reader to verify some of the simpler such scenarios.
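Before working through the cases, note that the claim is easy to probe numerically. The following Python sketch is our own sanity check, not part of the proof (the names `gromov` and `random_point` are ours); it samples random triples with base point \(\theta\) and confirms that the deficit in Inequality (6) stays below \(\tanh^{-1}\left(\frac{1}{2}\right)=\frac{1}{2}\ln(3)\):

```python
import random
from math import atanh, log

def ell(x, y):
    # ell from Section 2.1: sign-matched minimum of |x| and |y|.
    s = (x > 0) - (x < 0)
    return s * min(abs(x), abs(y)) if s == ((y > 0) - (y < 0)) else 0.0

def gromov(p, q):
    # G(p, q; theta) = r_{m(p,q)}: artanh of the taxicab norm of m(p, q).
    m = (ell(p[0], q[0]), ell(p[1], q[1]))
    return atanh(abs(m[0]) + abs(m[1]))

def random_point():
    # Rejection-sample a point of the open taxicab disk |x1| + |x2| < 1.
    while True:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(x[0]) + abs(x[1]) < 1:
            return x

delta = 0.5 * log(3)  # = atanh(1/2)
worst = 0.0
for _ in range(100000):
    p, q, s = random_point(), random_point(), random_point()
    worst = max(worst, min(gromov(p, s), gromov(q, s)) - gromov(p, q))
print(worst <= delta)  # True: the observed deficit never exceeds atanh(1/2)
```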
1. All three points are in the same quadrant. In this case, the only scenario where \(\delta=0\) is not sufficient is when neither \(p\) nor \(q\) lies beyond the other, and \(s\) lies beyond \(m(p,q)\). Suppose, without loss of generality, that all three points are in the first quadrant with \(p_{1}<q_{1}\) and \(p_{2}>q_{2}\). Then \(s_{1}>p_{1}\) so \(m(p,s)=(p_{1},\min\{p_{2},s_{2}\})\) and \((p_{1},s_{2})\) always lies beyond this point. Similarly, \(s_{2}>q_{2}\) so \(m(q,s)=(\min\{q_{1},s_{1}\},q_{2})\) and \((s_{1},q_{2})\) always lies beyond this point. Hence, since \(s_{1}+s_{2}<1\) \[\min\{r_{m(p,s)},r_{m(q,s)}\} \leq\min\{r_{(p_{1},s_{2})},r_{(s_{1},q_{2})}\}\] \[=\min\{\tanh^{-1}(p_{1}+s_{2}),\tanh^{-1}(s_{1}+q_{2})\}\] \[\leq\min\{\tanh^{-1}(p_{1}+1-s_{1}),\tanh^{-1}(s_{1}+q_{2})\}.\] The right hand side is maximized when \(p_{1}+1-s_{1}=s_{1}+q_{2}\) so that \(s_{1}=\frac{1+p_{1}-q_{2}}{2}\). Therefore, to satisfy Inequality (6), since \(m(p,q)=(p_{1},q_{2})\), we need \[\tanh^{-1}(p_{1}+q_{2})\geq\tanh^{-1}\left(\frac{1+p_{1}+q_{2}}{2}\right)-\delta.\] Solving for \(\delta\), we get \[\delta \geq\tanh^{-1}\left(\frac{1+p_{1}+q_{2}}{2}\right)-\tanh^{-1}(p_{1}+q_{2})\] \[=\tanh^{-1}\left(\frac{1-(p_{1}+q_{2})}{2-(1+p_{1}+q_{2})(p_{1}+q_{2})}\right).\]
Since \(0\leq p_{1}+q_{2}<1\), this is maximized when \(p_{1}+q_{2}=0\) and \(\delta=\tanh^{-1}\left(\frac{1}{2}\right)\). See Figure 21.
2. Two points are in one quadrant and the third point is in an adjacent quadrant. If \(p\) and \(q\) are in the same quadrant, then \(m(p,q)\) will always lie beyond either \(m(p,s)\) or \(m(q,s)\), so \(\delta=0\) will suffice. If \(p\) and \(q\) are in adjacent quadrants, then suppose, without loss of generality, that \(s\) and \(q\) lie in the first quadrant and \(p\) lies in the second quadrant. Then \(m(p,q)=(0,\min\{p_{2},q_{2}\})\) and \(m(p,s)=(0,\min\{p_{2},s_{2}\})\). From this, if \(p_{2}\leq q_{2}\) or \(s_{2}\leq q_{2}\), Inequality (6) is satisfied with \(\delta=0\). Otherwise, \(m(q,s)=(\min\{q_{1},s_{1}\},q_{2})\) and the point \((s_{1},q_{2})\) lies beyond \(m(q,s)\), and similarly the point \((0,s_{2})\) lies beyond \(m(p,s)=(0,\min\{p_{2},s_{2}\})\) which in turn lies beyond \(m(p,q)=(0,q_{2})\). Hence, again using the fact that \(s_{1}+s_{2}<1\), \[\min\{r_{m(p,s)},r_{m(q,s)}\} \leq\min\{\tanh^{-1}(s_{2}),\tanh^{-1}(s_{1}+q_{2})\}\] \[\leq\min\{\tanh^{-1}(1-s_{1}),\tanh^{-1}(s_{1}+q_{2})\}\] which is maximized when \(1-s_{1}=s_{1}+q_{2}\) so that \(s_{1}=\frac{1-q_{2}}{2}\). Therefore, Inequality (6) is satisfied if \[\tanh^{-1}(q_{2})\geq\tanh^{-1}\left(\frac{1+q_{2}}{2}\right)-\delta.\] Solving for \(\delta\), we find the largest \(\delta\) needed is \[\delta=\tanh^{-1}\left(\frac{1+q_{2}}{2}\right)-\tanh^{-1}(q_{2})=\tanh^{-1}\left(\frac{1}{2+q_{2}}\right)\]
Figure 21. The points of interest in the proof of \(\delta\)-hyperbolicity when all points are in one quadrant. In this example, \(p\) and \(s\) are positioned such that \(m(p,s)\) is just \(p\), and \((p_{1},s_{2})\) lies beyond this point. Meanwhile, neither \(q\) nor \(s\) lie beyond the other and \(m(q,s)=(s_{1},q_{2})\).
which is largest when \(q_{2}=0\) and \(\delta=\tanh^{-1}\left(\frac{1}{2}\right)\). See Figure 22.
3. Two points are in one quadrant and the third point is in the opposite quadrant. Then two of the three minimal points \(m(p,q)\), \(m(p,s)\), and \(m(q,s)\) are at the origin, so Inequality (6) holds with \(\delta=0\).
4. Each point is in its own quadrant. If \(p\) and \(q\) are in adjacent quadrants, then \(m(p,s)\) or \(m(q,s)\) lies at the origin and Inequality (6) holds with \(\delta=0\). If \(p\) and \(q\) are in opposite quadrants, then suppose, without loss of generality, that \(s\) lies in the first quadrant, \(p\) lies in the second, and \(q\) lies in the fourth. Then \(m(p,q)=\theta\), \(m(p,s)=(0,\min\{p_{2},s_{2}\})\), and \(m(q,s)=(\min\{q_{1},s_{1}\},0)\). Hence \(r_{m(p,s)}\leq\tanh^{-1}(s_{2})\) and \(r_{m(q,s)}\leq\tanh^{-1}(s_{1})\), so \[\min\left\{r_{m(p,s)},r_{m(q,s)}\right\}\leq\min\left\{\tanh^{-1}(s_{1}),\tanh^{-1}(s_{2})\right\}\] and since \(s_{1}+s_{2}<1\), the right hand side is bounded by \(\tanh^{-1}\left(\frac{1}{2}\right)\). See Figure 23.
The work up to now shows that for all \(p\), \(q\), and \(s\) in \(D_{T}\),
\[G(p,q;\theta)\geq\min\{G(p,s;\theta),G(q,s;\theta)\}-\tanh^{-1}\left(\frac{1}{2 }\right).\]
Hence, from [1], for all \(p\), \(q\), \(w\) and \(s\) in \(D_{T}\),
\[G(p,q;w)\geq\min\{G(p,s;w),G(q,s;w)\}-2\tanh^{-1}\left(\frac{1}{2}\right).\]
We show here that \(\delta=2\tanh^{-1}\left(\frac{1}{2}\right)=\ln(3)\) is necessary. Let \(p\), \(q\), \(w\), and \(s\) each lie in their own quadrant with \(p\) and \(q\) in opposite quadrants and \(w\) and \(s\) in opposite quadrants.
Figure 22. The points of interest in the proof of \(\delta\)-hyperbolicity when two points are in one quadrant and one point is in an adjacent quadrant. In this example, \(p\) and \(s\) are positioned such that \(m(p,s)=(0,p_{2})\), and \((0,s_{2})\) lies beyond this point. Meanwhile, neither \(q\) nor \(s\) lie beyond the other and \(m(q,s)=(s_{1},q_{2})\).
Then \(m(p,q)=m(w,s)=\theta\), so
\[G(p,q;w) =\frac{1}{2}\left[d(p,w)+d(q,w)-d(p,q)\right]\] \[=\frac{1}{2}\left[r_{p}+r_{w}-2r_{m(p,w)}+r_{q}+r_{w}-2r_{m(q,w)}- (r_{p}+r_{q}-2r_{m(p,q)})\right]\] \[=r_{w}-r_{m(p,w)}-r_{m(q,w)},\]
and similarly
\[G(p,s;w) =\frac{1}{2}\left[d(p,w)+d(s,w)-d(p,s)\right]\] \[=\frac{1}{2}\left[r_{p}+r_{w}-2r_{m(p,w)}+r_{s}+r_{w}-2r_{m(s,w)} -(r_{p}+r_{s}-2r_{m(p,s)})\right]\] \[=r_{w}-r_{m(p,w)}+r_{m(p,s)},\]
and
\[G(q,s;w)=r_{w}-r_{m(q,w)}+r_{m(q,s)}.\]
Hence the desired inequality reduces to
\[r_{w}-r_{m(p,w)}-r_{m(q,w)}\geq\min\{r_{w}-r_{m(p,w)}+r_{m(p,s)},r_{w}-r_{m(q,w )}+r_{m(q,s)}\}-\delta\]
which, canceling the \(r_{w}\) from each side, reduces to
\[-r_{m(p,w)}-r_{m(q,w)}\geq\min\{-r_{m(p,w)}+r_{m(p,s)},-r_{m(q,w)}+r_{m(q,s)} \}-\delta.\]
Now let \(s=(t,t)\), \(p=(-t,t)\), \(q=(t,-t)\), and \(w=(-t,-t)\). Then the various minimal points here are \((\pm t,0)\) and \((0,\pm t)\) and their corresponding radii are all \(\tanh^{-1}(t)\). Hence the inequality reduces to
\[-2\tanh^{-1}(t)\geq 0-\delta.\]
Figure 23. The points of interest in the proof of \(\delta\)-hyperbolicity when each point is in its own quadrant. In this example, \(p\) and \(s\) are positioned such that \(m(p,s)=(0,p_{2})\), and \((0,s_{2})\) lies beyond this point. Meanwhile, \(m(q,s)=(s_{1},0)\).
Since \(t\) can approach \(\frac{1}{2}\), we need \(\delta=2\tanh^{-1}\left(\frac{1}{2}\right)\). See Figure 24.
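The tightness configuration can also be checked numerically. The short sketch below (ours, for illustration; the helpers are redefined so that it is self-contained, under the same assumptions on \(r_{x}\) and \(m(p,q)\) as before) shows that the gap in the four-point inequality is exactly \(2\tanh^{-1}(t)\):

```python
import numpy as np
r = lambda x: np.arctanh(abs(x[0]) + abs(x[1]))
m = lambda p, q: [np.sign(a) * min(abs(a), abs(b)) if a * b > 0 else 0.0
                  for a, b in zip(p, q)]
d = lambda p, q: r(p) + r(q) - 2 * r(m(p, q))
G = lambda x, y, z: 0.5 * (d(x, z) + d(y, z) - d(x, y))

for t in (0.3, 0.45, 0.499):
    s, p, q, w = [t, t], [-t, t], [t, -t], [-t, -t]
    gap = min(G(p, s, w), G(q, s, w)) - G(p, q, w)
    print(gap, 2 * np.arctanh(t))   # gap = 2 arctanh(t) -> ln(3) as t -> 1/2
```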
## 6. Final thoughts and next steps
We finish with some ideas for future exploration related to our taxicab Poincaré disk and taxicab hyperbolic geometry more generally.
The Poincaré disk is one of many models for hyperbolic space. It would be interesting to look at other models from the taxicab perspective presented here. The upper half-plane model would be the next natural choice. We expect that identifying length-minimizing curves in this setting would require analysis comparable to what was done for \(D_{T}\). Interestingly, the isometry group for an upper half-plane model would be infinite, allowing at least for horizontal translations.
In \(D_{T}\), the result shown in Figure 13 is reflective of the fact that, while the global isometry group is small, there are local isometries involving translations along lines of slope \(\pm 1\). These local isometries may be related to the translational isometries expected in an upper half-plane model, thus providing a connection between the two models somewhat analogous to that of the upper half-plane model and Poincaré disk model for usual hyperbolic space.
Alternatively, developing a taxicab hyperbolic space could be attempted from the perspective of the isometry group. The isometry group for hyperbolic space is isomorphic to the group of Möbius transformations in the complex plane that map the unit circle to itself, \(SU(1,1)\). It would be interesting to search for a group similar to \(SU(1,1)\) that captures the desired properties for a taxicab space. It should act transitively on some set, the subgroup of elements that fix a point should be isomorphic to \(D_{4}\), and the resulting space should be hyperbolic in some sense.
Figure 24. The points of interest in the proof of \(\delta\)-hyperbolicity showing that a factor of two is necessary when \(w\neq 0\).
Finally, the structure of the Poincaré disk is closely related to inversion. Some initial attempts by the authors to develop a comparable transformation in the taxicab setting were unsuccessful, but more work is warranted in this direction.
|
2308.08601 | Custom Bell inequalities from formal sums of squares | Bell inequalities play a key role in certifying quantum properties for
device-independent quantum information protocols. It is still a major
challenge, however, to devise Bell inequalities tailored for an arbitrary given
quantum state. Existing approaches based on sums of squares provide results in
this direction, but they are restricted by the necessity of first choosing
measurement settings suited to the state. Here, we show how the sum of square
property can be enforced for an arbitrary target state by making an appropriate
choice of nullifiers, which is made possible by leaving freedom in the choice
of measurement. Using our method, we construct simple Bell inequalities for
several families of quantum states, including partially entangled multipartite
GHZ states and qutrit states. In most cases we are able to prove that the
constructed Bell inequalities achieve self-testing of the target state. We also
use the freedom in the choice of measurement to self-test partially entangled
two-qubit states with a family of settings with two parameters. Finally, we
show that some statistics can be self-tested with distinct Bell inequalities,
hence obtaining new insight on the shape of the set of quantum correlations. | Victor Barizien, Pavel Sekatski, Jean-Daniel Bancal | 2023-08-16T18:00:05Z | http://arxiv.org/abs/2308.08601v2 | # Custom Bell inequalities from formal sums of squares
###### Abstract
Bell inequalities play a key role in certifying quantum properties for device-independent quantum information protocols. It is still a major challenge, however, to devise Bell inequalities tailored for an arbitrary given quantum state. Existing approaches based on sums of squares provide results in this direction, but they are restricted by the necessity of first choosing measurement settings suited to the state. Here, we show how the sum of squares property can be enforced for an arbitrary target state by making an appropriate choice of nullifiers, which is made possible by leaving freedom in the choice of measurement. Using our method, we construct simple Bell inequalities for several families of quantum states, including partially entangled multipartite GHZ states and qutrit states. In most cases we are able to prove that the constructed Bell inequalities achieve self-testing of the target state. We also use the freedom in the choice of measurement to self-test partially entangled two-qubit states with a family of settings with two parameters. Finally, we show that some statistics can be self-tested with distinct Bell inequalities, hence obtaining new insight on the shape of the set of quantum correlations.
## I Introduction
One of the most striking characteristics of quantum theory is the fact that it does not admit a hidden variable description that is local, a property usually referred to as nonlocality [1]. This feature is at the heart of numerous phenomena and applications ranging from quantum paradoxes [2] to device-independent information processing, which enables the certification of quantum properties such as entanglement [3; 4] and randomness [5; 6; 7] without relying on an underlying description of the apparatuses involved [8; 9].
Bell inequalities are the tool of choice to study nonlocality: the violation of a Bell inequality certifies nonlocality, and any nonlocal behavior can be highlighted by a Bell inequality. Furthermore, quantum applications based on nonlocality are validated by Bell expressions: the length of a key distributed between two parties in a device-independent quantum key distribution protocol, for instance, is a direct function of a Bell score [10; 11; 12; 13; 9]. The nonlocal properties of quantum states and measurements can thus be investigated by studying Bell inequalities. However, apart from a few notable exceptions, Bell inequalities suited to quantum states and/or quantum measurements of interest are generally not known.
Indeed, Bell inequalities were initially studied for their ability to distinguish local from nonlocal behaviors. Given a number of measurement settings and possible outcomes, the set of local behaviors forms a polytope which is singled out by its facets: tight Bell inequalities [14]. Since a polytope has a finite number of facets, tight Bell inequalities are naturally of particular interest. The best known one is the Clauser-Horne-Shimony-Holt (CHSH) inequality, which involves two binary measurement settings per party [15]. This inequality has been studied extensively, and it is known to be maximally violated by performing complementary measurements on a two-qubit maximally entangled state \(|\phi^{+}\rangle=(|00\rangle+|11\rangle)/\sqrt{2}\). For this reason, the CHSH inequality is well suited to the study of this state's nonlocality. However, the direct relation between tight Bell inequalities and states of general interest, like maximally entangled states, essentially ends here.
Notably, the local polytope emerging when considering two measurements with three possible outcomes, a natural scenario for measurements on a three-dimensional quantum system, also has a unique new facet given by the so-called CGLMP inequality [16; 17]. Remarkably, this inequality is maximally violated by the two-qutrit state \(|\psi_{\rm CGLMP}\rangle\propto 2\,|00\rangle+(\sqrt{11}-\sqrt{3})\,|11\rangle+2\,|22\rangle\), which is non-maximally entangled [18; 19; 20]. This leaves open the question of identifying a Bell inequality suited to the maximally entangled state of two qutrits, or more generally to any other quantum state of particular interest.
A number of works provided partial answers to this question by successfully constructing Bell inequalities that are maximally violated by particular target states. Examples include Bell inequalities maximally violated by the maximally entangled state of two qutrits [21; 22; 23; 24], and by partially entangled two-qubit states [25]. Unfortunately, these approaches rely heavily on the specific structure or symmetry of the target state and do not generalize easily to arbitrary situations.
Other works managed to obtain Bell inequalities suited to generic situations by focusing on specific applications of nonlocality. This includes inequalities bounding optimally the communication cost [26], the min entropy [27; 28] or the von Neumann entropy [29]. However, these approaches depend on the full probability distributions and rely on the Navascues-Pironio-Acin (NPA) hierarchy [19; 30] to relate to quantum theory. They have thus mostly been considered numerically for given choices of measurements, requiring an a priori guess of the measurements that should be performed on the state of interest to obtain correlations on the boundary of the quantum set.
Further insight on the relation between Bell inequalities
and quantum states came from an independent line of work which showed that the two can be more strongly connected than anticipated. Namely, it was found that some Bell inequalities are not only maximally violated by specific states, but that these states are sometimes the only ones able to achieve their maximal violation (together with fundamentally equivalent realizations, such as isometrically equivalent ones) [31; 8]. When this is the case, it is said that the Bell inequality self-tests the quantum state and/or measurements. The self-testing property strongly supports the idea of associating to a quantum state the Bell inequalities that it can maximally violate.
Initially, the self-testing property was only observed in a few specific instances, including the CHSH inequality [31; 32], but it has now been confirmed in an overwhelming number of cases [33]. For instance, self-testing schemes have been found for all partially entangled states of two qubits through the tilted Bell inequality [34] as well as for maximally entangled states in arbitrary dimension [35]. It is still unknown whether all pure entangled states can be self-tested, but substantial results have been obtained along these lines, see [36; 37; 38; 39; 40; 41; 42; 43].
Note that some self-testing results are based on the knowledge of the full measurement statistics rather than solely on the maximal violation of a Bell inequality [44; 36; 40; 45; 8; 40]. Leaving aside the case of non-exposed points [46; 47], any result obtained in this way can also be certified by a Bell inequality due to the convex nature of the quantum set, but the appropriate Bell expression may be hard to find [48]. Whereas self-tests based on full statistics rely on numerous parameters, the ones based on a single quantity may be easier to use and lead to a wide range of applications, as in the case of partially entangled states self-tested from the tilted Bell inequality [36; 37; 40; 49; 50]. Constructing simple Bell inequalities is thus relevant even for states already known to be self-testable.
Among the techniques developed for self-testing, sums of squares (SOS) play an important role [33; 34; 51]. Indeed, significant properties of a Bell expression such as its Tsirelson bound can be inferred from its sum of squares decomposition [52].
Sums of squares have also been used to construct Bell expressions from a fixed choice of quantum state and measurements [53; 54; 24; 55]. Finally, the self-testing property itself was used to construct Bell inequalities, potentially for arbitrary multipartite states [50]. While generic, this last method also relies on a particular choice of measurements. Therefore, no known technique for constructing Bell expressions tailored to a specific quantum state is simultaneously generic, analytical, and flexible.
Here, we present a systematic method that enables the construction of Bell expressions for generic target quantum states with the guarantee that the state reaches the maximal value of the Bell expression. Our method is rooted in the simple requirement that the target state maximally violates the constructed Bell inequality. As an introduction to the construction of Bell expressions under this principle, we review the variational method, first introduced in [56]. To our knowledge, this method has not been described exhaustively in the literature. We then discuss some of the limitations of the variational method before introducing our approach based on formal sums of squares (SOS). Finally, we apply our method to several cases, construct each time the corresponding Bell inequalities, and discuss the relation to self-testing.
## II Conditions for the maximal violation of a Bell inequality
### General definitions
Consider a setting where \(n\) parties share a global state \(|\psi\rangle\) on which they may perform \(m\) different local measurements with \(k\) possible outcomes. This experiment is described by the conditional probability distribution \(\mathbf{P}=P(a_{1},\ldots,a_{n}|x_{1},\ldots,x_{n})\) of observing the outcomes \(a_{i}=1,\ldots,k\) given the possible measurement settings \(x_{i}=1,\ldots,m\). A _Bell expression_\(\beta\) in this scenario is a linear map that associates a Bell score
\[\beta(\mathbf{P})=\sum_{\mathbf{a},\mathbf{x}}\alpha_{\mathbf{a}|\mathbf{x}}P(\mathbf{a}|\mathbf{x}) \tag{1}\]
to every probability distribution \(\mathbf{P}\)[57]. Here, \(\alpha_{\mathbf{a}|\mathbf{x}}\) are the Bell expression's coefficients and \(\mathbf{a}=(a_{1},\ldots,a_{n})\), \(\mathbf{x}=(x_{1},\ldots,x_{n})\) are vectors containing the outcomes and setting choices of all parties. The well-known CHSH Bell inequality \(\beta\leq 2\) is a bound on the CHSH Bell expression \(\beta\) defined by [15]
\[\alpha_{a_{1},a_{2}|x_{1},x_{2}}=(-1)^{a_{1}+a_{2}+(x_{1}-1)(x_{2}-1)} \tag{2}\]
with \(a_{1},a_{2},x_{1},x_{2}=1,2\). We emphasize that a Bell expression Eq. (1) is an object acting on probability space and is fully defined by the Bell coefficients \(\alpha_{\mathbf{a}|\mathbf{x}}\).
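To make Eqs. (1) and (2) concrete, the following minimal Python sketch (ours, for illustration) evaluates the CHSH score on the singlet with the standard complementary qubit measurements anticipated further below in the text; the result is the well-known Tsirelson bound \(2\sqrt{2}\).

```python
import numpy as np
from itertools import product

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)             # |phi+>
A = {1: Z, 2: X}                                          # Alice: Z, X
B = {x2: (Z - (-1)**x2 * X) / np.sqrt(2) for x2 in (1, 2)}

def proj(M, a):                    # projector on outcome a=1 -> +1, a=2 -> -1
    return (np.eye(2) + (-1)**(a + 1) * M) / 2

score = sum((-1)**(a1 + a2 + (x1 - 1) * (x2 - 1))
            * (phi @ np.kron(proj(A[x1], a1), proj(B[x2], a2)) @ phi)
            for a1, a2, x1, x2 in product((1, 2), repeat=4))
print(score)   # 2*sqrt(2) ~ 2.828
```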
Given a choice of measurement projectors \(\{\hat{\Pi}^{(i)}_{a|x}\}\) with \(\hat{\Pi}^{(i)}_{a|x}\in\mathcal{L}(\mathcal{H}^{(i)})\) and \(\sum_{a}\hat{\Pi}^{(i)}_{a|x}=\mathds{1}\), a Bell expression \(\beta\) gives rise to a _Bell operator_
\[\hat{S}=\sum_{\mathbf{a},\mathbf{x}}\alpha_{\mathbf{a}|\mathbf{x}}\hat{\Pi}^{(1)}_{a_{1}|x_{1} }\otimes\ldots\otimes\hat{\Pi}^{(n)}_{a_{n}|x_{n}} \tag{3}\]
which acts on the full Hilbert space \(\mathcal{H}=\mathcal{H}^{(1)}\otimes\ldots\otimes\mathcal{H}^{(n)}\); \(\hat{S}\in\mathcal{L}(\mathcal{H})\). In the case of binary outcomes (\(k=2\)), the measurements can also be described in terms of measurement operators \(\hat{M}^{(i)}_{x}=\hat{\Pi}^{(i)}_{1|x}-\hat{\Pi}^{(i)}_{2|x}\) with eigenvalue \(\pm 1\) for each party. The Bell operator can then be rewritten in the form
\[\hat{S}=\sum_{x_{1},\ldots,x_{n}\geq 0}c_{x_{1},\ldots,x_{n}}\,\hat{M}^{(1)}_{x_{ 1}}\otimes\cdots\otimes\hat{M}^{(n)}_{x_{n}}, \tag{4}\]
where \(c_{x_{1},\ldots,x_{n}}\in\mathbb{R}\) and we set \(\hat{M}^{(i)}_{0}=\mathds{1}\). Bell operators satisfy
\[\beta(\mathbf{P})=\langle\psi|\,\hat{S}\,|\psi\rangle \tag{5}\]
for every state \(|\psi\rangle\) [58]. In the case of the CHSH expression, choosing the optimal Pauli measurements \(\hat{M}_{1}^{(1)}=\hat{Z}_{A}\), \(\hat{M}_{2}^{(1)}=\hat{X}_{A}\) and \(\hat{M}_{x_{2}}^{(2)}=(\hat{Z}_{B}-(-1)^{x_{2}}\hat{X}_{B})/\sqrt{2}\) gives rise to the Bell operator
\[\hat{S}=(\hat{X}_{A}\hat{X}_{B}+\hat{Z}_{A}\hat{Z}_{B})/2. \tag{6}\]
Here we omit the tensor notation and refer to parties with the letters \(A\) and \(B\). This operator is a well-known entanglement witness [59] and has the remarkable property of identifying the singlet state within the two-qubit state space as the only state with maximal eigenvalue.
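This spectral property is easy to verify numerically; a minimal sketch (ours, for illustration):

```python
import numpy as np

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
S = (np.kron(X, X) + np.kron(Z, Z)) / 2
vals, vecs = np.linalg.eigh(S)
print(vals)                       # [-1, 0, 0, 1]: the eigenvalue 1 is simple
print(vecs[:, -1] * np.sqrt(2))   # proportional to (1, 0, 0, 1), i.e. |phi+>
```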
Clearly, a Bell operator depends both on the Bell coefficients \(\alpha_{\mathbf{a}|\mathbf{x}}\) and on the choice of measurements. Furthermore, it acts on the parties' Hilbert space (e.g. on \(\mathbb{C}^{2^{n}}\) when \(|\psi\rangle\) is an \(n\)-qubit state) rather than on probabilities. Hence, for a fixed Bell expression \(\beta\), different choices of measurements \(\hat{M}_{x}^{(i)}\) give rise to different Bell operators \(\hat{S}\). Similarly, a given Bell operator \(\hat{S}\) gives rise to different Bell expressions \(\beta\) depending on the chosen set of measurements \(\hat{M}_{x}^{(i)}\). In the following, we explore the constraints that a maximal violation imposes on the relation between Bell expressions and Bell operators and use them to identify Bell expressions that are relevant to a target state \(|\psi\rangle\).
### Variational method
The strongest connection known to date between Bell expressions \(\beta\) and quantum states \(|\psi\rangle\) occurs when the maximal score of a Bell expression self-tests a specific state [33], i.e. when its maximum quantum score is only compatible with \(|\psi\rangle\) (up to redundant transformations). However, it is not clear how a Bell expression can be generically constructed to self-test an arbitrary state. For this reason, we now relax this problem and consider conditions on Bell expressions that are only necessary for self-testing \(|\psi\rangle\). This is an easier task and it can be used to discard Bell expressions \(\beta\) that have no chance of self-testing \(|\psi\rangle\).
One such necessary condition is for the maximal value of \(\beta\) to be achieved by measuring \(|\psi\rangle\). In this case, a first rather naive observation is that there must be an implementation of the measurement operators \(\hat{M}_{x}^{(i)}\) such that \(|\psi\rangle\) is an eigenstate with maximal eigenvalue of the corresponding Bell operator Eq. (4).
When \(|\psi\rangle=|\phi^{+}\rangle\), an example of such an operator is \(\hat{S}\) given in Eq. (6): \(|\phi^{+}\rangle\) is its only eigenstate with eigenvalue 1. Now, depending on the actual measurement operators \(\hat{M}_{x}^{(i)}\), this Bell operator can correspond to various Bell expressions. Considering arbitrary qubit measurements in the \(\hat{X}\)-\(\hat{Z}\) plane
\[\hat{A}_{x} =\cos(a_{x})\hat{Z}_{A}+\sin(a_{x})\hat{X}_{A} \tag{7a}\] \[\hat{B}_{y} =\cos(b_{y})\hat{Z}_{B}+\sin(b_{y})\hat{X}_{B} \tag{7b}\]
with \(a_{x},b_{y}\in\mathbb{R}\), we obtain all such Bell expressions:
\[\beta= \Big{[}\cos(a_{2}-b_{2})\langle A_{1}B_{1}\rangle-\cos(a_{2}-b_{1 })\langle A_{1}B_{2}\rangle \tag{8}\] \[-\cos(a_{1}-b_{2})\langle A_{2}B_{1}\rangle+\cos(a_{1}-b_{1}) \langle A_{2}B_{2}\rangle\Big{]}\] \[\times\frac{1}{2\sin(a_{1}-a_{2})\sin(b_{1}-b_{2})},\]
where the notation \(\langle A_{x}B_{y}\rangle\) without hats stands for
\[\langle A_{x}B_{y}\rangle=\sum_{a,b=1}^{2}(-1)^{a+b}P(a,b|x,y). \tag{9}\]
At this stage, we note that these expressions may not all be good candidates for Bell expressions maximized by \(|\psi\rangle\) as implementations with other measurement settings may give rise to Bell operators with eigenvalues larger than 1. Consider for example the Bell expression for the choices of parameter \(a_{1}=0\), \(a_{2}=\pi/2\), \(b_{y}=-(-1)^{y}\pi/6\)
\[\beta=\frac{1}{2\sqrt{3}}\Big{[}\langle A_{1}B_{1}\rangle+\langle A_{1}B_{2} \rangle+\sqrt{3}\langle A_{2}B_{1}\rangle-\sqrt{3}\langle A_{2}B_{2}\rangle \Big{]}. \tag{10}\]
By construction, its value for the maximally entangled state \(|\phi^{+}\rangle\) with this choice of measurement is 1, but if Bob changes his measurements to \(\hat{B}_{y}=\cos(\pi/4)\hat{Z}_{B}-(-1)^{y}\sin(\pi/4)\hat{X}_{B}\), this value increases up to \(\beta(\mathbf{P})=\frac{\sqrt{2}}{2\sqrt{3}}(1+\sqrt{3})\simeq 1.12>1\). For these settings, measurements on a partially entangled state can also reach the value 1.
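This numerical claim can be reproduced with a few lines; the sketch below (ours, for illustration) uses the settings given in the text:

```python
import numpy as np

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
corr = lambda A, B: phi @ np.kron(A, B) @ phi

def beta(B1, B2):   # the Bell expression of Eq. (10), with A1 = Z, A2 = X
    return (corr(Z, B1) + corr(Z, B2)
            + np.sqrt(3) * (corr(X, B1) - corr(X, B2))) / (2 * np.sqrt(3))

Bset = lambda ang, y: np.cos(ang) * Z - (-1)**y * np.sin(ang) * X
print(beta(Bset(np.pi/6, 1), Bset(np.pi/6, 2)))   # 1.0 at the design settings
print(beta(Bset(np.pi/4, 1), Bset(np.pi/4, 2)))   # ~1.115 > 1 for Bob's new settings
```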
So far, we only considered the relation between Bell operators and Bell expressions for a fixed choice of measurement settings, but if \(\beta\) is maximized by \(|\psi\rangle\), not only must \(|\psi\rangle\) be a maximal eigenvector for the Bell operator corresponding to the chosen measurements, but the corresponding eigenvalue must be at least the Bell score attainable by any other implementation. Ultimately, this should be the case even for implementations involving measurements of arbitrary dimension, which are arguably hard to parametrize. It should already be satisfied, however, in the qubit space. This case is easier to parametrize and constitutes a necessary condition. Even then, this condition is difficult to verify globally on the whole space as shown above. However, the condition takes a simple form when considering small perturbations of the ideal implementation.
To see this, consider the Bell expression Eq. (8) with measurements close to Eq. (7), namely with
\[\hat{A}_{x} \rightarrow\hat{A}_{x}+\delta_{A_{x}}(-\sin(a_{x})\hat{Z}_{A}+\cos(a _{x})\hat{X}_{A}) \tag{11a}\] \[\hat{B}_{y} \rightarrow\hat{B}_{y}+\delta_{B_{y}}(-\sin(b_{y})\hat{Z}_{B}+\cos(b _{y})\hat{X}_{B}) \tag{11b}\]
for small \(\delta\)s. The state \(|\phi^{+}\rangle\) is a local optimum only if the expectation value of the resulting perturbed Bell
operator remains unchanged to first order, i.e. if
\[\begin{split}&\left\langle\phi^{+}\right|\frac{\partial\hat{S}}{ \partial\delta_{A_{x}}}\left|\phi^{+}\right\rangle\propto\cos(a_{1})\cos(a_{2 })+\sin(a_{1})\sin(a_{2})=0\\ &\left\langle\phi^{+}\right|\frac{\partial\hat{S}}{\partial \delta_{B_{y}}}\left|\phi^{+}\right\rangle\propto\cos(b_{1})\cos(b_{2})+\sin( b_{1})\sin(b_{2})=0.\end{split} \tag{13}\]
In the range \(-\frac{\pi}{2}\leq a_{1},b_{1}\leq\frac{\pi}{2}\), these conditions are equivalent to \(a_{2}=a_{1}\pm\frac{\pi}{2}\) and \(b_{2}=b_{1}\pm\frac{\pi}{2}\), i.e. imposing that the measurements be complementary for Alice and Bob. Choosing \(a_{2}=a_{1}+\pi/2\) and \(b_{2}=b_{1}-\pi/2\), the corresponding Bell expression can be expressed as a function of a single parameter \(c=b_{1}-a_{1}\):
\[\begin{split}\beta=&\cos(c)\langle A_{1}B_{1} \rangle+\sin(c)\langle A_{1}B_{2}\rangle\\ &+\sin(c)\langle A_{2}B_{1}\rangle-\cos(c)\langle A_{2}B_{2} \rangle.\end{split} \tag{14}\]
From this example, we see how the condition of local optimality eliminates many candidate Bell expressions for \(|\phi^{+}\rangle\). The resulting family contains CHSH as a special case, but also includes additional Bell expressions. One can verify that these expressions self-test the desired state for all \(c\in(0,\pi/4]\). We comment later on reasons why specific values of \(c\) might be more interesting than others in this particular example.
Note that Eq. (13) contains a redundancy: when the considered perturbation corresponds to a common rotation of both measurements \(\delta_{A_{1}}=\delta_{A_{2}}\), it can be seen as a local unitary transformation of the state and the first order condition is always fulfilled. Thus, this condition is only sensitive to variations of the relative angle \(\delta_{A}=\delta_{A_{1}}-\delta_{A_{2}}\) between the two measurements. The same holds on Bob's side, leaving one equality constraint per party. In general, parametrization up to a local unitary can be achieved by parametrizing for each party all measurements except one.
This example illustrates the usage of the variational principle to find a Bell expression maximized by a given quantum state. Here, we started from a specific Bell operator in Eq. (8) that is maximized by \(\left|\phi^{+}\right\rangle\), but clearly, any other similar choice could have been made and the same procedure could be followed for an arbitrary state \(\left|\psi\right\rangle\). We can thus formulate the variational method as follows:
1. Choose a Bell operator \(\hat{S}\) which admits the target state \(\left|\psi\right\rangle\) as its eigenstate with maximal eigenvalue.
2. Parametrize the measurement bases, e.g. \(\hat{M}_{x}^{(i)}\) in the binary case, for each party.
3. Define a corresponding parametrization of Bell expressions \(\beta\) by expressing the Bell operator \(\hat{S}\) in terms of the measurement operators \(\hat{M}_{x}^{(i)}\).
4. Consider a perturbation of the measurements \[\hat{M}_{x}^{(i)}\rightarrow\hat{M}_{x}^{(i)}+\delta_{x}^{(i)}\hat{M}_{x}^{( i)\perp}.\] (15)
5. Solve the first order equations \[\left\langle\psi\right|\frac{\partial\hat{S}}{\partial\delta_{x}^{(i)}}\left| \psi\right\rangle=0\ \ \forall x,i.\] (16)
Note that in Step 2 the measurements should be chosen such that \(\text{span}\{\hat{\Pi}_{a_{1}|x_{1}}^{(1)}\otimes\cdots\otimes\hat{\Pi}_{a_{n} |x_{n}}^{(n)}\}\) contains the Bell operator \(\hat{S}\). Furthermore, when the measurement operators for at least one party define an overcomplete operator basis (not counting the identity), several choices of \(\beta\) could be made in Step 3. In this case, the derivative in Eq. (16) is to be understood accordingly.
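As a concrete sanity check of Steps 4 and 5 for the example above, the first order condition can be differentiated numerically, using the fact that \(\langle\phi^{+}|\hat{A}(s)\otimes\hat{B}(t)|\phi^{+}\rangle=\cos(s-t)\) for measurements in the \(\hat{X}\)-\(\hat{Z}\) plane. The sketch below (ours, for illustration) fixes the coefficients of Eq. (8) at nominal angles and perturbs only the measurement angles:

```python
import numpy as np

def bell_value(a, b, am, bm):
    # Coefficients of Eq. (8) are fixed by the nominal angles a, b, while the
    # measurements are evaluated at the (possibly perturbed) angles am, bm.
    # On |phi+>, <A(s) B(t)> = cos(s - t) for settings in the X-Z plane.
    D = 2 * np.sin(a[0] - a[1]) * np.sin(b[0] - b[1])
    c = [[np.cos(a[1] - b[1]), -np.cos(a[1] - b[0])],
         [-np.cos(a[0] - b[1]), np.cos(a[0] - b[0])]]
    return sum(c[x][y] * np.cos(am[x] - bm[y])
               for x in (0, 1) for y in (0, 1)) / D

eps, b = 1e-6, [np.pi/5, np.pi/5 - np.pi/2]
for a in ([0.0, np.pi/2], [0.0, np.pi/3]):   # complementary vs not
    g = (bell_value(a, b, [a[0] + eps, a[1]], b)
         - bell_value(a, b, [a[0] - eps, a[1]], b)) / (2 * eps)
    print(g)   # ~0 for complementary settings, nonzero (~ cos(a1-a2)) otherwise
```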
Eigenvalue perturbation implies that Eq. (16) must be satisfied when \(\beta\) is maximized by the considered state and settings. Importantly, the method gives a _necessary condition_ for the Bell expression candidate \(\beta\) obtained by the choice of operator \(\hat{S}\) and of settings \(\hat{M}_{x}^{(i)}\) that was made in the first place.
Note that expression \(\beta\) may still be maximally violated by the considered state even when condition Eq. (16) is not verified. This is however only possible with other measurement settings and thus a different corresponding operator \(\hat{S}^{\prime}\). For example, we showed previously that Bell expression Eq. (10) is not a "good" candidate for the operator choice \(\hat{S}=(\hat{X}_{A}\hat{X}_{B}+\hat{Z}_{A}\hat{Z}_{B})/2\). Nevertheless, its maximum \(2/\sqrt{3}\simeq 1.15>1\) is attained by the state \(\left|\phi^{+}\right\rangle\) for the different measurement settings \(a_{1}=0\), \(a_{2}=\pi/2\), \(b_{y}=-(-1)^{y}\pi/3\), and Eq. (16) is satisfied for the corresponding Bell operator. It may thus be helpful to consider several Bell operators \(\hat{S}\).
The operator \(\hat{S}\) belongs to a finite-dimensional product Hilbert space and therefore can in principle be fully parametrized. The constraints here are that the state \(|\psi\rangle\) should belong to the support of the considered Hilbert space and be a maximal eigenstate of \(\hat{S}\). Furthermore, finite-dimensional measurements can also be expressed with a finite number of parameters. In all generality, the whole variational method can therefore be parametrized with finitely many parameters. When considering a complete parametrization rather than a particular choice of \(\hat{S}\) and of the measurements, the implications of Eq. (16) become stronger.
First, if an expression \(\beta\) is never obtained for all possible \(\hat{S}\) and \(\hat{M}_{x}^{(i)}\) verifying Eq. (16), then this expression cannot be maximized by the considered state. This implies that this expression cannot be used to self-test the considered state. When considering a complete parametrization, the variational method can thus be considered as a fully
necessary condition for both Bell expression maximization and self-testing 2.
Footnote 2: Note that when using the variational method to construct self-testing candidates, the operator \(\hat{S}\) can be further restrained to admit the target state \(\ket{\psi}\) as a unique maximum eigenstate.
Second, if a choice of measurements never verifies Eq. (16) for all possible \(\hat{S}\), then these settings cannot be used in any expression maximized by the considered state. In particular, these settings cannot be used to self-test the considered state, or in other words, the realisation corresponding to the considered state and those measurements cannot be self-tested.
From the previous example, we note that an easy way to choose the initial Bell operator \(\hat{S}\) is to identify a set of operators \(\hat{S}_{i}\) that are stabilizers of the target state, \(\hat{S}_{i}\ket{\psi}=\ket{\psi}\), and such that the \(\hat{S}_{i}\) have eigenvalues in \([-1,1]\). Then any convex sum of several of these, \(\hat{S}=\sum_{i}p_{i}\hat{S}_{i}\) with \(p_{i}\geq 0\) and \(\sum_{i}p_{i}=1\), results in a valid operator \(\hat{S}\).
As demonstrated in previous works, the variational method allows one to construct insightful Bell expressions tailored to various states. In [56], the method is used to find an expression self-testing the four-qubit linear cluster state. The method also provided a self-test more robust to noise than the tilted CHSH inequality for the partially entangled two-qubit states [60] through
\[\begin{split}\frac{\langle A_{1}B_{1}\rangle+\langle A_{1}B_{2} \rangle}{2\cos(b_{\theta})}+s_{2\theta}\frac{\langle A_{2}B_{1}\rangle-\langle A _{2}B_{2}\rangle}{2\sin(b_{\theta})}\\ +\frac{1}{2}c_{2\theta}\left(\langle A_{1}\rangle+\frac{\langle B _{1}\rangle+\langle B_{2}\rangle}{2\cos(b_{\theta})}\right)\preceq 2,\end{split} \tag{17}\]
where \(b_{\theta}=\pi/2-\arctan\sqrt{\frac{1+\frac{1}{4}c_{2\theta}^{2}}{s_{2\theta}^ {2}}}\). The associated measurement settings are given by3:
Footnote 3: Note that we changed the parametrization of the measurement angles compared to what is presented in [60] in order to keep coherence throughout this article.
\[\begin{cases}\hat{M}_{1}^{(1)}=\hat{Z}_{A},\quad\hat{M}_{2}^{(1)}=\hat{X}_{A},\\ \hat{M}_{y}^{(2)}=\cos(b_{\theta})\hat{Z}_{B}-(-1)^{y}\sin(b_{\theta})\hat{X}_{B}.\end{cases} \tag{18}\]
#### ii.2.1 Second order condition
For a stationary value to be a maximum, the first derivative must be zero, but the second derivative must also be non-positive. The variational approach can thus be pushed one step further. As we show below, the negativity of the second derivative is often verified. Furthermore, in the case of multi-dimensional variation, the associated Hessian provides interesting insight about the interplay between various measurement parameters.
For a perturbation of the operator \(\hat{S}\) induced by an infinitesimal change of the measurement settings in the Bell expression according to Eq. (15), the eigenvalue 1 of \(\ket{\psi}\) is a local maximum iff (in addition to Eq. (16)) the following semi-definite condition holds:
\[\gamma\preceq 0,\text{ where }\ \gamma=\mu+\nu \tag{19a}\] \[\mu_{ij}=\bra{\psi}\frac{\partial^{2}\hat{S}}{\partial\delta_{i}\partial\delta_{j}}\ket{\psi}\] (19b) \[\nu_{ij}=2\sum_{l}\frac{\bra{\psi}\frac{\partial\hat{S}}{\partial\delta_{i}}\ket{\psi_{l}}\bra{\psi_{l}}\frac{\partial\hat{S}}{\partial\delta_{j}}\ket{\psi}}{1-\lambda_{l}}. \tag{19c}\]
Here, \(\ket{\psi_{l}}\) are the other eigenstates of \(\hat{S}\) with the eigenvalues \(\lambda_{l}\), \(\mu\) accounts for the direct second order variation of the Bell operator on \(\ket{\psi}\), while \(\nu\) accounts for the variation of the eigenstate associated to the maximal eigenvalue [61].
As an example, consider the case of the Bell operator Eq. (6) with measurements Eq. (7). The second order perturbation of the measurements is given by
\[\hat{A}_{x} \rightarrow\hat{A}_{x}+\delta_{a_{x}}(-\sin(a_{x})\hat{Z}_{A}+\cos(a_{x})\hat{X}_{A})-\frac{1}{2}\delta_{a_{x}}^{2}\hat{A}_{x}, \tag{20}\] \[\hat{B}_{y} \rightarrow\hat{B}_{y}+\delta_{b_{y}}(-\sin(b_{y})\hat{Z}_{B}+\cos(b_{y})\hat{X}_{B})-\frac{1}{2}\delta_{b_{y}}^{2}\hat{B}_{y}.\]
Considering only variations of the second measurement of each party (by invariance under local unitaries), we can impose the first order conditions and compute the Hessian matrix:
\[\gamma=\begin{pmatrix}-1&\cos(2c)\\ \cos(2c)&-1\end{pmatrix}. \tag{21}\]
Once again the second order condition only depends on the single parameter \(c=b_{1}-a_{1}\). The eigenvalues of \(\gamma\) are \(\{-2\sin(c)^{2},-2\cos(c)^{2}\}\). For \(c\in(0,\pi/4]\) those are strictly negative and are equal when \(c=\pi/4\). These eigenvalues describe the decrease of the maximal value of the Bell operator to second order along the principal measurement perturbation directions. When both eigenvalues are equal, we recover the CHSH expression. One can thus understand CHSH as the solution among all expressions in Eq. (14) which behaves the most uniformly with respect to measurement perturbations.
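These second-order claims can be verified by a finite-difference computation of the maximal eigenvalue. A minimal sketch (ours, for illustration; Eq. (14) is normalized here so that the maximal value is \(1\)):

```python
import numpy as np

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
M = lambda t: np.cos(t) * Z + np.sin(t) * X

def lam(c, d1, d2):
    # Maximal eigenvalue (normalized to 1 at d1 = d2 = 0) of the Bell operator
    # of Eq. (14) when the second measurement of each party is rotated by d1, d2.
    S = (np.cos(c) * np.kron(M(0), M(c))
         + np.sin(c) * np.kron(M(0), M(c - np.pi/2 + d2))
         + np.sin(c) * np.kron(M(np.pi/2 + d1), M(c))
         - np.cos(c) * np.kron(M(np.pi/2 + d1), M(c - np.pi/2 + d2)))
    return np.linalg.eigvalsh(S)[-1] / 2

c, e = 0.6, 1e-4
H = np.empty((2, 2))
H[0, 0] = (lam(c, e, 0) - 2 * lam(c, 0, 0) + lam(c, -e, 0)) / e**2
H[1, 1] = (lam(c, 0, e) - 2 * lam(c, 0, 0) + lam(c, 0, -e)) / e**2
H[0, 1] = H[1, 0] = (lam(c, e, e) - lam(c, e, -e)
                     - lam(c, -e, e) + lam(c, -e, -e)) / (4 * e**2)
print(H)   # ~ [[-1, cos 2c], [cos 2c, -1]], in agreement with Eq. (21)
```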
#### ii.2.2 Limitations of the variational method
As mentioned earlier, the variational method often allows one to construct Bell expressions which not only have the desired maximization property, but sometimes also achieve self-testing of the target state. However, the method only provides in general a necessary condition for a Bell inequality to be maximally violated by the target state. Indeed, since it only focuses on local extrema, it may fail to provide Bell expressions with matching global Tsirelson bound: the local maximum may fail to be a global one. In this section, we provide an example
where the maximum obtained by the variational method is local but not global, thus showing that this is indeed a limitation of the method.
Concretely, we apply the method to the partially entangled two-qubit state \(\ket{\phi_{\theta}}=c_{\theta}\ket{00}+s_{\theta}\ket{11}\) for \(\theta\in(0,\pi/4]\) using the Bell operator
\[\hat{S}_{\theta,p,q}=p\hat{Z}_{A}\hat{Z}_{B}+(1-p)(s_{2\theta}\hat{X}_{A}\hat{ X}_{B}+c_{2\theta}(q\hat{Z}_{A}+(1-q)\hat{Z}_{B})) \tag{22}\]
obtained by combining two stabilizers. This ensures that the Bell operator satisfies \(\hat{S}_{\theta,p,q}\ket{\phi_{\theta}}=\ket{\phi_{\theta}}\). In addition, \(\ket{\phi_{\theta}}\) is the unique maximal eigenvector of \(\hat{S}_{\theta,p,q}\) when
\[4p+(1-p)^{2}2q(1-q)(\cos(4\theta)+1)>0. \tag{23}\]
Next we consider arbitrary measurements in the \(\hat{X}\)-\(\hat{Z}\) plane parameterized as in Eq. (7). The first order conditions are:
\[\begin{split}& ps_{a_{2}}s_{a_{1}}+(1-p)(s_{2\theta}^{2}c_{a_{2}}c_{ a_{1}}+qc_{2\theta}^{2}s_{a_{2}}s_{a_{1}})=0,\\ & ps_{b_{2}}s_{b_{1}}+(1-p)(s_{2\theta}^{2}c_{b_{2}}c_{b_{1}}+(1- q)c_{2\theta}^{2}s_{b_{2}}s_{b_{1}})=0.\end{split} \tag{24}\]
For concreteness, let us now set \(\theta=\pi/8,a_{1}=0,b_{1}=-b_{2}=\pi/6\). The value of \(a_{2}\) is fixed by the first order conditions to be \(a_{2}=\pi/2\), and the second equation fixes the value of \(p\) to
\[p(q)=\frac{2+q}{4+q}. \tag{25}\]
Eq. (23) is then fulfilled if \(q\in[0,4]\). This fixes the candidate Bell expression to
\[\begin{split}&\beta_{q}=p(q)\frac{\langle A_{1}B_{1}\rangle+ \langle A_{1}B_{2}\rangle}{\sqrt{3}}+(1-p(q))\\ &\quad\times\left[\frac{\langle A_{2}B_{1}\rangle-\langle A_{2} B_{2}\rangle}{\sqrt{2}}+\frac{q\langle A_{1}\rangle}{\sqrt{2}}+(1-q)\frac{ \langle B_{1}\rangle+\langle B_{2}\rangle}{\sqrt{6}}\right].\end{split} \tag{26}\]
We still need to check the second order condition to make sure that the value \(1\) is a local maximum. The non-zero eigenvalues of the Hessian matrix can be computed numerically and are all negative within the range of \(q\in[0,4]\), thus guaranteeing that the Bell expression corresponds to a local maximum.
In order to assess whether this local maximum is also global, we compute the quantum maximum numerically (see Fig. 1). As we can see, the candidate Bell expression admits a quantum maximum strictly greater than \(1\) when \(q>2.83\). More than this, its local bound is also larger than \(1\): local realizations far away from the point of study can reach a larger value of the expression. One such point is given by the deterministic strategy \(A_{x}=1\), \(B_{y}=-(-1)^{y}\), which gives \(\beta_{3}(\mathbf{P}_{L})=1.01>1\). Only in a smaller region of the parameter \(q\), approximately \((0,2.83)\), does the variational method give Bell expressions with the expected maximal value.
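The local bound of \(\beta_{q}\) is straightforward to evaluate by enumerating deterministic strategies; a short sketch (ours, for illustration):

```python
import numpy as np
from itertools import product

def beta_q(q, A1, A2, B1, B2):
    # candidate Bell expression of Eq. (26) evaluated on a deterministic strategy
    p = (2 + q) / (4 + q)
    return (p * (A1 * B1 + A1 * B2) / np.sqrt(3)
            + (1 - p) * ((A2 * B1 - A2 * B2) / np.sqrt(2)
                         + q * A1 / np.sqrt(2)
                         + (1 - q) * (B1 + B2) / np.sqrt(6)))

for q in (1.0, 3.0):
    best = max(beta_q(q, *s) for s in product((-1, 1), repeat=4))
    print(q, best)  # stays below 1 at q=1; ~1.0102 > 1 at q=3 (A_x=1, B_y=-(-1)^y)
```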
### Sum of squares decomposition method
The variational method constructs Bell expressions that are potentially maximized by a target quantum state, but it is fundamentally unable to guarantee that no other quantum realization could achieve a higher score. We now present a method that is also able to provide Bell expressions tailored to a target state with the guarantee that no other quantum state can attain a higher score.
#### iii.3.1 Formal polynomials
Let \(\{X_{i}\}_{i}\) be a set of indeterminates in an associative algebra over a field \(\mathbb{K}\). A _formal multivariate polynomial_ is a linear combination [62]
\[S=\sum_{i}\alpha_{i}M_{i} \tag{27}\]
where \(M_{i}\) are monomials, i.e. products of indeterminates such as \(X_{1}\), \(X_{1}X_{2}\) or \(X_{1}^{2}X_{2}X_{1}\), weighted by scalars \(\alpha_{i}\in\mathbb{K}\).
Considering the algebra induced by the Bell scenario, we associate to the outcome \(a\) of the measurement \(x\) of the party \(k\) the indeterminate \(X_{a|x}^{(k)}\). These indeterminates, also called 'non-commuting variables', obey the algebraic rules of
\[\text{Hermiticity:}\qquad\left(X_{a|x}^{(k)}\right)^{\dagger}=X_{a|x}^{(k)} \tag{28a}\] \[\text{Orthogonality:}\qquad X_{a|x}^{(k)}X_{a^{\prime}|x}^{(k)}= \delta_{a,a^{\prime}}X_{a|x}^{(k)}\] (28b) \[\text{Normalization:}\qquad\sum_{a}X_{a|x}^{(k)}=1\] (28c) \[\text{Commutation:}\quad[X_{a|x}^{(k)},X_{a^{\prime}|x^{\prime}}^{(k ^{\prime})}]=0\text{ for }k\neq k^{\prime}. \tag{28d}\]
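These rewriting rules are easily mechanized. The following toy routine (ours, for illustration; the normalization rule (28c) is not used here) brings a monomial, encoded as a list of \((\text{party},x,a)\) factors, to a normal form using commutation and orthogonality:

```python
def reduce_monomial(factors):
    """Bring a monomial, given as a list of (party, setting, outcome)
    factors, to a normal form; None encodes the zero monomial."""
    factors = sorted(factors, key=lambda f: f[0])   # commutation, rule (28d)
    out = []
    for f in factors:
        if out and out[-1][:2] == f[:2]:            # same party and setting
            if out[-1][2] != f[2]:
                return None                         # orthogonality: product is 0
            continue                                # idempotence: X_{a|x}^2 = X_{a|x}
        out.append(f)
    return out

# X^{(1)}_{1|1} X^{(2)}_{2|1} X^{(1)}_{1|1}  ->  X^{(1)}_{1|1} X^{(2)}_{2|1}
print(reduce_monomial([(1, 1, 1), (2, 1, 2), (1, 1, 1)]))
# X^{(1)}_{1|1} X^{(1)}_{2|1} = 0
print(reduce_monomial([(1, 1, 1), (1, 1, 2)]))
```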
Figure 1: In blue, the numerical value of the quantum bound of our candidate Bell expression using semi-definite programming at the order 1+AB of the NPA hierarchy [30]. In orange, the numerical value of the local bound of our candidate Bell expression. For our settings choice, the value of the expression is 1 for all values of \(q\). The blue curve is equal to 1 only in the region (0,2.83).
In this algebra, any monomial \(M_{i}\) can always be written as the product of \(n\) multivariate monomials \(M_{i}^{(k)}\) involving indeterminates from each party individually:
\[M_{i}[\{X_{a|x}^{(k)}\}]=M_{i}^{(1)}[\{X_{a|x}^{(1)}\}]...M_{i}^{(n)}[\{X_{a|x}^ {(n)}\}]. \tag{29}\]
When all monomials \(M_{i}^{(k)}\) in the polynomial \(S\) are of degree one, we say that \(S\) is of _local degree_ \(1\) [63]. We then associate the Bell expression
\[\beta=\sum_{\mathbf{a},\mathbf{x}}\alpha_{\mathbf{a}|\mathbf{x}}P(\mathbf{a}|\mathbf{x}) \tag{30}\]
to the polynomial by substituting each indeterminate \(X_{a|x}^{(k)}\) by the projector \(\hat{\Pi}_{a|x}^{(k)}\) associated to the outcome \(a\) for the setting \(x\), and considering the expectation value over a state \(|\psi\rangle\). This is always possible because an arbitrary implementation can be defined by a set of projective measurements \(\{\hat{\Pi}_{a_{k}|x_{k}}^{(k)}\}\) and a pure state \(|\psi\rangle\) (upon increasing the dimension of the local Hilbert spaces), giving the probabilities \(P(\mathbf{a}|\mathbf{x})=\langle\psi|\bigotimes_{k}\hat{\Pi}_{a_{k}|x_{k}}^{(k)}|\psi\rangle\). Furthermore, this identification is reversible, defining for each Bell expression a corresponding unique formal polynomial. We thus refer to polynomials \(S\) with local degree \(1\) as _formal Bell polynomials_. As an example, the formal CHSH polynomial associated to the Bell expression defined in Eq. (2) is given by
\[S_{\text{CHSH}}=\sum_{a_{1},a_{2},x_{1},x_{2}=1}^{2}(-1)^{a_{1}+a_{2}+(x_{1}-1)(x_{2}-1)}X_{a_{1}|x_{1}}^{(1)}X_{a_{2}|x_{2}}^{(2)}.\]

A central role in what follows is played by formal Bell polynomials admitting a _sum of squares_ (SOS) decomposition

\[C-S=\sum_{i}N_{i}^{\dagger}N_{i},\]
where \(N_{i}\) are formal polynomials and \(C\) is a real number. For every implementation, we have \(\langle\psi|\,\hat{N}_{i}^{\dagger}\hat{N}_{i}\,|\psi\rangle\geq 0\), where \(\hat{N}_{i}\) is the Bell operator associated to \(N_{i}\) by the implementation. Thus the maximal quantum score of the Bell expression over any implementation is upper bounded by \(C\). If a specific implementation gives exactly \(C\) then we get \(\sum_{i}||\hat{N}_{i}\,|\psi\rangle\,||^{2}=0\Rightarrow\forall i\), \(||\hat{N}_{i}\,|\psi\rangle\,||^{2}=0\) and \(|\psi\rangle\) is in the kernel of all operators \(\hat{N}_{i}\). When this is the case, we say that the operators nullify the state. The converse also holds, hence
\[\langle\psi|\,\hat{S}\,|\psi\rangle=C\iff\forall\,i,\;\hat{N}_{i}\,|\psi \rangle=0. \tag{39}\]
In summary, being able to write a Bell polynomial as a sum of squares allows one to get an upper bound on the maximal score of the corresponding Bell expression [52]. Moreover, if all squares nullify the target state for some choice of measurements, then the upper bound is strict and is obtained by this implementation.
However, in general for arbitrary polynomials \(N_{i}\), a sum of squares gives
\[\sum_{i\in I}N_{i}^{\dagger}N_{i}=C-S+\Gamma, \tag{40}\]
where \(C\) is a real number, \(S\) is a formal Bell polynomial (to which we can associate a Bell expression), and \(\Gamma\) is some leftover polynomial term of higher local order. This last term needs to vanish in order to have a valid sum of squares decomposition. The "condition" of the sum of squares method thus takes the simple form
\[\Gamma=0. \tag{41}\]
Building on the variational method, where the measurement parameters (and thus Bell coefficients) are chosen in order to satisfy some local extremality condition, we propose to set these parameters in such a way that the condition \(\Gamma=0\) holds. This leads us to formulate the following SOS method for the construction of Bell expressions.
1. Choose a set of operators \(\hat{N}_{i}\) that are nullifying the target state, i.e. such that \(|\psi\rangle\in\cap_{i}\ker(\hat{N}_{i})\), and in particular \[\hat{N}_{i}\,|\psi\rangle=0\;\;\forall i.\] (42)
2. Parametrize measurement bases \(\hat{M}_{x}^{(i)}\) for each party.
3. Express the nullifiers in terms of the measurement operators \(\hat{M}_{x}^{(i)}\) and define each of the corresponding formal polynomials by promoting the measurement operators to indeterminates.
4. Compute the sum of squares Eq. (40) on the polynomials \(N_{i}\).
5. Solve the condition that all terms of local order higher than one vanish \[\Gamma=0.\] (43)
As an illustration, consider the following application of the SOS method on the singlet state4\(|\psi\rangle=|\phi^{+}\rangle\). A simple choice of nullifiers is given by \(\hat{N}_{0}=\hat{Z}_{A}-\hat{Z}_{B}\) and \(\hat{N}_{1}=\lambda(\hat{X}_{A}-\hat{X}_{B})\), with \(\lambda\in\mathbb{R}\). We can then make a simple choice of measurement by choosing \(\hat{M}_{1}^{(1)}=\hat{Z}_{A}\), \(\hat{M}_{2}^{(1)}=\hat{X}_{A}\), \(\hat{M}_{y}^{(2)}=\cos(b)\hat{Z}_{B}-(-1)^{y}\sin(b)\hat{X}_{B}\), where \(b\in[0,\pi/2]\) is an arbitrary angle. This allows us to reexpress the nullifiers as \(\hat{N}_{0}=\hat{M}_{1}^{(1)}-\frac{\hat{M}_{1}^{(2)}+\hat{M}_{2}^{(2)}}{2\cos(b)}\), \(\hat{N}_{1}=\lambda\big{(}\hat{M}_{2}^{(1)}-\frac{\hat{M}_{1}^{(2)}-\hat{M}_{2}^{(2)}}{2\sin(b)}\big{)}\). Substituting the \(\hat{M}_{x}^{(i)}\) operators with the corresponding indeterminates, denoted \(A_{x}\) and \(B_{y}\) for the two parties, we express their sum of squares as
Footnote 4: Since we are only interested in identifying states up to local unitaries here and below we slightly abuse the terminology and refer to any maximally entangled two qubit state as the singlet state.
\[\begin{split} N_{0}^{2}+N_{1}^{2}=& 1+\frac{1}{2\cos^{2}(b)}+\lambda^{2}\left(1+\frac{1}{2 \sin^{2}(b)}\right)\\ &-\left(A_{1}\frac{B_{1}+B_{2}}{\cos(b)}+\lambda^{2}A_{2}\frac{B_ {1}-B_{2}}{\sin(b)}\right)\\ &+\frac{1}{4}\left(\frac{1}{\cos^{2}(b)}-\frac{\lambda^{2}}{\sin ^{2}(b)}\right)\{B_{1},B_{2}\}.\end{split} \tag{44}\]
The term containing \(\{B_{1},B_{2}\}\) is the only one with local order higher than 1. It vanishes for the choice \(\lambda^{2}=\tan^{2}(b)\), yielding \(N_{0}^{2}+N_{1}^{2}=C(b)-S(b)\) with the Bell expression
\[S(b)=\frac{A_{1}(B_{1}+B_{2})+\tan(b)A_{2}(B_{1}-B_{2})}{\cos(b)} \tag{45}\]
and the Tsirelson bound \(S(b)\preceq C(b)=2(1+\tan^{2}(b))\) attained by the target state \(|\phi^{+}\rangle\). One can check that achieving this value self-tests the state.
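All the claims of this example can be verified numerically in the qubit implementation; a minimal sketch (ours, for illustration):

```python
import numpy as np

Z = np.diag([1., -1.]); X = np.array([[0., 1.], [1., 0.]])
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
b = 0.7; lam = np.tan(b)

N0 = np.kron(Z, np.eye(2)) - np.kron(np.eye(2), Z)
N1 = lam * (np.kron(X, np.eye(2)) - np.kron(np.eye(2), X))

By = lambda y: np.cos(b) * Z - (-1)**y * np.sin(b) * X   # Bob's settings
S = (np.kron(Z, By(1) + By(2)) + lam * np.kron(X, By(1) - By(2))) / np.cos(b)
C = 2 * (1 + lam**2)

print(np.allclose(N0 @ phi, 0), np.allclose(N1 @ phi, 0))  # both nullify |phi+>
print(np.allclose(C * np.eye(4) - S, N0 @ N0 + N1 @ N1))   # the SOS identity
print(np.linalg.eigvalsh(S)[-1], C)                        # Tsirelson bound C
```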
The promotion of the nullifiers from the operator space to the formal polynomial algebra performed in Step 3 before computing the sum of squares is a key ingredient of the SOS method. While the choice space of the nullifiers \(\hat{N}_{i}\) only depends on the target state, this mapping from \(\hat{N}_{i}\) to \(N_{i}\) depends on the choice of measurements. When evaluated on specific measurements, these polynomials correspond to operators that nullify the target state. We thus refer to them as _formal nullifiers_ for the considered state and measurements.
Note that as in the variational method, when using the SOS method to obtain a self-test candidate, rather than simply a Bell expression maximized by the target state, one may want to choose nullifiers in such a way that the target state is the unique one nullified by all of them, i.e. \(\cap_{i}\ker(\hat{N}_{i})=\text{span}\{|\psi\rangle\}\). Indeed, consider the particular case \(\lambda=0\) in the above example: we can expand the unique nullifier as \(\hat{N}_{0}=\hat{Z}_{A}-\hat{Z}_{B}\), which yields
the SOS decomposition \(N_{0}^{2}=(A_{1}-B_{1})^{2}=2-S\) with the candidate Bell expression \(S=2A_{1}B_{1}\). But since only one square appears in the decomposition, only the nullifying equation \((\hat{M}_{1}^{(1)}-\hat{M}_{1}^{(2)})\ket{\psi}=0\) is certified when the quantum value \(2\) is reached. Clearly, this is not sufficient to identify the state uniquely since even in the ideal implementation there are many states that can achieve the maximal value of this expression: many states are nullified by \(\hat{Z}_{A}-\hat{Z}_{B}\); for instance \(\ket{\phi^{+}}\) but also \(\ket{00}\) and \(\ket{11}\).
The SOS method guarantees that no implementation can provide a larger Bell score to the Bell expression \(\beta\) than the target one. It is thus a _sufficient condition_ to construct a Bell expression for a target quantum state: any expression verifying \(\Gamma=0\) is maximized by the considered state. The method relies on a choice of operators, here nullifiers, and of measurements. Therefore, a candidate that does not verify the SOS condition Eq. (41) for a specific choice of \(\hat{N}_{i}\) and \(\hat{M}_{x}^{(i)}\) is not automatically ruled out as it might admit a valid SOS decomposition for another choice.
Note that the nullifiers chosen in Step 1 act on a finite-dimensional Hilbert space and so they can be fully parametrized. However, their number is not bounded (they can be linearly dependent). Moreover, each nullifier can be expressed in terms of the measurement operators in numerous ways. In particular, their expressions do not need to be restricted to local degree one. Indeed, the space of polynomials corresponding to a formal nullifier for a given state and measurements is of infinite dimension. Therefore, formal nullifiers cannot be parameterized with a bounded number of parameters in full generality. The number of parameters is however finite when the length of the monomials that can be used in the formal polynomials is bounded.
It is known that finding the SOS decomposition of a Bell expression [52] is dual to the NPA hierarchy [30], which converges in the limit of the hierarchy. This limit corresponds to considering the full space of formal polynomials with no limit on the number of squares and on the monomial length. Thus, if one considers all possible choices of formal nullifiers for monomial length \(n\), and completely parametrizes the finite dimensional measurements, the SOS method becomes _necessary and sufficient_ in the asymptotic limit \(n\rightarrow\infty\). This means that a Bell expression can be maximized by the considered state if and only if there exists a choice of measurements and an asymptotic choice of nullifiers such that \(\Gamma=0\). In this sense, the SOS method is _asymptotically complete_.
Note that contrary to the variational method, which cannot ensure that its maximum is global, the SOS method provides this guarantee. Moreover, since a global maximum is a local one as well, its solution always fulfills the variational condition. Indeed, the SOS condition \(C-S=\sum_{i}N_{i}^{2}\) is verified for the target implementation, where it gives the operator equality \(C\mathds{1}-\hat{S}=\sum_{i}\hat{N}_{i}^{2}\). Since \(\hat{S}\ket{\psi}=C\ket{\psi}\), \(\hat{S}\) defines a valid Bell operator for the variational method. The conditions of the variational method are then satisfied: the target state is an eigenvector of maximal eigenvalue and all first order equations upon variations of the measurement choices vanish. This observation can be used to restrict the choice of nullifiers and/or of measurement settings that one can use in the SOS method by first applying the variational method.
Finally, we remark that the strength of this method is not only to construct Bell expressions whose maximal value is reached by the target state. By also providing their sum of squares decomposition, conditions on the action of the measurements on the state are also obtained: the operators \(\hat{N}_{i}\) nullify the state for any implementation reaching the quantum bound. This is interesting as for an arbitrary Bell expression, it is in general hard to find its quantum bound or its SOS decomposition. Moreover, many proofs of self-testing rely on the sum of squares decomposition of the Bell test [33]. We discuss this point in more details in Appendix B.1.
## III Applications and Results
Here we present applications of the SOS method to several states and Hilbert spaces, demonstrating how it may be used to derive Bell inequalities maximized by target states and self-tests for a variety of cases.
### Recovering all linear self-tests of the singlet with two binary measurements
In this section, we use the SOS method to derive Bell expressions that self-test the singlet for all possible measurement settings. We apply the SOS method to the state \(\ket{\phi^{+}}=(\ket{00}+\ket{11})/\sqrt{2}\) allowing operators to be taken in \(\langle\{1,\hat{Z},\hat{X}\}^{\otimes 2}\rangle\).
Consider a subspace of nullifiers given by:
\[\mathcal{A}=\langle\{\hat{Z}_{A}-\hat{Z}_{B},\hat{X}_{A}-\hat{X}_{B}\}\rangle.\]
The only two qubit state nullified by any two linearly independent elements of \(\mathcal{A}\) is the singlet. Therefore a good candidate for nullifiers \(\hat{N}_{i}\) would be to take two such operators \(\hat{N}_{0},\hat{N}_{1}\in\mathcal{A}\):
\[\begin{cases}\hat{N}_{0}=\alpha(\hat{Z}_{A}-\hat{Z}_{B})+\beta(\hat{X}_{A}- \hat{X}_{B}),\\ \hat{N}_{1}=\gamma(\hat{Z}_{A}-\hat{Z}_{B})+\delta(\hat{X}_{A}-\hat{X}_{B}), \end{cases} \tag{46}\]
where \(\alpha,\beta,\gamma,\delta\) are arbitrary real numbers.
Consider measurement settings in the \(\hat{X}\)-\(\hat{Z}\) plane for both Alice and Bob. Up to local unitaries and relabeling
of outcomes we can always write
\[\begin{cases}\hat{M}_{1}^{(1)}=\hat{Z}_{A},\\ \hat{M}_{2}^{(1)}=\cos(a_{2})\hat{Z}_{A}+\sin(a_{2})\hat{X}_{A},\\ \hat{M}_{1}^{(2)}=\cos(b_{1})\hat{Z}_{B}+\sin(b_{1})\hat{X}_{B},\\ \hat{M}_{2}^{(2)}=\cos(b_{2})\hat{Z}_{B}+\sin(b_{2})\hat{X}_{B}\end{cases} \tag{47}\]
with angles \(a_{2},b_{1},b_{2}\in[0,\pi[\). We can also assume up to relabeling of measurements that \(b_{1}\leq b_{2}\). In the basis of the measurements, we obtain
\[\hat{Z}_{A}=\hat{M}_{1}^{(1)},\quad\hat{X}_{A}=\frac{\hat{M}_{2 }^{(1)}-\cos(a_{2})\hat{M}_{1}^{(1)}}{\sin(a_{2})},\] \[\hat{Z}_{B}=\frac{\sin(b_{2})\hat{M}_{1}^{(2)}-\sin(b_{1})\hat{M }_{2}^{(2)}}{\sin(b_{2}-b_{1})}, \tag{48}\] \[\hat{X}_{B}=\frac{-\cos(b_{2})\hat{M}_{1}^{(2)}+\cos(b_{1})\hat{ M}_{2}^{(2)}}{\sin(b_{2}-b_{1})}.\]
Replacing the operators \(\hat{Z}_{A},\hat{X}_{A},\hat{Z}_{B},\hat{X}_{B}\) with the above, we can express the nullifiers \(\hat{N}_{i}\) in terms of the measurements \(\hat{M}_{x}^{(i)}\). Let us now consider the formal polynomials associated to \(\hat{N}_{0}\) and \(\hat{N}_{1}\) and look at the sum of their squares
\[N_{0}^{2}+N_{1}^{2}=C-S+\Gamma. \tag{49}\]
The requirement \(\Gamma=0\) for the measurement choice we made translates into two equations:
\[\begin{cases}\beta(\sin(a_{2})\alpha-\cos(a_{2})\beta)+\delta(\sin(a_{2})\gamma -\cos(a_{2})\delta)=0,\\ (\sin(b_{2})\alpha-\cos(b_{2})\beta)(\sin(b_{1})\alpha-\cos(b_{1})\beta)\\ \quad+(\sin(b_{2})\gamma-\cos(b_{2})\delta)(\sin(b_{1})\gamma-\cos(b_{1}) \delta)=0.\end{cases} \tag{50}\]
The first equation comes from cancelling the term containing \(\{A_{1},A_{2}\}\) in \(\Gamma\), and the second from the \(\{B_{1},B_{2}\}\) term. We now look for a solution to this set of equations.
One solution is given by the choice \(\alpha=1,\beta=0\), which gives
\[\gamma=\frac{\cos(a_{2})}{\sin(a_{2})}\delta, \tag{51a}\] \[\delta^{2}=\frac{1}{f(a_{2},b_{1},b_{2})}, \tag{51b}\]
where
\[f(a_{2},b_{1},b_{2})=(\cot(a_{2})-\cot(b_{2}))(\cot(b_{1})-\cot(a_{2})). \tag{52}\]
This set of equations only admits solutions when \(f(a_{2},b_{1},b_{2})>0\), which is the case when the operators of Alice and Bob "alternate" strictly, _i.e._\(b_{1}<a_{2}<b_{2}\) (since \(\cot\) is strictly decreasing on \((0,\pi)\), both factors of \(f\) are positive precisely in this case). We thus find a Bell expression for all settings that can be self-tested for the singlet, as described in [65], except for the "limit points" for which Alice and Bob share one common measurement. When this is the case, the initial set of equations, with no assumptions on \(\alpha,\beta\), does admit a solution but the expressions we find are not satisfactory as the SOS decomposition only involves a single square. In Appendix A, we prove that all those limit points are non-exposed points of the set of quantum correlations and thus cannot be self-tested with a single Bell expression.
The sum of squares decomposition given by the method is
\[N_{0}^{2}+N_{1}^{2}=C(a_{2},b_{1},b_{2})-S(a_{2},b_{1},b_{2}) \tag{53}\]
where
\[N_{0} =A_{1}-\frac{\sin(b_{2})B_{1}-\sin(b_{1})B_{2}}{\sin(b_{2}-b_{1})} \tag{54a}\] \[N_{1} =\frac{1}{\sin(a_{2})\sqrt{f(a_{2},b_{1},b_{2})}}\left(A_{2}- \frac{\sin(b_{2}-a_{2})B_{1}-\sin(b_{1}-a_{2})B_{2}}{\sin(b_{2}-b_{1})}\right)\] (54b) \[S(a_{2},b_{1},b_{2}) =\frac{2}{\sin(b_{2}-b_{1})}\left[\sin(b_{2})A_{1}B_{1}+\frac{\sin (b_{2}-a_{2})}{\sin^{2}(a_{2})f(a_{2},b_{1},b_{2})}A_{2}B_{1}+\frac{\sin(a_{2} -b_{1})}{\sin^{2}(a_{2})f(a_{2},b_{1},b_{2})}A_{2}B_{2}-\sin(b_{1})A_{1}B_{2}\right]\] (54c) \[C(a_{2},b_{1},b_{2}) =\frac{2\sin(a_{2})\sin(a_{2}-b_{1}-b_{2})}{\sin(a_{2}-b_{1})\sin (a_{2}-b_{2})}. \tag{54d}\]
The Tsirelson bound associated to this sum of squares is given by
\[S(a_{2},b_{1},b_{2})\preceq C(a_{2},b_{1},b_{2}). \tag{55}\]
We show in Appendix B.2 that all these Bell expressions indeed grant a self-test for the \(|\phi^{+}\rangle\) state and their specific measurement settings.
Note that in [65], the self-test of all those points is
proven using the description of the border of the set of bipartite quantum correlations given by
\[\arcsin (\langle A_{1}B_{1}\rangle)+\arcsin(\langle A_{2}B_{1}\rangle) \tag{56}\] \[+\arcsin(\langle A_{2}B_{2}\rangle)-\arcsin(\langle A_{1}B_{2} \rangle)=\pi\]
By linearizing this equation for the border of this set we can obtain the equation of a tangent hyperplane to each point on the border. For the correlation point obtained with state \(|\phi^{+}\rangle\) and previous measurements parameterized by \((a_{2},b_{1},b_{2})\), we obtain the hyperplane
\[\mathcal{H}_{a_{2},b_{1},b_{2}}=\big{\{}(\langle A_{1}B_{1} \rangle,\langle A_{1}B_{2}\rangle,\langle A_{2}B_{1}\rangle,\langle A_{2}B_{2 }\rangle)\text{ s.t.} \tag{57}\] \[\frac{1}{\sin(b_{1})}\langle A_{1}B_{1}\rangle+\frac{1}{\sin(a_{ 2}-b_{1})}\langle A_{2}B_{1}\rangle\] \[+\frac{1}{\sin(b_{2}-a_{2})}\langle A_{2}B_{2}\rangle-\frac{1}{ \sin(b_{2})}\langle A_{1}B_{2}\rangle\] \[=\cot(b_{1})+\cot(a_{2}-b_{1})+\cot(b_{2}-a_{2})-\cot(b_{2}) \big{\}}.\]
One can check that up to re-normalization we can write the equation of the hyperplane as follows
\[\mathcal{H}_{a_{2},b_{1},b_{2}}=\big{\{} (\langle A_{1}B_{1}\rangle,\langle A_{1}B_{2}\rangle,\langle A_{2 }B_{1}\rangle,\langle A_{2}B_{2}\rangle) \tag{58}\] \[\text{ s.t. }\quad S(a_{2},b_{1},b_{2})=C(a_{2},b_{1},b_{2})\big{\}}\]
Therefore the Tsirelson bounds we find match the equations of these hyperplanes at each point of the border of the quantum set.
### Self-tests for the partially entangled two-qubit states
#### III.2.1 A one-parameter family of self-tests based on two nullifiers
In this section, we look at partially entangled two qubit states
\[\ket{\phi_{\theta}}=c_{\theta}\ket{00}+s_{\theta}\ket{11}, \tag{59}\]
where we use the notation \(c_{\theta}=\cos(\theta)\), \(s_{\theta}=\sin(\theta)\). We just slightly modify the nullifiers used for the singlet in the previous section to obtain the following nullifying operators
\[\begin{cases}\hat{N}_{0}=\hat{Z}_{A}-\hat{Z}_{B},\\ \hat{N}_{1}=\hat{X}_{A}-s_{2\theta}\hat{X}_{B}-c_{2\theta}\hat{X}_{A}\hat{Z}_{B }\end{cases} \tag{60}\]
for \(\ket{\phi_{\theta}}\).
We parameterize the measurement operators \(\hat{M}_{x}^{(i)}\) in the \(\hat{X}\)-\(\hat{Z}\) plane with angles \(a_{x},b_{y}\) for Alice and Bob respectively, as in Eq. (7). We then express the nullifiers in terms of the measurement operators, promote them to formal polynomials and introduce the formal sum of squares \(N_{0}^{2}+\lambda^{2}N_{1}^{2}\) for \(\lambda\in\mathbb{R}\). The condition \(\Gamma=0\) grants the following set of five equations:
\[\begin{cases}s_{a_{1}}s_{a_{2}}+\lambda^{2}\left(1+c_{2\theta}^{2}\frac{s_{b_ {1}}^{2}+s_{b_{2}}^{2}}{s_{b_{1}-b_{2}}^{2}}\right)c_{a_{1}}c_{a_{2}}=0,\\ s_{b_{1}}s_{b_{2}}+\lambda^{2}\left(s_{2\theta}^{2}c_{b_{1}}c_{b_{2}}+c_{2 \theta}^{2}\frac{c_{a_{1}}^{2}+c_{a_{2}}^{2}}{s_{a_{1}-a_{2}}^{2}}s_{b_{1}}s_{ b_{2}}\right)=0,\\ \lambda^{2}c_{2\theta}c_{a_{1}}c_{a_{2}}=0,\\ \lambda^{2}s_{2\theta}c_{2\theta}s_{b_{1}+b_{2}}=0,\\ \lambda^{2}c_{2\theta}^{2}c_{a_{1}}c_{a_{2}}s_{b_{1}}s_{b_{2}}=0.\end{cases} \tag{61}\]
For a fixed value of \(\lambda\), this set of equations only admits one solution - up to relabelling measurements and/or outcomes - when \(s_{a_{1}}=c_{a_{2}}=0\) and \(b_{1}+b_{2}=0\). This means that the ideal measurements follow:
\[\begin{cases}\hat{M}_{1}^{(1)}=\hat{Z}_{A},\quad\hat{M}_{2}^{(1)}=\hat{X}_{A}, \\ \hat{M}_{y}^{(2)}=\cos(b)\hat{Z}_{B}-(-1)^{y}\sin(b)\hat{X}_{B},\end{cases} \tag{62}\]
where the parameter \(b\) satisfies
\[\frac{1}{\lambda^{2}}=\sin^{2}(2\theta)\cot^{2}(b)-\cos^{2}(2\theta). \tag{63}\]
When this condition holds, the sum of squares gives a formal polynomial of local degree 1 together with its SOS decomposition
\[N_{0}^{2}+\lambda^{2}N_{1}^{2}=C(\theta,b)-S_{\theta,b} \tag{64}\]
where:
\[N_{0}=A_{1}-\frac{B_{1}+B_{2}}{2\cos(b)} \tag{65}\] \[N_{1}=A_{2}-s_{2\theta}\frac{B_{1}-B_{2}}{2\sin(b)}-c_{2\theta}A _{2}\frac{B_{1}+B_{2}}{2\cos(b)}\] \[S_{\theta,b}=A_{1}\frac{B_{1}+B_{2}}{\cos(b)}+\lambda^{2}\left[s_ {2\theta}A_{2}\frac{B_{1}-B_{2}}{\sin(b)}+c_{2\theta}\frac{B_{1}+B_{2}}{\cos(b )}\right]\] \[C(\theta,b)=2(1+\lambda^{2})\]
Since \(N_{i}\ket{\phi_{\theta}}=0\) for the considered implementation, the Tsirelson bound associated with the obtained Bell expression is
\[S_{\theta,b}\preceq 2(1+\lambda^{2}). \tag{66}\]
But it turns out that we can obtain a stronger conclusion. In Appendix B.3 we prove that the maximal quantum value of this Bell expression self-tests the partially entangled state \(\ket{\phi_{\theta}}\) and the measurements given in Eq. (62).
Note that the left hand side of Eq. (63) is strictly positive. This implies a condition on the measurement angle \(b\) and the entanglement parameter \(\theta\) for a solution to exist. As such, we obtain:
\[b\in(\max(-2\theta,-\pi+2\theta),\min(2\theta,\pi-2\theta))\ \backslash\ \{0\} \tag{67}\]
It is an open question whether the limit points (for which \(b=\min(2\theta,\pi-2\theta)\)) can be self-tested or not. The method gives only unsatisfying candidates with decomposition into a single square. Our guess is that, as in the singlet case, these points might self-test the underlying implementation but not with a single Bell expression - _i.e._ they are non-exposed [46, 47].
One can study the second order of the variational method to choose a good candidate among all those self-tests. Considering only relative variations \(\delta_{A/B}=\delta_{A_{1}/B_{1}}-\delta_{A_{2}/B_{2}}\) of Alice and Bob's measurement parameters, the Hessian matrix is:
\[\gamma_{\theta,b}=\begin{pmatrix}-\frac{\lambda^{2}s_{2\theta}^{2}}{1+\lambda^{2}}&0\\ 0&-\frac{1}{4}\left(1+\lambda^{2}-\frac{\lambda^{2}s_{2\theta}^{2}}{s_{b}^{2}}\right)\end{pmatrix} \tag{68}\]
Like in the singlet case we can look for settings for which the two eigenvalues are equal, so that the maximal eigenvalue of \(\hat{S}\) drops equally for all nontrivial measurement perturbations. For \(\theta\in(0,\pi/4]\), this is the case when \(b=\theta\). In this case the two eigenvalues are equal to \(-\sin^{2}(\theta)\).
In subsection III.3, we generalize these expressions to an arbitrary number of parties \(n\), by looking at partially entangled GHZ states, providing a first self-test of all states in this family in terms of a single Bell expression family.
#### III.2.2 Insight on geometrical properties of the set of quantum correlations
The Bell expressions given by Eq. (65) enable us to self-test states and settings for which self-testing was already known to be possible with other Bell expressions. Indeed, the partially entangled states \(|\phi_{\theta}\rangle\) can be self-tested using the so-called tilted CHSH inequality [66, 34]
\[I_{\alpha(\theta)}=S_{\text{CHSH}}+\alpha(\theta)A_{1}\preceq\sqrt{8+2\alpha^{ 2}}, \tag{69}\]
where \(\alpha(\theta)=2/\sqrt{1+2\tan^{2}(2\theta)}\) for \(\theta\in]0,\pi/4]\). The self-tested measurement settings are
\[\begin{cases}\hat{M}_{1}^{(1)}=\hat{Z}_{A},\quad\hat{M}_{2}^{(1)}=\hat{X}_{A},\\ \hat{M}_{y}^{(2)}=\cos(\mu_{\theta})\hat{Z}_{B}-(-1)^{y}\sin(\mu_{\theta})\hat{X}_{B},\end{cases} \tag{70}\]
where \(\tan(\mu_{\theta})=\sin(2\theta)\). Notice that \(\mu_{\theta}\leq 2\theta\) and thus the correlation points achieved with the settings of tilted CHSH inequality can also be self-tested using Eq. (65) for the choice of \(b\stackrel{{!}}{{=}}\mu_{\theta}\).
Since these two inequalities differ, the correlation points corresponding to the tilted CHSH expression can be self-tested using two different Bell expressions. In fact, it can be self-tested using any convex combination of the tilted CHSH inequality \(I_{\alpha(\theta)}\) and the new Bell inequalities we presented \(S_{\theta,\mu_{\theta}}\).
Geometrically speaking, this means that this quantum point admits two tangent hyperplanes. This proves that the boundary of the quantum set admits nonlocal angulous points. This particular conclusion could also be inferred from the observation made in [46] that some Bell inequalities are maximized by both the Tsirelson point and a local point.
The same analysis could be done for the points maximizing the expression Eq. (17), as we again find novel Bell expressions for the same realisations. Those properties are illustrated in Fig. 2 and Fig. 3.
Figure 3: The green area shows the upper bound on the quantum set given by the NPA hierarchy at level 1+AB in the slice specified by \(dir_{1}\) and \(dir_{2}\). The red point is the quantum point achieved by the state and settings maximizing inequality Eq. (17) for \(\theta=\pi/8\). The orange line corresponds to this Bell inequality and the blue line corresponds to our new Bell inequality \(S_{\theta,b_{\theta}}\). The quantum set cannot go beyond those two lines.
#### III.2.3 Self-testing the partially-entangled state when Alice's measurements are \(\hat{Z}\) and \(\hat{X}\)
The one-parameter family of Bell expressions Eq. (65) allows one to self-test the state \(|\phi_{\theta}\rangle\) with any \(\theta\) using a continuous set of measurement settings. Can this construction be extended to include additional settings?
An idea to explore more measurement settings would be to increase the space of nullifiers in which the squares are chosen. In [34], the authors introduce a family of five linearly independent formal nullifiers of the partially entangled state \(|\phi_{\theta}\rangle\) for a given choice of settings. These are used to find the SOS decomposition of previously known Bell inequalities including the tilted CHSH inequalities and more recently for the generalized tilted Bell inequalities [67]. In our case, we define nullifier operators independently of the measurement settings, thus leaving the choice of appropriate formal nullifiers to the SOS condition (41).
In this section, we showcase how constructing squares in terms of additional nullifiers can lead to new Bell expressions suitable for more realizations. To do so we add a third nullifier to the two considered previously:
\[\hat{N}_{2}=1-s_{2\theta}\hat{X}_{A}\hat{X}_{B}-c_{2\theta}\hat{Z}_{B} \tag{71}\]
Adding this third nullifier allows one to lift the condition that Bob's angles be symmetric around the \(Z\) axis, while Alice's measurements remain the same: we can set
\[\begin{cases}\hat{M}_{1}^{(1)}=\hat{Z}_{A},\quad\hat{M}_{2}^{(1)}=\hat{X}_{A},\\ \hat{M}_{y}^{(2)}=\cos(b_{y})\hat{Z}_{B}+\sin(b_{y})\hat{X}_{B},\end{cases} \tag{72}\]
where the angles \(b_{1}\) and \(-b_{2}\) might be different. Note that up to relabelling of measurement inputs and/or outcomes, we can always assume that \(b_{1},b_{2}\in(-\pi/2,\pi/2]\) and \(b_{1}<b_{2}\).
With those measurements, the corresponding formal nullifiers are given by:
\[N_{0}= A_{1}-\frac{\sin(b_{2})B_{1}-\sin(b_{1})B_{2}}{\sin(b_{2}-b_{1})}, \tag{73a}\] \[N_{1}= A_{2}-s_{2\theta}\frac{-\cos(b_{2})B_{1}+\cos(b_{1})B_{2}}{\sin(b_{ 2}-b_{1})}\] (73b) \[-c_{2\theta}A_{2}\frac{\sin(b_{2})B_{1}-\sin(b_{1})B_{2}}{\sin(b_ {2}-b_{1})},\] \[N_{2}= 1-s_{2\theta}A_{2}\frac{-\cos(b_{2})B_{1}+\cos(b_{1})B_{2}}{\sin (b_{2}-b_{1})}\] (73c) \[-c_{2\theta}\frac{\sin(b_{2})B_{1}-\sin(b_{1})B_{2}}{\sin(b_{2}-b _{1})}.\]
We consider an SOS of the form \(N_{0}^{2}+(\lambda_{1}N_{1}+\lambda_{2}N_{2})^{2}\) for real parameters \(\lambda_{1}\) and \(\lambda_{2}\). When developing the squares we obtain terms proportional to the anticommutator \(\{B_{1},B_{2}\}\) and to \(A_{2}\{B_{1},B_{2}\}\) which contribute to \(\Gamma\). The condition \(\Gamma=0\) thus leads to two equations:
\[\alpha(\lambda_{1}^{2}+\lambda_{2}^{2})+2\beta\lambda_{1}\lambda_ {2}=0, \tag{74a}\] \[\beta(\lambda_{1}^{2}+\lambda_{2}^{2})+2\alpha\,\lambda_{1}\lambda_ {2}=-s_{b_{1}}s_{b_{2}}, \tag{74b}\]
with
\[\alpha =-\frac{1}{2}s_{4\theta}s_{b_{1}+b_{2}} \tag{75a}\] \[\beta =s_{2\theta}^{2}c_{b_{1}}c_{b_{2}}+c_{2\theta}^{2}s_{b_{1}}s_{b_ {2}}. \tag{75b}\]
These two equations admit a solution when Bob's measurements lie in the square region:
\[b_{1} \in(-2\theta,0) \tag{76a}\] \[b_{2} \in(0,2\theta). \tag{76b}\]
The sum of squares decomposition is then given by:
\[N_{0}^{2}+(\lambda_{1}N_{1}+\lambda_{2}N_{2})^{2}=C(\theta,b_{1},b_{2})-S_{ \theta,b_{1},b_{2}} \tag{77}\]
where:
\[S_{\theta,b_{1},b_{2}}=2A_{1}\frac{s_{b_{2}}B_{1}-s_{b_{1}}B_{2}} {s_{b_{2}-b_{1}}}\] \[\quad-4\lambda_{1}\lambda_{2}\left[A_{2}-s_{2\theta}\frac{-c_{b_ {2}}B_{1}+c_{b_{1}}B_{2}}{s_{b_{2}-b_{1}}}-c_{2\theta}A_{2}\frac{s_{b_{2}}B_{ 1}-s_{b_{1}}B_{2}}{s_{b_{2}-b_{1}}}\right]\] \[\quad+2(\lambda_{1}^{2}+\lambda_{2}^{2})\left[s_{2\theta}A_{2} \frac{-c_{b_{2}}B_{1}+c_{b_{1}}B_{2}}{s_{b_{2}-b_{1}}}+c_{2\theta}\frac{s_{b_ {2}}B_{1}-s_{b_{1}}B_{2}}{s_{b_{2}-b_{1}}}\right],\] \[C(\theta,b_{1},b_{2}) =2(1+\lambda_{1}^{2}+\lambda_{2}^{2}) \tag{78}\]
and
\[\lambda_{1}\lambda_{2} =-\frac{s_{b_{1}}s_{b_{2}}s_{b_{1}+b_{2}}s_{4\theta}}{(c_{2b_{1}}- c_{4\theta})(c_{2b_{2}}-c_{4\theta})}, \tag{79a}\] \[\lambda_{1}^{2}+\lambda_{2}^{2} =-\frac{4s_{b_{1}}^{2}s_{b_{2}}^{2}(c_{2\theta}^{2}+\cot(b_{1})\cot (b_{2})s_{2\theta}^{2})}{(c_{2b_{1}}-c_{4\theta})(c_{2b_{2}}-c_{4\theta})}. \tag{79b}\]
The new candidate Bell expressions with associated quantum bound are given by:
\[S_{\theta,b_{1},b_{2}}\preceq C(\theta,b_{1},b_{2}). \tag{80}\]
These new expressions can also be used to self-test the partially entangled two qubit state \(|\phi_{\theta}\rangle\) along with the measurements settings defined in Eq. (72). The proof of the self-test can be found in Appendix B.4.
In the case where Alice's measurements are given by \(\hat{Z}\) and \(\hat{X}\), the settings given by Eq. (76) seem to be the only ones allowing one to self-test the partially-entangled state \(|\phi_{\theta}\rangle\). Indeed, we verify numerically that any other choice of measurement settings for Bob results in behaviors that do not lie on the boundary of the NPA relaxation set at local level \(\ell=1\). For this, we consider the following optimization
\[\begin{split}\Delta=\min_{i}\;\max_{\mathbf{P}_{1},\mathbf{P}_{2}}&\;\mathbf{P}_{1}^{i}-\mathbf{P}_{2}^{i}\\ \text{s.t.}&\;\mathbf{P}_{1}^{j}=\mathbf{P}_{2}^{j}=\mathbf{P}^{j},\;j\neq i\\ &\;\mathbf{P}_{1},\mathbf{P}_{2}\in\text{NPA}_{\ell}\end{split} \tag{81}\]
Here \(i\) runs over all components of the behavior \(\mathbf{P}(ab|xy)\) seen as a vector in \(\mathbb{R}^{8}\) and \(\text{NPA}_{\ell}\) stands for the \(\ell^{\text{th}}\) level of the NPA hierarchy. The result of this optimization is zero iff the point is on the boundary of the NPA relaxation. Fig. 4 shows the result of this optimization as a function of the parameters \(b_{1}\), \(b_{2}\) in the case \(\theta=\pi/8\). We see that all statistics outside the considered region Eq. (76) and its symmetric version for \(B_{1}\leftrightarrow B_{2}\) (i.e. \(b_{1}>b_{2}\)) lie strictly inside the relaxation, with \(\Delta>0\).
### Multi-partite entangled states
In this section, we present a generalization of the expressions we found in the case of partially entangled two-qubit states, whose maximum score is achieved by partially entangled GHZ states for an arbitrary number of parties \(n\). We present only the results, as the core of the reasoning follows the discussion of subsection III.2.1.
The realizations that we aim to find a Bell expression for are given by the following combinations of the state and the measurements
\[\begin{cases}\ket{\psi}\sim\ket{\text{GHZ}_{n,\theta}}=c_{\theta}\ket{0...0} +s_{\theta}\ket{1...1}\\ \hat{M}_{1}^{(i)}=\hat{Z}^{(i)},\quad\hat{M}_{2}^{(i)}=\hat{X}^{(i)},\text{for }\ i<n\\ \hat{M}_{y}^{(n)}=\cos(b)\hat{Z}^{(n)}-(-1)^{y}\sin(b)\hat{X}^{(n)}.\end{cases} \tag{82}\]
The family of nullifiers that we use contains \(n\) operators given by
\[\begin{split}&\hat{N}_{0,i}=\ \hat{Z}^{(i)}-\hat{Z}^{(n)}\quad\forall i=1, \ldots,n-1,\\ &\hat{N}_{1}=\prod_{i=1}^{n-1}\hat{X}^{(i)}-s_{2\theta}\hat{X}^{( n)}-c_{2\theta}\prod_{i=1}^{n-1}\hat{X}^{(i)}\cdot\hat{Z}^{(n)}\end{split} \tag{83}\]
The sum of squares that we look at is of the form \(\sum_{i}\hat{N}_{0,i}^{2}+\lambda^{2}\hat{N}_{1}^{2}\). Under the condition \(\frac{1}{\lambda^{2}}=s_{2\theta}^{2}\cot^{2}(b)-c_{2\theta}^{2}\), just like in Eq. (63), we obtain the following SOS decomposition
\[\sum_{i}N_{0,i}^{2}+\lambda^{2}N_{1}^{2}=2(1+\lambda^{2})-S_{\theta,b}^{n}, \tag{84}\]
where
\[N_{0,i}= Y_{1}^{(i)}-\frac{Y_{1}^{(n)}+Y_{2}^{(n)}}{2\cos(b)}\quad\forall\,i<n \tag{85a}\] \[\begin{split} N_{1}=&\prod_{i=1}^{n-1}Y_{2}^{(i)}-s_{2\theta}\frac{Y_{1}^{(n)}-Y_{2}^{(n)}}{2\sin(b)}\\ &-c_{2\theta}\prod_{i=1}^{n-1}Y_{2}^{(i)}\cdot\frac{Y_{1}^{(n)}+Y_{2}^{(n)}}{2\cos(b)}\end{split} \tag{85b}\] \[\begin{split} S_{\theta,b}^{n}=&\frac{1}{n-1}\sum_{i=1}^{n-1}Y_{1}^{(i)}\frac{Y_{1}^{(n)}+Y_{2}^{(n)}}{\cos(b)}\\ &+\lambda^{2}\left[s_{2\theta}\prod_{i=1}^{n-1}Y_{2}^{(i)}\cdot\frac{Y_{1}^{(n)}-Y_{2}^{(n)}}{\sin(b)}+c_{2\theta}\frac{Y_{1}^{(n)}+Y_{2}^{(n)}}{\cos(b)}\right].\end{split} \tag{85c}\]
The Tsirelson bound takes the simple form
\[S_{\theta,b}^{n}\preceq 2(1+\lambda^{2}). \tag{86}\]
In Appendix B.3 we prove that the saturation of these inequalities self-tests the state and the measurements in Eq. (82). Note that the case \(n=2\) recovers exactly the result of section III.2.1 and the inequality \(S_{\theta,b}\preceq 2(1+\lambda^{2})\).
### Maximally entangled state of two qutrits
Finally, we apply the SOS method to the case of the maximally entangled state of two qutrits
\[\ket{\psi^{3}}=\frac{\ket{00}+\ket{11}+\ket{22}}{\sqrt{3}}. \tag{87}\]
We consider a family of measurements which generalizes the ones first used in [17] to maximize the CGLMP expression, and then in [35; 24] to self-test the maximally entangled two-qutrit state. Namely, for \(x,y\in\{1,2\}\) we choose measurement bases of the form
\[\hat{\Pi}_{a|x} =U(a_{x})F^{\dagger}\ket{a}\!\bra{a}FU(a_{x})^{\dagger} \tag{88a}\] \[\hat{\Pi}_{b|y} =U(b_{y})F^{\dagger}\ket{b}\!\bra{b}FU(b_{y})^{\dagger} \tag{88b}\]
Figure 4: Result of the optimization Eq. (81) for various choices of measurement angles for Bob. Points with \(\Delta\leq 10^{-11}\) in yellow are on the boundary of the quantum set. Except when \(b_{1}=0\) or \(b_{2}=0\), in which case the behavior admits a probability equal to zero (both Alice and Bob measure in the \(\hat{Z}\) direction), all points outside of the interval Eq. (76) belong to the blue region.
These bases are related to the computational one by the Fourier transform
\[F=\frac{1}{\sqrt{3}}\sum_{k,l=0}^{2}w^{kl}\,|k\rangle\!\langle l|\quad\text{ with}\quad w=\exp(2i\pi/3) \tag{89}\]
followed by a phase rotation with a real parameter \(\theta\)
\[U(\theta)=\sum_{k=0}^{2}w^{k\theta}\,|k\rangle\!\langle k|\,. \tag{90}\]
By analogy with the case of formal polynomials in Eq. (33), we define the unitary operators associated to each measurement by
\[\hat{M}_{x}^{(1)} =\sum_{a=0}^{2}w^{a}\hat{\Pi}_{a|x}=\begin{pmatrix}0&0&w^{-2a_{x} }\\ w^{a_{x}}&0&0\\ 0&w^{a_{x}}&0\end{pmatrix} \tag{91a}\] \[\hat{M}_{y}^{(2)} =\sum_{b=0}^{2}w^{b}\hat{\Pi}_{b|y}=\begin{pmatrix}0&0&w^{-2b_{y} }\\ w^{b_{y}}&0&0\\ 0&w^{b_{y}}&0\end{pmatrix}. \tag{91b}\]
In the case of qutrits these operators verify \(\left(\hat{M}_{x}^{(i)}\right)^{\dagger}=\left(\hat{M}_{x}^{(i)}\right)^{2}\), and any projector \(\hat{\Pi}_{a|x}\) can be expressed as a linear combination of the identity operator, \(\hat{M}_{x}^{(i)}\) and its adjoint \(\left(\hat{M}_{x}^{(i)}\right)^{\dagger}\).
The first step of the SOS method is to find nullifiers for the target state. To do so we exploit the fact that for any unitary operator \(\hat{M}\) the following identity holds:
\[\hat{M}\otimes\hat{M}^{*}\left|\psi^{3}\right\rangle=\left|\psi^{3}\right\rangle, \tag{92}\]
where \(\hat{M}^{*}\) is the complex conjugate of \(\hat{M}\). Thus any choice \(\hat{N}=\mathds{1}-\hat{M}\otimes\hat{M}^{*}\) with a unitary \(\hat{M}\) defines a nullifier. Defining two nullifiers as
\[\hat{N}_{x}=\mathds{1}-\hat{M}_{x}^{(1)}\otimes\hat{\overline{M}}_{x}^{(2)}, \ x\in\{1,2\}, \tag{93}\]
we obtain two operators \(\hat{\overline{M}}_{x}^{(2)}\) on Bob's Hilbert space that must verify
\[\hat{\overline{M}}_{x}^{(2)}=\left(\hat{M}_{x}^{(1)}\right)^{*} \tag{94}\]
in order for \(\hat{N}_{x}\) to nullify the state \(\left|\psi^{3}\right\rangle\), as follows from Eq. (92).
The most general way to construct these operators from Bob's measurement operators is to take
\[\begin{split}\hat{\overline{M}}_{x}^{(2)}&=c_{x} \mathds{1}+\mu_{1,x}\hat{M}_{1}^{(2)}+\mu_{2,x}\hat{M}_{2}^{(2)}\\ &\quad+\nu_{1,x}\left(\hat{M}_{1}^{(2)}\right)^{2}+\nu_{2,x} \left(\hat{M}_{2}^{(2)}\right)^{2}.\end{split} \tag{95}\]
Now, together with equation (94), this implies that \(c_{x}=\nu_{1,x}=\nu_{2,x}=0\) and
\[\begin{cases}\mu_{1,x}w^{b_{1}}+\mu_{2,x}w^{b_{2}}=w^{-a_{x}}\\ \mu_{1,x}w^{-2b_{1}}+\mu_{2,x}w^{-2b_{2}}=w^{2a_{x}}.\end{cases} \tag{96}\]
With simple algebra this system of equations solves to
\[\begin{cases}\mu_{1,x}=\frac{w^{2a_{x}}-w^{-a_{x}-3b_{2}}}{w^{-2b_{1}}-w^{b_{ 1}-3b_{2}}}\\ \mu_{2,x}=\frac{w^{2a_{x}}-w^{-a_{x}-3b_{1}}}{w^{-2b_{2}}-w^{b_{2}-3b_{1}}}. \end{cases} \tag{97}\]
Now that we defined our two nullifiers we move to the formal polynomial formalism. We look at the formal polynomials associated to the nullifiers \(N_{x}=\mathds{1}-A_{x}\otimes\overline{B_{x}}\) and the sum of squares \(pN_{1}^{\dagger}N_{1}+(1-p)N_{2}^{\dagger}N_{2}\) for an arbitrary \(p\in(0,1)\) (the limit cases \(p=0,1\) where only one nullifier appears would most likely be insufficient to grant self-testing). When developing the squares we get:
\[pN_{1}^{\dagger}N_{1}+(1-p)N_{2}^{\dagger}N_{2}=C-S+\Gamma, \tag{98}\]
where \(C{=}2\), \(S\) is a formal polynomial of local degree \(1\) and \(\Gamma\) is the leftover of higher local degree given here by
\[\Gamma=\left(p\mu_{1,1}^{*}\mu_{2,1}+(1-p)\mu_{1,2}^{*}\mu_{2,2}\right)B_{1}^ {\dagger}B_{2}+h.c. \tag{99}\]
Following the SOS method we look for the parameter regime where the leftover term \(\Gamma\) vanishes. This is the case when
\[p\mu_{1,1}^{*}\mu_{2,1}+(1-p)\mu_{1,2}^{*}\mu_{2,2}=0. \tag{100}\]
This can be written as a condition on the measurement parameters \(a_{x},b_{y}\):
\[\begin{split} 0&=p(w^{-2a_{1}}-w^{a_{1}+3b_{2}})(w^{2a_{1}}-w^{-a_{1}-3b_{1}})\\ &\quad+(1-p)(w^{-2a_{2}}-w^{a_{2}+3b_{2}})(w^{2a_{2}}-w^{-a_{2}-3b_{1}})\\ \Longleftrightarrow 0&=p(1-w^{3a_{1}+3b_{2}})(1-w^{-3a_{1}-3b_{1}})\\ &\quad+(1-p)(1-w^{3a_{2}+3b_{2}})(1-w^{-3a_{2}-3b_{1}})\\ \Longleftrightarrow 0&=p(1-e^{2i\pi(a_{1}+b_{2})})(1-e^{-2i\pi(a_{1}+b_{1})})\\ &\quad+(1-p)(1-e^{2i\pi(a_{2}+b_{2})})(1-e^{-2i\pi(a_{2}+b_{1})})\end{split} \tag{101}\]
leads to the condition
\[\begin{split}& p\sin(\pi(a_{1}+b_{2}))\sin(\pi(a_{1}+b_{1}))\\ &\quad+(1-p)\sin(\pi(a_{2}+b_{2}))\sin(\pi(a_{2}+b_{1}))=0.\end{split} \tag{102}\]
We thus obtain a family of Bell expressions with a maximal quantum violation given by:
\[S=pA_{1}\overline{B_{1}}+(1-p)A_{2}\overline{B_{2}}+h.c.\preceq 2, \tag{103}\]
where \(\overline{B_{x}}=\mu_{1,x}B_{1}+\mu_{2,x}B_{2}\) and the coefficients \(\mu_{y,x}\) are given by equation (97). Here, \(a_{x}\), \(b_{y}\) and \(p\) are free parameters constrained only by Eq. (102). Without loss of generality we can set \(a_{1}=0\) and \(-b_{1}<-b_{2}\), and choose \(a_{2},-b_{1},-b_{2}\) in \([0,\pi)\). Eq. (102) then implies the alternating condition \(-b_{1}<a_{2}<-b_{2}\) in analogy with the qubit case. Note that the average value of \(S\) over any state is real as the Hermitian conjugate (\(h.c.\)) part ensures that \(S\) is a Hermitian polynomial.
To further study these Bell expression candidates, we look at the second order of the variational method for small perturbations of the measurement parameters \(a_{x}\), \(b_{y}\). This analysis shows that the two negative eigenvalues of the Hessian matrix \(\gamma\) are equal for the measurement parameters \(a_{1}=0,a_{2}=1/2,b_{1}=1/4,b_{2}=3/4\). Up to a local unitary transform, these are the parameters used in [24], which implies a form of optimality for this Bell expression within the considered family. As proven in [24; 35], this inequality self-tests the maximally entangled state of two qutrits.
## IV Conclusion
In this work we considered the problem of constructing a Bell expression that is tailored to a generic target state in the sense that its maximal admissible value can be achieved by measuring the state. We presented a solution to this problem in the form of a systematic method applicable to arbitrary quantum states which uses a sum of square condition to define the Bell expression coefficients.
In principle, this method is _asymptotically complete_ in the sense that every Bell expression with the desired property can be obtained by starting from a complete enough set of formal nullifiers. When the degree of the nullifiers is bounded, the method provides _sufficient_ (but not necessary) conditions for a Bell expression to be maximally violated by the target state. Therefore, in all cases the method constructs Bell expressions with the guarantee that their maximal value is achieved by the desired state. The SOS method is thus complementary to the variational method, also presented here in details, which provides _necessary_ conditions for the maximal value of a Bell expression to be achieved by a target state.
In addition to providing a Bell expression with the desired property, a key feature of the SOS method is that it also grants, by construction, its sum of squares. This provides a first step towards the self-testing of the quantum realization. We confirmed this advantage with several examples.
Namely, using the SOS method we constructed a single family of Bell expressions able to self-test all partially-entangled n-partite states of the form \(\left|\mathrm{GHZ}_{n,\theta}\right\rangle=\cos\theta\left|0...0\right\rangle+ \sin\theta\left|1...1\right\rangle\). We also recovered all the linear self-tests of the maximally entangled two-qubit state \(\left|\phi^{+}\right\rangle=(\left|00\right\rangle+\left|11\right\rangle)/ \sqrt{2}\) with two binary measurements, and proved that all self-tests in this scenario with degenerate measurements are non-exposed. We then used the method to derive a family of Bell expressions with 2 parameters self-testing the partially entangled state \(\left|\phi_{\theta}\right\rangle=\cos\theta\left|00\right\rangle+\sin\theta \left|11\right\rangle\) when Alice performs \(\hat{Z}\) and \(\hat{X}\) measurements. In turn, this allowed us to demonstrate that the set of quantum correlations admits nonlocal angulous points. Finally, we demonstrated the generality of our method by constructing a family of Bell expressions for the maximally entangled two-qutrit state \(\left|\psi^{3}\right\rangle=(\left|00\right\rangle+\left|11\right\rangle+ \left|22\right\rangle)/\sqrt{3}\), inferring a form of optimality for the Bell expression introduced in [24].
The constraints imposed by the SOS method apply to both the state and measurements parameters. Therefore, the method can be used to construct inequalities tailored to target measurements as well, such as families of Bell expressions for fixed settings and varying states [67]. It would be interesting to further investigate the relevance of the SOS method to self-testing of measurements.
Another open question would be to clarify when Bell expressions obtained by the SOS method exhibit the self-testing property. Whereas SOS decompositions provide a key ingredient to self-testing, complete self-testing proofs can still require substantial work [35; 43]. Finding conditions under which the SOS method allows for self-testing could lead to an asymptotically complete method for self-testing target states.
Finally, it would be interesting to better understand the resistance to noise of the obtained self-tests. Given the families of Bell expressions able to self-test the partially entangled state of two qubits that we discovered with the SOS method, it is natural to ask which of these is most robust to noise. In fact, as we showed that several Bell expressions can sometimes self-test the same point of correlations, it would be interesting to answer this question even for a fixed set of measurement settings.
###### Acknowledgements.
We acknowledge funding by Commissariat a l'Energie Atomique et aux Energies Alternatives (CEA), the European Union's Horizon 2020 research and innovation program European High-Performance Computing Joint Undertaking under grant agreement No 101018180 (HPCQS) and a French national quantum initiative managed by Agence Nationale de la Recherche in the framework of France 2030 with the reference ANR-22-PETQ-0007.
|
2301.04530 | Action on the circle at infinity of foliations of ${\mathbb R}^2 $ | This paper provides a canonical compactification of the plane ${\mathbb R}^2$
by adding a circle at infinity associated to a countable family of singular
foliations or laminations (under some hypotheses), generalizing an idea by
Mather \cite{Ma}. Moreover any homeomorphism of ${\mathbb R}^2 $ preserving the
foliations extends on the circle at infinity.
Then this paper provides conditions ensuring the minimality of the action on
the circle at infinity induced by an action on ${\mathbb R}^2 $ preserving one
foliation or two transverse foliations.
In particular the action on the circle at infinity associated to an Anosov
flow $X$ on a closed $3$-manifold is minimal if and only if $X$ is
non-$\mathbb{R}$-covered. | Christian Bonatti | 2023-01-11T15:47:57Z | http://arxiv.org/abs/2301.04530v1 |
###### Abstract.
This paper provides a canonical compactification of the plane \(\mathbb{R}^{2}\) by adding a circle at infinity associated to a countable family of singular foliations or laminations (under some hypotheses), generalizing an idea by Mather [Ma]. Moreover any homeomorphism of \(\mathbb{R}^{2}\) preserving the foliations extends on the circle at infinity.
Then this paper provides conditions ensuring the minimality of the action on the circle at infinity induced by an action on \(\mathbb{R}^{2}\) preserving one foliation or two transverse foliations.
In particular the action on the circle at infinity associated to an Anosov flow \(X\) on a closed \(3\)-manifold is minimal if and only if \(X\) is non-\(\mathbb{R}\)-covered.
**Keywords:** Foliation of the plane, Anosov flow, compactification.
**Codes AMS: 37D20-37E10-37E35-37C86**
January 12, 2023
## 1. Introduction
### General presentation
There are many ways to compactify the plane \(\mathbb{R}^{2}\), the simplest one being the Alexandrov compactification by point at infinity, and \(\mathbb{R}^{2}\cup\{\infty\}\) is the topological sphere \(\mathbb{S}^{2}\). This compactication is canonical and does not depend on any extra structure on \(\mathbb{R}^{2}\). That is its strength, but also its weakness as it does not bring any informations on any structure we endow \(\mathbb{R}^{2}\).
Another very natural and usual compactification of \(\mathbb{R}^{2}\) is by adding a circle at infinity, so that \(\mathbb{R}^{2}\cup\mathbb{S}^{1}\) is the disc \(\mathbb{D}^{2}\). This compactification is not canonical: it consists in a homeomorphism \(h\colon\mathbb{R}^{2}\to\check{\mathbb{D}}^{2}\), where \(\check{\mathbb{D}}^{2}\) is the open disc. Two homeomorphisms \(h_{1},h_{2}\) define the same compactification if \(h_{2}\circ h_{1}^{-1}\colon\check{\mathbb{D}}^{2}\to\check{\mathbb{D}}^{2}\) extends on \(\check{\mathbb{S}}^{1}=\partial\mathbb{D}^{2}\) as a homeomorphism of \(\mathbb{D}^{2}\). There are uncountably many such a compactification.
Here, we start be recalling Mather [Ma] canonical compactification of the plane \(\mathbb{R}^{2}\), endowed with a foliation \(\mathcal{F}\), by a circle at infinity \(\mathbb{S}^{1}_{\mathcal{F}}\). Then we explore the flexibility of this contruction for extending it to more general objects. Thus, we provide an elementary (nothing sophisticated), simple (nothing too complicated), and unified construction which associates a compactification \(\mathbb{D}^{2}_{\mathcal{F}}\) of the plane \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}^{2}\) to a countable family \(\mathcal{F}=\{\mathcal{F}_{i}\}\) of foliations, non-singular or with singular points of saddle type, which are pairwise transverse or at least have some kind of weak transversality condition at infinity, see the precise statements below. The boundary \(\partial\mathbb{D}^{2}_{\mathcal{F}}\) is called _the circle at infinity_ of \(\mathcal{F}\) and is denoted by \(\mathbb{S}^{1}_{\mathcal{F}}\). This compactification is unique, in the sense that the identity on \(\mathbb{R}^{2}\) extends as a homeomorphism on the circles at infinity of two such compactifications.
For giving a concrete example, Corollary 5.1 builds this canonical compactification \(\mathbb{D}^{2}_{\mathcal{F}}\) associated to any countable family \(\mathcal{F}=\{\mathcal{F}_{i}\}\) of singular foliations, where each \(\mathcal{F}_{i}\) is directed by a polynomial vector field on \(\mathbb{R}^{2}\) whose singular points are hyperbolic saddles.
The uniqueness of the compactification implies that any homeomorphism of \(\mathbb{R}^{2}\) preserving \(\mathcal{F}\) (that is, permuting the \(\mathcal{F}_{i}\)) extends as an homeomorphism of the compactification \(\mathbb{D}^{2}_{\mathcal{F}}\), inducing a homeomorphism of the circle at infinity \(\mathbb{S}^{1}_{\mathcal{F}}\).
### Mather idea for building the circle at infinity
The common setting for this unified construction are families of _rays_, where a ray is a proper topological embedding of \([0,+\infty)\) on \(\mathbb{R}^{2}\). We require that the _germs of the rays_ in the family are pairwise disjoint, meaning that the intersection between any two distinct rays is compact. The key idea is that a set of rays in \(\mathbb{R}^{2}\) whose germs are pairwise disjoint is _totally cyclically ordered_, and we will use this cyclic order for building the circle at infinity.
The key technical result (essentially due to [Ma]) is:
**Theorem 1**.: _Let \(\mathcal{R}\) be a family of rays in \(\mathbb{R}^{2}\) whose germs are pairwise disjoint. Let \(\mathcal{E}\subset\mathcal{R}\) be a countable subset which is separating for the cyclic order, that is, any non-degenerate interval contains a point in \(\mathcal{E}\) (see Definition 2.1)._
_Then there is a compactification of \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}^{2}\) so that:_
* _any ray of_ \(\mathcal{R}\) _tends to a point of the circle at infinity_ \(\partial\mathbb{D}^{2}=\mathbb{S}^{1}\)_._
* _any two distinct rays of_ \(\mathcal{R}\) _tend to distinct points of_ \(\mathbb{S}^{1}\)__
* _the points of_ \(\mathbb{S}^{1}\) _which are the limit point of a ray in_ \(\mathcal{R}\) _are dense in_ \(\mathbb{S}^{1}\)_._
_Furthermore, this compactification is unique up to a homeomorphism of \(\mathbb{D}^{2}\) and does not depend on the separating countable set \(\mathcal{E}\)._
Then Theorem 6 provides such a canonical compactification for a countable union \(\mathcal{R}=\bigcup\mathcal{R}_{i},i\in I\subset\mathbb{N}\) of families of rays, assuming that the germs of rays in \(\mathcal{R}\) are pairwise disjoint and each \(\mathcal{R}_{i}\) admits a countable separating subset \(\mathcal{E}_{i}\). The difficulty here is that \(\mathcal{R}\) by itself may not admit any separating family. The idea for solving this problem consists in considering a natural equivalence relation on \(\mathcal{R}\), identifying the rays which cannot be separated.
### Countable families of transverse foliations
A natural setting where we will apply this general construction are (at most countable) families of transverse foliations on the plane \(\mathbb{R}^{2}\). Notice that any _half leaf_ of a (non-singular) foliation of \(\mathbb{R}^{2}\) is a ray. An _end of leaf_ is the germ at infinity of an half leaf. In this setting we get:
**Theorem 2**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\}_{i\in I\subset\mathbb{N}}\) be an at most countable family of pairwise transverse foliations on the plane \(\mathbb{R}^{2}\)._
_There is a compactification \(\mathbb{D}^{2}_{\mathcal{F}}\simeq\mathbb{D}^{2}\) of \(\mathbb{R}^{2}\) by adding a circle \(\mathbb{S}^{1}_{\mathcal{F}}=\partial\mathbb{D}^{2}_{\mathcal{F}}\) with the following properties:_
* _Any end of leaf tends to a point of the circle at infinity_ \(\mathbb{S}^{1}_{\mathcal{F}}\)_,_
* _The set of ends of leaves tending to a same points of_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _is at most countable,_
* _For any non-empty open subset_ \(O\subset\mathbb{S}^{1}_{\mathcal{F}}\) _the set of ends of leaves having their limit in_ \(O\) _is uncountable._
_This compactification with these three properties is unique, up to a homeomorphism of \(\mathbb{D}^{2}_{\mathcal{F}}\)._
The circle \(\mathbb{S}^{1}_{\mathcal{F}}\) is called _the circle at infinity_ of the family \(\mathcal{F}=\{\mathcal{F}_{i}\}_{i\in I\subset\mathbb{N}}\).
**Remark 1**.: _The countablity of the set of ends tending to the same point implies that_
* _the two ends of a given leaf always have distinct limits on_ \(\mathbb{S}^{1}_{\mathcal{F}}\)_._
* _if two leaves_ \(L_{1},L_{2}\) _of the same foliation_ \(\mathcal{F}_{i}\) _have the same pair of limits of ends, they are equal (see Lemma_ 4.2_)._
Recall that foliations of \(\mathbb{R}^{2}\) may have leaves which are _not separated_ one from the other. The leaves which are separated from any other leaves are called _regular leaves_. At most countably many leaves are not regular (see here Lemma 3.2). We will see that,
**Proposition 1.1**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\}_{i\in I\subset\mathbb{N}}\) be an at most countable family of pairwise transverse foliations on the plane \(\mathbb{R}^{2}\). Any two distinct ends of regular leaves of the same foliation \(\mathcal{F}_{i}\) tend to two distinct points of \(\mathbb{S}^{1}_{\mathcal{F}}\)._
Now, in the setting of Theorem 2 we can apply this theorem to each foliation \(\mathcal{F}_{i}\), \(i\in I\) so that we get a family of compactifications \(\mathbb{D}^{2}_{\mathcal{F}_{i}}\). In fact, we get a compactification \(\mathbb{D}^{2}_{\mathcal{F}_{i}}\) for any subfamily \(J\subset I\) leading to an uncountable set of (maybe distinct) compactifications of \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}^{2}\) (Example 5 provides
a simple example where these compactifications \(\mathbb{D}^{2}_{J}\), for \(J\subset I\), are pairwise distincts and uncountably many).
These compactifications are easily related : for any subfamily \(J\subset I\) the identity map on \(\mathbb{R}^{2}\) extends in a unique way by continuity as a projection \(\Pi_{I,J}\colon\mathbb{D}^{2}_{\mathcal{F}}=\mathbb{D}^{2}_{I}\to\mathbb{D}^{2}_ {J}\), which simply consists in collapsing the intervals in \(\mathbb{S}^{1}_{I}\) which do not contain any limit of an end of a leaf of a foliation \(\mathcal{F}_{j},j\in J\).
We will also see in a simple example that the assumption of _at most countability_ of the family \(I\) of foliations cannot be erased: for instance, the conclusion Theorem 2 is false for the family of all affine foliations (by parallel straight lines) of \(\mathbb{R}^{2}\), parametrized by \(\mathbb{R}\mathbb{P}^{1}\) (see Example 4).
Example 8 and Lemma 4.8 present a simple example where generic points (i.e. points in a residual set) of the circle at infinity \(\mathbb{S}^{1}_{\mathcal{F}}\) of a foliation \(\mathcal{F}\) are not the limit of any end of leaf of \(\mathcal{F}\). In this example, at the contrary, points in a dense subset of \(\mathbb{S}^{1}_{\mathcal{F}}\) are limit of \(2\) distinct ends of leaves.
Lemma 4.4 and 4.5 caracterize the points \(p\) at the circle at infinity \(\mathbb{S}^{1}_{\mathcal{F}}\), where \(\mathcal{F}\) is a foliation of \(\mathbb{R}^{2}\), which are limit of several ends of leaves: the rays arriving at \(p\) are ordered as an interval of \(\mathbb{Z}\) and two successive ends bound a hyperbolic sector.
Corollary 5.3 generalizes Lemma 4.4 and 4.5 to the case of a countable family \(\mathcal{F}=\{\mathcal{F}_{i}\}\) of transverse foliations and gives a complete description of the points in \(\mathbb{S}^{1}_{\mathcal{F}}\) which are limit of several ends of leaves of the same \(\mathcal{F}_{i}\).
### Countable families of non-transverse or singular foliations
This construction can be generalized easily to the setting of families of non transverse or singular foliations. Let us present the most general setting we consider here.
The foliations we consider admit singular points which are _saddle point with \(k\)-separatrices_ (also called \(k\)_-prongs singularity_), \(k>1\), the case \(k=2\) corresponding to non-singular points.
In this setting an _end of leaf_ is a ray of \(\mathbb{R}^{2}\) disjoint from the singular points and contained in a leaf.
**Theorem 3**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\}\), \(i\in I\subset\mathbb{N}\) be a family of singular foliations of \(\mathbb{R}^{2}\) whose singular points are each a saddle with \(k\)-separatrices with \(k>2\). We assume that, given any two ends \(L_{1},L_{2}\) of leaves we have the following alternative:_
* _either the germs of_ \(L_{1}\) _and_ \(L_{2}\) _are disjoints_
* _or the germs of_ \(L_{1}\) _and_ \(L_{2}\) _coincide._
_Then there is a compactification \(\mathbb{D}^{2}_{\mathcal{F}}\simeq\mathbb{D}^{2}\) of \(\mathbb{R}^{2}\) by adding a circle \(\mathbb{S}^{1}_{\mathcal{F}}=\partial\mathbb{D}^{2}_{\mathcal{F}}\) with the following properties:_
* _Any end of leaf tends to a point of the circle at infinity_ \(\mathbb{S}^{1}_{\mathcal{F}}\)_,_
* _The set of ends of leaves tending to a same points of_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _is at most countable,_
* _For any non-empty open subset_ \(O\subset\mathbb{S}^{1}_{\mathcal{F}}\) _the set of ends of leaves having their limit in_ \(O\) _is uncountable._
_This compactification with these three properties is unique, up to a homeomorphism of \(\mathbb{D}^{2}_{\mathcal{F}}\)._
The hypothesis that the germs of ends of leaves are either equal or disjoint means that if the intersection of two leaves is not bounded, then these two leaves coincide on an half leaf. One easily checks that transverse foliations satisfy this hypothesis so that Theorem 2 is a straightforward corollary of Theorem 3.
As a simple and natural example, we will see that any countable family \(\mathcal{F}=\{\mathcal{F}_{i}\}\) of singular foliations, directed by polynomial vector fields on \(\mathbb{R}^{2}\) whose singular points are hyperbolic saddles, satisfies the hypotheses of Theorem 3: this will prove Corollary 5.1 already mentioned above.
### Laminations
The construction of the circle at infinity for foliations cannot be extended without hypotheses to the case of laminations, as leaves of laminations may fail to be lines, and can even be recurrent, see for instance example 13.
Theorems 8 and 10 provide a generalisation of this construction to closed orientable laminations with no compact leaves and with uncountably many leaves. This generalisation is not as satifactory as in the case of foliations, and we discuss some of the issues in Section 6. In particular Theorem 9 provides another canonical compactification, which holds also for countable oriented laminations with no compact leaves.
### Minimality of the action on the circle at infinity
Then we consider group actions \(H\subset Homeo(\mathbb{R}^{2})\) on \(\mathbb{R}^{2}\) preserving \(1\) or \(2\) transverse foliations \(\mathcal{F}_{i}\). The action of \(H\) extends canonically on the circle at infinity and we will consider the following question:
**Question 1.1**.: _Under what conditions on \(H\) and on the foliations \(\mathcal{F}_{i}\) can we ensure that the action induced on \(\mathbb{S}^{1}_{\{\mathcal{F}_{i}\}}\) is minimal?_
Our main result, for the case of \(1\) foliation is the following:
**Theorem 4**.: _Let \(\mathcal{F}\) be a foliation of \(\mathbb{R}^{2}\) and \(H\subset Homeo(\mathbb{R}^{2})\) be a group of homeomorphisms preserving \(\mathcal{F}\). We assume that for any leaf \(L\), the union of its images \(H(L)\) is dense in \(\mathbb{R}^{2}\)._
_Then the two following properties are equivalent_
1. _the action induced by_ \(H\) _on the circle at infinity is minimal_
2. _there are pairs of distinct leaves_ \((L_{1},L_{2})\) _and_ \((L_{3},L_{4})\) _so that_ \(L_{1}\) _and_ \(L_{2}\) _are not separated from above and_ \(L_{3}\) _and_ \(L_{4}\) _are not separated from below._
We will also generalize Theorem 4 for families of transverse foliations.
### Action on the circle at infinity of an Anosov flow
Finally, we will consider the setting of an _Anosov flow_\(X\) on a closed \(3\)-manifold \(M\).
**Remark 2**.: _In this setting it is known that \(\pi_{1}(M)\) acts on \(\mathbb{S}^{1}\) by orientation preserving homeomorphisms, see Calegari Dunfield \([\mathrm{CaDu}]\) inspirated by an unpublished work of Thurston \([\mathrm{Th}]\). This works follows completely distinct ideas that those presented here._
_Another construction of this circle at infinity (called ideal circle boundary) is given in [Fe4] for pseudo-Anosov flows._
Barbot and Fenley [Ba1, Fe1] show that the lift \(\tilde{X}\) of \(X\) is conjugated to the constant vector field \(\frac{\partial}{\partial x}\) on \(\mathbb{R}^{3}\), so that the \(\tilde{X}\)-orbit space is a plane \(\mathcal{P}_{X}\simeq\mathbb{R}^{2}\). This plane \(\mathcal{P}_{X}\) is endowed with two transverse foliations \(F^{s},F^{u}\) which are the projection of the stable and unstable foliations of \(X\) lifted on \(\mathbb{R}^{3}\). Thus \((\mathcal{P}_{X},F^{s},F^{u})\) is the _bifoliated plane_ associated to \(X\). Furthermore, the fundamental group \(\pi_{1}(M)\) acts on \(\mathcal{P}_{X}\) and its action preserves both foliations \(F^{s}\) and \(F^{u}\). This action induces a natural action of \(\pi_{1}(M)\) on the circles at infinity \(\mathbb{S}^{1}_{F^{s}},\mathbb{S}^{1}_{F^{u}}\), and \(\mathbb{S}^{1}_{F^{s},F^{u}}\).
A folklore conjecture asserts that two Anosov flows are orbitaly equivalent if and only if they induces the same action on the circle at infinity of \(\{F^{s},F^{u}\}\), see [Ba1] for a result in this direction. This conjecture as been recently announced to be proved in [BFM].
[Ba1, Fe1] show that every leaf of \(F^{s}\) is regular if and only if every leaf of \(F^{u}\) is regular, and then the Anosov flow \(X\) is called \(\mathbb{R}\)-_covered_. Our main result in that setting is
**Theorem 5**.: _Let \(X\) be an Anosov flow on a closed \(3\)-manifold and \((\mathcal{P}_{X},F^{s},F^{u})\) its bifoliated plane. Let \(\mathbb{D}^{2}_{F^{s},F^{u}}\), \(\mathbb{D}^{2}_{F^{s}}\), and \(\mathbb{D}^{2}_{F^{u}}\) be the compactifications associated to, respectively, the pair of foliations \(F^{s},F^{u}\), the foliation \(F^{s}\) and the foliation \(F^{u}\). Then_
1. \(\mathbb{D}^{2}_{F^{s},F^{u}}=\mathbb{D}^{2}_{F^{s}}=\mathbb{D}^{2}_{F^{u}}\) _unless_ \(X\) _is orbitally equivalent to the suspension of an Anosov diffeomorphism of the torus_ \(\mathbb{T}^{2}\)_._
2. _the action of_ \(\pi_{1}(M)\) _on the circles at infinity_ \(\mathbb{S}^{1}_{F^{s},F^{u}}\)_,(or equivalently_ \(\mathbb{S}^{1}_{F^{s}}\) _or_ \(\mathbb{S}^{1}_{F^{u}}\)_) is minimal if and only if_ \(X\) _is not_ \(\mathbb{R}\)_-covered._
When \(X\) is assumed to be transitive, this result is a simple consequence of Theorem 4 above and a result by Fenley [Fe3] ensuring that, assuming \(X\) is non-\(\mathbb{R}\)-covered, then \(F^{s}\) and \(F^{u}\) admit non-separated leaves from above and non-separated leaves form below. The proof of Theorem 5, when \(X\) is not assumed to be transitive, is certainly the most technically difficult argument of the paper, and is based on a description of hyperbolic basic sets for flows on \(3\)-manifolds.
Theorem 5 implies that the minimality of the action on the circle at infinity is not related with the transitivity of the flow. However, according to [BFM] the action on the circle at infinity charaterizes the dynamics of the flow. This leads to the following question:
**Question 1.2**.: _What property of the action of \(\pi_{1}(M)\) on the circle at infinity \(\mathbb{S}^{1}_{F^{s},F^{u}}\) implies the transitivity of \(X\)?_
_Can we find the transverse tori by looking at the action of \(\pi_{1}(M)\) on the circle at infinity?_
#### 1.7.1. Aknowledgments
I would thank Sebastien Alvarez who invited me to present the results in this paper as a mini-course in Montevideo. This mini-course has been a motivation for ending this paper. I would also thanks Kathrin Mann for indicating me that the argument of Theorem 1 is essentially contained in [Ma], and Michele Triestino for the statement and reference of Cantor-Bendixson theorem.
## 2. Circles at infinity for families of rays on the plane
### Cyclic order
Let \(X\) be a set. A _total cyclic order_ on \(X\) is a map \(\theta\colon X^{3}\to\{-1,0,+1\}\) with the following properties
* \(\theta(x,y,z)=0\) if and only if \(x=y\) or \(y=z\) or \(x=z\).
* \(\theta(x,y,z)=-\theta(y,x,z)=-\theta(x,z,y)\) for every \((x,y,z)\)
* for every \(x\in X\) the relation on \(X\setminus\{x\}\) defined by \[y<z\Leftrightarrow\theta(x,y,z)=+1\] is a total order.
The emblematic example is:
**Example 1**.: _The oriented circle \(\mathbb{S}^{1}=\mathbb{R}/\mathbb{Z}\) is totally cyclically ordered by the relation \(\theta\) defined as follows: \(\theta(x,y,z)=+1\) if and only if the \(y\) belongs to the interior of the positively oriented simple arc staring at \(x\) and ending at \(z\)._
If \(\theta\) is a total cyclic order then for \(x\neq z\) we define the interval \((x,y)\) by
\[(x,z)=\{y,\theta(x,y,z)=1\}.\]
We define the semi closed and closed intervals \([x,z)\),\((x,z]\), and \([x,z]\) by adding the corresponding extremities \(x\) or \(z\) to the interval \((x,z)\).
We say that \(y\) is _between_\(x\) and \(z\) is \(y\in(x,z)\).
The following notion of _separating set_ will be fundamental all along this work:
**Definition 2.1**.: _Let \(X\) be a set endowed with a total cyclic order. A subset \(\mathcal{E}\subset X\) is said separating if given any distinct \(x,z\in X\) there is \(y\in\mathcal{E}\) (distinct from \(x\) and \(z\)), between \(x\) and \(z\)._
We will use the following easy exercize of topology of \(\mathbb{R}\) and \(\mathbb{S}^{1}\):
**Proposition 2.1**.: _Let \(X\) be a set endowed with a total cyclic order. Assume that there is a countable subset \(\mathcal{E}\subset X\) which is separating._
_Then there is a bijection \(\varphi\) of \(X\) on a dense subset \(Y\subset\mathbb{S}^{1}\) which is strictly increasing for the cyclic orders of \(X\) and of \(\mathbb{S}^{1}\). Furthermore this bijection is unique up to a composition by a homeomorphism of \(\mathbb{S}^{1}\)._
The argument is classical but short and beautiful and I have no references for this precise statement. So let me present it:
Proof.: One builds a bijection \(\phi\) of \(\mathcal{E}\) to a notable dense subset \(\mathcal{D}\subset\mathbb{S}^{1}\) by induction, as follows: one choose an indexation of \(\mathcal{E}=\{e_{i},i\in\mathbb{N}\}\) and of \(\mathcal{D}=\{d_{i},i\in\mathbb{N}\}\). One defines
* \(\phi(e_{0})=d_{0}\), \(\phi(e_{1})=d_{1}\)\(i(0)=j(0)=0\)\(i(1)=j(1)=1\)
* consider \(e_{2}\), it belongs either in \((e_{0},e_{1})\) or in \((e_{1},e_{0})\) and we chose \(\phi(e_{2})\) being \(d_{j(2)}\) where \(j(2)\) is the infimum of the \(d_{i}\) in the corresponding interval \((d_{0},d_{1})\) or \((d_{1},d_{0})\). One denotes i(2)=2.
* consider now \(j(3)=\inf\mathbb{N}\setminus\{0,1,i(2)\}\) and define \(\phi^{-1}(d_{j(3)})=e_{i(3)}\) where \(i(3)\) is the infimum of the \(i\notin\{0,1,2\}\) so that the position of \(e_{i(3)}\) with respect to \(e_{0},e_{1},e_{2}\) is the same as the position of \(d_{j(3)}\) with repsect to \(d_{0},d_{1},d_{j(2)}\).
* \(\phi^{-1}\)
* choose \(i(2n)=\inf\mathbb{N}\setminus\{i(k),k<2n\}\) and \(\phi(e_{i(2n)})\) is \(d_{j(2n)}\) where \(j(2n)\) is the infimum of the \(j\) so that \(d_{j}\) as the same position with respect to the \(d_{j(k)},k<2n\) as \(e_{i(2n)}\) with respect to the \(e_{i(k)}\).
* choose \(j(2n+1)=\inf\mathbb{N}\setminus\{j(k),k<2n+1\}\) and \(\phi^{-1}(d_{j(2n+1)})\) is \(e_{i(2n+1)}\) where \(i(2n+1)\) is the infimum of the \(i\) so that \(e_{i}\) as the same position with respect to the \(e_{i(k)},k<2n+1\) as \(d_{j(2n+1)}\) with respect to the \(d_{j(k)}\).
At each step of this construction one uses the separation property of \(\mathcal{E}\) and \(\mathcal{D}\) for ensuring the existence of the point announced in the same position.
Once we built \(\phi\) on \(\mathcal{E}\), it extends in a unique increasing way on \(X\). Then the separation property of \(\mathcal{E}\) implies that this extension is injective.
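The back-and-forth construction above is completely effective, and one can run it on finite data. The following Python sketch (an illustration only, not part of the arguments; it assumes the two finite samples are dense enough that a point in the required position always exists, which the separating property guarantees for long enough prefixes) builds the beginning of the increasing bijection \(\phi\).

```python
import itertools

def theta(x, y, z):
    # cyclic order on S^1 = R/Z, as in the sketch of Section 2.1
    x, y, z = x % 1.0, y % 1.0, z % 1.0
    if x == y or y == z or x == z:
        return 0
    return 1 if (y - x) % 1.0 < (z - x) % 1.0 else -1

def position_matches(p, src, q, dst):
    # q sits among the placed points dst exactly as p sits among the
    # corresponding placed points src: theta agrees on all pairs
    return all(theta(a1, p, a2) == theta(b1, q, b2)
               for (a1, b1), (a2, b2) in
               itertools.combinations(list(zip(src, dst)), 2))

def back_and_forth(E, D, n_steps):
    # E, D: finite prefixes of countable dense (separating) subsets of [0,1)
    src, dst = [E[0], E[1]], [D[0], D[1]]
    used_E, used_D = {0, 1}, {0, 1}
    for step in range(2, n_steps):
        if step % 2 == 0:  # "forth": place the next unplaced point of E
            i = min(k for k in range(len(E)) if k not in used_E)
            j = min(k for k in range(len(D)) if k not in used_D
                    and position_matches(E[i], src, D[k], dst))
        else:              # "back": place the next unplaced point of D
            j = min(k for k in range(len(D)) if k not in used_D)
            i = min(k for k in range(len(E)) if k not in used_E
                    and position_matches(D[j], dst, E[k], src))
        used_E.add(i); used_D.add(j)
        src.append(E[i]); dst.append(D[j])
    return dict(zip(src, dst))

# two equidistributed samples of the circle
E = [(k * 0.6180339887) % 1.0 for k in range(300)]
D = [(k * 0.7071067811) % 1.0 for k in range(300)]
phi = back_and_forth(E, D, 12)  # a strictly increasing partial bijection
```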
**Remark 3**.: _Assume that \((Z,\theta)\) is a set endowed with a total cyclic order, and that \(\mathcal{E}\subset X\subset Z\) are subsets so that \(\mathcal{E}\) is separating for \((X,\theta)\)._

_Let \(\varphi\colon X\to Y\) be the map given by Proposition 2.1. Then \(\varphi\) extends in a unique way to a (not strictly) increasing map \(\Phi\colon Z\to\mathbb{S}^{1}\): \(\Phi(y)\) is between \(\Phi(x)\) and \(\Phi(z)\) only if \(y\) is between \(x\) and \(z\)._

_The non-injectivity of the map \(\Phi\) is determined as follows. Consider distinct points \(x\neq y\) of \(Z\); then \(\Phi(x)=\Phi(y)\) if and only if either \((x,y)\) or \((y,x)\) contains no more than \(1\) element of \(X\)._
### Cyclic order on families of rays
A _line_ is a proper embedding of \(\mathbb{R}\) in \(\mathbb{R}^{2}\). A line \(L\) cuts \(\mathbb{R}^{2}\) into two half-planes. If \(L\) is oriented, then there is an orientation preserving homeomorphism \(h\) of \(\mathbb{R}^{2}\) mapping \(L\) onto the oriented \(x\)-axis of \(\mathbb{R}^{2}\) (endowed with the coordinates \((x,y)\)). This allows us to define the upper and lower half-planes \(\Delta_{L}^{+}\) and \(\Delta_{L}^{-}\) as the pre-images by \(h\) of \(\{y\geq 0\}\) and \(\{y\leq 0\}\) respectively.

A _ray_ is a proper embedding of \([0,+\infty)\) in \(\mathbb{R}^{2}\). Two rays define the same _germ of ray_ if their images coincide outside a compact ball. Two germs of rays are said to be disjoint if they admit disjoint realisations.
**Example 2**.:
1. _If_ \(\mathcal{F}\) _is a foliation of_ \(\mathbb{R}^{2}\)_, every leaf defines two germs of rays called the_ ends of the leaf_. By fixing an orientation of_ \(\mathcal{F}\) _we will speak of the_ right and left ends _of a leaf._
2. _If_ \(\{\mathcal{F}_{i}\}_{i\in\mathcal{I}}\) _is a family of pairwise transverse foliations of_ \(\mathbb{R}^{2}\)_, then the set of all ends of leaves of the foliations_ \(\mathcal{F}_{i}\) _is a family of pairwise disjoint germs of rays._
3. _Consider the set_ \(\mathcal{S}\) _of all germs of rays_ \(\gamma\) _which are contained in an orbit of an affine (polynomial of degree_ \(=1\)_) vector field of saddle type. Then_ \(\mathcal{S}\) _is a family of pairwise disjoint germs of rays._
The next lemmas are simple exercises in plane topology:
**Lemma 2.1**.: _Let \(\gamma_{0},\gamma_{1},\gamma_{2}\) be three disjoint rays._
_Assume that \(C_{1}\) and \(C_{2}\) are simple closed curves on the plane \(\mathbb{R}^{2}\) so that \(\gamma_{i}\cap C_{j}\) is a unique point \(p_{i,j}\), \(i\in\{0,1,2\},j\in\{1,2\}\). We endow \(C_{j}\) with the boundary-orientation corresponding to the compact disc bounded by \(C_{j}\). Then the cyclic order of the \(3\) points \(p_{0,1},p_{1,1},p_{2,1}\) for the orientation of \(C_{1}\) is the same as the cyclic order of the \(3\) points \(p_{0,2},p_{1,2},p_{2,2}\) for the orientation of \(C_{2}\)._
_We call it the cyclic order on the rays \(\gamma_{0},\gamma_{1},\gamma_{2}\)._
**Lemma 2.2**.: _The cyclic order on three disjoint germs of rays \(R_{0},R_{1},R_{2}\) does not depend on the choice of disjoint rays \(\gamma_{0},\gamma_{1},\gamma_{2}\) realizing the germs \(R_{0},R_{1},R_{2}\)._
**Corollary 2.1**.: _Let \(\gamma_{0},\gamma_{1},\gamma_{2}\) be three disjoint rays and \(C\) be any simple closed curve, oriented as the boundary of the compact disc bounded by \(C\), and having a non-empty intersection with every \(\gamma_{i}\)._
_Let \(p_{i}\) be the last point of \(\gamma_{i}\) in \(C\). Then the cyclic order of the \(\gamma_{i}\) coincides with the cyclic order of the \(p_{i}\) for the orientation of \(C\)._
**Corollary 2.2**.: _Let \(R_{0},R_{1},R_{2}\) be three disjoint germs of rays. Let \(L\) be an oriented line whose right end is \(R_{0}\) and whose left end is \(R_{2}\). Then \(R_{1}\) is between \(R_{0}\) and \(R_{2}\) for the cyclic order defined above (we denote \(R_{1}\in(R_{0},R_{2})\)) if and only if it admits a realization contained in the upper half-plane \(\Delta_{L}^{+}\) bounded by \(L\)._
The next proposition summarizes what we have obtained from this sequence of easy lemmas.
**Proposition 2.2**.: _Consider \(\mathcal{R}\) a family of pairwise disjoint germs of rays. Then \(\mathcal{R}\) is totally cyclically ordered by the following relation:_
_given three distinct germs of rays \(R_{0},R_{1},R_{2}\in\mathcal{R}\), the germ \(R_{1}\) is between \(R_{0}\) and \(R_{2}\) if it admits a realisation contained in the upper half-plane \(\Delta_{L}^{+}\), where \(L\) is an oriented line whose right end is \(R_{0}\) and whose left end is \(R_{2}\)._
### Compactification of a family of rays by a circle at infinity
In this paper _a compactification of the plane \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}^{2}\)_ is by definition a homeomorphism between \(\mathbb{R}^{2}\) and the open disc \(\dot{\mathbb{D}}^{2}\).
The aim of this section is the proof of Theorem 1, which builds a canonical compactification of \(\mathbb{R}^{2}\) associated to a family \(\mathcal{R}\) of rays, assuming it admits a countable subset \(\mathcal{E}\subset\mathcal{R}\) which is separating (for the cyclic order). One of the main ingredients for the proof of Theorem 1 is the following lemma, which is an easy exercise of plane topology.
**Lemma 2.3**.: _Let \(\gamma_{0},\gamma_{1},\dots,\gamma_{n}\) be \(n\) disjoint rays, \(n>0\), and \(K\subset\mathbb{R}^{2}\) be a compact set. Then there is a simple closed curve \(C\) disjoint from \(K\) bounding a compact disc \(D\) containing \(K\) in its interior and so that \(C\cap\gamma_{i}\) consists in a unique point \(p_{i}\), \(i\in\{0,\dots,n\}\)._
Proof.: Just notice that there is a homeomorphism of \(\mathbb{R}^{2}\) mapping the \(\gamma_{i}\), \(i\in\{0,\dots,n\}\), onto radial rays (half-straight lines). The proof is then trivial.
_Sketch of proof of Theorem 1._ We consider the set of rays endowed with the cyclic order and we embed it in the circle \(\mathbb{S}^{1}\) by Proposition 2.1. We denote by \(E\subset\mathbb{S}^{1}\) the dense countable subset corresponding to \(\mathcal{E}\). We define a topology on \(\mathbb{R}^{2}\coprod\mathbb{S}^{1}\) by choosing, as a basis of neighborhoods of the points in \(\mathbb{S}^{1}\), the half-planes bounded by lines \(L\) whose two ends are rays \(R_{-},R_{+}\) in \(\mathcal{E}\) (each half-plane corresponds to one of the two segments of \(\mathbb{S}^{1}\setminus\{R_{-},R_{+}\}\)).

This topology does not depend on the choice of the countable separating subset \(\mathcal{E}\): if \(\tilde{\mathcal{E}}\) is another countable separating subset, each neighborhood of a point of \(\mathbb{S}^{1}\) obtained by using one family contains a neighborhood obtained by using the other family.
Now one builds a map from \(\mathbb{R}^{2}\) onto the interior of \(\mathbb{D}^{2}\) as follows:

1. one considers the circles \(C_{n}\), \(n\geq 1\), of radius \(\rho_{n}=1-\frac{1}{n+1}\) (that is, \(C_{n}=\rho_{n}\cdot\mathbb{S}^{1}\)), each endowed with the finite set of points \(\rho_{n}\cdot x_{1},\dots,\rho_{n}\cdot x_{n}\), where \(E=\{x_{n},n\geq 1\}\) is a choice of indexation of the countable set \(E\).
2. one chooses by induction realisations \(R_{n}\) of the rays in \(\mathcal{E}\) and a family of simple closed loops \(\gamma_{n}\) with the following properties: * \(\gamma_{n}\) is the boundary of a compact disc \(D_{n}\) containing \(D_{n-1}\) in its interior and containing the disc of radius \(n\) of \(\mathbb{R}^{2}\). In particular, \(\bigcup_{n}D_{n}=\mathbb{R}^{2}\). * \(\gamma_{n}\) cuts each ray \(R_{m}\), \(m<n\), in a unique point. * one chooses a representative of \(R_{n}\), disjoint from the \(R_{m}\), \(m<n\), with origin on \(\gamma_{n}\) and with no other intersection point with \(\gamma_{n}\). Then, by definition of the cyclic order on the rays, the points \(\gamma_{n}\cap R_{i}\), \(i\leq n\), are cyclically ordered on \(\gamma_{n}\) as the points \(\rho_{n}\cdot x_{1},\dots,\rho_{n}\cdot x_{n}\) on \(C_{n}\)
3. this allows us to choose a homeomorphism of \(\mathbb{R}^{2}\) onto the interior of \(\mathbb{D}^{2}\) sending the loops \(\gamma_{n}\) onto the circles \(C_{n}\) and the rays \(R_{n}\) onto the segments \([\rho_{n},1)\cdot x_{n}\).

This homeomorphism extends to the circle at infinity, mapping \(\mathbb{S}^{1}\) to \(\partial\mathbb{D}^{2}\).
### Union of countably many families of rays: the circle
**Proposition 2.3**.: _Let \(\{X_{i},i\in I\}\), \(I\subset\mathbb{N}\) be a finite or countable family of sets so that \(\bigcup_{i}X_{i}\) is endowed with a total cyclic order. Assume that, for every \(i\), there exist \(E_{i}\subset X_{i}\) countable separating subset._
_On the union \(X=\bigcup_{i}X_{i}\) we consider the relation_
\[x\sim y\Leftrightarrow([x,y]\cap E_{i}\text{ is finite for every }i\text{, or }[y,x]\cap E_{i}\text{ is finite for every }i).\]
_In other words, \(x\sim y\) if one of the two segments (for the cyclic order) bounded by \(x\) and \(y\) meets each family \(E_{i}\) in at most finitely many points._
_Then \(\sim\) is an equivalence relation and every class contains at most \(1\) point in each \(X_{i}\)._
_Let us denote_
\[\pi\colon X\to\mathcal{X}=\bigcup_{i}X_{i}/\sim.\]
_We denote by \(\mathcal{E}\) the projection \(\pi(E)\) of \(E=\bigcup E_{i}\) on \(\mathcal{X}\)._
_Then \(\sim\) provides a total cyclic order on \(\mathcal{X}\), and \(\mathcal{E}\) is a countable separating subset._
Proof.: The fact that \(\sim\) is an equivalence relation is quite easy, as the union of two intervals each meeting \(E_{i}\) in a finite set meets \(E_{i}\) in a finite set.
Note that, assuming \(x\sim y\), the interval \([x,y]\) or \([y,x]\) (meeting every \(E_{i}\) in finitely many points) is contained in the class of \(x\) and \(y\). Thus the class \([x]_{\sim}\) is a (proper) interval for the cyclic order.
Consider \(x,y\in X\) and assume that \([x,y]\cap E_{j}\) is finite for every \(j\). Assume that there are \(i\) and distinct \(z,t\in[x,y]\cap X_{i}\). Then the separating property of \(E_{i}\) for \(X_{i}\) ensures that \([x,y]\cap E_{i}\) is infinite (between \(z\) and \(t\) there is a point of \(E_{i}\), then between \(z\) and that point another one, and so on), contradicting the choice of the interval \([x,y]\). We deduce that every class meets every \(X_{i}\) in at most \(1\) point.
Notice that this implies that the projection of \(E_{i}\) on \(\mathcal{X}\) is injective.
Consider \(x,y,z\in X\) whose classes are distinct, and assume \(z\in(x,y)\). Consider now \(a\sim x\), \(b\sim y\) and \(c\sim z\). Let \(I_{a},I_{b},I_{c}\) be the intervals \([x,a]\) or \([a,x]\), \([y,b]\) or \([b,y]\), \([z,c]\) or \([c,z]\) with finite intersections with the \(E_{i}\), respectively. Then these intervals are disjoint, as they are contained in disjoint equivalence classes. Thus the cyclic order of points taken in \(I_{a},I_{b},I_{c}\) does not depend on the choice of those points, and thus \(c\in(a,b)\).
This shows that the quotient \(\mathcal{X}\) is endowed with a total cyclic order.
Consider now two distinct classes \([x]_{\sim},[y]_{\sim}\in\mathcal{X}\) of points \(x,y\in X\). Then there is \(i\) so that \([x,y]\cap X_{i}\) is infinite. Now the separating property of \(E_{i}\) implies that \([x,y]\cap E_{i}\) is infinite.

As \(\pi\) is injective on \(E_{i}\), one gets that \(([x]_{\sim},[y]_{\sim})\cap\pi(E_{i})\) is infinite and thus \(([x]_{\sim},[y]_{\sim})\cap\mathcal{E}\) is infinite. This proves that \(\mathcal{E}\) is separating for \(\mathcal{X}\), ending the proof.
### Union of countably many families of rays: the compactification
**Theorem 6**.: _Let \(\mathcal{R}=\coprod_{i\in I}\mathcal{R}_{i}\), \(I\subset\mathbb{N}\), be a family of rays in \(\mathbb{R}^{2}\) whose germs are pairwise disjoint. Assume that for every \(i\in I\) there is a countable subset \(E_{i}\subset\mathcal{R}_{i}\) which is separating for \(\mathcal{R}_{i}\)._
_Then, there is a compactification of \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}^{2}\) so that:_
* _any ray of_ \(\mathcal{R}\) _tends to a point of the circle at infinity_ \(\partial\mathbb{D}^{2}=\mathbb{S}^{1}\)_._
* _for every_ \(i\)_, any two distinct rays of_ \(\mathcal{R}_{i}\) _tend to distinct points of_ \(\mathbb{S}^{1}\)_._
* _for any non-empty open interval_ \(J\subset\mathbb{S}^{1}\) _there is_ \(i\in I\) _so that at least_ \(2\) _rays in_ \(\mathcal{R}_{i}\) _have their limit points in_ \(J\)_._
_Furthermore, this compactification is unique up to a homeomorphism of \(\mathbb{D}^{2}\) and does not depend on the separating countable sets \(E_{i}\)._
Let us discuss item 3, whose formulation may be surprising.
**Remark 4**.:
1. _The third item implies that the points of_ \(\mathbb{S}^{1}\) _which are the limit point of a ray in_ \(\bigcup E_{i}\) _are dense in_ \(\mathbb{S}^{1}\)_._
2. _if_ \(I\) _is finite, item 3 is equivalent to the density of points in_ \(\mathbb{S}^{1}\) _which are limit of rays._
3. _item 3 is necessary when there is an uncountable set of equivalence classes_ \([c]\)_, for the equivalence relation_ \(\sim\) _(defined in Section_ 2.4_), which are infinite (necessarily countable) and contain a set_ \(\mathcal{C}([c])\) _which is separating for the cyclic order. In that case the uniqueness property announced in Theorem_ 6 _would be wrong if we replaced item 3 by the density of the points in_ \(\mathbb{S}^{1}\) _which are limits of rays. Example_ 3 _provides a simple illustration of this trouble._
Proof of Theorem 6.: Let us denote \(X=\mathcal{R}\), \(X_{i}=\mathcal{R}_{i}\), and \(E=\bigcup E_{i}\). Let \(\sim\) be the equivalence relation defined in Proposition 2.3 on \(X=\mathcal{R}\) and let \(\pi\) denote the projection \(\pi\colon X\to\mathcal{X}=X/\sim\). We choose a subset \(Y\subset X\) with the following properties
* each equivalence class \([\gamma]\) for \(\sim\) contains exactly \(1\) point \(y_{\gamma}\) in \(Y\)
* if a class for \(\sim\) contains a point in \(E\), then \(y_{\gamma}\in E\).
The existence of such a subset \(Y\) is certainly implied by the axiom of choice, but it does not actually require this axiom. For instance we can set:

* if \([\gamma]_{\sim}\in\pi(\bigcup E_{i})\) then \(y_{\gamma}\) is the unique point of \(E_{i}\) in \([\gamma]_{\sim}\), where \(i\) is the smallest index for which \([\gamma]_{\sim}\in\pi(E_{i})\).
* if \([\gamma]_{\sim}\notin\pi(E)\) then \(y_{\gamma}\) is the unique point in \(X_{i}\cap[\gamma]_{\sim}\), where \(i\) is the smallest index for which \([\gamma]_{\sim}\cap X_{i}\neq\emptyset\).
We denote \(F=Y\cap E\). Notice that \(\pi(F)=\pi(E)\) by construction.
Now the projection \(\pi\colon Y\to\mathcal{X}\) is a bijection which is strictly increasing for the cyclic order. As \(\mathcal{E}=\pi(E)\) is separating for \(\mathcal{X}\) (see Proposition 2.3) one gets that \(F\) is separating for \(Y\).
We can now apply Theorem 1 to the set of rays \(Y\) and the countable separating subset \(F\). One gets a compactification of \(\mathbb{R}^{2}\) by a disc \(\mathbb{D}^{2}\) so that every ray in \(Y\) tends to a point of the circle at infinity, two distinct rays in \(Y\) tend to two distinct points, and the set of points at infinity which are limits of rays in \(F\) is dense in the circle at infinity.
Let us check now that every ray \(\gamma\in\mathcal{R}\) tends to a point at infinity. By construction of \(Y\) there is \(\sigma\in Y\) so that \(\sigma\sim\gamma\). We will prove that \(\gamma\) tends to the limit point \(s\in\mathbb{S}^{1}\) of \(\sigma\). For that we recall that a basis of neighborhoods of \(s\) is given by the half-planes \(\Delta_{n}\) bounded by lines \(L_{n}\) whose two ends are \(\sigma_{n}^{-},\sigma_{n}^{+}\in Y\) with \(\sigma\in(\sigma_{n}^{-},\sigma_{n}^{+})\). Note that both intervals \((\sigma,\sigma_{n}^{+})\) and \((\sigma_{n}^{-},\sigma)\) are infinite, while one of the intervals \((\sigma,\gamma)\) or \((\gamma,\sigma)\) is finite. One deduces that \(\gamma\in(\sigma_{n}^{-},\sigma_{n}^{+})\), and thus the end of \(\gamma\) is contained in \(\Delta_{n}\). Thus \(\gamma\) tends to \(s\).
Now consider two distinct rays \(\gamma,\gamma^{\prime}\in\mathcal{R}_{i}\). As every class for \(\sim\) contains at most \(1\) point of \(\mathcal{R}_{i}\), the classes of \(\gamma\) and \(\gamma^{\prime}\) are distinct. Thus there are \(\sigma\neq\sigma^{\prime}\in Y\) which are equivalent to \(\gamma\) and \(\gamma^{\prime}\), respectively. The limit points of \(\gamma\) and \(\gamma^{\prime}\) are those of \(\sigma\) and \(\sigma^{\prime}\) respectively, which are distinct. We have just checked that distinct rays in \(\mathcal{R}_{i}\) tend to distinct points at infinity.
**Claim 1**.: _Let \(\varphi\colon\mathbb{R}^{2}\to\mathbb{D}^{2}\) be a compactification satisfying the announced properties. Then \(2\) rays in \(\mathcal{R}\) tend to the same point of the circle at infinity of the compactification if and only if they are equivalent for \(\sim\)._
Proof.: If two rays \(a,b\) are not equivalent, then each of \((a,b)\) and \((b,a)\) contains infinitely many rays in one and the same set \(\mathcal{R}_{i}\), by definition of \(\sim\). As the limits of distinct rays in the same \(\mathcal{R}_{i}\) are different, one deduces that the limits of \(a\) and \(b\) are distinct.

Conversely, if \(a\) and \(b\) have different limit points \(r\) and \(s\) in \(\mathbb{S}^{1}\) for the compactification, then item 3 implies that there are \(i\in I\) (resp. \(j\in I\)) so that \((a,b)\) (resp. \((b,a)\)) contains the ends of at least \(2\) rays in \(\mathcal{R}_{i}\) (resp. \(\mathcal{R}_{j}\)). As \(\mathcal{R}_{i}\) and \(\mathcal{R}_{j}\) admit separating subsets, this implies that both \((a,b)\cap\mathcal{R}_{i}\) and \((b,a)\cap\mathcal{R}_{j}\) are infinite, so that \(a\nsim b\).
Consider now a non-empty open interval \(J\) of the circle at infinity. We announced that there is \(i\) for which \(J\) contains at least \(2\) limits of rays in \(\mathcal{R}_{i}\). Recall that, according to Theorem 1, the points in \(J\) which are limits of rays in \(Y\) are dense in \(J\). Thus there are at least \(2\) points in \(J\) which are limits of rays \(R_{1},R_{2}\) in \(Y\). This implies that, up to exchanging \(R_{1}\) and \(R_{2}\), any ray in \(\mathcal{R}\) between \(R_{1}\) and \(R_{2}\) tends to a point in \(J\). Now, the rays \(R_{1},R_{2}\) are not equivalent for \(\sim\), according to Claim 1. By definition of \(\sim\), there is \(i\) so that there are infinitely many rays of \(\mathcal{R}_{i}\) between \(R_{1}\) and \(R_{2}\). This proves item 3 of Theorem 6.
Assume now that one has another compactification \(\psi\colon\mathbb{R}^{2}\to\mathbb{D}^{2}\) also satisfying the announced properties. One deduces from Claim 1 that the images by \(\psi\) of two distinct rays in the set \(Y\) (that we used for building the first compactification \(\varphi\)) have two distinct limit points, and that the limit points of the images by \(\psi\) of rays in \(Y\) are dense in \(\mathbb{S}^{1}\). Thus this new compactification satisfies the same property on the set of rays \(Y\) as the one we built. Now Theorem 1 asserts that it differs from \(\varphi\) by a homeomorphism of \(\mathbb{D}^{2}\), concluding the proof.
**Lemma 2.4**.: _Assume that \(\mathcal{R}\) satisfies the hypotheses of Theorem 6, and let \(\tilde{\mathcal{R}}\) be a set of rays so that the germs of rays in \(\mathcal{R}\cup\tilde{\mathcal{R}}\) are pairwise disjoint._
_Let \(\mathbb{R}^{2}\hookrightarrow\mathbb{D}^{2}\) be a compactification given by Theorem 6 applied to \(\mathcal{R}\). Then any ray \(\tilde{\gamma}\) in \(\tilde{\mathcal{R}}\) tends to a point at infinity._
Proof.: The candidate for the limit is the intersection of all the closed intervals in \(\mathbb{S}^{1}\) bounded by limits of rays \(a,b\in\mathcal{R}\) with \(\tilde{\gamma}\in(a,b)\). The basis of neighborhoods of this point exhibited above implies that \(\tilde{\gamma}\) indeed tends to that point at infinity.
### An example with uncountably many compactifications
The example below shows that, in the case of an infinite countable family \(\mathcal{R}=\{\mathcal{R}_{i}\},i\in\mathbb{N}\), the compactification announced by Theorem 6 would not be unique if we replaced item 3 of the conclusion by the density in \(\mathbb{S}^{1}_{\mathcal{R}}\) of the limits of the rays in \(\mathcal{R}\).
**Example 3**.: _Let \(B\subset\mathbb{R}^{2}\) be the open strip \(\{(x,y)\in\mathbb{R}^{2},|x-y|<1\}.\) Let \(I\subset\mathbb{RP}^{1}\) be the set of directions of lines through the origin with rational slope \(\neq 1\). For any \(i\in I\), let \(\mathcal{F}_{i}\) be the restriction to \(B\) of the trivial foliation by parallel straight lines directed by \(i\in\mathbb{RP}^{1}\). For any \(i\), let \(\mathcal{R}_{i}\) be the set of ends of leaves of \(\mathcal{F}_{i}\). Each \(\mathcal{R}_{i}\) admits a countable separating subset. Thus \(\mathcal{R}=\bigcup_{i\in I}\mathcal{R}_{i}\) satisfies the hypotheses of Theorem 6._
_Then there are uncountably many distinct compactifications of \(\mathbb{R}^{2}\) for which_
* _any ray of_ \(\mathcal{R}\) _tends to a point of the circle at infinity_ \(\partial\mathbb{D}^{2}=\mathbb{S}^{1}\)_._
* _for every_ \(i\)_, any two distinct rays of_ \(\mathcal{R}_{i}\) _tend to distinct points of_ \(\mathbb{S}^{1}\)__
* _the points of_ \(\mathbb{S}^{1}\) _which are the limit point of a ray in_ \(\bigcup E_{i}\) _are dense in_ \(\mathbb{S}^{1}\)_._
Proof.: Let \(\mathbb{D}^{2}_{\mathcal{R}}\) be the compactification of \(B\simeq\mathbb{R}^{2}\) by adding the circle at infinity \(\mathbb{S}^{1}_{\mathcal{R}}\).
Every class \(C\) for \(\sim\) contains exactly \(1\) ray in \(\mathcal{R}_{i}\) for any \(i\in I\). The rays in \(C\) are ordered, for the cyclic order, as the points of \(I\) in \(\mathbb{RP}^{1}\). So \(C\) is a separating set for itself.

By construction of \(\mathbb{S}^{1}_{\mathcal{R}}\), the class \(C\) corresponds to a point \(c\in\mathbb{S}^{1}_{\mathcal{R}}\). We can build another circle \(\mathbb{S}^{1}_{\mathcal{R},C}\) by opening the point \(c\) into a segment \(I_{C}\). Then we can build a compactification \(\mathbb{D}^{2}_{\mathcal{R},C}\) so that the rays in \(C\) tend to distinct points dense in \(I_{C}\). In particular \(I_{C}\) contains exactly \(1\) limit point of a ray in \(\mathcal{R}_{i}\), for any \(i\).
We can repeat this argument, opening not just one point of \(\mathbb{S}^{1}_{\mathcal{R}}\) but a countable subset \(\mathcal{C}\) of classes for \(\sim\): we build a compactification \(\mathbb{D}^{2}_{\mathcal{R},\mathcal{C}}\) where the circle at infinity contains disjoint intervals \(I_{C},C\in\mathcal{C}\), so that each \(I_{C}\) contains exactly \(1\) limit point of a ray in \(\mathcal{R}_{i}\) for any \(i\), and these points are dense in \(I_{C}\).
As there are uncountably many such countable subsets \(\mathcal{C}\), this provides an uncountable family of pairwise distinct compactifications of \(B\) satisfying the first \(2\) items and the density of the limit points of rays.
This shows that the uniqueness part of Theorem 6 becomes wrong if we replace item 3 by the density in \(\mathbb{S}^{1}\) of the set of limits of rays.
### Uncountable families of families of rays
Theorem 6 is wrong for the union of an uncountable family of sets of rays, as Example 4 below shows.
**Example 4**.: _We consider \(\mathbb{R}^{2}\) endowed with all constant foliations \(\mathcal{F}_{\theta},\theta\in\mathbb{RP}^{1}\), where \(\mathcal{F}_{\theta}\) is the foliation whose leaves are the straight lines parallels to \(\theta\)._
_Then, given any compactification of \(\mathbb{R}^{2}\) by \(\mathbb{D}^{2}\) for which every end of leaf tends to a point at infinity, for all but a countable set of \(\theta\) all the right ends of the leaves of \(\mathcal{F}_{\theta}\) tend to the same point at infinity._
Proof.: The sets \(F^{+}_{\theta}\) of right ends of leaves and \(F^{-}_{\theta}\) of left ends of leaves of the foliation \(\mathcal{F}_{\theta}\) correspond to disjoint intervals at infinity, depending on the uncountable parameter \(\theta\). On the circle, at most countably many disjoint intervals can be non-trivial, ending the proof.
### Projection on the compactifications associated to each family
Let us start with a very easy example, showing that the circles at infinity associated to the subsets of a countable family of transverse foliations may lead to uncountably many distinct compactifications, all quotients of the compactification associated to the whole family.
**Example 5**.: _Consider now an infinite countable subset \(I\subset\mathbb{RP}^{1}\) and consider the family \(\mathcal{R}_{I}\) of the ends of leaves of the constant foliations \(\mathcal{F}_{\theta},\theta\in I\), on \(\mathbb{R}^{2}\), as already considered in Example 4._
_Now the set of ends of leaves of each foliation \(\mathcal{F}_{\theta}\) corresponds to \(2\) (because each leaf has \(2\) ends) non-empty open intervals in \(\mathbb{S}^{1}_{I}\), and these intervals do not contain any end of leaf of any other foliation._
_Thus if \(J,K\subset I\) are distinct subsets, the circles at infinity \(\mathbb{S}^{1}_{J}\) and \(\mathbb{S}^{1}_{K}\) are obtained by collapsing distinct intervals of \(\mathbb{S}^{1}_{I}\) and they are different._
_As the set \(\mathcal{P}(I)\) of all subsets of \(I\) is uncountable, this leads to an uncountable family of compactifications \(\{\mathbb{D}^{2}_{J}\}_{J\in\mathcal{P}(I)}\) of \(\mathbb{R}^{2}\) by a circle at infinity._
This situation is quite general.
Let \(\mathcal{R}=\mathcal{R}_{1}\coprod\cdots\coprod\mathcal{R}_{k}\) be a family of rays in \(\mathbb{R}^{2}\) whose germs are pairwise disjoint. Assume that for every \(i\in\{1,\ldots,k\}\) there is a countable subset \(E_{i}\subset\mathcal{R}_{i}\) which is separating.
Thus for every subset \(I\subset\{1,\ldots,k\}\), Theorem 6 provides a compactification \(\mathbb{D}^{2}_{I}\) of \(\mathbb{R}^{2}\), by the circle at infinity corresponding to the rays in \(\mathcal{R}_{i},i\in I\).
**Proposition 2.4**.: _If \(J\subset I\) then the identity map on \(\mathbb{R}^{2}\) extends by continuity to a projection \(\Pi_{I,J}\colon\mathbb{D}^{2}_{I}\to\mathbb{D}^{2}_{J}\). This projection consists in collapsing the intervals of \(\mathbb{S}^{1}_{I}=\partial\mathbb{D}^{2}_{I}\) which do not contain any limit point of a ray in \(\mathcal{R}_{j},j\in J\)._
_Furthermore if \(K\subset J\) then_
\[\Pi_{I,K}=\Pi_{J,K}\circ\Pi_{I,J}.\]
Proof.: We first define a projection \(\pi_{I,J}\colon\mathbb{S}^{1}_{I}\to\mathbb{S}^{1}_{J}\) by using Remark 3: the subset \(R_{J}\subset\mathbb{S}^{1}_{I}\) of limits of rays of \(\mathcal{R}_{J}\) is in a strictly increasing bijection with \(\mathcal{R}_{J}\). Thus the increasing bijection of \(\mathcal{R}_{J}\) onto a dense subset of \(\mathbb{S}^{1}_{J}\) induces an increasing bijection of \(R_{J}\) onto this dense subset of \(\mathbb{S}^{1}_{J}\). Now Remark 3 asserts that this bijection extends to the whole \(\mathbb{S}^{1}_{I}\) as a (not strictly) increasing map \(\pi_{I,J}\colon\mathbb{S}^{1}_{I}\to\mathbb{S}^{1}_{J}\). An increasing map with dense image is always continuous, so \(\pi_{I,J}\) is continuous.

Finally, Remark 3 asserts that the non-injectivity of \(\pi_{I,J}\) consists in collapsing the intervals of \(\mathbb{S}^{1}_{I}\) with at most \(1\) point in \(R_{J}\), which is the same topological operation as collapsing the intervals with no points in \(R_{J}\).
To end the proof, we check that \(\pi_{I,J}\) is the extension by continuity of the identity map of \(\mathbb{R}^{2}\) to the circles at infinity.
Recall that we defined a basis of neighborhoods of each point of the circle at infinity \(\mathbb{S}^{1}_{I}\) (resp. \(\mathbb{S}^{1}_{J}\)) as the half-planes \(\Delta^{+}_{L}\) bounded by lines whose two ends are in \(\mathcal{R}_{I}\) (resp. \(\mathcal{R}_{J}\)). In particular, as \(J\subset I\), the neighborhoods of points at infinity in \(\mathbb{D}^{2}_{J}\) are still neighborhoods of points at infinity for \(\mathbb{D}^{2}_{I}\), proving that the map which is the identity from \(\mathbb{R}^{2}=\mathring{\mathbb{D}}^{2}_{I}\) to \(\mathbb{R}^{2}=\mathring{\mathbb{D}}^{2}_{J}\) and is \(\pi_{I,J}\) from \(\mathbb{S}^{1}_{I}\) to \(\mathbb{S}^{1}_{J}\) is continuous. This ends the proof.
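The collapsing operation appearing in Proposition 2.4 is easy to visualize. Here is a minimal Python sketch (an illustration only, with the circle modelled by \([0,1)\)) of a continuous monotone map collapsing a finite family of arcs to points; it assumes the arcs are pairwise disjoint, sorted, and contained in \((0,1)\), i.e. they do not wrap around \(0\).

```python
def collapse_map(arcs):
    """Return a continuous monotone (degree-one) map f of [0,1) ~ S^1
    collapsing each arc [a, b] of `arcs` to a single point and injective
    elsewhere.  Assumes: arcs pairwise disjoint, sorted, inside (0, 1)."""
    total = sum(b - a for a, b in arcs)
    scale = 1.0 / (1.0 - total)          # rescale what is not collapsed

    def f(x):
        removed = 0.0                    # collapsed length lying below x
        for a, b in arcs:
            if x >= b:
                removed += b - a
            elif x > a:                  # x inside [a, b]: freeze at a
                removed += x - a
        return (x - removed) * scale

    return f

# the arc [1/4, 1/2] is crushed to a single point of the quotient circle
f = collapse_map([(0.25, 0.5)])
assert f(0.25) == f(0.375) == f(0.5)
```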
## 3. Background on foliations: regular leaves, non-separated leaves
### Non-singular foliations
Let \(\mathcal{F}\) be a foliation of \(\mathbb{R}^{2}\). Then
1. as \(\mathbb{R}^{2}\) is simply connected, \(\mathcal{F}\) is orientable and admits a transverse orientation. Let us fix an orientation of \(\mathcal{F}\) and a transverse orientation.
2. every leaf is a line (i.e. a proper embedding of \(\mathbb{R}\) in \(\mathbb{R}^{2}\)).
3. a basis of neighborhoods of a leaf \(L\) is obtained by considering the unions of the leaves crossing a transverse segment \(\sigma\) through a point of \(L\).
**Definition 3.1**.:
* _two leaves_ \(L_{1}\)_,_ \(L_{2}\) _are_ not separated _one from the other if they do not admit disjoint neighborhoods._
* _A leaf_ \(L\) _is called_ not separated _or_ not regular _if there is a leaf_ \(L^{\prime}\) _which is not separated from_ \(L\)_._
* _A leaf is called_ regular _if it is separated from any other leaf._
We will sometimes need to be somewhat more specific.
Let \(L_{1}\) and \(L_{2}\) be distinct leaves of \(\mathcal{F}\). Consider two segments \(\sigma_{i}\colon[-1,1]\to\mathbb{R}^{2}\) transverse to \(\mathcal{F}\), positively oriented for the transverse orientation of \(\mathcal{F}\), and so that \(\sigma_{i}(0)\in L_{i}\), \(i=1,2\). Then \(L_{1}\) is not separated from \(L_{2}\) means that there are sequences \(t^{i}_{n}\), \(i=1,2\), tending to \(0\) as \(n\to+\infty\), so that \(\sigma_{1}(t^{1}_{n})\) and \(\sigma_{2}(t^{2}_{n})\) belong to the same leaf \(L^{n}\). Then
* as \(\mathcal{F}\) is transversely oriented and the \(\sigma_{i}\) are positively oriented, one gets that \(t_{n}^{1}\) has the same sign as \(t_{n}^{2}\), for every \(n\). Furthermore all the \(t_{n}^{i}\) have the same sign. One says that \(L_{1}\) and \(L_{2}\) are _not separated from above_ (resp. _from below_) if the \(t_{n}^{i}\) are positive (resp. negative).
* By shrinking the segments \(\sigma_{i}\) if necessary, one may assume that they are disjoint. Now, up to exchanging \(L_{1}\) with \(L_{2}\), we may assume that \(\sigma_{1}(t_{n})\) is at the left of \(\sigma_{2}(t_{n})\) in the oriented leaf \(L^{n}\). We say that \(L_{1}\) (resp. \(L_{2}\)) is not separated from \(L_{2}\) at its right (resp. at its left).
Consider a leaf \(L\) and \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) a transverse segment (positively oriented) with \(\sigma(0)\in L\). Let \(L_{t}\) be the leaf through \(\sigma(t)\). Let \(U_{t}\), \(t\in(0,1)\), be the closure of the connected component of \(\mathbb{R}^{2}\setminus(L_{t}\cup L_{-t})\) containing \(L\). Then
**Lemma 3.1**.: _The leaf \(L\) is regular if and only if_
\[\bigcap_{t}U_{t}=L.\]
The intersection \(\bigcap_{t}U_{t}\) does not depend on the segment \(\sigma\) and is denoted \(\mathfrak{U}(L)\).
If \(L\) is not regular, \(\mathfrak{U}(L)\) has non-empty interior, and the leaves which are not separated from \(L\) are precisely the leaves in the boundary of \(\mathfrak{U}(L)\).
Proof.: A leaf \(\tilde{L}\) not separated from \(L\) is contained in every \(U_{t}\) and is accumulated by leaves \(L_{t_{n}}\) in the boundary of \(U_{t_{n}}\). Thus \(\tilde{L}\) is contained in the boundary of \(\mathfrak{U}(L)=\bigcap_{t}U_{t}\). Furthermore, one of the two half-planes bounded by \(\tilde{L}\) is contained in \(U_{t}\), and therefore in \(\mathfrak{U}(L)\).
Conversely, \(\bigcap_{t}U_{t}\) consists of entire leaves of \(\mathcal{F}\), and so does its boundary. Now any transverse segment through a leaf in the boundary of \(\bigcap_{t}U_{t}\) crosses the boundary \(L_{t}\cup L_{-t}\) of \(U_{t}\) for \(t\) small: that is the definition of being not separated from \(L\).
**Lemma 3.2**.: _Let \(\mathcal{F}\) be a foliation of \(\mathbb{R}^{2}\). The set of non-separated leaves is at most countable._
Proof.: We consider a countable family of transverse lines whose union cuts every leaf of \(\mathcal{F}\). It is enough to prove that such a transverse line \(\Sigma\) cuts at most a countable set of non-regular leaves \(L\) admitting a leaf \(\tilde{L}\) non-separated from \(L\) from below.

For that, just notice that the sets \(\mathfrak{U}(L)\), for \(L\cap\Sigma\neq\emptyset\), are pairwise disjoint. Thus there are at most countably many of them with non-empty interior, ending the proof.
Note that \(L\) cuts the strip \(U_{t}\), \(t\in(0,1]\) in two strips \(U_{t}^{+}\) and \(U_{t}^{-}\) bounded respectively by \(L_{t}\cup L\) and by \(L_{-t}\cup L\), and we denote
\[\mathfrak{U}^{+}(L)=\bigcap_{t}U_{t}^{+}\text{ and }\mathfrak{U}^{-}(L)= \bigcap_{t}U_{t}^{-}\]
Then
**Lemma 3.3**.: _\(L\) is non-separated from above (resp. from below) if and only if \(\mathfrak{U}^{+}(L)\neq L\) (resp. \(\mathfrak{U}^{-}(L)\neq L\)), and if and only if \(\mathfrak{U}^{+}(L)\) (resp. \(\mathfrak{U}^{-}(L)\)) has non-empty interior._
In the same spirit, \(\sigma\) cuts the strip \(U_{t}\) into two half-strips \(U_{t}^{left}\) and \(U_{t}^{right}\), according to the orientation of \(\mathcal{F}\). Then one says that the _right end_ \(L^{+}\) (resp. _left end_ \(L^{-}\)) of \(L\) is _regular_ if

\[\mathfrak{U}_{right}(L)=\bigcap_{t}U_{t}^{right}=L^{+}\quad\text{(resp. }\mathfrak{U}_{left}(L)=\bigcap_{t}U_{t}^{left}=L^{-}\text{)}.\]
We can be even more precise by considering the \(4\) quadrants \(U_{t}^{+,right},U_{t}^{+,left},U_{t}^{-,right},U_{t}^{-,left}\) obtained by intersecting \(U_{t}^{+}\) and \(U_{t}^{-}\) with \(U_{t}^{right}\) and \(U_{t}^{left}\). This allows us to speak of right or left ends of leaves non-separated from above or from below, in the obvious way.
### Singular foliations: saddles with \(k\)-separatrices
A singular foliation \(\mathcal{F}\) on \(\mathbb{R}^{2}\) is a foliation on \(\mathbb{R}^{2}\setminus Sing(\mathcal{F})\), where \(Sing(\mathcal{F})\) is a closed subset of \(\mathbb{R}^{2}\). A _leaf of \(\mathcal{F}\)_ is a leaf of the restriction of \(\mathcal{F}\) to \(\mathbb{R}^{2}\setminus Sing(\mathcal{F})\). Let us now recall the notion of saddles with \(k\) separatrices, also called \(k\)-prong singularities.
We denote by \(A_{0}\) the quotient of \([-1,1]^{2}\) by the involution \((x,y)\mapsto(-x,-y)\). The projection of \((0,0)\) on \(A_{0}\) is still denoted \((0,0)\). Note that the horizontal foliation (whose leaves are the segments \([-1,1]\times\{t\}\)) is invariant by \((x,y)\mapsto(-x,-y)\), and therefore passes to the quotient on \(A_{0}\setminus\{(0,0)\}\); we denote by \(\mathcal{H}_{1}\) the induced foliation on \(A_{0}\setminus\{(0,0)\}\).
A \(1\)_-prong singular point_\(p\) of \(\mathcal{F}\) is a point of \(Sing(\mathcal{F})\) which admits a neighborhood \(U\) and a homeomorphism \(h\) from \(U\) to \(A_{0}\) so that \(h(p)=(0,0)\) and \(h\) maps \(\mathcal{F}\) on \(\mathcal{H}_{1}\).
We denote by \(A_{k},\mathcal{H}_{k}\) the cyclic ramified cover of \(A_{0}\) at the point \((0,0)\) with \(k\) sheets, endowed with the lift of \(\mathcal{H}_{1}\).
A \(k\)_-prong singular point_ \(p\), equivalently a _saddle point with \(k\) separatrices_ of \(\mathcal{F}\), is a singular point admitting a homeomorphism of a neighborhood onto \(A_{k}\) mapping \(p\) on \((0,0)\) and \(\mathcal{F}\) on \(\mathcal{H}_{k}\). A separatrix of the saddle point \(p\) is a leaf of \(\mathcal{F}\) containing a connected component of the lift of \(]0,1]\times\{0\}\).
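For concreteness, let us mention a standard local model, borrowed from the theory of quadratic differentials and offered here only as an illustration: near \(0\in\mathbb{C}\), the horizontal foliation of the quadratic differential \(z^{k-2}\,dz^{2}\), whose leaves are locally the level sets

\[\operatorname{Im}\bigl(z^{k/2}\bigr)=c,\qquad c\in\mathbb{R},\]

has a \(k\)-prong singular point at the origin, with separatrices contained in the \(k\) rays \(\arg z=\frac{2\pi j}{k}\), \(j=0,\dots,k-1\).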
**Remark 5**.:
* _If_ \(p\) _is a_ \(2\)_-prong singular point of_ \(\mathcal{F}\)_, then the foliation_ \(\mathcal{F}\) _can be extended at_ \(p\) _so that_ \(p\) _is not singular._
* _The Poincare-Hopf index of a_ \(k\)_-prong singular point is_ \(1-\frac{k}{2}\)_._
A _foliation with singularities of saddle type_ on \(\mathbb{R}^{2}\) is a singular foliation for which each singular point is a saddle with \(k\) separatrices, \(k>2\).
### Leaves of singular foliations
**Lemma 3.4**.: _Let \(\mathcal{F}\) be a foliation on \(\mathbb{R}^{2}\) with singular points of saddle type. Let \(\sigma\colon[0,1]\to\mathbb{R}^{2}\setminus Sing(\mathcal{F})\) be a segment transverse to \(\mathcal{F}\). Then for every leaf \(\gamma\) one has_
\[\#(\sigma\cap\gamma)\leq 1,\]

_where \(\#\) denotes the cardinality._
Proof.: Assume, arguing by contradiction, that \(\#(\sigma\cap\gamma)\geq 2\). Let \(x,y\) be two successive (for the parametrisation of \(\gamma\)) intersection points with \(\sigma\). The concatenation of the segments \([x,y]_{\gamma}\) and \([y,x]_{\sigma}\) is a simple closed curve \(c\) in \(\mathbb{R}^{2}\setminus Sing(\mathcal{F})\). By the Jordan theorem, \(c\) bounds a disc \(D\) in \(\mathbb{R}^{2}\), and the Poincare-Hopf index of \(\mathcal{F}\) on \(D\) is either equal to \(1\), if \(\gamma\) cuts \(\sigma\) with the same orientation at \(x\) and \(y\), or to \(\frac{1}{2}\) otherwise: in any case this index is strictly positive. However, this index is the sum of the Poincare-Hopf indices of the singular points of \(\mathcal{F}\) contained in \(D\). As each of them is negative, this is a contradiction, ending the proof.
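The index count can be made explicit (a routine verification, spelled out for completeness): every singular point \(p\in D\) is a saddle with \(k_{p}\geq 3\) separatrices, so that

\[\operatorname{ind}_{p}(\mathcal{F})=1-\frac{k_{p}}{2}\leq-\frac{1}{2},\qquad\text{hence}\qquad\operatorname{ind}(\mathcal{F},D)=\sum_{p\in Sing(\mathcal{F})\cap D}\Bigl(1-\frac{k_{p}}{2}\Bigr)\leq 0,\]

which is incompatible with the strictly positive value \(1\) or \(\frac{1}{2}\) computed along \(c\).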
The same argument shows that
**Lemma 3.5**.: _Let \(\mathcal{F}\) be a foliation on \(\mathbb{R}^{2}\) with singular points of saddle type. Then \(\mathcal{F}\) has no compact leaves_
Proof.: The index of \(\mathcal{F}\) on the disc bounded by a compact leaf would be \(1\), which is impossible with singular points of negative index.
**Corollary 3.1**.: _Let \(\mathcal{F}\) be a singular foliation of \(\mathbb{R}^{2}\) whose singular points are all saddle points with at least \(3\) separatrices. Then every half leaf of \(\mathcal{F}\) is either a ray or tends to a singular point \(p\) of \(\mathcal{F}\) and is contained in a separatrix of \(p\)._
Proof.: Consider the Alexandrov compactification of \(\mathbb{R}^{2}\) by a point at infinity. Consider a leaf \(\gamma\) and choose a parametrisation \(\gamma(t)\). Consider
\[\limsup_{t\to+\infty}\gamma(t)=\bigcap_{t>0}\overline{\gamma([t,+\infty))},\]
where the closure is considered in \(\mathbb{R}^{2}\cup\{\infty\}\). It is a decreasing intersection of connected compact sets, and hence it is a non-empty connected compact set.
If \(\limsup_{t\to+\infty}\gamma(t)\) is not just a point, it contains a regular point \(x\) of \(\mathcal{F}\); hence \(\gamma\) cuts infinitely many times any transverse segment through \(x\), which is forbidden by Lemma 3.4.
Now \(\limsup_{t\to+\infty}\gamma(t)\) is either the point \(\infty\) or is a singular point of \(\mathcal{F}\), which is the announced alternative.
### Regular leaves of singular foliations
Let \(\mathcal{F}\) be a foliation with singular points of saddle type, \(L_{0}\) a leaf of \(\mathcal{F}\) and \(\sigma\) be a transverse segment through the point \(\sigma(0)\in L_{0}\).
The set of \(t\) so that \(\sigma(t)\) is contained in a separatrix of a singular point is at most countable. For any \(t\) so that \(\sigma(t)\) and \(\sigma(-t)\) are not in a separatrix of a singular point, the leaves \(L_{t}\) and \(L_{-t}\) through \(\sigma(t)\) and \(\sigma(-t)\) are disjoint lines and therefore cut \(\mathbb{R}^{2}\) in \(3\) connected components. We denote by \(U_{t}\) the closure of the connected component of \(\mathbb{R}^{2}\setminus(L_{t}\cup L_{-t})\) containing \(L_{0}\). Notice that \(U_{t}\) is a strip (homeomorphic to \(\mathbb{R}\times[-1,1]\)) bounded by \(L_{t}\cup L_{-t}\) and saturated for \(\mathcal{F}\).
**Lemma 3.6**.: _With the notation above \(\bigcap_{t}U_{t}\) is a non-empty closed subset of \(\mathbb{R}^{2}\) saturated for \(\mathcal{F}\) and we have the following alternative:_
* _either_ \(\bigcap_{t}U_{t}=L_{0}\) _and_ \(L_{0}\) _is a non-singular leaf of_ \(\mathcal{F}\)_,_
* _or_ \(\bigcap_{t}U_{t}\) _has non-empty interior._
_Furthermore, \(\bigcap_{t}U_{t}\) does not depend on the choice of the transverse segment \(\sigma\) through \(L_{0}\) and is denoted \(\mathfrak{U}(L_{0})\)._
Proof.: \(\mathfrak{U}(L_{0})\) is saturated for \(\mathcal{F}\). If it contains a non-singular leaf other than \(L_{0}\), it contains one of the half-planes bounded by this leaf. If it contains a singular leaf, it contains the corresponding singular point, and then it contains at least one of the sectors bounded by the separatrices.
**Definition 3.2**.: _With the notation above, the leaf \(L_{0}\) is called regular if \(\mathfrak{U}(L_{0})=L_{0}\), and will be called non-regular otherwise._
**Remark 6**.: _If \(L_{0}\) is a separatrix of a singular point, then it is non-regular._
As in the case of non-singular foliations we have:
**Proposition 3.1**.: _Let \(\mathcal{F}\) be a foliation with singular points of saddle type. Then the set of non-regular leaves is at most countable._
Proof.: For any transverse segment \(\sigma\), let us denote by \(L_{t}\) the leaf through \(\sigma(t)\). Then by construction the closed sets \(\mathfrak{U}(L_{t})\) are pairwise disjoint. Thus at most countably many of them may have non-empty interior, that is, at most countably many of the leaves \(L_{t}\) are non-regular. We conclude the proof by noticing that \(\mathcal{F}\) admits a countable family of transverse segments \(\sigma_{n},n\in\mathbb{N}\), such that every leaf of \(\mathcal{F}\) cuts at least \(1\) segment \(\sigma_{n}\).
The leaves of a foliation have two ends, and the notion of regular leaf can be refined by looking at each end separately.
More precisely, let \(L_{0,+}\) be an half leaf of \(\mathcal{F}\), and let \(\sigma\) be a transverse segment so that \(\sigma(0)\) is the initial point of \(L_{0,+}\). For any \(t\) so that \(\sigma(t)\) and \(\sigma(-t)\) do not belong to a separatrix of a singular point, we consider \(L_{t,+}\) and \(L_{t,-}\) the half leaves starting at \(\sigma(t)\) and \(\sigma(-t)\) in the same side of \(\sigma\) as \(L_{0,+}\). We denote by \(U_{t}(L_{0,+})\subset\mathbb{R}^{2}\) the closed half plane containing \(L_{0,+}\) and bounded by the line of \(\mathbb{R}^{2}\) obtained by concatenation of \(L_{t,+}\), \(\sigma([-t,t])\) and \(L_{t,-}\). We denote \(\mathfrak{U}(L_{0,+})=\bigcap_{t}U_{t}(L_{0,+})\). Then :
* either \(\mathfrak{U}(L_{0,+})=L_{0,+}\) and one says that the half leaf \(L_{0,+}\) (or equivalently, the end of \(L_{0}\) corresponding to \(L_{0,+}\)) is regular
* or \(\mathfrak{U}(L_{0,+})\neq L_{0,+}\) is a closed subset with non-empty interior.
A leaf is regular if and only if its two ends are regular, and the set of non-regular ends of leaves is at most countable.
### Orientations
A foliation with singular points of saddle type is locally orientable (and transversely orientable) in a neighborhood of a singular point \(x\) if and only if the number of separatrices of \(x\) is even.
Thus a foliation of \(\mathbb{R}^{2}\) whose singular points are saddles with even numbers of separatrices is locally orientable and transversely orientable, and therefore is globally orientable and transversely orientable, as \(\mathbb{R}^{2}\) is simply connected.
Let \(\mathcal{F}\) be a foliation with singular points of saddle type with even numbers of separatrices, and fix an orientation and transverse orientation of \(\mathcal{F}\).
Thus every leaf \(L\) has a right and a left end. We define \(\mathfrak{U}^{right}(L)\) and \(\mathfrak{U}^{left}(L)\) as above, so that \(L\) can be regular at the right or at the left.
Let \(L_{0}\) be a leaf which is not a separatrix and let \(\sigma\) be a transverse segment with \(\sigma(0)\in L_{0}\). One defines in the same way the notions of being regular from above and from below, for \(L_{0}\) or for each of its two ends.
For instance \(L_{0}^{right}\) is regular from above if \(\mathfrak{U}_{+}(L_{0}^{right})=\bigcap_{t}U_{t,+}(L_{0}^{right})=L_{0}^{right}\), where \(U_{t,+}(L_{0}^{right})\) is bounded by \(L_{0}^{right}\), \(\sigma([0,t])\) and \(L_{t}^{right}\).
## 4. The circle at infinity of a family of foliations
### The circle at infinity of a foliation of \(\mathbb{R}^{2}\): statement
The aim of this section is to recall the following result essentially due to [Ma] and to present a short proof of it.
**Theorem 7**.: _Let \(\mathcal{F}\) be a foliation of the plane \(\mathbb{R}^{2}\), possibly with singularities of saddle type. Then there is a compactification \(\mathbb{D}^{2}_{\mathcal{F}}\simeq\mathbb{D}^{2}\) of \(\mathbb{R}^{2}\) by adding a circle at infinity \(\mathbb{S}^{1}_{\mathcal{F}}=\partial\mathbb{D}^{2}_{\mathcal{F}}\) with the following properties:_
* _any half leaf tends either to a saddle point or to a point at infinity._
* _given a point_ \(\theta\in\mathbb{S}^{1}_{\mathcal{F}}\) _the set of ends of leaves tending to_ \(\theta\) _is at most countable._
* _the subset of_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _corresponding to limits of regular ends of leaves is dense in_ \(\mathbb{S}^{1}_{\mathcal{F}}\)_._
_Furthermore this compactification of \(\mathbb{R}^{2}\) by \(\mathbb{D}^{2}\) with these three properties is unique, up to a homeomorphism of the disc \(\mathbb{D}^{2}\)._
**Remark 7**.: _If \(L_{1}^{+}\neq L_{2}^{+}\) are two ends of leaves tending to the same point \(\theta\in\mathbb{S}^{1}_{\mathcal{F}}\), then \(L_{2}^{+}\subset\mathfrak{U}(L_{1}^{+})\). In particular, the ends \(L_{1}^{+}\) and \(L_{2}^{+}\) are not regular._
**Corollary 4.1**.: _If a homeomorphisms \(f\) of the plane \(\mathbb{R}^{2}\) preserves the foliation \(\mathcal{F}\) then it extends in a unique way as a homeomorphism \(F\) of the compactification \(\mathbb{D}^{2}_{\mathcal{F}}\)._
_Furthermore the restriction of \(F\) to \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map if and only if \(f\) preserves every leaf of \(\mathcal{F}\) and preserves the orientation on each leaf._
Proof.: The first part is, as already noted, a straightforward consequence of the uniqueness of the compactification.
If \(f\) preserves every leaf and preserves the orientation of the leaves, then it preserves every end of leaf. Thus the extension \(F\) fixes every point of \(\mathbb{S}^{1}_{\mathcal{F}}\) which is limit of an end of leaf. As the limit points of end of leaves are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\) one deduces that the restriction of \(F\) to \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map.
Conversely, assume that \(F\) is the identity on \(\mathbb{S}^{1}_{\mathcal{F}}\). If \(\theta\in\mathbb{S}^{1}_{\mathcal{F}}\) is the limit of a unique end of leaf \(L_{+}\), then \(L_{+}\) is preserved by \(f\).
Thus \(f\) preserves every regular end of leaf. As the regular leaves are dense in \(\mathbb{R}^{2}\), one deduces that \(f\) preserves every oriented leaf, concluding.
### Proof of Theorem 7
We denote by \(Reg(\mathcal{F})\) the set of regular leaves of \(\mathcal{F}\) and by \(\mathcal{R}(\mathcal{F})\) the set of ends of regular leaves (any non singular leaf and in particular any regular leaf has two ends). Recall that \(\mathcal{R}(\mathcal{F})\) is a family of disjoint rays of \(\mathbb{R}^{2}\) and therefore is cyclically ordered.
**Lemma 4.1**.: _If \(D\) is a family of regular leaves whose union is dense in \(\mathbb{R}^{2}\), then the set \(\mathcal{D}\) of ends of the leaves in \(D\) is a separating family for the set of ends of regular leaves \(\mathcal{R}(\mathcal{F})\)._
Proof.: Let \(L_{0}\) be a regular leaf of \(\mathcal{F}\), \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) a segment transverse to \(\mathcal{F}\) with \(\sigma(0)\in L_{0}\), and \(U_{t}\) the family of neighborhoods of \(L_{0}\) associated to the transverse segment \(\sigma\). Our assumption implies that for a dense subset of \(t\in[-1,1]\), the leaf \(L_{t}\) belongs to \(D\). Consider a sequence \(t_{n}\in[-1,1],n\in\mathbb{Z}\), so that
* \(L_{n}=L_{t_{n}}\in D\)
* \(t_{n}\to 0\) as \(|n|\to\infty\)
* \(t_{n}\) has the same sign as \(n\in\mathbb{Z}\)
Let \(L_{n}^{+}\) and \(L_{n}^{-}\) be the half leaves of \(L_{n}\) (for the orientation given by the transverse orientation induced by \(\sigma\)). As \(L_{0}\) is regular, one gets \(\mathfrak{U}(L_{0}^{+})=L_{0}^{+}.\) This implies that \(L_{0}^{+}\) (resp. \(L_{0}^{-}\)) is the intersection of the intervals (for the cyclic order) \([L_{-n}^{+},L_{n}^{+}]\) (resp. \([L_{n}^{-},L_{-n}^{-}]\)) for \(n>0\). In other words, the rays \(L_{-n}^{+},L_{n}^{+}\) (resp. \(L_{n}^{-},L_{-n}^{-}\)) separate the ray \(L_{0}^{+}\) (resp. \(L_{0}^{-}\)) from any other ray in \(\mathcal{R}(\mathcal{F})\) (and indeed from any other end of leaf, regular or not), concluding the proof.
We are now ready to prove Theorem 7.
Proof of Theorem 7.: We choose a countable set \(E\) of regular leaves whose union is dense in \(\mathbb{R}^{2}\). According to Lemma 4.1, the set \(\mathcal{E}\) of ends of leaves in \(E\) is a countable separating subset of \(\mathcal{R}(\mathcal{F})\). Thus we may apply Theorem 1.
One gets a compactification of \(\mathbb{R}^{2}\) by the disc \(\mathbb{D}_{\mathcal{F}}^{2}\simeq\mathbb{D}^{2}\) so that any two distinct ends of regular leaves tend to two distinct points of the circle at infinity \(\mathbb{S}_{\mathcal{F}}^{1}\), these points are dense in the circle, and the compactification does not depend on the choice of the family. This proves items \(2\) and \(3\) of the theorem, and also proves that these two items are enough for the uniqueness of this compactification.
It remains to prove the first item, that is to show that the rays contained in non-regular leaves also tend to points on \(\mathbb{S}_{\mathcal{F}}^{1}\). That is done by Lemma 2.4.
**Remark 8**.: _Let \(\mathcal{F}\) be a foliation (possibly with saddles). Then every line \(L\) transverse to \(\mathcal{F}\) has \(2\) distinct limit points at infinity corresponding to its \(2\) ends._
Proof.: The two ends of \(L\) are rays disjoint from the rays in \(\mathcal{R}(\mathcal{F})\) (that is, from the ends of leaves of \(\mathcal{F}\)), as any transverse segment intersects any leaf in at most \(1\) point. Now Lemma 2.4 implies that the ends of \(L\) tend to points of \(\mathbb{S}_{\mathcal{F}}^{1}\). These points are distinct because the regular half leaves through \(L\) are between these two ends.
**Lemma 4.2**.: _Let \(\mathcal{F}\) be a foliation (possibly with saddles). Given any two (non-singular) leaves \(L_{1},L_{2}\), if the ends of \(L_{1}\) and \(L_{2}\) tend to the same \(2\) points in \(\mathbb{S}_{\mathcal{F}}^{1}\) then \(L_{1}=L_{2}\)._
Proof.: Assume \(L_{1}\neq L_{2}\) share the same end points. Then the leaves in the strip bounded by \(L_{1}\cup L_{2}\) would have their ends at the same points of \(\mathbb{S}_{\mathcal{F}}^{1}\), contradicting the fact that at most countably many ends of leaves share the same end point on \(\mathbb{S}_{\mathcal{F}}^{1}\).
As a by-product of the proof of Lemma 4.1 we get the following:
**Lemma 4.3**.: _Let \(\mathcal{F}\) be a foliation (maybe with saddle-like singular points) and let \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\setminus Sing(\mathcal{F})\) be a transverse segment. Let \(\{L_{t}^{+}\}\) and \(\{L_{t}^{-}\}\) be the half leaves starting at \(\sigma(t)\). Consider the map associating to \(t\in(-1,1)\) the limit point of \(L_{t}^{+}\) on \(\mathbb{S}_{\mathcal{F}}^{1}\). Then \(t\) is a continuity point of this map if and only if \(L_{t}^{+}\) is a regular end._
### Points at \(\mathbb{S}_{\mathcal{F}}^{1}\) limit of several ends of leaves: hyperbolic sectors
**Lemma 4.4**.: _Let \(A\) and \(B\) be distinct ends of leaves. Then the following properties are equivalent_
* _There is no end of a regular leaf between_ \(A\) _and_ \(B\)_._
* _The set of ends of leaves between_ \(A\) _and_ \(B\) _is at most countable._
* _The set of ends of leaves between_ \(A\) _and_ \(B\) _is finite._
Proof.: First assume that there is an end \(L^{+}\) of a regular leaf \(L\) between \(A\) and \(B\). We will prove that the interval \((A,B)\) is uncountable.
Consider the neighborhood \(U_{t}\) of \(L\) associated to a transverse segment \(\sigma\) with \(\sigma(0)\in L\). As \(L\) is regular, one gets that \(\mathfrak{U}(L)=\bigcap_{t}U_{t}=L\). As a consequence there is \(t\) so that \(A\) and \(B\) are out of \(U_{t}\).
First assume that \(A\) and \(B\) are in the same connected component of \(\mathbb{R}^{2}\setminus U_{t}\). Then there is a line \(\Gamma\) whose left end is \(B\), whose right end is \(A\), and which is disjoint from \(U_{t}\). One deduces that one of the intervals \((A,B)\) and \((B,A)\) contains no end of leaf in \(U_{t}\) (this cannot be \((A,B)\), which contains \(L^{+}\) by assumption) and the other contains all the ends of leaves in \(U_{t}\); so \((A,B)\) contains uncountably many ends of leaves, as announced.
Now assume that \(A\) and \(B\) are in distinct connected components of \(\mathbb{R}^{2}\setminus U_{t}\). Then there is a line \(\Gamma\) whose left end is \(B\), whose right end is \(A\), and whose intersection with \(U_{t}\) is \(\sigma([-t,t])\). As \(L^{+}\) is in the interval \((A,B)\), so that \(L^{+}\subset\Delta_{\Gamma}^{+}\), one deduces that all the positive half leaves \(L^{+}_{r}\), \(r\in[-t,t]\), are contained in the upper half-plane \(\Delta_{\Gamma}^{+}\) and therefore are between \(A\) and \(B\). So the interval \((A,B)\) (and also \((B,A)\)) is uncountable, which is what we announced.
Conversely, if there are uncountably many ends in \((A,B)\), one of them is the end of a regular leaf, as there are only countably many non-regular leaves.

This proves the equivalence of the first two items. The third item trivially implies the second, so we now prove that the second implies the third.
Let \(A\) and \(B\) be two ends of leaves so that \((A,B)\) is at most countable. We consider a line \(\delta\) with the following properties:
* \(A\) and \(B\) are the right and left ends of \(\delta\), respectively,
* \(\delta\setminus(B\cup A)\) is a segment \(\sigma\), consisting of finitely many transverse segments \(a_{0},\ldots,a_{k}\) and finitely many leaf segments \(b_{1},\ldots,b_{k}\), with \(a_{0}(0)\in B\) and \(a_{k}(1)\in A\).
Let \(\Delta=\Delta^{+}(\delta)\) be the upper half plane bounded by \(\delta\) and corresponding to the interval \((A,B)\).
Notice that no entire leaf may be contained in \(\Delta\); otherwise there would be uncountably many ends between \(A\) and \(B\).
We consider the half leaves \(L^{+}_{0,t}\) entering \(\Delta\) through \(a_{0}(t)\). As there are only countably many ends between \(A\) and \(B\), there is a sequence \(t_{n}\to 0\) so that \(L^{+}_{0,t_{n}}\) goes out of \(\Delta\) through a point \(\sigma(s_{n})\). Note that the half leaves \(L^{+}_{0,t}\), \(t\in[t_{n+1},t_{n}]\), also need to go out of \(\Delta\).
Thus every \(L^{+}_{0,t}\), \(t\leq t_{0}\) goes out of \(\Delta\) at a point \(\sigma(s(t))\), where \(t\mapsto s(t)\) is a decreasing function. Let \(s_{0}\) be the limit
\[s_{0}=\lim_{t\to 0}s(t).\]
Notice that a half leaf entering \(\Delta\) through \(a_{0}\) cannot go out of \(\Delta\) through \(a_{0}\), because a transverse segment cuts a leaf in at most one point. Thus we deduce that \(s_{0}\) belongs to some \(a_{i},i>0\).
We consider the compact segments \(I_{t}\subset L^{+}_{0,t}\) joining \(a_{0}(t)\) to \(\sigma(s(t))\). We consider
\[\limsup_{t\to 0}I_{t}.\]
It is a closed subset of \(\mathbb{R}^{2}\) consisting of \(B\), of whole leaves contained in \(\Delta\), and of a half leaf \(\tilde{B}_{1}\) ending at \(\sigma(s_{0})\). We already noticed that no entire leaf may be contained in \(\Delta\). Thus this limit consists of \(B\cup\tilde{B}_{1}\). As a consequence, the ends \(B\) and \(\tilde{B}_{1}\) are successive ends, \(\tilde{B}_{1}\in(A,B)\), and thus \((\tilde{B}_{1},A)\) is at most countable too.
We consider \(B_{1}\subset\tilde{B}_{1}\) the half leaf starting at the last intersection point of \(\tilde{B}_{1}\) with \(\sigma\). Note that \(B_{1}\) starts at a point of some segment \(a_{i}\), with \(i>0\).
Thus, if \(B_{1}\neq A\), one may iterate the argument, getting successive half leaves \(B_{i}\) starting at points of some transverse segments \(a_{j(i)}\), where \(i\mapsto j(i)\) is strictly increasing. As there are finitely many segments \(a_{i}\), this inductive argument needs to stop. In other words, there is \(i\) with \(B_{i}=A\), ending the proof: \([A,B]=\{A=B_{i},B_{i-1},\ldots,B_{1},B\}\).

This proves that the second item implies the third, completing the equivalence.
The proof of Lemma 4.4 shows, as a by-product, the following:
**Lemma 4.5**.: _Assume that \(A\) and \(B\) are successive ends of leaves, that is: the interval \((A,B)\) is empty. Then, there is an embedding of \(\psi\colon[-1,1]\times[0,1]\to\mathbb{D}^{2}_{\mathcal{F}}\) so that:_
* _the segments_ \(\psi([-1,1]\times\{t\})\)_,_ \(0\leq t<1\)_, are leaf segments,_
* \(A=\psi([-1,0)\times\{1\})\) _and_ \(B=\psi((0,1]\times\{1\})\)_,_
* _the point_ \(\psi(0,1)\) _is the point of_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _which is the limit of both ends_ \(A\) _and_ \(B\)_._
**Definition 4.1**.: _The embedding \(\psi\colon[-1,1]\times[0,1]\to\mathbb{D}^{2}_{\mathcal{F}}\) is called a hyperbolic sector._
We say that two half leaves \(A,B\) are _asymptotic_ if \([A,B]\) or \([B,A]\) does not contain any end of a regular leaf. We already proved the next lemma:
**Lemma 4.6**.: _To be asymptotic is an equivalence relation on the set of ends of leaves of \(\mathcal{F}\)._
_Each equivalence class is either finite or countable and is, as an ordered set, isomorphic to an interval of \((\mathbb{Z},<)\)._
_There are at most countably many non-trivial classes._
We also already proved:
**Lemma 4.7**.: _Let \(\mathcal{F}\) be a foliation (possibly with singular points of saddle type). Then two half leaves tend to the same point \(\theta\in\mathbb{S}^{1}_{\mathcal{F}}\) if and only if they are asymptotic, and every half leaf arriving at \(\theta\) belongs to their asymptotic class._
In particular, if a point of \(\mathbb{S}^{1}_{\mathcal{F}}\) is the limit of a regular end of leaf, it is the limit of a unique end of leaf.
Notice that points at infinity which are the limit of a unique end of leaf may be the limit of a non-separated end of leaf, as the next example shows:
**Example 6**.: _Let \(K\subset\mathbb{R}\) be a Cantor set and consider_
\[\mathcal{P}_{K}=\mathbb{R}^{2}\setminus(K\times[0,+\infty)).\]
_Thus \(\mathcal{P}_{K}\) is homeomorphic to \(\mathbb{R}^{2}\)._
_Let \(\mathcal{F}_{K}\) be the restriction to \(\mathcal{P}_{K}\) of the horizontal foliation on \(\mathbb{R}^{2}\) (whose leaves are the \(\mathbb{R}\times\{y\}\)). Thus all the leaves of the form \(I\times\{0\}\), where \(I\) is a connected component of \(\mathbb{R}\setminus K\), are pairwise non-separated from below._
_However, any two distinct ends of leaves of \(\mathcal{F}_{K}\) tend to distinct points in \(\mathbb{S}^{1}_{\mathcal{F}_{K}}\)._
**Remark 9**.: _Assume that \(\mathcal{F}\) is oriented._
_If \(A_{0},\dots,A_{k}\) are successive ends of leaves, and assuming \(A_{0}\) is a right half leaf, then \(A_{1}\) is a left half leaf and \(A_{0}\) and \(A_{1}\) are not separated from above._
_Then \(A_{2}\) is a right half leaf and \(A_{1}\) and \(A_{2}\) are not separated from below, and so on._
_Thus, each non-trivial class of the asymptotic relation consists of alternately right and left ends of non-separated leaves, non-separated alternately from above and from below._
### Points at infinity which are not limit of leaves: center-like points
In this section, foliations are assumed to be non-singular.
**Remark 10**.: _Let \(\mathcal{F}\) be a foliation of \(\mathbb{R}^{2}\) and \(o\in\mathbb{S}^{1}_{\mathcal{F}}\) be a point so that \(o=\bigcap_{n}(a_{n},b_{n})\), \(n\in\mathbb{N}\), where \(a_{n},b_{n}\) are the limit points of the two ends of a same leaf \(L_{n}\)._
_Then \(a_{n}\) and \(b_{n}\) tend to \(o\), and \(o\) is not a limit point of an end of leaf of \(\mathcal{F}\)._
Proof.: Consider the compact disc \(\Delta_{n}\) of \(\mathbb{D}^{2}_{\mathcal{F}}\) whose boundary (as a disc) is \(L_{n}\cup[a_{n},b_{n}]\). Then the \(\Delta_{n}\) are totally ordered by inclusion and \(o\in\bigcap_{n}\Delta_{n}\). If a leaf \(L\) had an end at \(o\), it would be contained in every \(\Delta_{n}\) and hence in \(\bigcap_{n}\Delta_{n}\). Thus the two ends of \(L\) would be distinct points in \(\bigcap_{n}[a_{n},b_{n}]\), contradicting the hypothesis.
We say that a point \(o\in\mathbb{S}^{1}_{\mathcal{F}}\) satisfying the hypotheses of Remark 10 is a _center-like point_.
Here is a very simple example of this situation:
**Example 7**.: _The trivial horizontal foliation \(\mathcal{H}\) admits two center-like points at infinity, which are the limit points of the (vertical) \(y\) axis (transverse to \(\mathcal{H}\))._
It is indeed easy to check that:
**Remark 11**.: _Given any foliation \(\mathcal{F}\) of \(\mathbb{R}^{2}\), \(\mathbb{S}^{1}_{\mathcal{F}}\) carries at least \(2\) center-like points. To see that, just consider the (decreasing) intersection of the closures in \(\mathbb{D}^{2}_{\mathcal{F}}\) of the half planes \(\Delta^{\pm}_{L}\) for a maximal chain (given by Zorn's lemma) for the inclusion._
But the situation may be much more complicated, as the next example shows.
**Example 8**.: _Consider a simple closed curve \(\gamma=\gamma^{+}\cup\gamma^{-}\) of \(\mathbb{R}^{2}\) where \(\gamma^{+}\) and \(\gamma^{-}\) are the graphs of continuous functions \(\varphi\colon[-1,1]\to[0,1]\) and \(-\varphi\), respectively, where_
* \(\varphi(-1)=\varphi(1)=0\)_,_
* \(\varphi(t)>0\) _for_ \(t\in(-1,1)\)_,_
* _the local maxima and minima of_ \(\varphi\) _are dense in_ \([-1,1]\) _(some kind of Weierstrass function)._
_Let \(\Delta\) be the open disc bounded by \(\gamma\) and endowed with the constant horizontal foliation \(\mathcal{F}\)._
_Then \(\mathbb{S}^{1}_{\mathcal{F}}=\gamma\), and any local maximum point of \(\gamma^{+}\) and any local minimum point of \(\gamma^{-}\) are center-like points of \(\mathbb{S}^{1}_{\mathcal{F}}\)._
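For concreteness, here is one possible choice of such a \(\varphi\) (an illustrative sketch of ours, not taken from the text; it uses the classical fact that a continuous nowhere differentiable function is monotone on no interval and therefore has dense local extrema):
\[W(t)=\sum_{n=1}^{\infty}2^{-n}\cos(3^{n}\pi t),\qquad\varphi(t)=(1-t^{2})\bigl(2+W(t)\bigr).\]
Then \(\varphi(\pm 1)=0\) and, since \(|W|\leq 1\), we get \(\varphi(t)\geq 1-t^{2}>0\) on \((-1,1)\); moreover \(W\) is a Weierstrass-type sum, nowhere differentiable, so \(\varphi\) is nowhere differentiable on \((-1,1)\) and its local maxima and minima are dense in \([-1,1]\).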
The aim of this section is to show that the situation of Example 8 is in fact very common.
**Lemma 4.8**.: _Let \(\mathcal{F}\) be a foliation on \(\mathbb{R}^{2}\). Assume that the union of leaves which are non-separated at their right side is dense in \(\mathbb{R}^{2}\) and, in the same way, that the union of leaves which are non-separated at their left side is dense in \(\mathbb{R}^{2}\)._
_Then the set of center-like points on \(\mathbb{S}^{1}_{\mathcal{F}}\) is a residual subset of \(\mathbb{S}^{1}_{\mathcal{F}}\)._
Proof.: Fix a metric on \(\mathbb{S}^{1}_{\mathcal{F}}\). Let \(\mathcal{O}_{n}\subset\mathbb{S}^{1}_{\mathcal{F}}\) be the set of points belonging to an interval \((a,b)\) of length less than \(\frac{1}{n}\) where \(a,b\) are both ends of a same leaf of \(\mathcal{F}\).
We will prove that \(\mathcal{O}_{n}\) is a dense open subset of \(\mathbb{S}^{1}_{\mathcal{F}}\). Then \(\bigcap_{n}\mathcal{O}_{n}\) will be the announced residual subset.
The set \(\mathcal{O}_{n}\) is open by definition. We just need to prove the density of \(\mathcal{O}_{n}\).
Recall that the ends of regular leaves are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\). Thus we just need to prove that the ends of regular leaves are contained in the closure of \(\mathcal{O}_{n}\).
Let \(L\) be a regular leaf and \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) be a positively oriented transverse segment with \(\sigma_{0}\in L\). We denote by \(L_{t}\) the leaf through \(\sigma_{t}\) and we recall that, as \(L\) is regular, the right and left ends \(L_{t}^{+},L_{t}^{-}\) of \(L_{t}\) tend to the right and left ends \(L^{+}\) and \(L^{-}\), respectively, as \(t\to 0\).
Given any \(r<s\in[-1,1]\), we denote by \(U_{r,s}\), \(U_{r,s}^{right}\), and \(U_{r,s}^{left}\) the strip bounded by \(L_{r}\) and \(L_{s}\), and the two closed half strips obtained by cutting \(U_{r,s}\) along the segment \(\sigma([r,s])\). Let \(I_{r,s}^{right}\subset\mathbb{S}^{1}_{\mathcal{F}}\) and \(I_{r,s}^{left}\subset\mathbb{S}^{1}_{\mathcal{F}}\) be the corresponding intervals on \(\mathbb{S}^{1}_{\mathcal{F}}\). Notice that, as \(L\) is regular, these intervals have length smaller than \(\frac{1}{n}\) if \(r,s\) are close to \(0\).
Our hypotheses imply that there are \(t^{right},t^{left}\in(r,s)\) so that \(L_{t^{right}}\) is non-separated at the right and \(L_{t^{left}}\) is non-separated at the left.
This implies that both \(U_{r,s}^{right}\) and \(U_{r,s}^{left}\) contain entire leaves. Thus \(I_{r,s}^{right}\) and \(I_{r,s}^{left}\) contain intervals whose extremal points are both ends of the same leaf. Taking \(r,s\) small enough, these intervals are contained in \(\mathcal{O}_{n}\), showing that the points of \(\mathbb{S}^{1}_{\mathcal{F}}\) corresponding to \(L^{+}\) and \(L^{-}\) are in the closure of \(\mathcal{O}_{n}\). This ends the proof.
However, not every point \(o\) which is not the limit of an end of leaf is center-like.
**Example 9**.: _Let \(\mathcal{F}_{K}\) be the foliation defined in Example 6 by restriction of the horizontal foliation to \(\mathbb{R}^{2}\setminus(K\times[0,+\infty))\), where \(K\subset\mathbb{R}\) is a Cantor set. Consider a point \(x\in K\) which is not an end point of a component of \(\mathbb{R}\setminus K\). Then the point \((x,0)\) corresponds to a point in \(\mathbb{S}^{1}_{\mathcal{F}_{K}}\) which is not the limit of an end of leaf and is not center-like._
Consider a point \(o\in\mathbb{S}^{1}_{\mathcal{F}}\) and assume it is not the limit point of any end of leaf. For any leaf \(L\) we denote by \(\Delta_{L}\subset\mathbb{D}^{2}_{\mathcal{F}}\) the compact disk containing \(o\) and whose frontier in \(\mathbb{D}^{2}_{\mathcal{F}}\) is the segment \(\bar{L}\), closure of \(L\). Then \(\Delta_{L}\cap\mathbb{S}^{1}_{\mathcal{F}}\) is a segment \(I_{L}\) whose end points are the limit points of the ends of \(L\). Note that the closed segments \(I_{L}\) are totally ordered by inclusion, and so are the disks \(\Delta_{L}\). Let us denote
\[I_{o}=\bigcap_{L}I_{L}\text{ and }\Delta_{o}=\bigcap_{L}\Delta_{L}\]
Then
* if \(I_{o}=\{o\}\), then \(o\) is a center-like point.
* Otherwise, \(\partial\Delta_{o}\cap\mathbb{D}^{2}_{\mathcal{F}}\) consists of infinitely (countably) many pairwise non-separated leaves, and there is a subsequence of them whose limit is \(o\).
## 5. The circle at infinity of a countable family of foliations
The aim of this section is the proof of Theorem 3, that is, to build the compactification associated to a countable family of foliations with saddles and to prove its uniqueness.
**Remark 12**.: _Example 4 already showed us that Theorem 3 fails for uncountable families._
The new difficulty, in comparison with Theorem 7, is that there is no longer a separating family for the set of ends of all the foliations.
**Example 10**.: _Consider the restriction of the constant horizontal and vertical foliations to the strip \(\{(x,y),|x-y|<1\}\), which is so important for the study of Anosov flows. Then every end of a horizontal leaf has a unique successor or predecessor which is the end of a vertical leaf. Thus no family can be separating._
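To make the obstruction concrete, here is a short computation (the parametrization below is ours, a sketch rather than part of the original argument). In the strip \(\{(x,y),|x-y|<1\}\), the horizontal leaf at height \(c\) and the vertical leaf at abscissa \(c+1\) are
\[H_{c}=\{(x,c):c-1<x<c+1\},\qquad V_{c+1}=\{(c+1,y):c<y<c+2\},\]
and the right end of \(H_{c}\) and the lower end of \(V_{c+1}\) both accumulate on the boundary point \((c+1,c)\) of the strip. These two ends are therefore successive in the cyclic order, with no other end of leaf between them, so no family of ends can separate them.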
To bypass this difficulty, we will apply Theorem 6 instead of Theorem 1.
Proof of Theorem 3.: The ends of regular leaves \(\mathcal{R}=\bigcup_{i\in I}\mathcal{R}_{i}\) of all the foliations \(\mathcal{F}_{i}\), \(i\in I\), form a family of disjoint ends of rays.
We have seen that for every foliation \(\mathcal{F}_{i}\) the set of ends of regular leaves \(\mathcal{R}_{i}\) admits a countable separating family, obtained for instance by considering regular leaves through a dense subset of \(\mathbb{R}^{2}\).
Thus Theorem 6 provides a compactification of \(\mathbb{R}^{2}\) by \(\mathbb{D}^{2}\) satisfying the announced properties for the regular leaves, that is, items 2 and 3.
For item 1, one needs to see that even the ends of non-regular leaves tend to points at infinity. That is given by Lemma 2.4.
The uniqueness comes from the uniqueness in Theorem 6, ending the proof.
### Example: Countable families of polynomial vector fields
**Corollary 5.1**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\}_{i\in I},I\subset\mathbb{N}\) be a countable family of foliations directed by polynomial vector fields on \(\mathbb{R}^{2}\) whose singular points are all of saddle type. Then any two ends of leaves either are disjoint or coincide._
_Thus, according to Theorem 3, there is a unique compactification \(\mathbb{D}^{2}_{\mathcal{F}}=\mathbb{R}^{2}\cup\mathbb{S}^{1}_{\mathcal{F}}\) for which the ends of regular leaves of the same foliation tend to pairwise distinct points on the circle at infinity, and these ends of leaves are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\)._
Proof.: We just need to prove it for \(2\) such distinct foliations \(\mathcal{F}\) and \(\mathcal{G}\). Consider the tangency locus of \(\mathcal{F}\) and \(\mathcal{G}\). That is an algebraic set in \(\mathbb{R}^{2}\) which is either \(\mathbb{R}^{2}\) (so that \(\mathcal{F}=\mathcal{G}\), contradicting the assumption) or is at most \(1\)-dimensional. Thus it consists of the union of a compact part and a family of disjoint rays \(\delta_{1},\ldots,\delta_{k}\).
If it is compact, then every end of leaf of \(\mathcal{F}\) is transverse to \(\mathcal{G}\) and therefore cuts every leaf of \(\mathcal{G}\) in at most \(1\) point: the ends are disjoint.
Otherwise, each ray \(\delta_{i}\) either is tangent to both foliations and is therefore a common leaf (which is one of the announced possibilities), or is transverse to \(\mathcal{F}\) and to \(\mathcal{G}\) outside a finite set (because the tangencies on \(\delta_{i}\) form an algebraic subset).
Thus, up to shrinking the non-tangent \(\delta_{i}\), we may assume that they are transverse to both foliations and therefore cut every leaf of \(\mathcal{F}\) in at most \(1\) point. This implies that every end of leaf of \(\mathcal{F}\) which is not an end of \(\mathcal{G}\) is transverse to \(\mathcal{G}\) and thus is disjoint from any end of leaf of \(\mathcal{G}\), concluding.
**Remark 13**.: _The compactification in Corollary 5.1 is in general distinct from the algebraic extension of the \(\mathcal{F}_{i}\) on \(\mathbb{R}\mathbb{P}^{2}\): for instance, consider the trivial example of \(\mathbb{R}^{2}\) endowed with the horizontal and vertical foliations. In the compactification by the algebraic extension, all the leaves of the horizontal (resp. vertical) foliation tend to the same point of \(\mathbb{R}\mathbb{P}^{1}\) (which corresponds to \(2\) points for the circle at infinity)._
### Projections of \(\mathbb{D}^{2}_{\mathcal{F}}\) on \(\mathbb{D}^{2}_{\mathcal{F}_{i}}\) and center-like points on the circle at infinity
**Example 11**.: _Consider \(\mathbb{R}^{2}\) endowed with the trivial horizontal and vertical foliations, \(\mathcal{H}\) and \(\mathcal{V}\) respectively. Then the compactification \(\mathbb{D}^{2}_{\mathcal{H},\mathcal{V}}\) is conjugated to the square \([-1,1]^{2}\) endowed with the trivial horizontal and vertical foliations. Every point \(p\in\mathbb{S}^{1}_{\mathcal{H},\mathcal{V}}\), except the four vertices, is the limit of exactly \(1\) end of leaf, either horizontal (for \(p\) in the vertical sides) or vertical (for \(p\) in the horizontal sides)._
_The projection \(\Pi_{\mathcal{H}}\colon\mathbb{D}^{2}_{\mathcal{H},\mathcal{V}}\to\mathbb{D}^{2}_{\mathcal{H}}\) consists in collapsing the two horizontal sides, which are transformed into the center-like points of \(\mathbb{S}^{1}_{\mathcal{H}}\)._
_The projection \(\Pi_{\mathcal{V}}\colon\mathbb{D}^{2}_{\mathcal{H},\mathcal{V}}\to\mathbb{D}^{2}_{\mathcal{V}}\) consists in collapsing the two vertical sides, which are transformed into the center-like points of \(\mathbb{S}^{1}_{\mathcal{V}}\)._
**Example 12**.: _Consider the strip \(\{(x,y)\in\mathbb{R}^{2},|x-y|<1\}\) endowed with the horizontal and vertical foliations \(\mathcal{H}\) and \(\mathcal{V}\) respectively. Then \(\mathbb{D}^{2}_{\mathcal{H},\mathcal{V}}=\mathbb{D}^{2}_{\mathcal{H}}=\mathbb{D}^{2}_{\mathcal{V}}\), and it consists in adding two points \(\pm\infty\) to the closed strip \(\{(x,y)\in\mathbb{R}^{2},|x-y|\leq 1\}\). Every point in the sides \(|x-y|=1\) is the limit of exactly \(1\) end of leaf of \(\mathcal{H}\) and \(1\) end of leaf of \(\mathcal{V}\), and the points \(\pm\infty\) are center-like for both foliations._
These two examples show that pairs of very simple foliations may lead to different behaviors of the projection from the compactification associated to the pair onto the compactification of each foliation.
Proposition 5.1 below shows that, for complicated foliations, the compactification of the pair of foliations in general coincides with the compactification of each foliation.
**Proposition 5.1**.: _Let \(\mathcal{F}\), \(\mathcal{G}\) be two transverse foliations on \(\mathbb{R}^{2}\). Assume that_
* _the union of leaves of_ \(\mathcal{G}\) _which are not separated at their right from another leaf is dense in_ \(\mathbb{R}^{2}\)_,_
* _the union of leaves of_ \(\mathcal{G}\) _which are not separated at their left from another leaf is dense in_ \(\mathbb{R}^{2}\)_._
_Then the identity map on \(\mathbb{R}^{2}\) extends as a homeomorphism \(\mathbb{D}^{2}_{\mathcal{F},\mathcal{G}}\to\mathbb{D}^{2}_{\mathcal{F}}\): in other words \(\mathbb{D}^{2}_{\mathcal{F},\mathcal{G}}=\mathbb{D}^{2}_{\mathcal{F}}\)._
Proof.: Assume that there is an open interval \(I\) of \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) containing no limit point of an end of leaf of \(\mathcal{F}\). Then the ends of leaves of \(\mathcal{G}\) are dense in \(I\), and therefore the projection of \(I\) on \(\mathbb{S}^{1}_{\mathcal{G}}\) is injective.
Now Lemma 4.8 implies that there are leaves \(L\) of \(\mathcal{G}\) having both ends on \(I\). Thus, up to exchanging positive and negative, every positive half leaf of \(\mathcal{F}\) through \(L\) has its end on \(I\), contradicting the definition of \(I\).
So the points of \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) corresponding to ends of leaves of \(\mathcal{F}\) are dense. Thus \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}=\mathbb{S}^{1}_{\mathcal{F}}\), concluding the proof.
As a direct corollary of Proposition 5.1 and Lemma 4.8 one gets
**Corollary 5.2**.: _Let \(\mathcal{F},\mathcal{G}\) be two transverse foliations on \(\mathbb{R}^{2}\) so that, for both \(\mathcal{F}\) and \(\mathcal{G}\), the leaves non-separated at the right and the leaves non-separated at the left are each dense in \(\mathbb{R}^{2}\)._
_Then generic points in \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}=\mathbb{S}^{1}_{\mathcal{F}}=\mathbb{ S}^{1}_{\mathcal{G}}\) are center-like for both foliations._
### Hyperbolic sectors
In the case of a single foliation we have seen that, if several ends of leaves have the same limit point on the circle at infinity, then they are ordered as a segment of \(\mathbb{Z}\) and two successive ends bound a hyperbolic sector. These hyperbolic sectors have a very precise model, which allows us to understand the position of a transverse foliation.
**Lemma 5.1**.: _Let \(\mathcal{F}\) and \(\mathcal{G}\) be two transverse foliations on \(\mathbb{R}^{2}\), and consider \(\pi_{\mathcal{F}}\colon\mathbb{D}^{2}_{\mathcal{F},\mathcal{G}}\to\mathbb{D}^{2}_{\mathcal{F}}\). Assume that \(p\in\mathbb{D}^{2}_{\mathcal{F}}\) is the corner of a hyperbolic sector bounded by the ends \(A\) and \(B\) of leaves of \(\mathcal{F}\)._
_Then there is a non-empty interval \(I_{\mathcal{G}}\) of ends of leaves of \(\mathcal{G}\) ending at \(p\) in \(\mathbb{D}^{2}_{\mathcal{F}}\). Furthermore_
* _either_ \(I_{\mathcal{G}}\) _consists of a unique end of leaf_ \(C\) _of_ \(\mathcal{G}\)_, and_ \(A,B,C\) _tend to the same point at infinity in_ \(\mathbb{D}^{2}_{\mathcal{F},\mathcal{G}}\)_,_
* _or_ \(\pi^{-1}_{\mathcal{F}}(I_{\mathcal{G}})\) _is a closed interval on the circle_ \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) _whose interior consists of regular ends of leaves of_ \(\mathcal{G}\)_._
Proof.: Just use the model \([-1,1]\times[0,1]\) where \(p\) is the point \((0,1)\), \(A=[-1,0)\times\{1\}\) and \(B=(0,1]\times\{1\}\), and the horizontal segments \([-1,1]\times\{t\}\), \(0\leq t<1\), are \(\mathcal{F}\)-leaf segments. We can choose this model so that the vertical sides \(\{-1\}\times[0,1]\) and \(\{1\}\times[0,1]\) are leaf segments of \(\mathcal{G}\). Consider the \(\mathcal{G}\)-leaves through \([-1,1]\times\{0\}\). The leaves reaching \(A\) and the leaves reaching \(B\) are two non-empty intervals, open in \([-1,1]\) and disjoint. By connectedness, there are leaves, corresponding to a closed interval of \([-1,1]\), which reach neither \(A\) nor \(B\), and these leaves end at \(p\in\mathbb{S}^{1}_{\mathcal{F}}\).
Assume that this interval is not reduced to a single end of leaf of \(\mathcal{G}\), and consider an end \(C\) in the interior of this interval; assume that \(C\) is, for instance, a right end. Consider the neighborhoods \(U_{t}^{right}\) of \(C\) defined in Section 3. Then \(\bigcap_{t}U_{t}^{right}\) consists of \(C\) and of a (maybe empty) set of entire leaves of \(\mathcal{G}\) contained in the hyperbolic sector. Assume that this set is not empty and let \(D\) be such a leaf of \(\mathcal{G}\). Every leaf of \(\mathcal{F}\) cutting \(D\) has a half leaf contained in the hyperbolic sector, contradicting the definition of a hyperbolic sector. Thus \(C=\bigcap_{t}U_{t}^{right}\), meaning that \(C\) is a regular end of leaf of \(\mathcal{G}\), ending the proof.
As a straightforward consequence, one gets:
**Corollary 5.3**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\}_{i\in I}\), \(I\subset\mathbb{N}\) be an at most countable family of pairwise transverse foliations on \(\mathbb{R}^{2}\). Consider a point \(p\in\mathbb{D}^{2}_{\mathcal{F}}\). Then_
* _either at most_ \(1\) _end of leaf of each_ \(\mathcal{F}_{i}\) _has_ \(p\) _as its limit,_
* _or the set of ends tending to_ \(p\) _is ordered as an interval of_ \(\mathbb{Z}\) _and, between any two successive ends of leaves of the same_ \(\mathcal{F}_{i}\)_, there is exactly_ \(1\) _end of a leaf of each_ \(\mathcal{F}_{j}\)_,_ \(j\neq i\)_._
## 6. The circle at infinity for orientable laminations
### The circle at infinity of a lamination
The way we proposed to compactify \(\mathbb{R}^{2}\) can be generalized to any object providing a family of rays admitting a separating set.
For instance, what about laminations? The theory cannot be extended without hypotheses. An evident obstruction is that the leaves can be too few to go to a dense subset of a circle at infinity. But there are more subtle issues, as Example 13 below shows.
**Example 13**.: _There are closed laminations whose leaves are recurrent. For instance consider a Plykin attractor on \(\mathbb{R}^{2}\): it is a compact minimal lamination (by the unstable manifolds)._
_If one now considers a Plykin attractor on \(\mathbb{S}^{2}=\mathbb{R}^{2}\cup\{\infty\}\), where \(\infty\) belongs to the attractor, one gets a closed lamination on \(\mathbb{R}^{2}\) where every leaf is unbounded but recurrent._
Notice that the recurrent laminations in Example 13 are not orientable. Let me show that the Poincaré-Bendixson argument applies to orientable laminations:
**Lemma 6.1**.: _Let \(\mathcal{L}\) be a closed orientable lamination of \(\mathbb{R}^{2}\). Given any leaf \(L\), either the closure \(\bar{L}\) contains a compact leaf or \(L\) is a line._
Proof.: If \(\bar{L}=L\) then \(L\) is either a compact leaf or a line (i.e. is properly embedded in \(\mathbb{R}^{2}\)). Assume now that \(\bar{L}\setminus L\) contains a point \(x\in\mathbb{R}^{2}\). We fix an orientation of \(\mathcal{L}\). Choose a segment \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) transverse to \(\mathcal{L}\) so that \(\sigma(0)=x\) and so that \(\sigma\) cuts positively every leaf. The hypothesis implies that \(L\) cuts \(\sigma\) in an infinite set. Consider \(2\) successive (for the order in the leaf \(L\)) intersection points \(z_{0},z_{1}\). Then one gets a simple closed curve \(\delta\) in \(\mathbb{R}^{2}\) by concatenation of the segments \([z_{0},z_{1}]_{L}\) and \([z_{1},z_{0}]_{\sigma}\) joining \(z_{0}\) to \(z_{1}\), in \(L\) and in \(\sigma\) respectively.
Consider the disc \(\Delta\) bounded by \(\delta\). Every leaf cuts \(\delta\) with the same orientation, that is, either every leaf enters \(\Delta\) or every leaf goes out of \(\Delta\). Up to reversing the orientation, one may assume that every leaf enters \(\Delta\), and in particular \(L\) enters \(\Delta\). In other words, there is a positive half leaf \(L_{+}\) included in \(\Delta\). This half leaf cannot be recurrent (otherwise it would cut \([z_{0},z_{1}]_{\sigma}\) again, and for that it would need to go out of \(\Delta\)). Furthermore:
**Claim 2**.: _No other leaf \(L^{\prime}\neq L\) can accumulate on \(L\): \(L\cap\bar{L}^{\prime}=\emptyset\) if \(L^{\prime}\neq L\)._
Proof.: If \(L^{\prime}\) accumulates on \(L\), it cuts \([z_{0},z_{1}]_{\sigma}\) in an infinite set; but every leaf enters \(\Delta\) when crossing \([z_{0},z_{1}]_{\sigma}\) and cannot go out of \(\Delta\), so it cuts \([z_{0},z_{1}]_{\sigma}\) at most once, a contradiction.
Thus the \(\omega\)-limit set \(\omega(L)=\bar{L}_{+}\setminus L_{+}\) is not empty. Consider \(y\in\bar{L}_{+}\setminus L_{+}\). The leaf \(L_{y}\) is contained in \(\Delta\). Either \(L_{y}\) is compact, \(\omega(L)=L_{y}\), and we are done, or \(\bar{L}_{y}\setminus L_{y}\neq\emptyset\). In that case, Claim 2 (applied to \(L_{y}\)) implies that \(L_{y}\) is not accumulated by any other leaf, in particular not by \(L\), contradicting the definition of \(L_{y}\).
We are now ready to extend Theorem 7 to the case of orientable laminations:
**Theorem 8**.: _Let \(\mathcal{L}\) be a closed orientable lamination of \(\mathbb{R}^{2}\) with no compact leaf, and assume that the set of leaves of \(\mathcal{L}\) is uncountable. Then there is a compactification \(\mathbb{D}^{2}_{\mathcal{L}}\simeq\mathbb{D}^{2}\) of \(\mathbb{R}^{2}\) obtained by adding a circle at infinity \(\mathbb{S}^{1}_{\mathcal{L}}=\partial\mathbb{D}^{2}_{\mathcal{L}}\) with the following properties:_
* _any half leaf tends to a point at infinity._
* _given a point_ \(\theta\in\mathbb{S}^{1}_{\mathcal{L}}\) _the set of ends of leaves tending to_ \(\theta\) _is at most countable._
* _for any non-empty open subset_ \(I\) _of_ \(\mathbb{S}^{1}_{\mathcal{L}}\) _the set of points in_ \(I\) _corresponding to limits of ends of leaves is uncountable._
_Furthermore this compactification of \(\mathbb{R}^{2}\) by \(\mathbb{D}^{2}\) with these three properties is unique, up to a homeomorphism of the disk \(\mathbb{D}^{2}\)._
Let me just give a sketch of the proof.
Proof.: The lamination \(\mathcal{L}\) is assumed to be oriented and without compact leaves, so that every leaf is a line, according to Lemma 6.1.
According to the Cantor-Bendixson theorem, see for instance [Ke], the lamination \(\mathcal{L}\) can be written in a unique way as the union \(\mathcal{L}=\mathcal{L}_{0}\cup\mathcal{L}_{1}\) of two disjoint laminations, where \(\mathcal{L}_{0}\) is a closed lamination with no isolated leaves and \(\mathcal{L}_{1}\) consists of a countable set of leaves.
A leaf \(L\in\mathcal{L}_{0}\) is called _regular_ if it is accumulated on both sides by leaves in \(\mathcal{L}_{0}\) and is separated from any other leaf of \(\mathcal{L}_{0}\). The same proof as for foliations shows that the set of leaves in \(\mathcal{L}_{0}\) which are not regular is at most countable.
Finally, as for foliations, one considers the set \(\mathcal{R}\) of germs of rays contained in regular leaves of \(\mathcal{L}_{0}\). We consider a countable set \(\mathcal{D}\) of regular leaves whose union is dense in \(\mathcal{L}_{0}\), and, as in the case of foliations, one proves that the rays in \(\mathcal{D}\) are separating for \(\mathcal{R}\).
Then we apply Theorem 1 and we get the announced canonical compactification.
When \(\mathcal{L}\) is transversely a perfect compact set (that is, through every point \(x\in\mathcal{L}\) there is a transverse segment \(\sigma\) so that \(\sigma\cap\mathcal{L}\) is a compact set without isolated points), the compactification given by Theorem 8 seems very natural: any homeomorphism \(h\) of \(\mathbb{R}^{2}\) preserving \(\mathcal{L}\) extends on \(\mathbb{S}^{1}_{\mathcal{L}}\) as a homeomorphism \(H\) of \(\mathbb{D}^{2}_{\mathcal{L}}\), and the restriction \(H|_{\mathbb{S}^{1}_{\mathcal{L}}}\) is the identity map if and only if \(h\) preserves every leaf of \(\mathcal{L}\). That is no longer the case if \(\mathcal{L}\) has isolated leaves.
For laminations with isolated leaves, Theorem 8 just ignores the countable part \(\mathcal{L}_{1}\) of \(\mathcal{L}\) (in the Cantor-Bendixson decomposition of \(\mathcal{L}\)). We will now propose a canonical compactification which takes this countable part into account.
We start by looking at two very different examples of countable oriented laminations.
**Example 14**.: _Consider the lamination \(\mathcal{L}=\mathbb{R}\times\mathbb{Z}\). Then \(\mathcal{L}\) does not admit any compactification by a circle at infinity so that any homeomorphism \(h\) preserving \(\mathcal{L}\) extends to the circle at infinity._
**Example 15**.: _Consider a hyperbolic surface \(S\) of finite volume and a set \(\ell\) of disjoint essential simple closed geodesics on \(S\). Then the lift \(\mathcal{L}\) of \(\ell\) to the universal cover \(\tilde{S}=\mathring{\mathbb{D}}^{2}\) is a countable, discrete lamination by geodesics of the Poincaré disc, so that each end of leaf tends to a point on the circle \(\mathbb{S}^{1}=\partial\mathbb{D}^{2}\), and the set of such limit points is dense in \(\mathbb{S}^{1}\), as the action of \(\pi_{1}S\) on \(\mathbb{S}^{1}\) is minimal._
_In this example, the lamination is transversely discrete, but the set of ends of leaves is a self-separating set for the cyclic order._
In Example 15 above, what implies the existence of a separating set is the minimality of the action on the circle at infinity of a natural compactification.
In order to propose a canonical compactification for a closed, oriented lamination without compact leaves, we need to determine what part of a cyclically totally ordered set admits a separating subset. That is what is done in the next easy proposition, whose proof is left to the reader.
**Proposition 6.1**.: _Let \(X\) be a set endowed with a total cyclic order. Consider the relation on \(X\) defined as follows: \(x\approx y\) if one of the intervals \([x,y]\) or \([y,x]\) does not contain any self-separating subset \(E\) (with \(\#E\geq 2\)). Then_
* _the relation_ \(\approx\) _is an equivalence relation_
* _each equivalence class is an interval_
* _the cyclic order on_ \(X\) _induces a total cyclic order on the quotient_ \(X/\approx\)__
_Furthermore, \(X/\approx\) is either a single point or an infinite self-separating set._
Note that any two distinct points in a self-separating set belong to distinct classes, so that \(\#(X/\approx)=1\) if and only if \(X\) does not contain any (non-trivial) self-separating subset. Otherwise \(\#(X/\approx)=\infty\).
The canonical compactification is now given by Theorem 9 below:
**Theorem 9**.: _Let \(\mathcal{L}\) be a closed oriented lamination of the plane \(\mathbb{R}^{2}\), with no compact leaf, and let \(\mathcal{R}\) be the set of ends of leaves of \(\mathcal{L}\). As the ends of leaves are disjoint rays, the set \(\mathcal{R}\) is totally cyclically ordered. Assume that \(\#(\mathcal{R}/\approx)>1\)._
_Then there is a unique compactification \(\mathbb{D}^{2}_{\mathcal{L}}\) of \(\mathbb{R}^{2}\) by adding a circle at infinity \(\mathbb{S}^{1}_{\mathcal{L}}\) so that_
* _any end of leaf of_ \(\mathcal{L}\) _tends to a point in_ \(\mathbb{S}^{1}_{\mathcal{L}}\)__
* _the set of points in_ \(\mathbb{S}^{1}_{\mathcal{L}}\) _which are limits of ends of leaves is dense in_ \(\mathbb{S}^{1}_{\mathcal{L}}\)_,_
* _two ends of leaves tend to the same point in_ \(\mathbb{S}^{1}_{\mathcal{L}}\) _if and only if they belong to the same class in_ \(\mathcal{R}/\approx\)_._
Proof.: Just apply Theorem 1 to a subset \(E\subset\mathcal{R}\) containing exactly \(1\) representative in each class of \(\approx\). One checks that the compactification obtained satisfies the announced properties and does not depend on the choice of \(E\).
**Remark 14**.: _Every class of \(\approx\) in \(\mathcal{R}\) is at most countable because the set of ends of regular leaves in the perfect part \(\mathcal{L}_{0}\) is self-separating._
This compactification takes into account more leaves than the compactification given by Theorem 8, but it may still have unexpected behaviours:
**Example 16**.: _Consider a non-compact hyperbolic surface \(S\) of finite volume and a closed lamination \(\ell\) defined by two disjoint freely homotopic essential closed curves and a closed (but non-compact) leaf whose two ends tend to the same puncture of \(S\)._
_Then the lift \(\mathcal{L}\) of \(\ell\) to the universal cover \(\tilde{S}=\mathring{\mathbb{D}}^{2}\) is a countable, discrete lamination of the Poincaré disc, so that each end of leaf tends to a point on the circle \(\mathbb{S}^{1}=\partial\mathbb{D}^{2}\), and the set of such limit points is dense in \(\mathbb{S}^{1}\) (again because the action of \(\pi_{1}S\) on \(\mathbb{S}^{1}\) is minimal)._
_In this example, however, there are pairs of leaves whose respective ends share the same limit points, and there are leaves whose two ends tend to the same point._
Given a closed oriented lamination \(\mathcal{L}\) with no compact leaves and its Cantor-Bendixson decomposition \(\mathcal{L}=\mathcal{L}_{0}\cup\mathcal{L}_{1}\) (\(\mathcal{L}_{0}\) is a closed lamination without isolated leaves and \(\mathcal{L}_{1}\) is countable), Theorem 9 takes into account the part of the ends of leaves in \(\mathcal{L}_{1}\) with separating subsets, in contrast with Theorem 8. For my personal taste, the main issue in Theorem 9 is that I have not found any natural criterion for computing the equivalence classes of \(\approx\) in \(\mathcal{L}_{1}\). In fact, Lemma 6.2 below makes it look paradoxical that \(\mathcal{L}_{1}\) may have separating subsets:
**Lemma 6.2**.: _Let \(D\subset\mathbb{R}\) be a countable compact subset, ordered by \(\mathbb{R}\). Then \(D\) does not contain any self-separating subset \(\mathcal{E}\subset D\) (that is, \(\#\mathcal{E}>2\) and for every \(x<z\), \(x,z\in\mathcal{E}\), there is \(y\in\mathcal{E}\) with \(x<y<z\))._
Proof.: If \(\mathcal{E}\) is a non-trivial self-separating subset, then there is an increasing bijection from \(\mathcal{E}\backslash\{\min\mathcal{E},\max\mathcal{E}\}\) to \(\mathbb{Q}\cap(0,1)\). This increasing bijection extends in a unique way to a (non-strictly) increasing map \(\mathbb{R}\to[0,1]\). This map is continuous and the image of \(D\) is \([0,1]\).
Thus \(D\) is not countable.
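To see the mechanism on a concrete set (an illustration of ours, not from the text), take for \(\mathcal{E}\) the dyadic rationals
\[\mathcal{E}=\Bigl\{\tfrac{k}{2^{n}}:n\geq 0,\ 0\leq k\leq 2^{n}\Bigr\}\subset[0,1].\]
Between any two dyadic rationals lies a third one, so \(\mathcal{E}\) is self-separating; and any compact set \(D\subset\mathbb{R}\) containing \(\mathcal{E}\) is closed, hence contains \(\overline{\mathcal{E}}=[0,1]\) and is uncountable, exactly as Lemma 6.2 predicts.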
Lemma 6.2 tells us that the separating property of a closed countable lamination cannot be obtained locally (in foliated charts of the lamination). One deduces:
**Proposition 6.2**.: _Let \(\mathcal{L}\) be a closed countable oriented lamination of \(\mathring{\mathbb{D}}^{2}\) so that every end of leaf tends to a point on \(\mathbb{S}^{1}\) and the set of such limit points is dense in \(\mathbb{S}^{1}\)._
_Then given any non-empty open interval \(I\subset\mathbb{S}^{1}\) there is \(L\in\mathcal{L}\) whose both ends have their limits in \(I\)._
_More precisely, any neighborhood of \(I\) in \(\mathbb{D}^{2}\) contains an entire leaf of \(\mathcal{L}\)._
Proof.: One considers a neighborhood of \(I\) bounded by two half leaves whose limits are points \(x\neq y\in I\) and a segment \(\sigma\) transverse to \(\mathcal{L}\) joining these two leaves. If no leaf is contained in this neighborhood, then every leaf having an end in \(I\) cuts \(\sigma\).
On the other hand, any dense subset of an interval \(J\) of \(\mathbb{R}\) is self-separating. One deduces that \(\sigma\cap\mathcal{L}\) contains a self-separating subset; but it is a countable compact set, and this contradicts Lemma 6.2.
This proposition says that the separating property for a countable oriented lamination is obtained by leaves in small neighborhoods of the points at infinity.
### Families of transverse laminations
In general, transversality does not imply the compactness of the intersection of two leaves of transverse laminations. But this compactness is our main hypothesis for the compactification associated to families of foliations.
However, if two lines \(L_{1},L_{2}\subset\mathbb{R}^{2}\) always intersect with the same orientation, then \(\#(L_{1}\cap L_{2})\leq 1\). One deduces that Theorem 2 extends without difficulty to countable families of oriented closed laminations intersecting pairwise transversely and always with the same orientation:
**Theorem 10**.: _Let \(\mathcal{L}=\{\mathcal{L}_{i}\}\), \(i\in I\subset\mathbb{N}\), be a family of closed orientable laminations of \(\mathbb{R}^{2}\) with no compact leaves and so that the set of leaves of each \(\mathcal{L}_{i}\) is uncountable. We assume that the laminations are pairwise transverse with constant orientation of the intersections. Then there is a compactification \(\mathbb{D}^{2}_{\mathcal{L}}\simeq\mathbb{D}^{2}\) of \(\mathbb{R}^{2}\) obtained by adding a circle at infinity \(\mathbb{S}^{1}_{\mathcal{L}}=\partial\mathbb{D}^{2}_{\mathcal{L}}\) with the following properties:_
* _any half leaf tends to a point at infinity._
* _given a point_ \(\theta\in\mathbb{S}^{1}_{\mathcal{L}}\) _the set of ends of leaves tending to_ \(\theta\) _is at most countable._
* _for any non-empty open subset_ \(I\) _of_ \(\mathbb{S}^{1}_{\mathcal{L}}\) _the set of points in_ \(I\) _corresponding to limits of ends of leaves is uncountable._
_Furthermore this compactification of \(\mathbb{R}^{2}\) by \(\mathbb{D}^{2}\) with these three properties is unique, up to a homeomorphism of the disk \(\mathbb{D}^{2}\)._
## 7. Actions on a bifoliated plane
We have seen that any homeomorphism \(h\in Homeo(\mathbb{R}^{2})\) preserving an at most countable family of transverse foliations \(\mathcal{F}\) admits a unique extension as a homeomorphism of the compactification \(\mathbb{D}^{2}_{\mathcal{F}}\).
Thus if \(H\hookrightarrow Homeo(\mathbb{R}^{2})\) is a group acting on \(\mathbb{R}^{2}\) and preserving the (at most countable) family of transverse foliations \(\mathcal{F}\), then this action extends to an action on \(\mathbb{D}^{2}_{\mathcal{F}}\). By restriction to the circle at infinity, one gets an action of \(H\) on \(\mathbb{S}^{1}_{\mathcal{F}}\).
If \(H\hookrightarrow Homeo(\mathbb{R}^{2})\) is a group acting on \(\mathbb{R}^{2}\) and preserving a family of foliations \(\mathcal{F}\), we say that _the action is minimal on the leaves of \(\mathcal{F}\)_ if \(H(L)\) is dense in \(\mathbb{R}^{2}\) for every leaf \(L\) of a foliation of the family \(\mathcal{F}\).
### Faithfulness
**Proposition 7.1**.: _Let \(\mathcal{F}\) be a foliation, and \(h\in Homeo(\mathbb{R}^{2})\) be a homeomorphism preserving \(\mathcal{F}\). Then the action of \(h\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map if and only if \(h(L)=L\) for any leaf \(L\), and \(h\) preserves the orientation of the leaves._
Proof.: If \(h\) preserves every leaf and its orientation, then it fixes the limit of each of its ends. As these limits of ends are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\), one gets that the homeomorphism induced by \(h\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map.
Conversely, if \(h\) induces the identity on \(\mathbb{S}^{1}_{\mathcal{F}}\), then for every leaf \(L\) the leaf \(h(L)\) has the same limits of ends as \(L\). According to Lemma 4.2, this implies \(h(L)=L\), as announced.
**Corollary 7.1**.: _Let \(\mathcal{F}=\{\mathcal{F}_{i}\},i\in I\subset\mathbb{N}\) be a family of at least \(2\) transverse foliations. Let \(h\in Homeo(\mathbb{R}^{2})\) be a homeomorphism preserving each foliation \(\mathcal{F}_{i}\). Then the action of \(h\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map if and only if \(h\) itself is the identity map._
Proof.: If the homeomorphism induced by \(h\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) is the identity map, then so is the homeomorphism induced by \(h\) on every \(\mathbb{S}^{1}_{\mathcal{F}_{i}}\) (because these circles are quotients of \(\mathbb{S}^{1}_{\mathcal{F}}\)). Thus Proposition 7.1 implies that \(h\) preserves each leaf of each \(\mathcal{F}_{i}\). As every point of \(\mathbb{R}^{2}\) is the unique intersection point of the leaves through it, one deduces that every point of \(\mathbb{R}^{2}\) is fixed by \(h\), and \(h\) is the identity map.
### Orientations and injectivity of the projections
Let \(\mathcal{F}\) be a foliation of the plane \(\mathbb{R}^{2}\), endowed with an orientation and a transverse orientation. Let \(\mathcal{G}\subset Homeo(\mathbb{R}^{2})\) be a group of homeomorphisms preserving (globally) \(\mathcal{F}\). Let \(\mathcal{G}^{+}\) (resp. \(\mathcal{G}_{+}\)) be the index at most \(2\) subgroup consisting of the elements of \(\mathcal{G}\) preserving the orientation (resp. the transverse orientation) of \(\mathcal{F}\), and \(\mathcal{G}^{+}_{+}=\mathcal{G}^{+}\cap\mathcal{G}_{+}\) the subgroup of elements preserving both orientations. Then:
**Lemma 7.1**.: _If one of the groups \(\mathcal{G},\mathcal{G}^{+},\mathcal{G}_{+},\mathcal{G}^{+}_{+}\) acts minimally on the leaves of \(\mathcal{F}\), then so does each of these \(4\) groups._
We will indeed prove Lemma 7.2, of which Lemma 7.1 is a particular case.
**Lemma 7.2**.: _Let \(\mathcal{G}\) be a group acting minimally on the leaves of a foliation \(\mathcal{F}\) of \(\mathbb{R}^{2}\), and \(\mathcal{H}\subset\mathcal{G}\) be a subgroup of finite index. Then \(\mathcal{H}\) acts minimally on the leaves of \(\mathcal{F}\)._
Proof.: Assume that \(\mathcal{G}\) acts minimally on the leaves of \(\mathcal{F}\), and consider a leaf \(L\). As \(\mathcal{H}\) is a finite index subgroup, there are \(g_{1},\ldots,g_{n}\in\mathcal{G}\) so that for any \(g\in\mathcal{G}\) there is \(i\in\{1,\ldots,n\}\) with \(g\mathcal{H}=g_{i}\mathcal{H}\). Let us denote \(\mathcal{H}_{i}=g_{i}\mathcal{H}\). In particular \(\mathcal{G}=\bigcup_{i}\mathcal{H}_{i}\), and then \(\mathbb{R}^{2}=\bigcup_{i}\overline{\mathcal{H}_{i}(L)}\).
Consider any open subset \(O\) of \(\mathbb{R}^{2}\).
\[O=O\cap\bigcup_{i}\overline{\mathcal{H}_{i}(L)}=\bigcup_{i}(O\cap\overline{ \mathcal{H}_{i}(L)})\]
The open set \(O\) is a Baire space, so that the union of finitely many closed sets with empty interior has empty interior: one deduces that at least one of the \(O\cap\overline{\mathcal{H}_{i}(L)}\) has non-empty interior. One deduces that the union \(\bigcup_{i}\mathring{\overline{\mathcal{H}_{i}(L)}}\) of the interiors of the \(\overline{\mathcal{H}_{i}(L)}\) is dense in \(\mathbb{R}^{2}\).
Notice that for every \(i\) and every \(g\) there is \(j\) so that \(g(\mathcal{H}_{i}(L))=\mathcal{H}_{j}(L)\).
Consider \(\mathbb{R}^{2}\setminus\bigcup_{i}\mathring{\overline{\mathcal{H}_{i}(L)}}\). It is a \(\mathcal{G}\)-invariant closed set, saturated for the foliation \(\mathcal{F}\), and with empty interior. As every \(\mathcal{G}\)-orbit is dense, one deduces that this set is empty.
Thus
\[\mathbb{R}^{2}=\bigcup_{i}\mathring{\overline{\mathcal{H}_{i}(L)}}.\]
The open sets \(\mathring{\overline{\mathcal{H}_{i}(L)}}\) are images of one another by homeomorphisms in \(\mathcal{G}\), and in particular they are all non-empty.
As \(\mathbb{R}^{2}\) is connected, one deduces that the open sets \(\mathring{\overline{\mathcal{H}_{i}(L)}}\) are not pairwise disjoint. Let \(k\in\{1,\ldots,n\}\) be the maximum number so that there are distinct \(i_{1},\ldots,i_{k}\) with
\[\bigcap_{1}^{k}\mathring{\overline{\mathcal{H}_{i_{j}}(L)}}\neq\emptyset.\]
As the \(\mathring{\overline{\mathcal{H}_{i}(L)}}\) are not pairwise disjoint, we know that \(k\geq 2\). We will prove, arguing by contradiction:
**Claim 3**.: \(k=n\)_._
Proof.: For that we assume that \(k<n\).
Then we consider the union of all the intersections of \(k\) of these open sets. This union is an \(\mathcal{F}\)-saturated \(\mathcal{G}\)-invariant non-empty set and hence is dense. Its complement is an \(\mathcal{F}\)-saturated invariant closed set with empty interior, and therefore is empty.
Thus \(\mathbb{R}^{2}\) is the union of these open sets. Now again the connectedness of \(\mathbb{R}^{2}\) implies that these open sets are not pairwise disjoint. This provides a non-empty intersection of two distinct ones among these sets, that is, a non-empty intersection of more than \(k\) of the \(\mathring{\overline{\mathcal{H}_{i}(L)}}\), contradicting the choice of \(k\). This shows \(k=n\), proving the claim.
Thus
\[\bigcap_{1}^{n}\mathring{\overline{\mathcal{H}_{i}(L)}}\]
is a non-empty, \(\mathcal{G}\)-invariant open set saturated for the foliation \(\mathcal{F}\), and thus it is dense in \(\mathbb{R}^{2}\).
We just proved that \(\mathcal{H}(L)\) is dense in \(\mathbb{R}^{2}\), concluding the proof.
We will use the following straightforward corollary of Lemma 7.1:
**Corollary 7.2**.: _Let \(H\subset Homeo(\mathbb{R}^{2})\) be a group preserving a foliation \(\mathcal{F}\) and acting minimally on the leaves. Assume that \(L\) is a leaf which is not separated at the right and from below. Then the union of the leaves \(h(L)\), \(h\in H\), which are non-separated at the right and from below, is dense in \(\mathbb{R}^{2}\) (the same holds exchanging right with left and/or below with above)._
As a direct consequence of Proposition 5.1 and Corollary 7.2 we get:
**Proposition 7.2**.: _Let \(\mathcal{F},\mathcal{G}\) be two transverse foliations of \(\mathbb{R}^{2}\) and \(H\subset Homeo(\mathbb{R}^{2})\) a group preserving \(\mathcal{F}\) and \(\mathcal{G}\). Assume that the orbit of every leaf of \(\mathcal{G}\) is dense in \(\mathbb{R}^{2}\)._
_If \(\mathcal{G}\) has a non-separated leaf, then the projection \(\Pi_{\mathcal{F}}\colon\mathbb{D}^{2}_{\mathcal{F},\mathcal{G}}\to\mathbb{D}^{2}_{\mathcal{F}}\) is injective._
Proof.: If \(\mathcal{G}\) has a leaf \(L_{1}\) non-separated at the right, it is non-separated from a leaf \(L_{2}\) which is non-separated at the left. Now Corollary 7.2 asserts that the leaves of \(\mathcal{G}\) non-separated at the left, as well as the leaves non-separated at the right, are dense in \(\mathbb{R}^{2}\). Then Proposition 5.1 asserts that \(\Pi_{\mathcal{F}}\) is a homeomorphism, concluding.
### Minimality of the action on the circle at infinity
**Theorem 11**.: _Let \(\mathcal{F}\) be a foliation on the plane \(\mathbb{R}^{2}\) and \(H\subset Homeo(\mathbb{R}^{2})\) preserving the foliation \(\mathcal{F}\)._
1. _If the action of_ \(H\) _on_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _is minimal, then the foliation_ \(\mathcal{F}\) _admits leaves non-separated from above and leaves non-separated from below._
2. _Conversely, if the foliation_ \(\mathcal{F}\) _admits leaves non-separated from above and leaves non-separated from below, and if the orbit of every leaf is dense in_ \(\mathbb{R}^{2}\)_, then the action of_ \(H\) _on_ \(\mathbb{S}^{1}_{\mathcal{F}}\) _is minimal._
We will see with Theorem 14 that the minimality of the action on the leaves is not a necessary condition for the minimality of the action on the circle at infinity.
Item 1 of Theorem 11 is a consequence of Proposition 7.3 below.
**Proposition 7.3**.: _Let \(\mathcal{F}\) be a foliation of \(\mathbb{R}^{2}\) and assume that \(\mathcal{F}\) has no non-separated leaves from below. Given any leaf \(L\), we denote by \(\Delta^{+}_{L}\) the closure in \(\mathbb{D}^{2}_{\mathcal{F}}\) of the upper half plane of \(\mathbb{R}^{2}\) bounded by \(L\)._
_Then \(\bigcap_{L\in\mathcal{F}}\Delta^{+}_{L}\) is non-empty and consists of a unique point \(O_{\mathcal{F}}\) on \(\mathbb{S}^{1}_{\mathcal{F}}\). As a consequence, any \(h\in Homeo(\mathbb{D}^{2}_{\mathcal{F}})\) preserving \(\mathcal{F}\) admits \(O_{\mathcal{F}}\) as a fixed point:_
\[h(O_{\mathcal{F}})=O_{\mathcal{F}}.\]
Proof.: We introduce a relation on the set \(\mathcal{L}\) of leaves of \(\mathcal{F}\) as follows: \(L_{1}\preceq L_{2}\) if there is a positively oriented (for a transverse orientation of \(\mathcal{F}\)) transverse segment \(\sigma\) starting at \(L_{1}\) and ending at \(L_{2}\). One easily checks that \(\preceq\) is a partial order relation on \(\mathcal{L}\).
Due to the convexity of \(\mathbb{R}^{2}\), one gets:
**Claim 4**.: _given any leaves \(L,\tilde{L}\in\mathcal{L}\) there is \(k\geq 0\) and \(L_{0},\ldots,L_{k}\in\mathcal{L}\) so that,_
* _for any_ \(i\in\{0,\ldots,k-1\}\) _the leaves_ \(L_{i}\) _and_ \(L_{i+1}\) _are comparable for_ \(\preceq\) _(that is_ \(L_{i}\preceq L_{i+1}\) _or_ \(L_{i+1}\preceq L_{i}\)_)_
* \(L=L_{0}\) _and_ \(\tilde{L}=L_{k}\)_._
Proof.: There is a countable family of segments in \(\mathbb{R}^{2}\) transverse to \(\mathcal{F}\) so that every leaf \(L\) cuts at least one of these segments. The set of leaves cutting a given segment fills a connected open subset of \(\mathbb{R}^{2}\). Given any two points in \(\mathbb{R}^{2}\), one considers a compact path joining these two points. By compactness, it is covered by a finite family of these open sets. One concludes easily.
We denote by \(\prec L,L^{\prime}\succ\in\mathbb{N}\) the minimum value of such a number \(k\). One easily checks that \(\prec\cdot,\cdot\succ\) is a distance on the set of leaves \(\mathcal{L}\).
Up to now, this could be done for any foliation \(\mathcal{F}\). In this setting, our hypothesis that \(\mathcal{F}\) does not admit leaves which are non-separated from below is translated as follows:
**Claim 5**.: _Assume that \(L_{0},L_{1},L_{2}\in\mathcal{L}\) are three leaves so that \(L_{0}\preceq L_{1}\) and \(L_{0}\preceq L_{2}\). Then \(L_{1}\) and \(L_{2}\) are comparable for \(\preceq\)._
Proof.: We assume that the leaves \(L_{i}\) are distinct, otherwise there is nothing to do. Let \(\sigma_{i}\colon[0,1]\to\mathbb{R}^{2}\), \(i=1,2\), be segments transverse to \(\mathcal{F}\) and positively oriented, so that \(\sigma_{i}(0)\in L_{0}\) and \(\sigma_{i}(1)\in L_{i}\).
Let \(I=\{t\in[0,1],L(\sigma_{1}(t))\cap\sigma_{2}\neq\emptyset\}\) and \(J=\{t\in[0,1],L(\sigma_{2}(t))\cap\sigma_{1}\neq\emptyset\}\). As \(\mathbb{R}^{2}\) is simply connected, one shows that \(I\) and \(J\) are connected and each of them contains \(0\).
Let \(t_{1}=\sup I\) and \(t_{2}=\sup J\). For any \(t\in[0,t_{1})\) let \(\tilde{t}\in J\) be so that \(L(\sigma_{1}(t))=L(\sigma_{2}(\tilde{t}))\). In particular, \(\tilde{t}\) tends to \(t_{2}\) as \(t\) tends to \(t_{1}\).
Thus the leaves \(L(\sigma_{1}(t_{1}))\) and \(L(\sigma_{2}(t_{2}))\) are accumulated from below by the leaves \(L(\sigma_{1}(t))=L(\sigma_{2}(\tilde{t}))\), and thus are non-separated from below. By the assumption on \(\mathcal{F}\), this implies that they are equal:
\[L(\sigma_{1}(t_{1}))=L(\sigma_{2}(t_{2})).\]
If \(t_{1}<1\) and \(t_{2}<1\), then the leaf \(L(\sigma_{1}(t))\), for \(t>t_{1}\) close to \(t_{1}\), cuts \(\sigma_{2}\) at a point \(\sigma_{2}(\tilde{t})\) with \(\tilde{t}>t_{2}\) close to \(t_{2}\). This contradicts our choice of \(t_{1}\) and \(t_{2}\).
Thus \(t_{1}=1\) or (non-exclusively) \(t_{2}=1\). In the first case \(L_{1}=L(\sigma_{1}(t_{1}))\) cuts \(\sigma_{2}\), and then \(L_{1}\preceq L_{2}\); in the second case \(L_{2}\) cuts \(\sigma_{1}\), and \(L_{2}\preceq L_{1}\). This ends the proof.
As a consequence of Claims 4 and 5 one deduces
**Claim 7.1**.: _Given any two leaves \(L,\tilde{L}\) there is a leaf \(\hat{L}\) so that \(L\preceq\hat{L}\) and \(\tilde{L}\preceq\hat{L}\). In particular, the distance \(\prec\cdot,\cdot\succ\) is bounded by \(2\)._
Proof.: Consider a finite sequence of leaves \(L=L_{0},\ldots,L_{k}=\tilde{L}\), \(k=\prec L,\tilde{L}\succ\), and \(L_{i}\) comparable with \(L_{i+1}\).
The minimality of \(k\) implies that \(L_{i-1}\) and \(L_{i+1}\) are not comparable (otherwise one could delete \(L_{i}\) getting a strictly smaller sequence).
Assume that there is \(i\in\{1,\ldots,k-1\}\) so that \(L_{i-1}\succeq L_{i}\). If \(L_{i}\succeq L_{i+1}\), then \(L_{i-1}\succeq L_{i+1}\), which is forbidden by the observation above. Thus \(L_{i}\preceq L_{i+1}\), and Claim 5 implies again that \(L_{i-1}\) and \(L_{i+1}\) are comparable, which again is impossible. This proves that
\[\forall i\in\{1,\ldots,k-1\},\quad L_{i-1}\preceq L_{i}.\]
In particular \(L\preceq L_{k-1}\); as \(\tilde{L}=L_{k}\) is comparable with \(L_{k-1}\), either \(\tilde{L}\preceq L_{k-1}\) and \(\hat{L}=L_{k-1}\) works, or \(L_{k-1}\preceq\tilde{L}\) and \(\hat{L}=\tilde{L}\) works. In both cases \(\prec L,\tilde{L}\succ\leq 2\).
As a consequence one deduces
**Claim 6**.: _There is an increasing sequence \(L_{i}\prec L_{i+1}\), \(i\in\mathbb{N}\), of leaves \(L_{i}\in\mathcal{L}\) so that, given any leaf \(L\in\mathcal{L}\), there is \(n\) with \(L\prec L_{n}\)._
Proof.: One chooses a countable set of compact positively oriented segments \(\sigma_{i}\colon[0,1]\to\mathbb{R}^{2}\) transverse to \(\mathcal{F}\) so that any leaf cuts one of the \(\sigma_{i}\) (and thus is less than \(L(\sigma_{i}(1))\) for \(\preceq\)). Then one builds the sequence \(L_{i}\) inductively: \(L_{i+1}\) is obtained by applying Claim 7.1 to the leaves \(L_{i}\) and \(L(\sigma_{i}(1))\).
**Claim 7**.: _The compact discs \(\Delta_{L}^{+}\) are decreasing with \(L\) for \(\prec\): more precisely, if \(L\prec\tilde{L}\) then_
\[\Delta_{\tilde{L}}^{+}\subset\mathring{\Delta_{L}^{+}},\]
_where \(\mathring{\Delta^{+}_{L}}\) denotes the interior for the topology of \(\mathbb{D}^{2}_{\mathcal{F}}\) (it does not mean the open disc)._
Proof.: The hypothesis implies that \(\tilde{L}\) is contained in the interior of \(\Delta^{+}_{L}\), so that \(\Delta^{+}_{\tilde{L}}\cap\mathbb{R}^{2}\) is contained in the interior of \(\Delta^{+}_{L}\). We need to prove that \(\Delta^{+}_{\tilde{L}}\cap\mathbb{S}^{1}_{\mathcal{F}}\) is contained in the interior of \(\Delta^{+}_{L}\cap\mathbb{S}^{1}_{\mathcal{F}}\). In other words, we need to prove that the ends of \(\tilde{L}\) do not share a limit with the ends of \(L\).
Recall that \(L\prec\tilde{L}\), that is, there is a segment \(\sigma\colon[0,1]\to\mathbb{R}^{2}\) transverse to \(\mathcal{F}\) and positively oriented, with \(\sigma(0)\in L\) and \(\sigma(1)\in\tilde{L}\). If \(L\) shared with \(\tilde{L}\) a limit point in \(\mathbb{S}^{1}_{\mathcal{F}}\), so would every leaf \(L(\sigma(t))\), contradicting the fact that points in \(\mathbb{S}^{1}_{\mathcal{F}}\) are limits of at most countably many ends of leaves. This ends the proof.
Thus Claims 6 and 7 imply
\[\bigcap_{L\in\mathcal{F}}\Delta^{+}_{L}=\bigcap_{n\in\mathbb{N}}\Delta^{+}_{L_{n}}.\]
Now \(\bigcap_{n\in\mathbb{N}}\Delta^{+}_{L_{n}}\) is the intersection of a decreasing sequence of connected compact sets, saturated for \(\mathcal{F}\), and therefore is a non-empty connected compact set saturated for \(\mathcal{F}\). As it does not contain any leaf of \(\mathcal{F}\), one deduces that \(\bigcap_{L\in\mathcal{F}}\Delta^{+}_{L}\cap\mathbb{R}^{2}=\emptyset\); that is, \(\bigcap_{L\in\mathcal{F}}\Delta^{+}_{L}\) is a compact interval \(U\) in \(\mathbb{S}^{1}_{\mathcal{F}}\).
It remains to show that this interval \(U=\bigcap_{L\in\mathcal{F}}\Delta^{+}_{L}\) is reduced to a point. Otherwise, as the limits of ends of leaves are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\), there is a half leaf \(L_{+}\) whose limit belongs to the interior of \(U\). According to Claim 7, this implies that \(L_{+}\cap\Delta^{+}_{L_{i}}\neq\emptyset\) for every \(i\). This contradicts the fact that, for \(n\) large enough, the leaf \(L_{n}\) is larger (for \(\prec\)) than the leaf \(L\) carrying the half leaf \(L_{+}\), and thus \(L\cap\Delta^{+}_{L_{n}}=\emptyset\).
This contradiction ends the proof of Proposition 7.3.
Proof of item 1 of Theorem 11.: Assume that the action of \(H\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) is minimal. The foliation \(\mathcal{F}\) cannot be conjugated to the trivial foliation, otherwise the set \(\{N,S\}\) (the only points in \(\mathbb{S}^{1}_{\mathcal{F}}\) which are not limits of leaves) would be a finite \(H\)-invariant set, contradicting minimality.
Thus \(\mathcal{F}\) admits non-separated leaves, and we can assume they are non-separated from above (up to changing the transverse orientation of \(\mathcal{F}\)). If \(\mathcal{F}\) did not admit non-separated leaves from below, then the point \(O_{\mathcal{F}}\) in \(\mathbb{S}^{1}_{\mathcal{F}}\) given by Proposition 7.3 would be a global fixed point of \(H\), again contradicting minimality.
Proof of item 2 of Theorem 11.: We assume that \(H\) is a group acting minimally on the leaves of a foliation \(\mathcal{F}\) having non-separated leaves, some of them from above and some of them from below. According to Lemma 7.1, up to considering a finite index subgroup of \(H\), still acting minimally on the leaves of \(\mathcal{F}\), one may assume that \(H\) preserves the orientation and the transverse orientation of \(\mathcal{F}\).
Recall that the ends of regular leaves are dense in \(\mathbb{S}^{1}_{\mathcal{F}}\). Thus it is enough to check that any neighborhood of any end of a regular leaf contains points of the \(H\)-orbit of any point of \(\mathbb{S}^{1}_{\mathcal{F}}\). Consider a regular leaf \(L\) and \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) a segment transverse to \(\mathcal{F}\) with \(\sigma(0)\in L\). We will show that the end of \(L^{+}\) belongs to the closure of any \(H\)-orbit (the same argument holds for the end of \(L^{-}\)).
We denote by \(L_{t}\) the leaf through \(\sigma(t)\). Consider the basis of neighborhoods \(U^{+}_{t}\) of the end \(L^{+}\) given by the compact discs in \(\mathbb{D}^{2}_{\mathcal{F}}\) which are the closures of the half planes bounded by \(L^{+}_{-t}\), \(\sigma([-t,t])\), and \(L^{+}_{t}\).
Our hypothesis implies
**Claim 8**.: _There is a dense subset of values of \(t\) so that \(L_{t}\) is not separated at the right. As a consequence for every \(t\) the topological disc \(U^{+}_{t}\) contains entire leaves._
Proof.: The first sentence is directly implied by the existence of leaves which are non-separated at the right, the fact that \(H\) preserves the orientations of the leaves, and the fact that \(H\) acts minimally on the leaves of \(\mathcal{F}\).
The second sentence has been seen in Section 4.
Any leaf \(L\) cuts \(\mathbb{D}^{2}_{\mathcal{F}}\) in two discs, \(\Delta^{+}_{L}\) and \(\Delta^{-}_{L}\) (following the transverse orientation of \(\mathcal{F}\)), whose union \(\Delta^{+}_{L}\cup\Delta^{-}_{L}\) is \(\mathbb{D}^{2}_{\mathcal{F}}\).
**Claim 9**.: _Under the hypotheses, given any \(L\) there are \(g_{1}\), \(g_{2}\in H\) so that \(g_{1}(\Delta^{+}_{L})\subset\mathring{\Delta}^{-}_{L}\) and \(g_{2}(\Delta^{-}_{L})\subset\mathring{\Delta}^{+}_{L}\)._
_As a consequence both \(\Delta^{+}_{L}\) and \(\Delta^{-}_{L}\) contain points of any \(H\)-orbit of points in \(\mathbb{D}^{2}_{\mathcal{F}}\)._
Proof.: We prove the first inclusion; the other is obtained by reversing the transverse orientation of \(\mathcal{F}\).
Consider a leaf \(L\) and \(\sigma\colon[-1,1]\to\mathbb{R}^{2}\) a segment transverse to \(\mathcal{F}\) (positively oriented for the transverse orientation of \(\mathcal{F}\)) so that \(\sigma(0)\in L\). There is \(-t\in[-1,0)\) so that the leaf \(L_{-t}\) is non-separated from below from a leaf \(L_{2}\), because the leaves non-separated from below are dense in \(\mathbb{R}^{2}\), due to the minimality of the action of \(H\) on the leaves and the fact that \(H\) preserves the transverse orientation of \(\mathcal{F}\). Thus \(L_{-t}\subset\Delta^{-}_{L}\) and \(L_{2}\subset\Delta^{-}_{L}\). Furthermore \(\Delta^{-}_{L_{2}}\) contains \(L_{-t}\) and thus contains \(L\). One deduces:
\[\Delta^{+}_{L_{2}}\subset\Delta^{-}_{L}.\]
Now, there is \(h\in H\) so that \(h(L_{2})=L_{-s}\) with \(-s\in(-1,0)\). In particular one gets that \(\Delta^{+}_{L}\subset\mathring{\Delta}^{+}_{h(L_{2})}\) and thus
\[h^{-1}(\Delta^{+}_{L})=\Delta^{+}_{h^{-1}(L)}\subset\mathring{\Delta}^{+}_{L_{2}}\subset\mathring{\Delta}^{-}_{L}.\]
This concludes the proof.
We are ready to conclude the proof of Theorem 11: any neighborhood in \(\mathbb{D}^{2}_{\mathcal{F}}\) of any point of \(\mathbb{S}^{1}_{\mathcal{F}}\) contains an entire leaf \(L\) (Claim 8 above), and thus contains either \(\Delta^{+}_{L}\) or \(\Delta^{-}_{L}\). According to Claim 9 this neighborhood contains points of any \(H\)-orbit of points in \(\mathbb{D}^{2}_{\mathcal{F}}\). This shows the minimality of the action of \(H\) on \(\mathbb{S}^{1}_{\mathcal{F}}\), concluding the proof.
**Theorem 12**.: _Let \(\mathcal{F},\mathcal{G}\) be two transverse foliations on the plane \(\mathbb{R}^{2}\). Let \(H\subset Homeo(\mathbb{R}^{2})\) be a group preserving both foliations \(\mathcal{F}\) and \(\mathcal{G}\)._
1. _If the action of_ \(H\) _on_ \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) _is minimal then both foliations_ \(\mathcal{F}\) _and_ \(\mathcal{G}\) _have non-separated leaves from above and non-separated leaves from below._
2. _Conversely, if both foliations_ \(\mathcal{F}\) _and_ \(\mathcal{G}\) _have non-separated leaves from above and non-separated leaves from below, and if the orbit of every leaf of_ \(\mathcal{F}\) _and_ \(\mathcal{G}\) _is dense in_ \(\mathbb{R}^{2}\)_, then the action of_ \(H\) _on_ \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) _is minimal._
Proof.: For item 1, if the action of \(H\) on \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}\) is minimal then both actions of \(H\) on \(\mathbb{S}^{1}_{\mathcal{F}}\) and \(\mathbb{S}^{1}_{\mathcal{G}}\) are minimal. Thus item 1 follows from Item 1 of Theorem 11.
Conversely, as the action on the leaves of \(\mathcal{F}\) and \(\mathcal{G}\) is assumed to be minimal, and they have non-separated leaves, Proposition 7.2 implies that both projections \(\Pi_{\mathcal{F}}\) and \(\Pi_{\mathcal{G}}\) are injective, that is, \(\mathbb{S}^{1}_{\mathcal{F},\mathcal{G}}=\mathbb{S}^{1}_{\mathcal{F}}=\mathbb{S}^{1}_{\mathcal{G}}\). Now the minimality of the action of \(H\) on this circle at infinity is given by item 2 of Theorem 11.
## 8. Action of the fundamental group on the bifoliated plane of an Anosov flow
### The bifoliated plane associated to an Anosov flow
Let \(X\) be an Anosov flow on a closed \(3\)-manifold \(M\). Then Fenley and Barbot showed that the lift of \(X\) to the universal cover of \(M\) is conjugated to \((\mathbb{R}^{3},\frac{\partial}{\partial x})\); in particular the space of orbits of this lifted flow is a plane \(\mathcal{P}_{X}\simeq\mathbb{R}^{2}\). Then, the center-stable and center-unstable foliations of \(X\) induce (by lifting to the universal cover and projecting to \(\mathcal{P}_{X}\)) a pair of transverse foliations \(\mathcal{F}^{s},\mathcal{F}^{u}\) on the plane \(\mathcal{P}_{X}\). The triple \((\mathcal{P}_{X},\mathcal{F}^{s},\mathcal{F}^{u})\) is called the _bifoliated plane_ associated to \(X\). Finally, the natural action of the fundamental group \(\pi_{1}(M)\) on the universal cover of \(M\) projects to \(\mathcal{P}_{X}\) as an action preserving both foliations \(\mathcal{F}^{s}\) and \(\mathcal{F}^{u}\).
Fenley and Barbot proved that, if one of the foliations \(\mathcal{F}^{s},\mathcal{F}^{u}\) is trivial (that is, has no non-separated leaf and therefore is conjugated to an affine foliation by parallel straight lines), then the other is also trivial. In that case, one says that \(X\) is \(\mathbb{R}\)_-covered_, and the bifoliated plane is conjugated to one of the two possible models:
* the plane \(\mathbb{R}^{2}\) endowed with the trivial horizontal and vertical foliations; Solodov proved that this is equivalent to the fact that \(X\) is orbitally equivalent to the suspension flow of a linear automorphism of the torus \(\mathbb{T}^{2}\);
* the restriction of the trivial horizontal and vertical foliations to the strip \(|x-y|<1\).
### Injectivity of the projection of \(\mathbb{D}^{2}_{\mathcal{F}^{s},\mathcal{F}^{u}}\) on \(\mathbb{D}^{2}_{\mathcal{F}^{s}}\) and \(\mathbb{D}^{2}_{\mathcal{F}^{u}}\)
The aim of this section is to prove Theorem 5, which is restated as Proposition 8.1 below and Theorem 13 (in the next section).
**Proposition 8.1**.: _Let \(X\) be an Anosov flow on a \(3\)-manifold. Then:_
* _Either_ \(X\) _is topologically equivalent to the suspension flow of a hyperbolic element of_ \(SL(2,\mathbb{Z})\)_;_
* _Or both projections of the compactification_ \(\mathbb{D}^{2}_{\mathcal{F}^{s},\mathcal{F}^{u}}\) _on_ \(\mathbb{D}^{2}_{\mathcal{F}^{s}}\) _and_ \(\mathbb{D}^{2}_{\mathcal{F}^{u}}\) _are homeomorphisms._
Proof.: Assume that the projection on \(\mathbb{D}^{2}_{\mathcal{F}^{u}}\) is not injective. Thus there is a non-trivial open interval \(I\) of \(\mathbb{S}^{1}_{\mathcal{F}^{s},\mathcal{F}^{u}}\) whose points are not limits of ends of leaves of \(\mathcal{F}^{u}\). Thus a dense subset of points in \(I\) consists of limits of ends of leaves of \(\mathcal{F}^{s}\). Furthermore, every end of a leaf of \(\mathcal{F}^{s}\) in \(I\) is a regular end. Consider a regular end (for instance, a right end) of a leaf \(L^{s}_{right}\) of \(\mathcal{F}^{s}\) whose limit is in the interior of \(I\). Then there is a small unstable segment \(\sigma\) through a point of \(L^{s}_{right}\) so that, for every \(t\), the right half leaf \(L^{s}_{right,t}\) of \(\mathcal{F}^{s}\) through \(\sigma(t)\) is regular and has its limit in \(I\). Then the union of all these half leaves is what Fenley called a _product region_ in [Fe2]. Now [Fe2, Theorem 5.1] asserts that any Anosov flow admitting a product region is a suspension flow, concluding the proof.
### Minimality of the action on the circle at infinity
In order to prove Theorem 5 it remains to prove Theorem 13 below:
**Theorem 13**.: _Let \(X\) be an Anosov flow on a closed \(3\)-manifold \(M\). Then \(X\) is non-\(\mathbb{R}\)-covered if and only if the action of \(\pi_{1}(M)\) on the circle \(\mathbb{S}^{1}_{\mathcal{F}^{s},\mathcal{F}^{u}}\) at infinity is minimal._
**Remark 15**.: _If the manifold \(M\) is not orientable and if \(X\) is \(\mathbb{R}\)-covered, then [Fe1] noticed that \(X\) is a suspension flow. Thus, on non-orientable manifolds \(M\), Theorem 13 asserts the minimality of the action on the circle at infinity, except if \(M\) is a suspension manifold._
**Remark 16**.: _The bifoliated plane \((\mathcal{P}_{X},\mathcal{F}^{s},\mathcal{F}^{u})\) remains unchanged if we consider a lift of \(X\) to a finite cover. Thus it is enough to prove Theorem 13 in the case where \(M\) is oriented and the action of \(\pi_{1}(M)\) preserves both orientation and transverse orientation of both foliations \(\mathcal{F}^{s}\), \(\mathcal{F}^{u}\)._
Thus, from now on we will assume that \(M\) is oriented and that the action of \(\pi_{1}(M)\) preserves both orientations and transverse orientations of both foliations \(\mathcal{F}^{s}\), \(\mathcal{F}^{u}\).
**Remark 17**.: _If \(X\) is \(\mathbb{R}\)-covered, then \(\mathbb{S}^{1}_{\mathcal{F}^{s}}\) has exactly \(2\) center-like points, which are therefore preserved by the action of \(\pi_{1}(M)\) on \(\mathbb{S}^{1}_{\mathcal{F}^{s}}\): this action is not minimal, and thus the action on \(\mathbb{S}^{1}_{\mathcal{F}^{s},\mathcal{F}^{u}}\) is not minimal._
Thus we are left to prove Theorem 13 in the case where \(X\) is not \(\mathbb{R}\)-covered. We will start with the easier case, when \(X\) is assumed to be transitive. The non-transitive case will occupy the whole next section.
Proof of Theorem 13 when \(X\) is transitive.: When \(X\) is non-\(\mathbb{R}\)-covered and transitive, [Fe3] proved that \(\mathcal{F}^{s}\) and \(\mathcal{F}^{u}\) admit non-separated leaves from above and non-separated leaves from below. As (up to considering a finite cover of \(M\)) the action of \(\pi_{1}(M)\) preserves the orientation and transverse orientation of \(\mathcal{F}^{s}\), and the action is minimal on the set of leaves of \(\mathcal{F}^{s}\), Theorem 11 asserts that the action of \(\pi_{1}(M)\) on \(\mathbb{S}^{1}_{\mathcal{F}^{s}}\) is minimal. As \(\mathbb{S}^{1}_{\mathcal{F}^{s}}=\mathbb{S}^{1}_{\mathcal{F}^{s},\mathcal{F}^{u}}\), this concludes the proof.
## 9. Minimality of the action on the circle at infinity for non-transitive Anosov flows: ending the proof of Theorem 5
To end the proof of Theorem 5, we are left to prove:
**Theorem 14**.: _Let \(X\) be a non-transitive Anosov flow on a closed connected \(3\)-manifold \(M\). Then the action of the fundamental group of \(M\) on the circle at infinity is minimal._
This result is somewhat less intuitive, as the action of the fundamental group \(\pi_{1}(M)\) on the leaves of the foliations is not minimal, and may even, if \(X\) has several attractors, fail to admit a leaf whose orbit is dense.
The proof of the minimality of the action on the circle at infinity will require some background on Anosov flows, in particular on non-transitive Anosov flows. In the whole section, \(X\) is a non-transitive Anosov flow on an orientable closed connected manifold \(M\), and the natural action of \(\pi_{1}(M)\) on the bifoliated plane \((\mathcal{P}_{X},\mathcal{F}^{s},\mathcal{F}^{u})\) preserves the orientations and transverse orientations of both foliations. Recall that we have seen that the compactification for both foliations coincides with the one of each foliation. We will denote by \(\mathbb{D}^{2}_{X},\mathbb{S}^{1}_{X}\) this compactification and the corresponding circle at infinity. We refer to this package of hypotheses and notations as \(*\).
### Background on non-transitive Anosov flows
Let \(X\) be a non-transitive Anosov flow. Thus, according to [Fe1, Ba1], \(X\) is not \(\mathbb{R}\)-covered.
The flow \(X\) is structurally stable, so that Smale's spectral decomposition theorem splits the non-wandering set of \(X\) into basic pieces ordered by the _Smale order_: a basic piece is above another if its unstable manifold cuts the stable manifold of the other. For this order, the maximal basic pieces are the repellers and the minimal ones are the attractors. In [Br], Brunella noticed that the basic pieces are separated by incompressible tori transverse to the flow.
Consider an attractor \(\mathcal{A}\) of \(X\). It is a compact set consisting of leaves of the unstable foliation of \(X\), hence it is a compact lamination by unstable leaves. Furthermore the intersection of \(\mathcal{A}\) with a transverse segment \(\sigma\) is a Cantor set. An unstable leaf \(W^{u}\) in \(\mathcal{A}\) is called of _boundary type_ if \(W^{u}\cap\sigma\) belongs to the boundary of a connected component of \(\sigma\setminus\mathcal{A}\).
A classical result from hyperbolic theory (see for instance [BeBo]) asserts that the unstable leaves in \(\mathcal{A}\) of boundary type are the unstable manifolds of a finite number of periodic orbits called _periodic orbits of boundary type_.
The same happens for repellers \(\mathcal{R}\): they are compact laminations by stable leaves, transversally Cantor sets, and they admit finitely many boundary leaves, the stable manifolds of finitely many periodic orbits of boundary type.
In this section, we will focus on attractors and repellers. Consider an attractor \(\mathcal{A}\) of \(X\), its lift \(\tilde{\mathcal{A}}\) to the universal cover, and consider the projection of \(\tilde{\mathcal{A}}\) on the bifoliated plane \(\mathcal{P}_{X}\). This projection is a closed lamination by leaves of \(\mathcal{F}^{u}\) and it cuts every transverse curve along a Cantor set. By a practical abuse of notation we will still denote by \(\mathcal{A}\) this lamination of \(\mathcal{P}_{X}\): thus \(\mathcal{A}\) denotes at the same time a \(2\)-dimensional lamination on \(M\) and a \(1\)-dimensional lamination on \(\mathcal{P}_{X}\).
The same happens for repellers.
Let \(\mathcal{A}\subset\mathcal{P}_{X}\) and \(\mathcal{R}\subset\mathcal{P}_{X}\) be the unstable and stable laminations (respectively) corresponding to an attractor and a repeller of \(X\). Then:
* \(\mathcal{A}\cap\mathcal{R}=\emptyset\). This seems obvious, but it will be a crucial property for us: given an unstable leaf \(L^{u}\) and a stable leaf \(L^{s}\), this will be our unique criterion for knowing that they don't intersect.
* the periodic points contained in \(\mathcal{A}\) (resp. \(\mathcal{R}\)) are dense in \(\mathcal{A}\) (resp. \(\mathcal{R}\)).
* Each periodic orbit of \(X\) has a discrete \(\pi_{1}(M)\)-orbit in \(\mathcal{P}_{X}\).
* the periodic orbits of boundary type are the \(\pi_{1}(M)\)-orbits of finitely many \(X\)-orbits, and therefore form a discrete set in \(\mathcal{P}_{X}\).
* Fenley [Fe2] showed that the non-separated leaves of \(\mathcal{F}^{s}\) (resp. \(\mathcal{F}^{u}\)) correspond to finitely many orbits of \(X\), and hence to a discrete set of periodic points in \(\mathcal{P}_{X}\).
* thus the periodic points \(p\) in \(\mathcal{A}\) (resp. \(\mathcal{R}\)) which are not of boundary type and whose unstable (resp. stable) leaf is regular are dense in \(\mathcal{A}\) (resp. \(\mathcal{R}\)).
* If \(\mathcal{A}_{1},\dots,\mathcal{A}_{k}\) are the attractors of \(X\), then the unions of the stable leaves of \(\mathcal{F}^{s}\) through the laminations \(\mathcal{A}_{1},\dots,\mathcal{A}_{k}\) of \(\mathcal{P}_{X}\) are disjoint open subsets of \(\mathcal{P}_{X}\) whose union is dense in \(\mathcal{P}_{X}\). The same holds for the repellers.
As a straightforward consequence one gets:
**Lemma 9.1**.: _There is a dense subset of \(\mathcal{P}_{X}\) of points \(x\) whose stable leaf \(L^{s}(x)\) contains a periodic point \(p\) in an attractor \(\mathcal{A}\), not of boundary type and so that \(L^{u}(p)\) is regular. A symmetric statement holds for repellers._
### Proof of Theorem 14
The two main steps of the proof of Theorem 14 are Propositions 9.1 and 9.2 below.
**Proposition 9.1**.: _Let \(L^{u}\) be a leaf of \(\mathcal{F}^{u}\) corresponding to an unstable leaf of \(X\) contained in an attractor of \(X\). Let \(\Delta^{+}\) and \(\Delta^{-}\) be the closures in \(\mathbb{D}^{2}_{X}\) of the half planes in \(\mathbb{R}^{2}\) bounded by \(L^{u}\). Then there are \(g^{+},g^{-}\in\pi_{1}(M)\) so that \(g^{-}(\Delta^{-})\subset\Delta^{+}\) and \(g^{+}(\Delta^{+})\subset\Delta^{-}\)._
_The same statement holds for stable leaves in the repellers._
**Corollary 9.1**.: _Let \(L^{s}\) and \(L^{u}\) be leaves of \(\mathcal{F}^{s}\) and \(\mathcal{F}^{u}\) in a repeller and in an attractor, respectively. Let \(I\subset\mathbb{S}^{1}_{X}\) be a segment with non-empty interior whose end points are the limits of the two ends of the same leaf, \(L^{s}\) or \(L^{u}\)._
_Then every orbit of the action of \(\pi_{1}(M)\) contains points in \(I\)._
Proof.: According to Proposition 9.1 there is \(g\in\pi_{1}(M)\) so that \(g(\mathbb{S}^{1}_{X}\setminus I)\subset I\), ending the proof.
**Proposition 9.2**.: _Given any non-empty open interval \(J\subset\mathbb{S}^{1}_{X}\), there is a leaf \(L\), either a leaf of \(\mathcal{F}^{s}\) in a repeller or a leaf of \(\mathcal{F}^{u}\) in an attractor, whose both ends have limits in \(J\)._
Proof of Theorem 14 assuming Propositions 9.1 and 9.2.: According to Proposition 9.2, every interval \(J\) with non-empty interior contains an interval \(I\) whose end points are the limit points of the two ends of a stable or unstable leaf contained in a repeller or an attractor, respectively. Now, according to Corollary 9.1, the interval \(I\) contains a point of every \(\pi_{1}(M)\)-orbit in \(\mathbb{S}^{1}_{X}\). Thus any \(\pi_{1}(M)\)-orbit in \(\mathbb{S}^{1}_{X}\) has points in any interval with non-empty interior: in other words, every \(\pi_{1}(M)\)-orbit is dense in \(\mathbb{S}^{1}_{X}\), that is, the action of \(\pi_{1}(M)\) on \(\mathbb{S}^{1}_{X}\) is minimal, ending the proof.
### Proof of Proposition 9.1
Let \(L^{u}_{0}\) be an unstable leaf in an attractor \(\mathcal{A}_{0}\), and let \(\Delta^{+}_{0}\) be the closure of the upper half plane bounded by \(L^{u}_{0}\). To prove Proposition 9.1 we want to show that there is \(f\in\pi_{1}(M)\) so that \(f(\Delta^{-}_{0})\subset\Delta^{+}_{0}\) (the other announced inclusion is obtained identically).
Consider a point \(p_{0}\in L^{u}_{0}\) and \(L^{s}_{0}\) the stable leaf through \(p_{0}\).
**Claim 10**.: _There is an unstable leaf \(L^{u}_{1}\) with the following property:_
* \(L^{u}_{1}\subset\Delta^{+}_{0}\)__
* \(L^{u}_{1}\) _is contained in the basin of a repeller_ \(\mathcal{R}_{1}\)__
* \(L^{u}_{1}\) _contains a non-boundary periodic point_ \(p_{1}\in L^{u}_{1}\) _of the repeller_ \(\mathcal{R}_{1}\)_._
* \(L^{u}_{1}\) _cuts the stable leaf_ \(L^{s}_{0}\) _in a point_ \(L^{u}_{1}\cap L^{s}_{0}=q_{0}\)_._
Proof.: The union of the unstable leaves in the basin of a repeller and carrying a non-boundary periodic point of this repeller is dense in \(\mathbb{R}^{2}\). We can therefore choose such a leaf contained in \(\Delta^{+}_{0}\) and cutting \(L^{s}_{0}\).
Let \(L^{s}_{1}\) be the stable leaf through \(p_{1}\). It is a non-boundary stable leaf contained in the repeller \(\mathcal{R}_{1}\). Note that \(L^{s}_{1}\) is disjoint from the attractor \(\mathcal{A}_{0}\). Thus
* \(L^{s}_{1}\) is disjoint from \(L^{u}_{0}\subset\mathcal{A}_{0}\).
* the stable leaf \(L^{s}_{1}\) is distinct from, and therefore disjoint from, the stable leaf \(L^{s}_{0}\).
In other words, the union \(L^{s}_{0}\cup L^{u}_{0}\) divides \(\mathcal{P}_{X}\) into 4 quadrants, and \(L^{s}_{1}\) is contained in one of these quadrants. Let us denote by \(C^{\pm,\pm}\) these 4 quadrants so that \(\Delta^{+}_{0}=C^{-,+}\cup C^{+,+}\) and \(L^{s}_{1}\subset C^{+,+}\).
Let us denote by \(\Delta^{+}_{1}=\Delta^{+}(L^{s}_{1})\) the closure of the half plane bounded by \(L^{s}_{1}\) and contained in \(\Delta^{+}_{0}\). Thus \(\Delta^{+}_{1}\) is contained in the same quadrant \(C^{+,+}\) as \(L^{s}_{1}\). We denote by \(\Delta^{-}_{1}\) the closure of the other half plane bounded by \(L^{s}_{1}\). Note that \(\Delta^{-}_{1}\) contains the 3 other quadrants; in particular it contains \(\Delta^{-}_{0}\) and \(C^{-,+}\).
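For the reader's convenience, the configuration just described can be summarized by the following relations (a restatement of the above in symbols, with closures omitted):

\[\mathcal{P}_{X}\setminus(L^{s}_{0}\cup L^{u}_{0})=C^{+,+}\cup C^{+,-}\cup C^{-,+}\cup C^{-,-},\qquad\Delta^{+}_{0}=C^{-,+}\cup C^{+,+},\qquad\Delta^{+}_{1}\subset C^{+,+},\qquad\Delta^{-}_{1}\supset C^{+,-}\cup C^{-,+}\cup C^{-,-}.\]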
As the leaf \(L^{s}_{1}\) is (by assumption) not a boundary leaf of \(\mathcal{R}_{1}\), it is accumulated on both sides by its \(\pi_{1}(M)\)-orbit. Thus there is a leaf \(L^{s}_{2}=g(L^{s}_{1})\) in its orbit, cutting \(L^{u}_{1}\) at a point \(x\in\Delta^{-}_{1}\) arbitrarily close to \(p_{1}\), and hence \(x\in C^{+,+}\). Notice that \(L^{s}_{2}\) is contained in the repeller \(\mathcal{R}_{1}\) and thus is disjoint from \(L^{u}_{0}\cup L^{s}_{0}\). Thus it is contained in one quadrant. As it contains \(x\in C^{+,+}\), one has
\[L^{s}_{2}\subset C^{+,+}.\]
Let \(h\in\pi_{1}(M)\) be the generator of the stabilizer of \(p_{1}\) so that \(L^{u}_{1}\) is expanded by \(h\). We consider the sequence of leaves \(h^{n}(L^{s}_{2})\), which cut \(L^{u}_{1}\) at the points \(x_{n}=h^{n}(x)\).
**Claim 11**.: _For \(n\) large enough \(h^{n}(L^{s}_{2})\) is contained in the quadrant \(C^{-,+}\)._
Proof.: Each leaf \(h^{n}(L^{s}_{2})\) intersects \(L^{u}_{1}\subset\Delta^{+}_{0}\) and is disjoint from \(L^{u}_{0}\) (because \(h^{n}(L^{s}_{2})\) is contained in the repeller). Hence \(h^{n}(L^{s}_{2})\) is contained in \(\Delta^{+}_{0}\), and is distinct and therefore disjoint from \(L^{s}_{0}\). Thus \(h^{n}(L^{s}_{2})\) is contained in one of the quadrants \(C^{+,+}\) or \(C^{-,+}\).
The point \(x_{n}\) tends to infinity in \(L^{u}_{1}\) and so eventually passes \(q_{0}=L^{s}_{0}\cap L^{u}_{1}\). Thus for \(n\) large enough \(x_{n}\in C^{-,+}\). We proved that for \(n\) large enough \(h^{n}(L^{s}_{2})\subset C^{-,+}\), proving the claim.
We conclude the proof of Proposition 9.1 by proving:
**Claim 12**.: _Consider \(n\) large enough so that \(h^{n}(L^{s}_{2})\subset C^{-,+}\)._
_Then either \(g(\Delta^{-}_{1})\subset C^{+,+}\subset\Delta^{+}_{0}\) or \(h^{n}g(\Delta^{-}_{1})\subset C^{-,+}\subset\Delta^{+}_{0}\)._
As \(\Delta^{-}_{1}\) contains \(\Delta^{-}_{0}\), the claim implies that either \(g(\Delta^{-}_{0})\subset\Delta^{+}_{0}\) or \(h^{n}g(\Delta^{-}_{0})\subset\Delta^{+}_{0}\), which concludes the proof of Proposition 9.1.
Proof of the claim.: Assume \(g(\Delta^{-}_{1})\) is not contained in \(C^{+,+}\). As \(g(\Delta^{-}_{1})\) is one of the half planes bounded by \(g(L^{s}_{1})=L^{s}_{2}\subset C^{+,+}\), one gets that \(g(\Delta^{+}_{1})\) is the half plane bounded by \(L^{s}_{2}\) and contained in \(C^{+,+}\). In particular, \(g(\Delta^{+}_{1})\) does not contain \(q_{0}\). As \(p_{1}\) and \(q_{0}\) are on distinct sides of \(L^{s}_{2}\), one deduces that \(p_{1}\in g(\Delta^{+}_{1})\).
As \(p_{1}\) is fixed by \(h\), one deduces
\[p_{1}\in h^{n}g(\Delta^{+}_{1})\]
Thus \(h^{n}g(\Delta^{+}_{1})\) is the half plane bounded by \(h^{n}(L^{s}_{2})\) which is not contained in the quadrant \(C^{-,+}\). Thus \(h^{n}g(\Delta^{-}_{1})\) is the other half plane bounded by \(h^{n}(L^{s}_{2})\) and is contained in \(C^{-,+}\), ending the proof.
### Proof of Proposition 9.2
We want to prove that any open interval \(I\) in the circle \(\mathbb{S}^{1}_{X}\) contains the limits of the two ends of an unstable leaf in an attractor or of a stable leaf in a repeller.
**Lemma 9.2**.: _Assuming \(*\), there are dense subsets \(E^{s}_{0}\), \(E^{u}_{0}\) of \(\mathbb{S}^{1}_{X}\) so that_
* _any_ \(p\in E^{s}_{0}\) _is the limit of an end of a regular leaf of_ \(\mathcal{F}^{s}\) _containing a periodic point_ \(x\) _which belongs to an attractor_ \(\mathcal{A}(p)\)_, and is not of boundary type._
* _any_ \(q\in E^{u}_{0}\) _is the limit of an end of a regular leaf of_ \(\mathcal{F}^{u}\) _containing a periodic point_ \(y\) _which belongs to a repeller_ \(\mathcal{R}(q)\)_, and is not of boundary type._
Proof.: According to Lemma 9.1, the union of the regular stable leaves containing a periodic point of non-boundary type of an attractor is dense in \(\mathcal{P}_{X}\). This family is therefore separating, according to Lemma 3.1. Thus the limits of their ends form a dense subset of \(\mathbb{S}^{1}_{X}\), as announced.
**Lemma 9.3**.: _Assuming \(*\), there is a dense subset \(E\subset\mathbb{S}^{1}_{X}\) so that every \(x\in E\) is the limit of an end of a regular leaf of \(\mathcal{F}^{s}\) (resp. \(\mathcal{F}^{u}\)) contained in a repeller \(\mathcal{R}\) (resp. an attractor \(\mathcal{A}\)), and carrying a periodic point of non-boundary type._
Proof.: Consider a non-empty open interval \(I\subset\mathbb{S}^{1}_{X}\). According to Lemma 9.2 there is a point \(x\in I\) which is the limit of an end \(L^{s}_{+}(p_{0})\) of a regular leaf of \(\mathcal{F}^{s}\) carrying a periodic point \(p_{0}\) in a non-boundary type unstable leaf \(L^{u}(p_{0})\) of an attractor \(\mathcal{A}\).
The point \(p_{0}\) is accumulated on both sides by periodic points in \(\mathcal{A}\). We choose \(p_{1}\) so that the limit \(y\) of \(L^{s}_{+}(p_{1})\) belongs to \(I\) (this is possible because \(L^{s}_{+}(p_{0})\) is regular) and \(L^{s}_{+}(p_{1})\) intersects \(L^{u}(p_{0})\) at a point \(q_{1}\). Let \(J\subset I\) be the segment contained in \(I\) whose end points are \(x\) and \(y\). Notice that \(y\neq x\), that is, \(J\) has non-empty interior, as \(L^{s}(p_{0})\) is a regular leaf.
Now \(L^{u}(p_{0})\) is accumulated on both sides by regular unstable leaves contained in the attractor \(\mathcal{A}\) and containing periodic points of non-boundary type. Let \(L^{u}_{0}\) be such a leaf, with non-empty intersection with \(L^{s}_{+}(p_{0})\).
If \(L^{u}_{0}\) does not cut \(L^{s}_{+}(p_{1})\), then one of its ends is contained in the half strip bounded by \(L^{s}_{+}(p_{0})\), the unstable segment \([p_{0},q_{1}]^{u}\), and \(L^{s}_{+}(q_{1})\). As a consequence, the limit of this end belongs to \(I\) and we are done.
Thus we may assume now that \(L^{u}_{0}\) cuts \(L^{s}_{+}(p_{1})\).
Let \(h_{0}\) and \(h_{1}\) be the generators of the stabilizers of \(p_{0}\) and \(p_{1}\), respectively, so that \(h_{0}\) expands \(L^{s}_{+}(p_{0})\) and \(h_{1}\) expands \(L^{s}_{+}(p_{1})\).
We consider the images \(\{h^{n}_{0}(L^{u}_{0}),h^{n}_{1}(L^{u}_{0}),\ n\in\mathbb{N}\}\) of the leaf \(L^{u}_{0}\) by the positive iterates of \(h_{0}\) and \(h_{1}\). Each of these images is a regular unstable leaf in \(\mathcal{A}\), and has a non-empty intersection with either \(L^{s}_{+}(p_{0})\) or \(L^{s}_{+}(p_{1})\). If one of these leaves does not cross both \(L^{s}_{+}(p_{0})\) and \(L^{s}_{+}(p_{1})\), then it has an end in the segment \(J\subset I\), and we are done.
Assume now that every leaf in \(\{h^{n}_{0}(L^{u}_{0}),h^{n}_{1}(L^{u}_{0}),\ n\in\mathbb{N}\}\) crosses both \(L^{s}_{+}(p_{0})\) and \(L^{s}_{+}(p_{1})\). These images are leaves of \(\mathcal{F}^{u}\), and therefore they are either disjoint or equal. For \(L\in\{h^{n}_{0}(L^{u}_{0}),h^{n}_{1}(L^{u}_{0}),\ n\in\mathbb{N}\}\), let \(D(L)\subset\mathbb{D}^{2}_{X}\) be the disk obtained as follows: one cuts along \(L\) the strip bounded by \(L^{s}_{+}(p_{0})\) and \(L^{s}_{+}(p_{1})\), getting two components; one considers the closures in \(\mathbb{D}^{2}_{X}\) of these components; then \(D(L)\) is the one containing the segment \(J\subset\mathbb{S}^{1}_{X}\).
The disks \(D(L)\) are naturally totally ordered by inclusion, and we fix the indexation \(\{h^{n}_{0}(L^{u}_{0}),h^{n}_{1}(L^{u}_{0}),\ n\in\mathbb{N}\}=\{L^{u}_{n}\}\) according to this order: \(D(L^{u}_{n+1})\subset D(L^{u}_{n})\).
Consider \(D=\bigcap_{n}D(L^{u}_{n})\). It is a compact subset of \(\mathbb{D}^{2}_{X}\) whose intersection with \(\mathbb{S}^{1}_{X}\) is the segment \(J\).
**Claim 13**.: \(D\cap(L^{s}_{+}(p_{0})\cup L^{s}_{+}(p_{1}))=\emptyset\)__
Proof.: The leaves \(h^{n}_{0}(L^{u}_{0})\) have their intersection with \(L^{s}(p_{0})\) tending to \(x\) as \(n\to\infty\): one deduces that \(D\cap L^{s}(p_{0})=\emptyset\). The leaves \(h^{n}_{1}(L^{u}_{0})\) have their intersection with \(L^{s}(p_{1})\) tending to \(y\), and thus \(D\cap L^{s}(p_{1})=\emptyset\).
**Claim 14**.: \(D\setminus\mathbb{S}^{1}_{X}\neq\emptyset\)_._
Proof.: There is a point \(z\) in the interior of \(J\) which is the limit of an end of a leaf of \(\mathcal{F}^{u}\). Thus there is a half unstable leaf \(L^{u}_{+}\) contained in the strip bounded by \(L^{s}_{+}(p_{0})\) and \(L^{s}_{+}(p_{1})\) whose limit is \(z\). Now \(L^{u}_{+}\) is disjoint from all the \(L^{u}_{n}\), and therefore
\[L^{u}_{+}\subset D(L^{u}_{n})\quad\text{for all }n.\]
This concludes the proof of the claim.
Consider now a point \(t\in D\setminus\mathbb{S}^{1}_{X}\). The leaf \(L^{u}(t)\) is disjoint from the leaves \(L^{u}_{n}\) for every \(n\). Thus it has an empty intersection with \(L^{s}_{+}(p_{0})\cup L^{s}_{+}(p_{1})\). As a consequence one gets
\[L^{u}(t)\subset D\]
In particular, \(L^{u}(t)\) has the limits of both of its ends in \(J\).
Suppose now that the point \(t\in D\setminus\mathbb{S}^{1}_{X}\) has been chosen on the boundary of \(D\). Thus \(t\) is a limit of points in \(L^{u}_{n}\subset\mathcal{A}\). As \(\mathcal{A}\) is a closed subset of \(\mathbb{R}^{2}=\mathring{\mathbb{D}}^{2}_{X}\), one deduces that \(t\in\mathcal{A}\), and so \(L^{u}(t)\subset\mathcal{A}\).
We have just found a leaf \(L^{u}(t)\) contained in \(\mathcal{A}\) and having the limits of both ends in \(J\subset I\). Let \(D_{t}\subset D\) be the disc bounded by \(L^{u}(t)\). We are not yet done, because \(L^{u}(t)\) may fail to be a regular leaf.
Now Proposition 9.1 implies that every unstable leaf, for instance \(L^{u}_{0}\), has an image by an element \(k\in\pi_{1}(M)\) which is contained in \(D_{t}\). Then \(k(L^{u}_{0})\) is a regular unstable leaf in an attractor whose both ends have limits contained in \(J\subset I\), ending the proof.
We are now ready to end the proof of Proposition 9.2, and therefore of Theorem 14, which ends the proof of Theorems 13 and 5.
Proof of Proposition 9.2.: Let \(I\subset\mathbb{S}^{1}_{X}\) be a non-empty open interval. According to Lemma 9.3 there is a regular unstable leaf \(L^{u}_{0}\), contained in an attractor \(\mathcal{A}\), containing a periodic point \(p_{0}\) of non-boundary type, and having an end, say \(L^{u}_{0,+}\), whose limit is a point \(x\in I\).
As \(L^{u}_{0}\) is not a boundary leaf of \(\mathcal{A}\), there are unstable leaves in \(\mathcal{A}\) arbitrarily close to \(L^{u}_{0}\), on both sides of \(L^{u}_{0}\). As furthermore \(L^{u}_{0}\) is a regular leaf, one can choose a leaf \(L^{u}_{1}\subset\mathcal{A}\) so that
* the limit of the end \(L^{u}_{1,+}\) is a point \(y\in I\) with \([x,y]\subset I\).
* there is a segment \(\sigma\) of a stable leaf having its two ends \(a\) and \(b\) on \(L^{u}_{0}\) and \(L^{u}_{1}\), respectively.
We denote by \(D_{\sigma}\) the disc in \(\mathbb{D}^{2}_{X}\) bounded by \(\sigma\), \([x,y]\), \(L^{u}_{+}(a)\subset L^{u}_{0}\) and \(L^{u}_{+}(b)\subset L^{u}_{1}\).
Now according to Lemma 9.2 there is a point \(z\in[x,y]\) which is the limit of the end \(L^{u}_{+}\) of an unstable leaf \(L^{u}\) which carries a periodic point \(q\) in a repeller \(\mathcal{R}\), and \(q\) is not of boundary type. We denote by \(h\in\pi_{1}(M)\) the generator of the stabilizer of \(q\) which is expanding along \(L^{u}\).
The stable leaf \(L^{s}(q)\) is contained in the repeller \(\mathcal{R}\) and is accumulated on both sides by stable leaves in \(\mathcal{R}\). We denote by \(L^{s}_{0}\) a stable leaf in \(\mathcal{R}\) crossing \(L^{u}_{+}\) at a point \(x_{0}\).
We consider \(L^{s}_{n}=h^{n}(L^{s}_{0})\). It is a stable leaf in \(\mathcal{R}\) which cuts \(L^{u}_{+}\) at the point \(x_{n}=h^{n}(x_{0})\).
Note that \(x_{n}\to z\) as \(n\to+\infty\). In particular, \(x_{n}\) belongs to the disc \(D_{\sigma}\) for \(n\) large.
As \(\mathcal{R}\cap\mathcal{A}=\emptyset\), the leaves \(L^{s}_{n}\) are disjoint from \(L^{u}_{0}\) and \(L^{u}_{1}\). As two distinct stable leaves are disjoint, they are (all but at most one of them) disjoint from \(\sigma\).
So for large \(n\), the leaf \(L^{s}_{n}\) is contained in \(D_{\sigma}\) and therefore has both of its ends on \([x,y]\subset I\).
We have thus exhibited a stable leaf in a repeller whose both ends have limits in \(I\); this ends the proof of Proposition 9.2.
|
2304.11085 | Testing the Reliability of ChatGPT for Text Annotation and
Classification: A Cautionary Remark | Recent studies have demonstrated promising potential of ChatGPT for various
text annotation and classification tasks. However, ChatGPT is non-deterministic
which means that, as with human coders, identical input can lead to different
outputs. Given this, it seems appropriate to test the reliability of ChatGPT.
Therefore, this study investigates the consistency of ChatGPT's zero-shot
capabilities for text annotation and classification, focusing on different
model parameters, prompt variations, and repetitions of identical inputs. Based
on the real-world classification task of differentiating website texts into
news and not news, results show that consistency in ChatGPT's classification
output can fall short of scientific thresholds for reliability. For example,
even minor wording alterations in prompts or repeating the identical input can
lead to varying outputs. Although pooling outputs from multiple repetitions can
improve reliability, this study advises caution when using ChatGPT for
zero-shot text annotation and underscores the need for thorough validation,
such as comparison against human-annotated data. The unsupervised application
of ChatGPT for text annotation and classification is not recommended. | Michael V. Reiss | 2023-04-17T00:41:19Z | http://arxiv.org/abs/2304.11085v1 | # Testing the Reliability of ChatGPT for Text Annotation and Classification: A Cautionary Remark
###### Abstract
Recent studies have demonstrated promising potential of ChatGPT for various text annotation and classification tasks. However, ChatGPT is non-deterministic which means that, as with human coders, identical input can lead to different outputs. Given this, it seems appropriate to test the reliability of ChatGPT. Therefore, this study investigates the consistency of ChatGPT's zero-shot capabilities for text annotation and classification, focusing on different model parameters, prompt variations, and repetitions of identical inputs. Based on the real-world classification task of differentiating website texts into news and not news, results show that consistency in ChatGPT's classification output can fall short of scientific thresholds for reliability. For example, even minor wording alterations in prompts or repeating the identical input can lead to varying outputs. Although pooling outputs from multiple repetitions can improve reliability, this study advises caution when using ChatGPT for zero-shot text annotation and underscores the need for thorough validation, such as comparison against human-annotated data. The unsupervised application of ChatGPT for text annotation and classification is not recommended.
Alongside validity, reliability in measures is important for unbiased conclusions (Ruggeri et al., 2011). However, ChatGPT is non-deterministic, which means that identical input can lead to different outputs. Certain model parameters of ChatGPT (e.g., temperature, which steers the randomness of the output (OpenAI, 2023a)) or the input prompt can influence this inconsistency. And while these factors can be optimized and controlled by validation, inconsistencies due to the general randomness of ChatGPT can still impair the reliability of ChatGPT's classification output or text annotations. The black-box character of ChatGPT's inner workings adds to this randomness. Therefore, this study aims to investigate the zero-shot consistency of ChatGPT for text classification. The study uses data from a real-world classification problem (i.e., categorizing website texts into news and not news) and investigates the influence of different temperature settings and variations in prompt instructions on the consistency of the classification output. Furthermore, the consistency across repetitions of the exact same input is assessed. By investigating the reliability and consistency of ChatGPT, this study contributes to a quickly evolving body of research on ChatGPT's capabilities for text classification and annotation but adds a more cautious perspective. The findings show that the consistency of classification outputs by ChatGPT can be below recommended thresholds, questioning the reliability of the classification and annotation results. This illustrates that validation of reliability is important, for example by comparing ChatGPT outputs with human-annotated reference data. This study only investigates reliability, but the recommendation to validate outputs also pertains to validity, as researchers can only be sure to measure what they intend to measure if this measure is validated.
## Methods
### Data Collection
This analysis is based on the classification of websites into News or not News. The websites were collected for a previous study and a subset was manually annotated (Reiss, 2023). Of these, 234 website texts were randomly selected to form the basis for this investigation. The website texts (all in German) were obtained for the previous study by parsing the respective html to plain text. To inform ChatGPT of the classification task it has to perform, ten different instructions were created. The first instruction is adapted from the original codebook that was used by human coders to classify the websites for the previous study. As the manual coding was done on the website screenshots, the instructions for
ChatGPT were adapted as closely as possible to fit the new mode. Instructions two to ten are much shorter and less detailed but convey the same basic understanding of the classification task. All combinations of instructions and website texts form the prompts that were fed into ChatGPT. For instance, instruction two combined with the excerpt of one website text looked like this:
Furthermore, to assess the influence of different temperature settings, each prompt was tested for a low and a high temperature setting (0.25 and 1). Finally, to investigate within-consistency, all input configurations were repeated 10 times. This results in 46,800 inputs in total. Figure 1 informs about the setup. To feed this input into ChatGPT, OpenAI's official API and the model gpt-3.5-turbo (as of 01 April 2023, using Python 3.9) were used. The instructions, all code, and data to reproduce this study's results can be accessed via [http://doi.org/10.17605/OSF.IO/PRZEF](http://doi.org/10.17605/OSF.IO/PRZEF).
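To make the setup concrete, the following minimal sketch shows how such a grid of API calls could be issued with the pre-1.0 `openai` Python client that was current in April 2023. The placeholder contents of `instructions` and `website_texts`, and the way instruction and website text are concatenated into a single prompt, are illustrative assumptions and are not taken from the study's released code.

```python
import openai  # pre-1.0 client interface (current in April 2023)

openai.api_key = "YOUR_API_KEY"

# Placeholder data: the study used 10 instructions and 234 website texts.
instructions = ["Classify the following website text as 'news' or 'not news'."]  # illustrative
website_texts = ["Beispieltext einer Webseite ..."]  # illustrative

def classify(instruction: str, website_text: str, temperature: float) -> str:
    """Send one instruction plus website text to gpt-3.5-turbo and return the raw output."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{instruction}\n\n{website_text}"}],
        temperature=temperature,
    )
    return response["choices"][0]["message"]["content"].strip()

# Full grid of the study: 234 texts x 10 instructions x 2 temperatures x 10 repetitions
# = 46,800 calls; each configuration is repeated to allow pooling later on.
outputs = {}
for i, instruction in enumerate(instructions):
    for j, text in enumerate(website_texts):
        for temperature in (0.25, 1.0):
            for repetition in range(10):
                outputs[(i, j, temperature, repetition)] = classify(instruction, text, temperature)
```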
Figure 1: Setup

### Analysis

To analyze how different factors such as the temperature setting, variations in instructions, and repetitions of the same input configurations affect the consistency of ChatGPT's classification outputs, the different combinations are compared. For example, to assess the impact of different temperature settings, the classification outputs for input configurations with a low temperature setting and with a high temperature setting are compared.
Krippendorff's Alpha is used as a metric to assess the consistency. An Alpha of 1 indicates perfect agreement and implies, in the context of comparing temperature settings, that every single text was classified in the same way when the temperature parameter was high as when it was low. Consistency for variations in instructions is analyzed likewise. Comparing the output of repetitions of the identical input therefore resembles evaluating the intra-coder reliability of ChatGPT, comparable to a scenario for a human coder.1 Again, Krippendorff's Alpha is used as the metric to evaluate whether the same input leads to consistent outputs. Consistencies with Krippendorff's Alpha above 0.8 are considered reliable (Krippendorff, 2004), and this should also be the aim when basing text annotation or classification on ChatGPT.
Footnote 1: This is not exactly true as ChatGPT has no context knowledge and memory of past interactions, since the environment is reset after each text classification. In comparison, humans learn from past texts they annotated.
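Recall the standard definition of this coefficient (a general formula, not specific to this study's data):

\[\alpha=1-\frac{D_{o}}{D_{e}},\]

where \(D_{o}\) is the observed disagreement between the compared sets of classification outputs and \(D_{e}\) is the disagreement expected by chance. An \(\alpha\) of 1 thus corresponds to no observed disagreement at all, while \(\alpha=0\) indicates agreement no better than chance.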
Traditional human-annotation tasks usually involve the assessments of more than one coder. This approach reflects the hope that pooling the assessments of multiple coders increases the validity and reliability of the measure, compared to depending on one potentially biased coder only. This concept can be transferred to the ChatGPT context by repeating the same classification task. Since ChatGPT has no memory of past conversations in this study's setting, repetitions of the same input configuration are independent. Hence, repeating the same prompt several times allows pooling assessments and basing the classification outputs on a majority decision. For this binary classification task, this means that the label (i.e., News or not News) with the highest frequency among repetitions is taken as the final classification output for the given input. To see if pooling improves consistency, three classification regimes are compared for all consistency comparisons in this study: (i) each input is classified once, no pooling takes place; (ii) each input is classified three times, with a majority decision on the classification output; (iii) each input is classified ten times, with a majority decision on the classification output.
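A minimal sketch of this pooling step and of one consistency comparison is given below. It assumes the third-party `krippendorff` package for computing nominal Alpha, and the toy contents of `runs_low_temperature` and `runs_high_temperature` (one list of repeated labels per input) are illustrative stand-ins, not the study's data.

```python
from collections import Counter

import krippendorff  # third-party package for Krippendorff's Alpha

def majority(labels):
    """Majority vote over the repeated outputs for one input (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def pooled_labels(runs, k):
    """Pool the first k repetitions of each input by majority decision.

    `runs` is a list over inputs; each entry is the list of repeated labels
    ('news' / 'not news') produced for that input."""
    return [majority(repetitions[:k]) for repetitions in runs]

# Illustrative toy data: two inputs with 10 repetitions each, per temperature setting.
runs_low_temperature = [["news"] * 10, ["not news"] * 9 + ["news"]]
runs_high_temperature = [["news"] * 7 + ["not news"] * 3, ["not news"] * 10]

# Regime (iii): pool ten repetitions per input, then compare the two settings by
# encoding the labels as integers and computing nominal Krippendorff's Alpha.
encode = {"news": 0, "not news": 1}
low = [encode[label] for label in pooled_labels(runs_low_temperature, 10)]
high = [encode[label] for label in pooled_labels(runs_high_temperature, 10)]
alpha = krippendorff.alpha(reliability_data=[low, high], level_of_measurement="nominal")
print(f"Krippendorff's Alpha (low vs. high temperature, regime iii): {alpha:.2f}")
```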
## Results
Figure 2 reports the consistency for three different classification regimes when comparing the classification outputs of identical prompts (n = 2340) for two temperature settings (i.e., 0.25 and 1). Because each identical prompt was repeated ten times for each temperature setting, different classification regimes can be compared. When only considering the first
classification output of each prompt for both temperature settings under the first (no pooling) regime (i), the consistency between both sets of output is \(\alpha\) = 0.75, failing Krippendorff's recommendation for reasonable reliabilities (Krippendorff, 2004). This consistency improves when pooling the output of more repetitions of the same prompt and choosing the classification with the highest frequency. Hence, when pooling classification outputs of ten repetitions and taking the majority output for each temperature setting, consistency increases to \(\alpha\) = 0.91.
Figure 2: Comparing output consistency for two temperature settings

Figure 3 presents the classification consistency for three classification regimes when comparing the classification outputs for two different instructions, holding everything else constant (n = 468). The two instructions whose classification outputs are compared are identical except that one uses the wording "classify" and "classification", while the other uses "rate" and "rating". For all classification regimes, Krippendorff's Alpha for this comparison does not exceed 0.6. Again, consistency increases when pooling the classification output of several repetitions. Pairwise comparison of all 45 combinations of the ten instructions does not, by and large, change these impressions (min \(\alpha\) = 0.24, mean \(\alpha\) = 0.43, max \(\alpha\) = 0.7 for classification regime (i)).

Finally, Figure 4 reports the classification consistency for three classification regimes when comparing the classification outputs of repetitions of identical prompts (i.e., everything is held constant, n = 2340). Results are split by temperature setting. It becomes apparent that within-consistency is higher for the lower temperature setting, with Krippendorff's Alpha for all three classification regimes above 0.9. This indicates a reasonable reliability. In contrast, for the higher temperature setting, only the within-consistency for the
highest classification regime reaches a reasonable reliability of \(\alpha\) = 0.85. As this study took ten repetitions per input configuration, a comparison for the usual classification regime (iii) is not available for this analysis, as this would have required at least 20 repetitions (i.e., comparing the pooled output of ten repetitions vs. the pooled output of ten other repetitions). So, here the pooled outputs of five repetitions each were compared.

Figure 3: Comparing output consistency for two different instructions

Figure 4: Comparing output consistency for repetitions of identical inputs

## Discussion

Results of the comparison between two temperature settings show that the classification output for two different temperature settings is not consistent. This is not surprising, as ChatGPT's outputs from a higher temperature setting are meant to be more random (OpenAI, 2023a). Accordingly, the classifications between both settings differ. In parallel, consistency within the same temperature setting is higher for lower temperature settings, as demonstrated in Figure 4. Furthermore, it is likely that the length of prompts and the complexity of the classification task interact with the randomness of the temperature settings. However, how temperature settings impact the validity will need to be evaluated for each study (e.g., comparing ChatGPT's classification outputs with reference classifications). For high temperature settings, the comparison should be based on a larger set of classifications to prevent misleading performance assessments caused by ChatGPT's randomness. Second, the prompt is decisive for the output and even small variations can cause large inconsistencies. For a human coder, the change of a single word in the instructions would likely not have caused large inconsistencies, as the overall meaning was barely changed. When aiming to use ChatGPT for zero-shot text annotation and classification, this has implications for the preparation of instructions (i.e., the codebook). The findings of this study indicate that the same codebook which works well for human coders (who might be less sensitive to small changes in instructions) does not necessarily perform well for ChatGPT. This can pose an arduous problem, as slight variations could largely impact outputs and their validity and reliability. Future studies could investigate efficient prompting strategies for text annotation contexts (Argyle et al., 2023).
While the other factors can be optimized and held constant, inconsistencies within repetitions point to randomness in ChatGPT that affects all classification outcomes, making this the most important source of possible inconsistencies. Results show that within-repetition inconsistencies are starkly influenced by the temperature setting, which steers the randomness of the output. When randomness is reduced due to low temperature settings, output consistency for identical input is high. When temperature settings are high, consistency is comparatively low. Since consistency and reliability do not necessarily relate to validity, it is possible that in some contexts high temperature settings are preferable, even though reliability is reduced. In such cases, pooling the classification outputs of repetitions of the same input can improve consistency (but still requires validation).
For all three factors investigated in this study, majority decisions on repeated classification outputs of the same input led to improvements in consistency. This is a valuable strategy to protect classification results against the randomness of ChatGPT. Therefore, building a majority decision based on three, ideally more, repetitions of each single input is recommended when using ChatGPT for zero-shot text annotation. This also depends on the available resources, as repetitions increase computing time and costs, making ChatGPT potentially less attractive (Gilardi et al., 2023).2 The inclusion of ChatGPT in text annotation and analysis software must also be viewed critically in this respect, as users might falsely assume reliable outcomes by ChatGPT while outputs are likely not pooled and hence not reliable. And even for pooled outputs, validation should always be a priority, as pooling might increase reliability but does not necessarily relate to or inform about the validity.
Footnote 2: Currently (April 2023), OpenAI has a default cap on its API usage per month of $120. The cost for this study with around 48,000 inputs (76 million tokens) was $152.
The texts used for this study are comparatively noisy (as a lot of website artefacts were included) and in German. It is possible that consistency is higher for English texts, shorter texts, and texts with a clearer structure. However, thorough validation is always necessary, as the complexity of the task and the validity and reliability of ChatGPT on particular tasks are not known in advance. Furthermore, although the data and classification task are based on a real-world problem (Reiss, 2023), the instructions were slightly adjusted to accommodate the ChatGPT context. This does not impinge on the consistency analysis in this study, but the classification situation is not comparable to the human labeling, which is why no comparison with the human labels is made.
This study is based on gpt-3.5-turbo, which is marketed as the "most capable and cost effective model in the GPT-3.5 family" (OpenAI, 2023b) and which is the basis of the popular ChatGPT web application (Brockman et al., 2023). Future studies should also test gpt-4, which is again more capable than any gpt-3.5 model but comes with more than 10 times the cost ($0.03/1K tokens vs. $0.002/1K tokens). At the time of writing this study, API access to gpt-4 was still limited and the author had no access to it. A very limited test via the web interface of some prompts that had the least consistency for gpt-3.5 showed much improved, although not perfect, consistency when using gpt-4.
## Conclusion
This study assessed the consistency of ChatGPT's zero-shot capabilities for text annotation. Several factors contribute to the fact that the overall consistency of zero-shot text annotation by ChatGPT can be considerably below common scientific standards. First, lower temperature settings make annotations by ChatGPT more deterministic and hence consistent, but this could interact with the validity of outputs, as high temperature settings could provide more valid but less reliable output. Second, the exact prompt is decisive, and consistency between two prompts can be low even for minuscule wording alterations between them. Third, even repeating identical input configurations (i.e., same prompt, same parameter settings) can lead to different outputs, questioning the reliability of ChatGPT for text annotation and classification. Furthermore, the extent of inconsistencies and their relation to the validity of the classification likely vary depending on the classification or annotation task. All in all, even though pooling the outputs of repetitions of the same input can increase reliability, the findings of this study suggest that ChatGPT for zero-shot text annotation should only be used with caution: robust validation of ChatGPT's output is strongly recommended. While some promising studies suggest the usability of ChatGPT for text annotation and classification, it is important to note that this can only be guaranteed when validation is included (e.g., comparing against human-annotated references). The unsupervised use should be avoided.
## Acknowledgements
I want to thank Michael Jacobs, Kiran Kappeler, Valerie Hase, Aleksandra Urman, Manuel Issel, Nico Pfiffner, and Fjona Gerber for helpful comments and valuable input at various stages of this project and my division led by Michael Latzer for the opportunity to use the data.
## Funding
Parts of this work were supported by the Swiss National Science Foundation [176443]. |
2302.01130 | Operator algebras of free wreath products | We give a description of operator algebras of free wreath products in terms
of fundamental algebras of graphs of operator algebras as well as an explicit
formula for the Haar state. This allows us to deduce stability properties for
certain approximation properties such as exactness, Haagerup property,
hyperlinearity and K-amenability. We study qualitative properties of the
associated von Neumann algebra: factoriality, fullness, primeness and absence
of Cartan subalgebra and we give a formula for Connes' $T$-invariant and
$\tau$-invariant. We also study maximal amenable von Neumann subalgebras.
Finally, we give some explicit computations of K-theory groups for C*-algebras
of free wreath products. As an application we show that the reduced C*-algebras
of quantum reflection groups are pairwise non-isomorphic. | Pierre Fima, Arthur Troupel | 2023-02-02T14:41:23Z | http://arxiv.org/abs/2302.01130v2 | # Operator algebras of free wreath products
###### Abstract.
We give a description of operator algebras of free wreath products in terms of fundamental algebras of graphs of operator algebras as well as an explicit formula for the Haar state. This allows us to deduce stability properties for certain approximation properties such as exactness, Haagerup property, hyperlinearity and K-amenability. We study qualitative properties of the associated von Neumann algebra: factoriality, fullness, primeness and absence of Cartan subalgebra and we give a formula for Connes' \(T\)-invariant and \(\tau\)-invariant. We also study maximal amenable von Neumann subalgebras. Finally, we give some explicit computations of K-theory groups for C*-algebras of free wreath products. As an application we show that the reduced C*-algebras of quantum reflection groups are pairwise non-isomorphic.
P.F. is partially supported by the ANR project ANCG (No. ANR-19-CE40-0002), ANR project AODynG (No. ANR-19-CE40-0008), and Indo-French Centre for the Promotion of Advanced Research - CEFIPRA
## 1. Introduction
The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebras. The Haagerup property is a well-known property of the Haagerup property of a graph of operator algebra.
We also study the _Haagerup property_ in the setting of graphs of operator algebras. Furthermore, under natural assumptions on \(G\) and \(N\), \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\) is a full factor (Theorem B, proved in Section 5), and the Connes' \(\tau\)-invariant of the full factor \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\) is the smallest topology on \(\mathbb{R}\) for which the map \((t\mapsto\sigma^{G}_{t})\) is continuous.
Concerning maximal amenable von Neumann subalgebras of \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\), once again, nothing seems to be known. A von Neumann subalgebra \(A\subset M\) is called _with expectation_ if there is a normal faithful conditional expectation \(M\to A\). \(A\subset M\) is called _maximal amenable_ if \(A\) is amenable and, for any intermediate von Neumann algebra \(A\subset P\subset M\), if \(P\) is amenable then \(A=P\). \(A\subset M\) is called _maximal amenable with expectation_ if \(A\) is amenable with expectation and, for any intermediate von Neumann algebra \(A\subset P\subset M\), if \(P\) is amenable with expectation then \(A=P\).
Recall that \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\) is generated by \(N\) free copies of \(\mathrm{L}^{\infty}(G)\) given by \(\nu_{i}\,:\,\mathrm{L}^{\infty}(G)\to\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\), \(1\leq i\leq N\), and by \(\mathrm{L}^{\infty}(S^{+}_{N})\subset\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\) plus some relations (see Section 2.7). Let \(u=(u_{ij})_{ij}\in M_{N}(\mathbb{C})\otimes\mathrm{L}^{\infty}(S^{+}_{N})\) be the fundamental representation of \(S^{+}_{N}\) so that \(\mathrm{L}^{\infty}(S^{+}_{N})\) is generated by the coefficients \(u_{ij}\) of \(u\). We use [1] to deduce the following.
**Theorem C**.: _Let \(G\) be a compact quantum group and \(N\geq 2\). The following holds._
1. _If_ \(\mathrm{L}^{\infty}(G)\) _is amenable and_ \(\mathrm{Irr}(G)\) _is infinite then, for all_ \(1\leq i\leq N\)_, the von Neumann subalgebra of_ \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\) _generated by_ \(\{\nu_{i}(a)u_{ij}\,:\,a\in\mathrm{L}^{\infty}(G),\,1\leq j\leq N\}\) _is maximal amenable with expectation in_ \(\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{N})\)_._
2. _If_ \(G\) _is Kac then_ \(\mathrm{L}^{\infty}(S^{+}_{4})\subset\mathrm{L}^{\infty}(G\wr_{*}S^{+}_{4})\) _is maximal amenable._
Finally, we compute K-theory groups of the C*-algebras \(C(G\wr_{*}S^{+}_{N})\) and \(C_{r}(G\wr_{*}S^{+}_{N})\) (recall that they are KK-equivalent whenever \(G\) is K-amenable). In the next Theorem, we denote by \(C_{\bullet}(G)\) either the full \(C^{*}\)-algebra or the reduced \(C^{*}\)-algebra of \(G\).
**Theorem D**.: _For any compact quantum group \(G\) and integer \(N\in\mathbb{N}^{*}\) we have,_
\[K_{0}(C_{\bullet}(G\wr_{*}S^{+}_{N}))\simeq K_{0}(C_{\bullet}(G))\otimes(\mathbb{Z}^{N^{2}})\oplus K_{0}(C(S^{+}_{N}))/(\mathbb{Z}^{N^{2}})\simeq\left\{\begin{array}{ll}K_{0}(C_{\bullet}(G))^{\oplus N^{2}}/\mathbb{Z}^{2N-2}&\mbox{if}\ \ N\neq 3,\\ K_{0}(C_{\bullet}(G))^{\oplus N^{2}}/\mathbb{Z}^{3}&\mbox{if}\ \ N=3,\end{array}\right.\]
and
\[K_{1}(C_{\bullet}(G\wr_{*}S^{+}_{N}))\simeq K_{1}(C_{\bullet}(G))^{\oplus N^{2}}\oplus K_{1}(C(S^{+}_{N}))\simeq\left\{\begin{array}{ll}K_{1}(C_{\bullet}(G))^{\oplus N^{2}}\oplus\mathbb{Z}&\mbox{if}\ \ N\geq 4,\\ K_{1}(C_{\bullet}(G))^{\oplus N^{2}}&\mbox{if}\ \ 1\leq N\leq 3.\end{array}\right.\]
_In particular, for the (K-amenable) quantum reflection groups \(H^{s+}_{N}=\widehat{\mathbb{Z}_{s}}\wr_{*}S^{+}_{N}\), if \(N\geq 4\),_
\[K_{0}(C_{\bullet}(H^{s+}_{N}))\simeq\left\{\begin{array}{ll}\mathbb{Z}^{N^{ 2}-2N+2}&\mbox{if}\ \ s=+\infty,\\ \mathbb{Z}^{sN^{2}-2N+2}&\mbox{if}\ \ s<+\infty,\end{array}\right.\quad K_{1}(C_{ \bullet}(H^{s+}_{N}))\simeq\left\{\begin{array}{ll}\mathbb{Z}^{N^{2}+1}& \mbox{if}\ \ \ s=+\infty,\\ \mathbb{Z}&\mbox{if}\ \ \ s<+\infty,\end{array}\right.\]
_and if \(N\in\{1,2,3\}\),_
\[K_{0}(C_{\bullet}(H^{s+}_{N}))\simeq\left\{\begin{array}{ll}\mathbb{Z}^{N!}& \mbox{if}\ \ \ s=+\infty,\\ \mathbb{Z}^{(s-1)N^{2}+N!}&\mbox{if}\ \ \ s<+\infty,\end{array}\right.\quad K_{1}(C_{ \bullet}(H^{s+}_{N}))\simeq\left\{\begin{array}{ll}\mathbb{Z}^{N^{2}}&\mbox{ if}\ \ \ s=+\infty,\\ 0&\mbox{if}\ \ \ s<+\infty,\end{array}\right.\]
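As a sanity check (routine specializations of the formulas above; the arithmetic is ours): for \(s=1\) one has \(H^{1+}_{N}=S^{+}_{N}\) and, for \(N\geq 4\), the formulas give

\[K_{0}(C_{\bullet}(S^{+}_{N}))\simeq\mathbb{Z}^{N^{2}-2N+2},\qquad K_{1}(C_{\bullet}(S^{+}_{N}))\simeq\mathbb{Z},\]

consistently with the K-theory of \(C_{\bullet}(S^{+}_{N})\) used as an input from [13] below, while for the hyperoctahedral quantum group \(H^{2+}_{4}=\widehat{\mathbb{Z}_{2}}\wr_{*}S^{+}_{4}\) one gets \(K_{0}\simeq\mathbb{Z}^{2\cdot 4^{2}-2\cdot 4+2}=\mathbb{Z}^{26}\) and \(K_{1}\simeq\mathbb{Z}\).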
It seems that the only previous attempt to compute the K-theory of free wreath products was made in [11], in which K-theory groups are computed for some quantum groups which are not free wreath products with \(S^{+}_{N}\) but only monoidally equivalent to free wreath products with \(S^{+}_{N}\). Actually the K-theory is computed there for some quantum groups of the form \(G\wr_{*}SO_{q}(3)\), where the free wreath product is in the sense of [12]. The method of computation in [11], which is based on the ideas of the works of Voigt [13, 14] and Voigt-Vergnioux [15], is to prove the strong Baum-Connes property and then find an explicit projective resolution of the trivial action. However, this could not be done for free wreath products by \(S^{+}_{N}\); it was shown instead that it can be done for \(G\wr_{*}SO_{q}(3)\) for some specific \(G\), such as free products of free orthogonal and free unitary quantum groups as well as duals of free groups. While our method cannot provide results on the stability of the strong Baum-Connes conjecture for free wreath products, it has the advantage of being extremely simple and applicable to any free wreath product with \(S^{+}_{N}\). Our K-theory computations are based on our decomposition as fundamental algebras and the results of [10]. We also use the result of [13] on the computation of the K-theory of \(C_{\bullet}(S^{+}_{N})\). As
mentioned to us by Adam Skalski, it seems to be unknown whether the reduced C*-algebras of quantum reflection groups \(H_{N}^{s+}\) are isomorphic for different values of the parameters \(N,s\). Our K-theory computation in Theorem D solves this question: we show in Corollary 7.1 that for all \(N,M\geq 8\) one has \(C_{r}(H_{N}^{s+})\simeq C_{r}(H_{M}^{t+})\Leftrightarrow(N,s)=(M,t)\). We do have to restrict the parameters to \(N,M\geq 8\) since our proof uses the unique trace result of Lemeux [14], which is valid only for \(N\geq 8\). We also compute some other \(K\)-theory groups of free wreath products, such as those of \(\widehat{\mathbb{F}}_{m}\wr_{*}S_{N}^{+}\), and deduce that the reduced (as well as full) C*-algebras \(C_{\bullet}(\widehat{\mathbb{F}}_{m}\wr_{*}S_{N}^{+})\) are pairwise non-isomorphic.
The paper is organized as follows. After the introduction in Section 1, we have included preliminaries in Section 2, in which we detail all our notations concerning operator algebras and quantum groups. We also prove in this preliminary section that the von Neumann algebra \(\mathrm{L}^{\infty}(G)\) of a quantum group \(G\) is diffuse if and only if it is infinite-dimensional. This result, which will be useful in the study of von Neumann algebras of free wreath products, also solves a question left open in [1], in which it is mentioned that it is unknown whether the von Neumann algebra of a quantum reflection group \(\widehat{\mathbb{Z}}_{s}\wr_{*}S_{N}^{+}\) is diffuse when \(N\leq 7\). Section 2 also contains important remarks on free product quantum groups, semi-direct product quantum groups, quantum permutation groups and free wreath product quantum groups, as well as amalgamated free wreath products (in the sense of Freslon [11]). In particular, we identify the amalgamated free wreath product \(G\wr_{*,H}S_{N}^{+}\) at \(N=2\) with a semi-direct product. In Section 3 we explain how operator algebras of amalgamated free wreath products can be described as fundamental algebras of graphs of operator algebras. Section 4 contains the proof of the amalgamated version of Theorem A, and Section 5 contains the proofs of the amalgamated versions of Theorems B and C. In Section 6, we give a formula describing an amalgamated free wreath product of a fundamental quantum group of a graph of quantum groups by \(S_{N}^{+}\) as the fundamental quantum group of a graph of quantum groups given by free wreath products. Finally, Section 7 contains the proof of Theorem D and some other K-theory computations.
## 2. Preliminaries
### Notations
All Hilbert spaces, C*-algebras and preduals of von Neumann algebras considered in this paper are assumed to be separable. The inner product on a Hilbert space \(H\) is always linear on the right. The C*-algebra of bounded linear maps from \(H\) to \(H\) is denoted by \(\mathcal{B}(H)\) and, given vectors \(\xi,\eta\in H\), we denote by \(\omega_{\eta,\xi}\in\mathcal{B}(H)^{*}\) the bounded linear form on \(\mathcal{B}(H)\) defined by \((x\mapsto\langle\eta,x\xi\rangle)\). For \(T\in\mathcal{B}(H)\), we denote by \(\mathrm{Sp}(T)\) the spectrum of \(T\) and by \(\mathrm{Sp}_{p}(T)\) the point spectrum of \(T\). The symbol \(\otimes\) will denote the tensor product of Hilbert spaces, von Neumann algebras as well as the minimal tensor product of C*-algebras.
### Full von Neumann algebras
Let \(M\) be a von Neumann algebra with predual \(M_{*}\). We consider on \(\mathrm{Aut}(M)\) the topology of pointwise convergence in \(M_{*}\) i.e. the smallest topology for which the maps \(\mathrm{Aut}(M)\to M_{*}\), \(\alpha\mapsto\omega\circ\alpha\) and \(\alpha\mapsto\omega\circ\alpha^{-1}\) are continuous, for all \(\omega\in M_{*}\), where \(M_{*}\) is equipped with the norm topology. It is well known that \(\mathrm{Aut}(M)\) is a Polish group (since \(M_{*}\) is separable). When \(\omega\in M_{*}\) is a faithful normal state, we may consider the closed subgroup \(\mathrm{Aut}(M,\omega)<\mathrm{Aut}(M)\) of automorphisms preserving \(\omega\). Note that the induced topology from \(\mathrm{Aut}(M)\) to \(\mathrm{Aut}(M,\omega)\) is the smallest topology making the maps \(\mathrm{Aut}(M,\omega)\to\mathrm{L}^{2}(M,\omega)\), \(\alpha\mapsto\alpha(a)\xi_{\omega}\) and \(\alpha\mapsto\alpha^{-1}(a)\xi_{\omega}\) continuous, for the norm on \(\mathrm{L}^{2}(M,\omega)\), where \(\xi_{\omega}\) is the cyclic vector in the GNS construction \(\mathrm{L}^{2}(M,\omega)\) of \(\omega\). We record the following Remark for later use.
**Remark 2.1**.: Note that if \((M,\omega)\) is a von Neumann algebra with a faithful normal state \(\omega\) and \(A\subset M\) is a von Neumann subalgebra with a \(\omega\)-preserving normal conditional expectation \(M\to A\) then, the subgroup \(\mathrm{Aut}_{A}(M,\omega)=\{\alpha\in\mathrm{Aut}(M,\omega)\,:\,\alpha(A)=A \}<\mathrm{Aut}(M,\omega)\) is closed and the restriction map \(\mathrm{Aut}_{A}(M,\omega)\to\mathrm{Aut}(A,\omega)\), \(\alpha\mapsto\alpha|_{A}\) is continuous.
Let \(\mathrm{Inn}(M)\subset\mathrm{Aut}(M)\) be the normal subgroup of inner automorphisms and \(\mathrm{Out}(M):=\mathrm{Aut}(M)/\mathrm{Inn}(M)\) be the quotient group. Equipped with the quotient topology, \(\mathrm{Out}(M)\) is a topological group and it is Hausdorff if and only if \(\mathrm{Inn}(M)\subset\mathrm{Aut}(M)\) is closed. In that case
(since \(M_{*}\) is assumed to be separable) \(\operatorname{Out}(M):=\operatorname{Aut}(M)/\operatorname{Inn}(M)\) is actually a metrizable topological group so the convergence of sequences in \(\operatorname{Out}(M)\) completely characterizes the topology. Following Connes [10], a von Neumann algebra is called _full_ if \(\operatorname{Inn}(M)\subset\operatorname{Aut}(M)\) is closed. A normal faithful semi-finite weight \(\varphi\) on \(M\) is called _almost-periodic_ if its modular operator \(\nabla_{\varphi}\) has pure point spectrum. Connes defines [10] the invariant \(Sd(M)\) of a full von Neumann algebra as the intersection of the point spectra of \(\nabla_{\varphi}\) over all normal faithful semi-finite almost-periodic weights \(\varphi\), and shows that the closure of \(Sd(M)\) is the \(S\)-invariant \(S(M)\).
The famous noncommutative Radon-Nikodym Theorem of Connes [10] shows that, for any pair of normal faithful states (actually semi-finite weights) \(\omega_{1}\) and \(\omega_{2}\) on \(M\), their modular groups \(\sigma_{t}^{\omega_{1}}\) and \(\sigma_{t}^{\omega_{2}}\) have the same image in \(\operatorname{Out}(M)\), for all \(t\in\mathbb{R}\). Hence, there is a well defined homomorphism \(\delta\,:\,\mathbb{R}\to\operatorname{Out}(M)\), called the _modular homomorphism_, defined by \(\delta(t)=[\sigma_{t}^{\omega}]\), where \(\omega\) is any normal faithful state on \(M\) and \([\cdot]\) denotes the class in \(\operatorname{Out}(M)\).
Connes defines in [10] the _invariant_ \(\tau(M)\) of a full von Neumann algebra \(M\) as the smallest topology on \(\mathbb{R}\) for which the modular homomorphism \(\delta\) is continuous. When \(M\) is full (and \(M_{*}\) is separable), \(\operatorname{Out}(M)\) is clearly a metrizable topological group hence, \((\mathbb{R},\tau(M))\) is also a metrizable topological group. In particular, the topology \(\tau(M)\) is completely characterized by the knowledge of which sequences are converging to zero.
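For instance (a standard observation, recorded here for orientation): if \(M\) is semifinite with a normal faithful semi-finite trace \(\mathrm{Tr}\), then \(\sigma_{t}^{\mathrm{Tr}}=\mathrm{id}\) for all \(t\in\mathbb{R}\), so the modular homomorphism \(\delta\) is trivial; hence, for a full semifinite factor such as \(\mathrm{L}(\mathbb{F}_{2})\), the invariant \(\tau(M)\) is the trivial topology on \(\mathbb{R}\).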
### Compact quantum groups
For a discrete group \(\Gamma\) we denote by \(C^{*}(\Gamma)\) its full C*-algebra, \(C^{*}_{r}(\Gamma)\) its reduced C*-algebra and \(\operatorname{L}(\Gamma)\) its von Neumann algebra. We briefly recall below some elements of the compact quantum group (CQG) theory developed by Woronowicz [11, 12, 13].
For a CQG \(G\), we denote by \(C(G)\) its _maximal_ C*-algebra, which is the enveloping C*-algebra of the unital \(*\)-algebra \(\operatorname{Pol}(G)\) given by the linear span of coefficients of irreducible unitary representations of \(G\). The set of equivalence classes of irreducible unitary representations will be denoted by \(\operatorname{Irr}(G)\). We will denote by \(\varepsilon_{G}\,:\,C(G)\to\mathbb{C}\) the counit of \(G\) which satisfies \((\operatorname{id}\otimes\varepsilon_{G})(u)=\operatorname{Id}_{H}\) for all finite dimensional unitary representation \(u\in\mathcal{B}(H)\otimes C(G)\).
Let us recall below the modular ingredients of a CQG. Let us fix a complete set of representatives \(u^{x}\in\mathcal{B}(H_{x})\otimes C(G)\) for \(x\in\operatorname{Irr}(G)\). It is known that for any \(x\in\operatorname{Irr}(G)\) there exists a unique \(\overline{x}\in\operatorname{Irr}(G)\), called the contragredient of \(x\), such that \(\operatorname{Mor}(1,x\otimes\overline{x})\neq\{0\}\) and \(\operatorname{Mor}(1,\overline{x}\otimes x)\neq\{0\}\), where \(1\) denotes the trivial representation of \(G\). Both the spaces \(\operatorname{Mor}(1,\overline{x}\otimes x)\) and \(\operatorname{Mor}(1,x\otimes\overline{x})\) are actually one-dimensional. Fix non-zero vectors \(s_{x}\in H_{x}\otimes H_{\overline{x}}\) and \(s_{\overline{x}}\in H_{\overline{x}}\otimes H_{x}\) such that \(s_{x}\in\operatorname{Mor}(1,x\otimes\overline{x})\) and \(s_{\overline{x}}\in\operatorname{Mor}(1,\overline{x}\otimes x)\). Let \(J_{x}\,:\,H_{x}\to H_{\overline{x}}\) be the unique invertible antilinear map satisfying \(\langle J_{x}\xi,\eta\rangle=\langle s_{x},\xi\otimes\eta\rangle\) for all \(\xi\in H_{x}\), \(\eta\in H_{\overline{x}}\) and define the positive invertible operator \(Q_{x}=J_{x}^{*}J_{x}\in\mathcal{B}(H_{x})\). Then, there exists a unique normalization of \(s_{x}\) and \(s_{\overline{x}}\) such that \(\|s_{x}\|=\|s_{\overline{x}}\|\) and \(J_{\overline{x}}=J_{x}^{-1}\). With this normalization, \(Q_{x}\) is uniquely determined, we do have \(\operatorname{Tr}(Q_{x})=\operatorname{Tr}(Q_{x}^{-1})=\|s_{x}\|^{2}\), \(Q_{\overline{x}}=(J_{x}J_{x}^{*})^{-1}\) and \(\operatorname{Sp}(Q_{\overline{x}})=\operatorname{Sp}(Q_{x})^{-1}\) (\(\operatorname{Tr}\) is the unique trace on \(\mathcal{B}(H_{x})\) such that \(\operatorname{Tr}(1)=\dim(H_{x})\)). The number \(\operatorname{Tr}(Q_{x})\) is called the _quantum dimension_ of \(x\) and is denoted by \(\dim_{q}(x)\). From the orthogonality relations:
\[(\operatorname{id}\otimes h_{G})((u^{x})^{*}(\xi\eta^{*}\otimes 1)u^{y})=\frac{ \delta_{x,y}1}{\dim_{q}(x)}\langle\eta,Q_{x}^{-1}\xi\rangle\]
it is not difficult to check that, with \(\operatorname{L}^{2}(G)=\bigoplus_{x\in\operatorname{Irr}(G)}H_{x}\otimes H_{\overline{x}}\), \(\xi_{G}:=1\in H_{1}\otimes H_{\overline{1}}=\mathbb{C}\) and \(\lambda_{G}\,:\,C(G)\to\mathcal{B}(\operatorname{L}^{2}(G))\) the unique unital \(*\)-homomorphism such that
\[\lambda_{G}((\omega_{\eta,\xi}\otimes\operatorname{id})(u^{x}))\xi_{G}=\dim_{ q}(x)^{-\frac{1}{2}}\xi\otimes J_{\overline{x}}^{*}\eta,\]
\(\xi_{G}\) is \(\lambda_{G}\)-cyclic and \(h_{G}(a)=\langle\xi_{G},\lambda_{G}(a)\xi_{G}\rangle\) for all \(a\in C(G)\). Hence, the triple \((\operatorname{L}^{2}(G),\lambda_{G},\xi_{G})\) is an explicit GNS construction for the Haar state \(h_{G}\) on \(C(G)\).
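When all \(Q_{x}=1\) (the _Kac case_, see below), the orthogonality relations above reduce to the familiar Schur orthogonality relations (a routine specialization): \(\dim_{q}(x)=\dim(H_{x})\), so that

\[(\operatorname{id}\otimes h_{G})((u^{x})^{*}(\xi\eta^{*}\otimes 1)u^{y})=\frac{\delta_{x,y}}{\dim(H_{x})}\langle\eta,\xi\rangle 1.\]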
Let \(C_{r}(G)=\lambda_{G}(C(G))\subset\mathcal{B}(\operatorname{L}^{2}(G))\) be the _reduced C*-algebra_ of \(G\). The surjective unital \(*\)-homomorphism \(\lambda_{G}\,:\,C(G)\to C_{r}(G)\) is called the canonical surjection. Recall that \(G\) is called _co-amenable_ whenever \(\lambda_{G}\) is injective. The von Neumann algebra of \(G\) is denoted by \(\operatorname{L}^{\infty}(G):=C_{r}(G)^{\prime\prime}\subset\mathcal{B}( \operatorname{L}^{2}(G))\). We will still denote by \(h_{G}\) the Haar state of \(G\) on the C*-algebra
\(C_{r}(G)\) as well as on the von Neumann algebra \(\mathrm{L}^{\infty}(G)\) i.e. \(h_{G}=\langle\xi_{G},\cdot\,\xi_{G}\rangle\) when viewed as a state on \(C_{r}(G)\) or a normal state on \(\mathrm{L}^{\infty}(G)\). We recall that \(h_{G}\) is faithful on both \(C_{r}(G)\) and \(\mathrm{L}^{\infty}(G)\). With the explicit GNS construction of \(h_{G}\) given above, it is not difficult to compute the modular ingredients of the normal faithful state \(h_{G}\) on \(\mathrm{L}^{\infty}(G)\) and we find that the closure \(S_{G}\) of the antilinear operator \(x\xi_{G}\mapsto x^{*}\xi_{G}\) has a polar decomposition \(S_{G}=J_{G}\nabla_{G}^{\frac{1}{2}}\), where \(\nabla_{G}\) is the positive operator on \(\mathrm{L}^{2}(G)\) given by \(\nabla_{G}:=\bigoplus_{x\in\mathrm{Irr}(G)}Q_{x}\otimes Q_{\overline{x}}^{-1}\). Hence, the Haar state of a CQG is always almost-periodic and its modular group \((\sigma_{t}^{G})_{t}\) is the unique one-parameter group of \(\mathrm{L}^{\infty}(G)\) such that \((\mathrm{id}\otimes\sigma_{t}^{G})(u^{x})=(Q_{x}^{it}\otimes 1)u^{x}(Q_{x}^{it}\otimes 1)\). Let us recall that \(G\) is said to be of _Kac type_ whenever \(h_{G}\) is a trace, which happens if and only if \(Q_{x}=1\) for all \(x\in\mathrm{Irr}(G)\). Let us also recall that the _scaling group_ \((\tau_{t}^{G})_{t}\) is the one-parameter group of \(\mathrm{L}^{\infty}(G)\) given by the formula \((\mathrm{id}\otimes\tau_{t}^{G})(u^{x})=(Q_{x}^{it}\otimes 1)u^{x}(Q_{x}^{-it}\otimes 1)\). It is well known that the scaling group of \(G\) is the unique one-parameter group \((\tau_{t}^{G})_{t}\) of \(\mathrm{L}^{\infty}(G)\) such that \(\Delta\circ\sigma_{t}^{G}=(\tau_{t}^{G}\otimes\sigma_{t}^{G})\circ\Delta\ \forall t\in\mathbb{R}\). Moreover, the scaling group preserves the Haar state: \(h_{G}\circ\tau_{t}^{G}=h_{G}\) (this means that the _scaling constant_ of a compact quantum group is \(1\)).
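As a concrete illustration (a standard example written in our own conventions; we take \(G=SU_{q}(2)\) with \(0<q<1\) and order the basis so that \(Q_{u}=\operatorname{diag}(q,q^{-1})\) for the fundamental representation \(u\)): one has \(\dim_{q}(u)=q+q^{-1}\) and the formulas above give

\[\sigma_{t}^{G}(u_{11})=q^{2it}u_{11},\qquad\sigma_{t}^{G}(u_{12})=u_{12},\qquad\tau_{t}^{G}(u_{11})=u_{11},\qquad\tau_{t}^{G}(u_{12})=q^{2it}u_{12}.\]

In particular \(h_{G}\) is not a trace. Moreover \(\operatorname{Sp}(Q_{u})^{2}=\{q^{2},1,q^{-2}\}\) and, taking all irreducible representations into account, one finds \(Sd(SU_{q}(2))=q^{2\mathbb{Z}}\) (in the notation of Remark 2.3 below).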
Let us also recall the definition of the _\(T\)-invariant_ of a CQG \(G\), introduced by S. Vaes in [10]:
\[T(G):=\{t\in\mathbb{R}\,:\,\exists u\in\mathcal{U}(\mathrm{L}^{\infty}(G))\,: \,\Delta(u)=u\otimes u\text{ and }\tau_{t}^{G}=\mathrm{Ad}(u)\},\]
where \(\mathrm{Ad}(u)\) is the automorphism of \(\mathrm{L}^{\infty}(G)\) given by \(a\mapsto uau^{*}\).
The following result is well known [12, Proposition 2.2].
**Proposition 2.2**.: _The set \(\mathrm{Mod}(G):=\bigcup_{x\in\mathrm{Irr}(G)}\mathrm{Sp}(Q_{x})\) is a subgroup of \(\mathbb{R}_{+}^{*}\)._
Proof.: We already remarked in Section 2.3 that \(\mathrm{Sp}(Q_{\overline{x}})=(\mathrm{Sp}(Q_{x}))^{-1}\). Hence \(\mathrm{Mod}(G)\) is stable by inverse. The fact that \(\mathrm{Mod}(G)\) is stable by product follows from the relation
\[SQ_{z}=(Q_{x}\otimes Q_{y})S\quad\text{for all }S\in\mathrm{Mor}(z,x\otimes y),\ \ x,y,z\in\mathrm{Irr}(G).\]
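Let us spell out the deduction (a routine argument included for completeness): decomposing \(x\otimes y\simeq\bigoplus_{i}z_{i}\) by means of isometries \(S_{i}\in\mathrm{Mor}(z_{i},x\otimes y)\) with \(\sum_{i}S_{i}S_{i}^{*}=1\), the above relation yields

\[Q_{x}\otimes Q_{y}=\sum_{i}S_{i}Q_{z_{i}}S_{i}^{*},\qquad\text{hence}\qquad\mathrm{Sp}(Q_{x})\cdot\mathrm{Sp}(Q_{y})=\mathrm{Sp}(Q_{x}\otimes Q_{y})=\bigcup_{i}\mathrm{Sp}(Q_{z_{i}})\subset\mathrm{Mod}(G),\]

so \(\mathrm{Mod}(G)\) is stable under products.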
**Remark 2.3**.: For convenience, we will use the following non-standard notations.
1. Let \(Sd(G):=\mathrm{Sp}_{p}(\nabla_{G})\). From the explicit computation of \(\nabla_{G}\) and since \(\mathrm{Sp}(Q_{\overline{x}})=\mathrm{Sp}(Q_{x})^{-1}\) one has \(Sd(G)=\cup_{x\in\mathrm{Irr}(G)}\mathrm{Sp}(Q_{x})^{2}\subset\mathrm{Mod}(G)\) and \(Sd(G)^{-1}=Sd(G)\).
2. We denote by \(\tau(G)\) the smallest topology on \(\mathbb{R}\) such that the map \((t\mapsto\sigma_{t}^{G})\) is continuous. Hence \(\tau(G)\) is smaller than the usual topology on \(\mathbb{R}\). It is not difficult to check that it is the smallest topology on \(\mathbb{R}\) for which the maps \(f_{\lambda}\,:\,\mathbb{R}\to\mathbb{S}^{1}\,:\,(t\mapsto\lambda^{it})\) are continuous, for all \(\lambda\in Sd(G)\), where \(\mathbb{S}^{1}\) has the usual topology. Note also that, for any topology \(\tau\) on \(\mathbb{R}\), the set of \(\lambda>0\) for which \(f_{\lambda}\) is \(\tau\)-continuous is a closed subgroup of \(\mathbb{R}_{+}^{*}\) (for the usual topology on \(\mathbb{R}_{+}^{*}\)). Hence, \(\tau(G)\) is also the smallest topology on \(\mathbb{R}\) for which the maps \(f_{\lambda}\) are continuous, for all \(\lambda\in\overline{\langle Sd(G)\rangle}\), the closed subgroup of \(\mathbb{R}_{+}^{*}\) generated by \(Sd(G)\) (for the usual topology on \(\mathbb{R}_{+}^{*}\)). Moreover, the following is easy to check:
* \(G\) is Kac \(\Leftrightarrow Sd(G)=\{1\}\Leftrightarrow\tau(G)\) is the trivial topology. If \(Sd(G)\neq\{1\}\) then \((\mathbb{R},\tau(G))\) is a metrizable topological group so the topology \(\tau(G)\) is completely characterized by the knowledge of which sequences are converging to zero.
* \(\overline{\langle Sd(G)\rangle}=\mathbb{R}_{+}^{*}\) if and only if \(\tau(G)\) is the usual topology on \(\mathbb{R}\).
Recall that a von Neumann algebra is diffuse when it has no non-zero minimal projection. We will use the following simple lemma. While the arguments for its proof are already present in the literature, the precise statement does not seem to appear anywhere.
**Lemma 2.4**.: _Let \(G\) be a compact quantum group. The following are equivalent._
1. \(\mathrm{Irr}(G)\) _is infinite._
2. \(\mathrm{L}^{\infty}(G)\) _is diffuse._
3. \(C(G)\) _is infinite-dimensional._
Proof.: The implication \((2)\Rightarrow(1)\) is obvious. Let us show \((1)\Rightarrow(2)\). By the general theory, a von Neumann algebra is diffuse if and only if it has no direct summand of the form \(\mathcal{B}(H)\). When \(H\) is finite-dimensional, we may apply [1, Theorem 3.4] (since the action of \(G\) on \(\mathrm{L}^{\infty}(G)\)
given by the comultiplication is ergodic) to deduce that \(\mathcal{B}(H)\) is not a direct summand of \(\mathrm{L}^{\infty}(G)\) whenever \(\mathrm{Irr}(G)\) is infinite. When \(H\) is infinite-dimensional, we may apply [10, Theorem 6.1] to deduce that \(\mathcal{B}(H)\) is not a direct summand of \(\mathrm{L}^{\infty}(G)\) whenever \(\mathrm{Irr}(G)\) is infinite. The equivalence between (1) and (3) is clear.
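For instance, applying the lemma to \(G=\widehat{\Gamma}\), the dual of a discrete group \(\Gamma\), for which \(\mathrm{Irr}(G)=\Gamma\) and \(\mathrm{L}^{\infty}(G)=\mathrm{L}(\Gamma)\), one recovers the well-known fact that \(\mathrm{L}(\Gamma)\) is diffuse if and only if \(\Gamma\) is infinite.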
Let us now recall the notion of a dual quantum subgroup. We are grateful to Kenny De Commer for explaining to us the main argument of the proof of the next Proposition and to Makoto Yamashita for showing to us the reference [10]. Recall that \(C_{\bullet}(G)\) denotes either the reduced or the maximal C*-algebra of \(G\).
**Proposition 2.5**.: _Let \(G\) and \(H\) be CQG. The following data are equivalent._
* \(\iota\,:\,C(H)\to C(G)\) _is a faithful unital_ \(*\)_-homomorphism intertwining the comultiplications._
* \(\iota\,:\,\mathrm{Pol}(H)\to\mathrm{Pol}(G)\) _is a faithful unital_ \(*\)_-homomorphism intertwining the comultiplications._
* \(\iota\,:\,C_{r}(H)\to C_{r}(G)\) _is a faithful unital_ \(*\)_-homomorphism intertwining the comultiplications._
_If one of these equivalent conditions is satisfied, we view \(\mathrm{Pol}(H)\subset\mathrm{Pol}(G)\), \(C(H)\subset C(G)\) and \(C_{r}(H)\subset C_{r}(G)\). Then, the unique linear map \(E\,:\,\mathrm{Pol}(G)\to\mathrm{Pol}(H)\) such that_
\[(\mathrm{id}\otimes E)(u^{x})=\left\{\begin{array}{ll}u^{x}&\text{if}\ \ \ x\in \mathrm{Irr}(H),\\ 0&\text{if}\ \ x\in\mathrm{Irr}(G)\setminus\mathrm{Irr}(H).\end{array}\right.\]
_has a unique ucp extension to a map \(E_{\bullet}\,:\,C_{\bullet}(G)\to C_{\bullet}(H)\) which is a Haar-state-preserving conditional expectation onto the subalgebra \(C_{\bullet}(H)\subset C_{\bullet}(G)\). At the reduced level, \(E_{r}\) is faithful and extends to a Haar-state preserving normal faithful conditional expectation \(\mathrm{L}^{\infty}(G)\to\mathrm{L}^{\infty}(H)\)._
Proof.: If \(\iota\,:\,C(H)\to C(G)\) is a faithful unital \(*\)-homomorphism intertwining the comultiplications then it is clear that its restriction to \(\mathrm{Pol}(H)\) has image in \(\mathrm{Pol}(G)\) and is still faithful. If now \(\iota\,:\,\mathrm{Pol}(H)\to\mathrm{Pol}(G)\) is defined at the algebraic level then, since it is faithful and intertwines the comultiplications, it also intertwines the Haar states and so extends to a faithful unital \(*\)-homomorphism \(\iota\,:\,C_{r}(H)\to C_{r}(G)\) which is easily seen to intertwine the comultiplications. It is proved in [20] that \(E\) extends to a faithful and Haar state preserving conditional expectation at the reduced level as well as at the von Neumann level (which is moreover normal). Also, if \(\iota\,:\,C_{r}(H)\to C_{r}(G)\) is defined at the reduced level, its restriction to \(\mathrm{Pol}(H)\) satisfies the second condition. Hence, it suffices to check that if \(\mathrm{Pol}(H)\subset\mathrm{Pol}(G)\) is a unital \(*\)-subalgebra with the inclusion intertwining the comultiplications then, the canonical extension \(\iota\,:\,C(H)\to C(G)\) of the inclusion, which obviously also intertwines the comultiplications, is still faithful. Note that it suffices to show that \(E\) extends to a ucp map \(E\,:\,C(G)\to C(H)\). Indeed, if we have such an extension then \(E\) has norm \(1\) and \(E\circ\iota=\mathrm{id}_{C(H)}\), so for all \(a\in Pol(H)\) one has \(\|a\|_{C(H)}=\|E(\iota(a))\|_{C(H)}\leq\|\iota(a)\|_{C(G)}\leq\|a\|_{C(H)}\). Then, \(\iota\) is an isometry hence faithful. The fact that \(E\) extends to a ucp map is proved in [10, Theorem 3.1]. It is clear that the ucp extension preserves the Haar states since it already does at the algebraic level (by definition of \(E\)). Now, viewing \(C(H)\subset C(G)\), \(E\) is ucp and is the identity on \(C(H)\) hence, it is a conditional expectation onto \(C(H)\).
If one of the above equivalent conditions is satisfied, we say that \(H\) is a _dual quantum subgroup of_ \(G\) and we will view \(\mathrm{Pol}(H)\subset\mathrm{Pol}(G)\), \(C(H)\subset C(G)\), \(C_{r}(H)\subset C_{r}(G)\) as well as \(\mathrm{L}^{\infty}(H)\subset\mathrm{L}^{\infty}(G)\). Let us note that the ucp extension of \(E\) at the maximal level is not, in general, faithful and not even GNS-faithful (meaning that the GNS representation morphism may be non injective). We will usually denote \(E\) at the algebraic, full, reduced and von Neumann algebraic level by the same symbol \(E_{H}\).
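In the group case this expectation is the familiar one (we record it for orientation): for a subgroup \(\Lambda<\Gamma\) of a discrete group and the dual quantum subgroup \(C^{*}(\Lambda)\subset C^{*}(\Gamma)\), the map \(E_{H}\) is determined by

\[E_{H}(\lambda_{g})=\left\{\begin{array}{ll}\lambda_{g}&\text{if}\ \ g\in\Lambda,\\ 0&\text{if}\ \ g\in\Gamma\setminus\Lambda.\end{array}\right.\]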
**Remark 2.6**.: Let \(C(H)\subset C(G)\) be a dual quantum subgroup and define \(\mathrm{Pol}(G)^{\circ}:=\{a\in\mathrm{Pol}(G)\,:\,E_{H}(a)=0\}\) and \(C_{\bullet}(G)^{\circ}:=\{a\in C_{\bullet}(G)\,:\,E_{\bullet}(a)=0\}\). Then \(\mathrm{Pol}(G)^{\circ}\) is the linear span of coefficients of irreducible representations \(x\in\mathrm{Irr}(G)\setminus\mathrm{Irr}(H)\), \(C_{\bullet}(G)^{\circ}\) is the closure in \(C_{\bullet}(G)\) of \(\mathrm{Pol}(G)^{\circ}\) and \(\Delta(\mathrm{Pol}(G)^{\circ})\subset\mathrm{Pol}(G)^{\circ}\otimes\mathrm{Pol}(G)^{\circ}\). All these statements easily follow from the properties of \(E_{H}\) stated in the previous proposition.
Let us recall that if \(\mathrm{L}^{\infty}(H)\subset\mathrm{L}^{\infty}(G)\) is a dual compact quantum subgroup then \(\sigma_{t}^{G}|_{\mathrm{L}^{\infty}(H)}=\sigma_{t}^{H}\) and \(\tau_{t}^{G}|_{\mathrm{L}^{\infty}(H)}=\tau_{t}^{H}\) for all \(t\in\mathbb{R}\).
**Definition 2.7**.: Following Vergnioux [10], given a dual quantum subgroup \(C(H)\subset C(G)\), we introduce an equivalence relation \(\sim_{H}\) on \(\mathrm{Irr}(G)\) by defining \(x\sim_{H}y\Leftrightarrow\exists s\in\mathrm{Irr}(H)\), \(\mathrm{Mor}(s,\overline{x}\otimes y)\neq\{0\}\). Note in particular that \(x\nsim_{H}y\Leftrightarrow(\mathrm{id}\otimes E_{H})(u^{\overline{x}} \otimes u^{y})=0\). We define the _index of \(H\) in \(G\)_ by the number of equivalence classes \([G:H]:=|\mathrm{Irr}(G)/\sim_{H}|\).
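In the group case this recovers the usual index (a routine verification): for \(G=\widehat{\Gamma}\) and \(H=\widehat{\Lambda}\) with \(\Lambda<\Gamma\), one has \(\mathrm{Irr}(G)=\Gamma\) and \(\overline{x}\otimes y=x^{-1}y\), so that \(x\sim_{H}y\Leftrightarrow x^{-1}y\in\Lambda\); the equivalence classes are the left cosets of \(\Lambda\) and \([G:H]=[\Gamma:\Lambda]\).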
### Free product quantum groups
We now recall some well known results about free product quantum groups. Given two compact quantum groups \(G_{1}\) and \(G_{2}\) the universal property of the C*-algebra given by the full free product \(C(G):=C(G_{1})*C(G_{2})\) allows to define the unique unital \(*\)-homomorphism \(\Delta:C(G)\to C(G)\otimes C(G)\) such that \(\Delta|_{C(G_{k})}=\Delta_{G_{k}}\) for \(k=1,2\). It is easy to check that \(G:=(C(G),\Delta)\) is a CQG with maximal C*-algebra \(C(G)\), reduced C*-algebra given by the reduced free product with respect to the Haar states \(C_{r}(G)=(C_{r}(G_{1}),h_{1})*(C_{r}(G_{2}),h_{2})\), and \(\mathrm{L}^{\infty}(G)=(\mathrm{L}^{\infty}(G_{1}),h_{1})*(\mathrm{L}^{\infty }(G_{2}),h_{2})\). Moreover the Haar state on \(C_{r}(G)\) is the free product state \(h=h_{1}*h_{2}\).
We collect below some important remarks about free products. Most of them are well known to specialists. Since we could not find any explicit statements in the literature, we include a complete proof.
**Proposition 2.8**.: _Let \(G_{1},G_{2}\) be non-trivial CQG. The scaling group of \(G:=G_{1}*G_{2}\) is the free product \(\tau_{t}^{G}=\tau_{t}^{G_{1}}*\tau_{t}^{G_{2}}\) and Vaes' \(T\)-invariant of \(G_{1}*G_{2}\) is given by:_
\[T(G_{1}*G_{2})=\{t\in\mathbb{R}\,:\,\tau_{t}^{G_{1}}=\tau_{t}^{G_{2}}=\mathrm{ id}\}.\]
_The following are equivalent._
1. \(G_{1}*G_{2}\) _is co-amenable._
2. \(|\mathrm{Irr}(G_{1})|=2=|\mathrm{Irr}(G_{2})|\)_._
3. \(G_{1}\simeq G_{2}\simeq\mathbb{Z}/2\mathbb{Z}\)_._
4. \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) _is amenable._
_Moreover, if one of the previous equivalent conditions does not hold then \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) is a full and prime factor without any Cartan subalgebras and:_
* _If_ \(G_{1}\) _and_ \(G_{2}\) _are Kac then_ \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) _is a type_ \(\mathrm{II}_{1}\) _factor._
* _If_ \(G_{1}\) _or_ \(G_{2}\) _is not Kac then_ \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) _is a type_ \(\mathrm{III}_{\lambda}\) _factor with_ \(\lambda\neq 0\) _and its Connes'_ \(T\)_-invariant is given by_ \(\{t\in\mathbb{R}\,:\,\sigma_{t}^{h_{1}}=\mathrm{id}=\sigma_{t}^{h_{2}}\}\)_._
* \(Sd(\mathrm{L}^{\infty}(G_{1}*G_{2}))=\langle Sd(G_{1}),Sd(G_{2})\rangle\) _and_ \(\tau(\mathrm{L}^{\infty}(G_{1}*G_{2}))=\langle\tau(G_{1}),\tau(G_{2})\rangle\)_, where_ \(\langle\,\cdot\,\rangle\) _means either the group generated by or the topology generated by._
Proof.: Since \(h=h_{1}*h_{2}\), we have \(\sigma_{t}=\sigma_{t}^{G_{1}}*\sigma_{t}^{G_{2}}\), where \(\sigma_{t}\) denotes the modular group of \(h\). Since \(\tau_{t}^{G_{k}}\) is \(h_{k}\)-invariant, the free product \(\tau_{t}:=\tau_{t}^{G_{1}}*\tau_{t}^{G_{2}}\) makes sense and it defines a one parameter group of \(\mathrm{L}^{\infty}(G_{1}*G_{2})\). To show that it is the scaling group, we only have to check that \(\Delta\circ\sigma_{t}=(\tau_{t}\otimes\sigma_{t})\circ\Delta\), which is clear. Let \(T^{\prime}:=\{t\in\mathbb{R}\,:\,\tau_{t}^{G_{1}}=\tau_{t}^{G_{2}}=\mathrm{ id}\}\). It is clear that \(T^{\prime}\subseteq T(G_{1}*G_{2})\). Let \(t\in T(G_{1}*G_{2})\) so that there exists \(u\in\mathrm{L}^{\infty}(G_{1}*G_{2})\) a unitary such that \(\Delta(u)=u\otimes u\) and \(\tau_{t}=\mathrm{Ad}(u)\). It follows that \(u\) is a dimension \(1\) unitary representation of \(G_{1}*G_{2}\), hence irreducible. If \(u\) is non-trivial, it follows from the classification of irreducible representations of \(G_{1}*G_{2}\)[23] that \(u\) is a product of non-trivial dimension \(1\) unitary representations alternating from \(\mathrm{Irr}(G_{1})\) and \(\mathrm{Irr}(G_{2})\) i.e. \(u=u_{1}u_{2}\ldots u_{n}\), where \(u_{k}\) is a unitary in \(\mathrm{L}^{\infty}(G_{i_{k}})\) with \(\Delta_{G_{i_{k}}}(u_{k})=u_{k}\otimes u_{k}\), \(h_{k}(u_{k})=0\) and \(i_{k}\neq i_{k+1}\) for all \(k\). Let \(l\in\{1,2\}\setminus\{i_{n}\}\) and note that, since \(G_{l}\) is non-trivial, there exists a non zero \(x\in\mathrm{L}^{\infty}(G_{l})\) such that \(h_{l}(x)=0\). Then \(uxu^{*}\in\mathrm{L}^{\infty}(G_{1})*\mathrm{L}^{\infty}(G_{2})\) is a reduced operator so \(E_{l}(uxu^{*})=0\), where \(E_{l}:\,\mathrm{L}^{\infty}(G_{1})*\mathrm{L}^{\infty}(G_{2})\to\mathrm{L}^{ \infty}(G_{l})\) denotes the canonical Haar-state-preserving normal and faithful conditional expectation. However, since \(\tau_{t}(x)=\tau_{t}^{G_{l}}(x)\in\mathrm{L}^{\infty}(G_{l})\) one has \(\tau_{t}(x)=E_{l}(\tau_{t}(x))=E_{l}(uxu^{*})=0\) hence \(x=0\), leading to a contradiction. It follows that such a \(u\) is always trivial and \(T(G_{1}*G_{2})\subseteq T^{\prime}\).
\((3)\Rightarrow(2)\Rightarrow(1)\Rightarrow(4)\) are obvious. Also \((2)\Rightarrow(3)\) is easy and well known. Let us repeat however the argument here for the convenience of the reader. Let \(G\) be a CQG satisfying \(|\mathrm{Irr}(G)|=2\) and let \(u\) be the unique, up to unitary equivalence, non-trivial irreducible representation of \(G\) and write \(1\) for the trivial representation. Since \(u\) is non-trivial, \(\overline{u}\) also is hence \(\overline{u}\simeq u\). It follows that \(\dim(1,u\otimes u)=1\). Hence, \(u\otimes u=1\oplus du\), where \(d=\dim(u,u\otimes u)\in\mathbb{N}\). Let us denote by \(N\in\mathbb{N}^{*}\) the dimension of \(u\) so that we have \(N^{2}=1+dN\) hence \(1\equiv 0\pmod{N}\) which implies that \(N=1\) and then \(d=0\). Since \(u\) is of dimension \(1\), \(u\in C(G)\) is a unitary such that \(\Delta(u)=u\otimes u\), \(u=u^{*}\) (hence \(u^{2}=1\)) and \(C(G)\) is generated by \(u\) so \(G=\mathbb{Z}/2\mathbb{Z}\).
Suppose that (2) does not hold so that \(\dim(\mathrm{L}^{\infty}(G_{1}))+\dim(\mathrm{L}^{\infty}(G_{2}))\geq 5\). It follows from [13, Theorem 4.1] that there exists a central projection \(z\) in \(\mathrm{L}^{\infty}(G_{1}*G_{2})=(\mathrm{L}^{\infty}(G_{1}),h_{1})*(\mathrm{L}^{\infty}(G_{2}),h_{2})\) such that \(z\mathrm{L}^{\infty}(G_{1}*G_{2})\) is either a full factor of type II\({}_{1}\) or a full factor of type III\({}_{\lambda}\), \(\lambda\neq 0\), with \(T\)-invariant given by \(\{t\in\mathbb{R}\,:\,\sigma_{t}^{h_{1}}=\sigma_{t}^{h_{2}}\}\), and \((1-z)\mathrm{L}^{\infty}(G_{1}*G_{2})\) is a direct sum of matrix algebras. Hence, \(z\mathrm{L}^{\infty}(G_{1}*G_{2})\) is non-amenable (since it is full and not of type I) so \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) is non-amenable as well, which shows \((4)\Rightarrow(2)\). Moreover, we know from [14] that the set \(\mathrm{Irr}(G_{1}*G_{2})\) is infinite hence, by Lemma 2.4, \(\mathrm{L}^{\infty}(G_{1}*G_{2})\) is diffuse so \(z=1\). Both the primeness and the absence of Cartan subalgebras now follow from [13, Corollary 4.3]. Finally, the \(Sd\) and \(\tau\) invariants are computed in [13, Corollary 2.3] and [13, Theorem 3.2] respectively (recall that our von Neumann algebras are supposed to have separable preduals and that the Haar states on CQG are all almost periodic).
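As an illustration (an example we include for concreteness, using \(Sd(SU_{q}(2))=q^{2\mathbb{Z}}\) as in the illustration of Section 2.3): for \(G_{1}=G_{2}=SU_{q}(2)\) with \(0<q<1\), the von Neumann algebra \(\mathrm{L}^{\infty}(SU_{q}(2)*SU_{q}(2))\) is a full prime factor of type \(\mathrm{III}_{\lambda}\), \(\lambda\neq 0\), with

\[Sd(\mathrm{L}^{\infty}(SU_{q}(2)*SU_{q}(2)))=\langle q^{2\mathbb{Z}},q^{2\mathbb{Z}}\rangle=q^{2\mathbb{Z}};\]

since the closure of \(Sd\) is the \(S\)-invariant (Section 2.2), this forces \(\lambda=q^{2}\).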
**Remark 2.9**.: It is known that, for \(G\) a CQG of Kac type, the co-amenability of \(G\) is equivalent to the amenability of \(\mathrm{L}^{\infty}(G)\) [12]. However, the equivalence for general CQG is open. Proposition 2.8 shows that, in the class of CQG which are non-trivial free products, the equivalence between co-amenability and amenability of the von Neumann algebra holds.
Let us now recall the amalgamated free product construction [14, 15]. Let \(G_{1},G_{2}\) be two CQG and \(C(H)\subset C(G_{k})\) a dual quantum subgroup of both \(G_{1},G_{2}\). Let \(E_{k}\,:\,C_{r}(G_{k})\to C_{r}(H)\) be the faithful conditional expectation (CE). The amalgamated free product is introduced in [14] and its Haar state and reduced C*-algebra are described in [15]. Following [14], let us define \(C(G):=C(G_{1})\underset{C(H)}{*}C(G_{2})\) the full amalgamated free product. By universal property there exists a unique unital \(*\)-homomorphism \(\Delta:C(G)\to C(G)\otimes C(G)\) such that \(\Delta|_{C(G_{k})}=\Delta_{G_{k}}\) for \(k=1,2\) and it is easy to check [14] that the pair \((C(G),\Delta)\) is a compact quantum group, denoted by \(G=G_{1}\underset{H}{*}G_{2}\). It is shown in [15] that the reduced C*-algebra is the reduced amalgamated free product with respect to the CE \(E_{k}\), \(C_{r}(G)=(C_{r}(G_{1}),E_{1})\underset{C_{r}(H)}{*}(C_{r}(G_{2}),E_{2})\), and that the Haar state of \(G\) is the free product state \(h_{G_{1}}*h_{G_{2}}\). To study amalgamated free products further, we will need the following lemma; let us first introduce some terminology. A unitary representation \(u\) of a CQG \(G\) is called a _Haar representation_ if, for all \(k\in\mathbb{Z}^{*}\), one has \(\mathrm{Mor}(1,u^{\otimes k})=\{0\}\), where \(1\) denotes the trivial representation and, for \(k\geq 1\), we define \(u^{\otimes-k}:=\overline{u}^{\otimes k}\). Two unitary representations \(u_{1},u_{2}\) of \(G\) are called _free_ if, for all \(l\geq 1\), any \((i_{1},\ldots,i_{l})\in\{1,2\}^{l}\) such that \(i_{s}\neq i_{s+1}\) for all \(s\) and any \(k_{1},\ldots,k_{l}\in\mathbb{Z}^{*}\), one has \(\mathrm{Mor}(1,u_{i_{1}}^{\otimes k_{1}}\otimes u_{i_{2}}^{\otimes k_{2}}\otimes\ldots\otimes u_{i_{l}}^{\otimes k_{l}})=\{0\}\).
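In the group case these notions are transparent (a routine translation of the definitions): for \(G=\widehat{\Gamma}\), a one-dimensional representation given by \(g\in\Gamma\) satisfies \(\mathrm{Mor}(1,g^{\otimes k})\neq\{0\}\Leftrightarrow g^{k}=e\), so \(g\) is a Haar representation if and only if \(g\) has infinite order, and two such representations \(g_{1},g_{2}\) are free precisely when no non-trivial alternating word \(g_{i_{1}}^{k_{1}}\cdots g_{i_{l}}^{k_{l}}\) \((k_{s}\in\mathbb{Z}^{*},\,i_{s}\neq i_{s+1})\) equals \(e\), i.e. when \(g_{1},g_{2}\) generate a copy of \(\mathbb{F}_{2}\) in \(\Gamma\).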
**Lemma 2.10**.: _Let \(G\) be a compact quantum group admitting two Haar representations which are free. Then there exists \(N\geq 1\) such that \(\mathrm{L}(\mathbb{F}_{2})\subset M_{N}(\mathbb{C})\otimes\mathrm{L}^{\infty}(G)\) (with a state preserving inclusion). In particular, if \(G\) is Kac then \(\mathrm{L}^{\infty}(G)\) is not amenable._
Proof.: Recall that, for \(u\in\mathcal{B}(H)\otimes\mathrm{L}^{\infty}(G)\) a finite dimensional unitary representation, its contragredient unitary representation \(\overline{u}\in\mathcal{B}(\overline{H})\otimes\mathrm{L}^{\infty}(G)\) satisfies the following: there exists an invertible operator \(Q\in\mathcal{B}(\overline{H})\) and an orthonormal basis \((e_{i})_{i}\) of \(H\) such that, writing \(u=\sum_{ij}e_{ij}\otimes u_{ij}\), where \((e_{ij})_{ij}\) are the matrix units associated to \((e_{i})_{i}\), then \(u^{c}:=(Q\otimes 1)\overline{u}(Q^{-1}\otimes 1)=\sum_{ij}e^{\prime}_{ij}\otimes u^{*}_{ij}\), where \(e^{\prime}_{ij}\) are the matrix units associated to \((\overline{e}_{i})_{i}\). Note also that \((u^{c})^{\otimes k}:=(u^{c})_{1,k+1}(u^{c})_{2,k+1}\ldots(u^{c})_{k,k+1}=(Q^{\otimes k}\otimes 1)\overline{u}^{\otimes k}((Q^{-1})^{\otimes k}\otimes 1)\in\mathcal{B}(\overline{H}^{\otimes k})\otimes\mathrm{L}^{\infty}(G)\) for all \(k\geq 1\). Hence, if \(\mathrm{Mor}(1,\overline{u}^{\otimes k})=\{0\}\) one has \((\mathrm{id}\otimes h)((u^{c})^{\otimes k})=Q^{\otimes k}(\mathrm{id}\otimes h)(\overline{u}^{\otimes k})(Q^{-1})^{\otimes k}=0\), since \((\mathrm{id}\otimes h)(\overline{u}^{\otimes k})\in\mathcal{B}(\overline{H}^{\otimes k})\) is the orthogonal projection onto \(\mathrm{Mor}(1,\overline{u}^{\otimes k})\). It then follows
that \(h(x)=0\) for any coefficient \(x\) of \((u^{c})^{\otimes k}\). Since the coefficients of \((u^{c})^{\otimes k}\) are exactly the products of \(k\) adjoints of coefficients of \(u\), we deduce that \(h(x)=0\) for any product of \(k\) adjoints of coefficients of \(u\) whenever \(\operatorname{Mor}(1,\overline{u}^{\otimes k})=\{0\}\). We will use this remark in the rest of the proof.
Let \(u_{k}\in\mathcal{B}(H_{k})\otimes\operatorname{L}^{\infty}(G)\), \(k=1,2\), be two free Haar representations and define \(v_{1}:=(u_{1})_{13}\) and \(v_{2}:=(u_{2})_{23}\), which are both unitaries in \(\mathcal{B}(H_{1}\otimes H_{2})\otimes\operatorname{L}^{\infty}(G)\). Consider the faithful normal state \(\omega=\operatorname{Tr}\otimes h\in\left(\mathcal{B}(H_{1}\otimes H_{2})\otimes\operatorname{L}^{\infty}(G)\right)_{*}\). Let us show that \(v_{1}\) and \(v_{2}\) are two free Haar unitaries with respect to \(\omega\). Since \((\operatorname{id}\otimes h)(u_{i}^{\otimes k})\) \((i=1,2)\) is the orthogonal projection onto \(\operatorname{Mor}(1,u_{i}^{\otimes k})=\{0\}\), for \(k\in\mathbb{Z}^{*}\), it follows that \(h(x)=0\) whenever \(x\) is a product of \(|k|\) coefficients of \(u_{i}\) or a product of \(|k|\) adjoints of coefficients of \(u_{i}\), for all \(k\in\mathbb{Z}^{*}\). Let \(\mathcal{C}_{i}\) be the linear span of products of coefficients of \(u_{i}\) and of products of adjoints of coefficients of \(u_{i}\) so that \(\omega(\mathcal{C}_{i})=\{0\}\). Since \(v_{1}^{k}=\sum_{ij}e_{ij}\otimes 1\otimes x_{ij}\) and \(v_{2}^{k}=\sum_{ij}1\otimes e_{ij}\otimes y_{ij}\), where \(x_{ij}\in\mathcal{C}_{1}\), \(y_{ij}\in\mathcal{C}_{2}\), for all \(k\in\mathbb{Z}^{*}\), it follows that \(\omega(v_{i}^{k})=(\operatorname{Tr}\otimes\operatorname{id})(\operatorname{id}\otimes h)(v_{i}^{k})=0\) for all \(k\in\mathbb{Z}^{*}\), \(i\in\{1,2\}\). Hence, both \(v_{1}\) and \(v_{2}\) are Haar unitaries with respect to \(\omega\). Since \(u_{1}\) and \(u_{2}\) are free representations, the same argument as before shows that \(h(\mathcal{C})=\{0\}\), where \(\mathcal{C}\) is the linear span of operators \(x\in\operatorname{L}^{\infty}(G)\) of the form \(x=y_{1}\ldots y_{l}\), \(l\geq 1\), where \(y_{s}\) is a product of \(|k_{s}|\) coefficients of \(u_{i_{s}}\) if \(k_{s}\geq 1\) or adjoints of coefficients of \(u_{i_{s}}\) if \(k_{s}\leq-1\), with \(k_{s}\in\mathbb{Z}^{*}\) and \(i_{s}\in\{1,2\}\) such that \(i_{s}\neq i_{s+1}\) for all \(s\). Let now \(l\geq 1\) and \(i_{1},\ldots,i_{l}\in\{1,2\}\) with \(i_{s}\neq i_{s+1}\) and \(k_{1},\ldots,k_{l}\in\mathbb{Z}^{*}\). We can write \(v_{i_{1}}^{k_{1}}\ldots v_{i_{l}}^{k_{l}}=\sum_{i,j,k,l}e_{ij}\otimes e_{kl}\otimes x_{ijkl}\), where \(x_{ijkl}\in\mathcal{C}\). It follows that \(\omega(v_{i_{1}}^{k_{1}}\ldots v_{i_{l}}^{k_{l}})=(\operatorname{Tr}\otimes\operatorname{id})(\operatorname{id}\otimes h)(v_{i_{1}}^{k_{1}}\ldots v_{i_{l}}^{k_{l}})=0\). Hence \(v_{1}\) and \(v_{2}\) are free with respect to \(\omega\). It follows that there exists a unique normal faithful \(*\)-homomorphism \(\operatorname{L}(\mathbb{F}_{2})\to\mathcal{B}(H_{1}\otimes H_{2})\otimes\operatorname{L}^{\infty}(G)\) which maps one generator of \(\mathbb{F}_{2}\) onto \(v_{1}\) and the other onto \(v_{2}\). Hence, if \(G\) is Kac, \(\mathcal{B}(H_{1}\otimes H_{2})\otimes\operatorname{L}^{\infty}(G)\) is not amenable, and this implies that \(\operatorname{L}^{\infty}(G)\) is not amenable either.
**Definition 2.11**.: A dual quantum subgroup \(C(H)\subset C(G)\) is called _proper_ if there exists an irreducible representation \(a\) of \(G\) such that \((\operatorname{id}\otimes E_{H})(a)=0\) and for any \(s\in\operatorname{Irr}(H)\), if \(s\subset\overline{a}\otimes a\) then \(s=1\).
Note that if \(C(H)\subset C(G)\) is proper then \([G:H]\geq 2\).
**Remark 2.12**.: In the case of duals of discrete groups, the notion of proper dual quantum subgroup coincides with the usual notion of a proper subgroup. However, for quantum groups, there are examples of non-proper dual quantum subgroups of index \(2\). A nice example is the dual quantum subgroup \(\operatorname{Aut}^{+}(M_{N}(\mathbb{C}))\) of \(O_{N}^{+}\). Indeed, the representation category of \(O_{N}^{+}\) is the category such that all the irreducible representations are self-adjoint, indexed by \(\mathbb{N}\), \((u_{n})_{n\in\mathbb{N}}\), with \(u_{0}=\varepsilon\) and fusion rules \(u_{n}\otimes u_{m}=u_{|n-m|}\oplus u_{|n-m|+2}\oplus\cdots\oplus u_{n+m}\). Taking the full subcategory of \(\operatorname{Rep}(\operatorname{O}_{N}^{+})\) generated by the irreducible representations \((v_{n}=u_{2n})\), we get a category isomorphic to \(\operatorname{Rep}(\operatorname{Aut}^{+}(\operatorname{M}_{N}(\mathbb{C})))\), with irreducible representations indexed by \(\mathbb{N}\) and fusion rules \(v_{n}\otimes v_{m}=u_{2n}\otimes u_{2m}=v_{|n-m|}\oplus v_{|n-m|+1}\oplus\cdots\oplus v_{n+m}\). With the fusion rules, we see that it is indeed a full subcategory and that the index of the corresponding subgroup is \(2\). However, taking any irreducible \(u_{n}\) with \(n\) odd, we have that \(v_{1}=u_{2}\subset\overline{u_{n}}\otimes u_{n}=u_{n}\otimes u_{n}\), so the dual quantum subgroup cannot be proper.
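By contrast, the verification in the group case is immediate: if \(\Lambda<\Gamma\) is a proper subgroup of a discrete group, any \(g\in\Gamma\setminus\Lambda\) gives an irreducible representation with \(E_{\widehat{\Lambda}}(g)=0\) and \(\overline{g}\otimes g=g^{-1}g=e\), so the only \(s\in\operatorname{Irr}(\widehat{\Lambda})\) with \(s\subset\overline{g}\otimes g\) is the trivial one, and \(C^{*}(\Lambda)\subset C^{*}(\Gamma)\) is proper in the sense of Definition 2.11.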
**Proposition 2.13**.: _Let \(G=G_{1}*_{H}G_{2}\) be an amalgamated free product with \(H\) a proper dual quantum subgroup of \(G_{1}\) and \([G_{2}:H]\geq 3\). Then there exists \(N\) such that \(\operatorname{L}(\mathbb{F}_{2})\subset M_{N}(\mathbb{C})\otimes\operatorname{L}^{\infty}(G)\)._
Proof.: There exists an irreducible representation \(a\) of \(G_{1}\) satisfying the conditions of Definition 2.11 and there exist two irreducible representations \(b_{1},b_{2}\) of \(G_{2}\) such that \((\operatorname{id}\otimes E_{H})(b_{i})=0\) and \((\operatorname{id}\otimes E_{H})(\overline{b_{1}}\otimes b_{2})=0\) (so we also have \((\operatorname{id}\otimes E_{H})(\overline{b_{2}}\otimes b_{1})=0\)). Define \(u_{i}=a\otimes b_{i}\otimes\overline{a}\otimes\overline{b}_{i}\) and let \(k\geq 1\). Since \(\chi(a)\in\operatorname{L}^{\infty}(G_{1})^{\circ},\chi(b_{i})\in\operatorname{L}^{\infty}(G_{2})^{\circ}\), \(\chi(u_{i}^{\otimes k})=\chi(u_{i})^{k}=(\chi(a)\chi(b_{i})\chi(a)^{*}\chi(b_{i})^{*})^{k}\) is a reduced operator in the amalgamated free product so:
\[\dim\left(\operatorname{Mor}(1,u_{i}^{\otimes k})\right)=h(\chi(u_{i}^{\otimes k }))=0.\]
Also, \(\chi(\overline{u_{i}}^{\otimes k})=(\chi(b_{i})\chi(a)\chi(b_{i})^{*}\chi(a)^{*} )^{k}\) is reduced so \(\dim\left(\operatorname{Mor}(1,\overline{u_{i}}^{\otimes k})\right)=h(\chi( \overline{u_{i}}^{\otimes k}))=0\). Hence, \(u_{i}\) is a Haar representation of \(G\) for \(i\in\{1,2\}\). Let us show that \(u_{1}\) and \(u_{2}\) are free. It suffices to show that, for any \(l\geq 1\), \((i_{1},\ldots,i_{l})\in\{1,2\}^{l}\) with \(i_{s}\neq i_{s+1}\) and \(k_{1},\ldots,k_{l}\in\mathbb{Z}^{*}\) the
operator \(x:=\chi(u_{i_{1}}^{\otimes k_{1}}\otimes\ldots\otimes u_{i_{l}}^{\otimes k_{l}})\) is in the linear span of reduced operators. The case \(l=1\) is clear by the first part of the proof and the general case can be shown by induction by using the same arguments used in the case \(l=2\) that we present below. Let \(x=\chi(u_{i_{1}}^{\otimes k_{1}}\otimes u_{i_{2}}^{\otimes k_{2}})\). It suffices to show that \(x\) is in the linear span of reduced operator. If \(k_{1}\) and \(k_{2}\) have the same sign then \(x\) is already reduced. If \(k_{1}\geq 1\) and \(k_{2}\leq-1\) then,
\[x=\chi(u_{i_{1}})^{k_{1}}\chi(\overline{u}_{i_{2}})^{-k_{2}}=\chi(u_{i_{1}})^{k_{1}-1}\chi(a)\chi(b_{i_{1}})\chi(a)^{*}\chi(\overline{b}_{i_{1}}\otimes b_{i_{2}})\chi(a)\chi(b_{i_{2}})^{*}\chi(a)^{*}\chi(\overline{u}_{i_{2}})^{-k_{2}-1}\]
Since \((\operatorname{id}\otimes E_{H})(\overline{b}_{i_{1}}\otimes b_{i_{2}})=0\), we have \(\chi(\overline{b}_{i_{1}}\otimes b_{i_{2}})\in\operatorname{L}^{\infty}(G_{2})^{\circ}\), hence \(x\) is reduced. If \(k_{1}\leq-1\) and \(k_{2}\geq 1\) then
\[x = \chi(\overline{u}_{i_{1}})^{-k_{1}-1}\chi(b_{i_{1}})\chi(a)\chi(b _{i_{1}})^{*}\chi(\overline{a}\otimes a)\chi(b_{i_{2}})\chi(a)^{*}\chi(b_{i_{ 2}})^{*}\chi(u_{i_{2}})^{k_{2}-1}\] \[= \chi(\overline{u}_{i_{1}})^{-k_{1}-1}\chi(b_{i_{1}})\chi(a)\chi( \overline{b}_{i_{1}}\otimes b_{i_{2}})\chi(a)^{*}\chi(b_{i_{2}})^{*}\chi(u_{i _{2}})^{k_{2}-1}\] \[+\sum_{s\in\operatorname{Irr}(G_{1})\setminus\operatorname{Irr}(H ),s\subset\overline{a}\otimes a}\chi(\overline{u}_{i_{1}})^{-k_{1}-1}\chi(b_{i _{1}})\chi(a)\chi(b_{i_{1}})^{*}\chi(s)\chi(b_{i_{2}})\chi(a)^{*}\chi(b_{i_{2 }})^{*}\chi(u_{i_{2}})^{k_{2}-1},\]
The right hand side of this equality is in the linear span of reduced operators since \(\chi(\overline{b}_{i_{1}}\otimes b_{i_{2}})\in\operatorname{L}^{\infty}(G_{2} )^{\circ}\).
**Remark 2.14**.: The index condition is clearly necessary since it is already necessary in the discrete group case. Indeed, let \(\Gamma=\Gamma_{1}\mathop{*}\limits_{\Sigma}\Gamma_{2}\) be a non-trivial amalgamated free product of discrete groups (i.e. \(\Sigma\neq\Gamma_{k}\), \(k=1,2\)). It is well known and easy to check that \(\Gamma\) is amenable if and only if \(\Sigma\) is amenable and \([\Gamma_{k}:\Sigma]=2\) for all \(k\in\{1,2\}\) (actually \(\Gamma\) is then an extension of \(\Sigma\) by \(D_{\infty}=\mathbb{Z}_{2}*\mathbb{Z}_{2}\)).
**Example 2.15**.: The dual quantum subgroup \(C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\subset C(O_{2}^{+})\) is not proper, has index \(2\) and the quantum group \(G:=O_{2}^{+}\mathop{*}\limits_{\operatorname{Aut}^{+}(M_{2}(\mathbb{C}))}O_{2}^{+}\) is co-amenable. The inclusion of \(C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) in \(C(O_{2}^{+})\) is the map which sends the fundamental representation of \(\operatorname{Aut}^{+}(M_{2}(\mathbb{C}))\) onto \(v\otimes v\), where \(v\in M_{2}(\mathbb{C})\otimes C(O_{2}^{+})\) is the fundamental representation of \(O_{2}^{+}\). Hence, writing \(v_{l}\), \(l\in\mathbb{N}\), the representatives of the irreducible representations of \(O_{2}^{+}\) such that \(v_{0}=1\) and \(v_{1}=v\), \(C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) is viewed in \(C(O_{2}^{+})\) as the C*-subalgebra generated by the coefficients of the representations \(v_{l}\) for \(l\in 2\mathbb{N}\). Let \(\rho\,:\,C(O_{2}^{+})\to C^{*}(\mathbb{Z}_{2})\) be the unique unital \(*\)-homomorphism such that \((\operatorname{id}\otimes\rho)(v)=1\otimes g\), where \(g\) is the generator of \(\mathbb{Z}_{2}\). It is not difficult to check that \(\rho\) intertwines the comultiplications and,
\[(\operatorname{id}\otimes\rho)(v_{l})=\left\{\begin{array}{ll}1&\text{if}&l \in 2\mathbb{N},\\ g&\text{if}&l\in 2\mathbb{N}+1.\end{array}\right.\]
In particular, one has \(\rho(x)=\varepsilon(x)1\) for all \(x\in C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\).
It follows from the preceding discussion that, writing \(v_{1,l}\) and \(v_{2,l}\) the two copies of \(v_{l}\) in \(C(G)\), there exists a unique unital \(*\)-homomorphism \(\pi\,:\,C(G)\to C^{*}(\mathbb{Z}_{2}*\mathbb{Z}_{2})\) such that \((\operatorname{id}\otimes\pi)(v_{i,1})=1\otimes g_{i}\), for \(i=1,2\), where \(g_{1}\), \(g_{2}\) are the two copies of \(g\) in \(\mathbb{Z}_{2}*\mathbb{Z}_{2}\). In particular \(\pi\) intertwines the comultiplications, \(\pi(x)=\varepsilon(x)1\) for all \(x\in C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) and, whenever \(u\) is a representation of the form \(u=v_{i_{1},l_{1}}\otimes\ldots\otimes v_{i_{n},l_{n}}\) with \(i_{s}\neq i_{s+1}\) and \(l_{s}\in 2\mathbb{N}+1\) for all \(s\), one has \((\operatorname{id}\otimes\pi)(u)=1\otimes g_{i_{1}}\ldots g_{i_{n}}\). Let us call such a representation a reduced representation and let \(\mathcal{C}\) be the linear span of coefficients of reduced representations. By the previous computation, for all \(x\in\mathcal{C}\), \(\pi(x)\) is a linear combination of reduced words in \(\mathbb{Z}_{2}*\mathbb{Z}_{2}\), hence \(\tau\circ\pi(\mathcal{C})=\{0\}\), where \(\tau\) is the canonical tracial state on \(C^{*}(\mathbb{Z}_{2}*\mathbb{Z}_{2})\). Note also that, since any coefficient of a reduced representation \(u\) is a reduced word in the amalgamated free product \(C(G)\), one has \(E(\mathcal{C})=\{0\}\) where \(E\,:\,C(G)\to C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) is the canonical conditional expectation. Since \(C(G)\) is the closed linear span of \(\mathcal{C}\) and \(C(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) we deduce that \(\tau\circ\pi=\varepsilon\circ E\). It follows that \(\ker(\lambda_{G})\subset\ker(\pi)\). Indeed, let \(E_{r}\,:\,C_{r}(G)\to C_{r}(\operatorname{Aut}^{+}(M_{2}(\mathbb{C})))\) be the canonical faithful conditional expectation such that \(E_{r}\circ\lambda_{G}=\lambda_{\operatorname{Aut}^{+}(M_{2}(\mathbb{C}))}\circ E\) and take \(x\in\ker(\lambda_{G})\). Then, \(E_{r}(\lambda_{G}(x^{*}x))=\lambda_{\operatorname{Aut}^{+}(M_{2}(\mathbb{C}))}(E(x^{*}x))=0\). Since \(\operatorname{Aut}^{+}(M_{2}(\mathbb{C}))\) is co-amenable [1] it follows that \(E(x^{*}x)=0\) hence \(\varepsilon(E(x^{*}x))=\tau(\pi(x^{*}x))=0\). Since \(\mathbb{Z}_{2}*\mathbb{Z}_{2}\) is amenable, \(\tau\) is faithful so
\(x\in\ker(\pi)\). Hence \(\pi\prec\lambda_{G}\) and the co-amenability follows from [10, Theorem 3.11 and 3.12].
### Semi-direct product quantum group
The semi-direct product quantum group is defined and studied in [10]. Let us recall below the basic facts about this construction.
Let \(G\) be a compact quantum group and \(\Lambda\) a finite group acting on \(C(G)\) by quantum automorphisms, meaning that we have a group homomorphism \(\alpha\,:\,\Lambda\to\operatorname{Aut}(C(G))\) such that \(\Delta_{G}\circ\alpha_{g}=(\alpha_{g}\otimes\alpha_{g})\circ\Delta_{G}\) for all \(g\in\Lambda\). Define the C*-algebra \(C(G\rtimes\Lambda)=C(G)\otimes C(\Lambda)\) with the comultiplication \(\Delta\,:\,C(G\rtimes\Lambda)\to C(G\rtimes\Lambda)\otimes C(G\rtimes\Lambda)\) such that:
\[\Delta(a\otimes\delta_{r})=\sum_{s\in\Lambda}\left[(\operatorname{id}\otimes \alpha_{s})(\Delta_{G}(a))\right]_{13}(1\otimes\delta_{s}\otimes 1\otimes\delta_{s^{-1} r}).\]
In particular, the inclusion \(C(\Lambda)\subset C(G\rtimes\Lambda)\,:\,x\mapsto 1\otimes x\) preserves the comultiplications. It is shown in [10] that the pair \((C(G\rtimes\Lambda),\Delta)\) is a compact quantum group in its maximal version, the Haar measure \(h\) is given by \(h=h_{G}\otimes\operatorname{tr}\), where \(h_{G}\) is the Haar state on \(C(G)\) and \(\operatorname{tr}\) is the integration with respect to the uniform probability on \(\Lambda\) i.e. \(\operatorname{tr}(\delta_{r})=\frac{1}{|\Lambda|}\). Hence the reduced C*-algebra is \(C_{r}(G)\otimes C(\Lambda)\), the von Neumann algebra is \(\operatorname{L}^{\infty}(G)\otimes C(\Lambda)\) and the modular group \(\sigma_{t}\) of \(G\rtimes\Lambda\) is \(\sigma_{t}=\sigma_{t}^{G}\otimes\operatorname{id}\). Moreover, the canonical surjection \(\lambda\,:\,C(G\rtimes\Lambda)=C(G)\otimes C(\Lambda)\to C_{r}(G\rtimes\Lambda)=C_{r}(G)\otimes C(\Lambda)\) is \(\lambda=\lambda_{G}\otimes\operatorname{id}\), where \(\lambda_{G}\) is the canonical surjection \(C(G)\to C_{r}(G)\). The irreducible representations and the fusion rules of \(G\rtimes\Lambda\) are described in [10]. We could use the general classification of irreducible representations of \(G\rtimes\Lambda\) from [10] to deduce the one-dimensional representations of \(G\rtimes\Lambda\). However, since we only need to understand the one-dimensional representations, we prefer to include a self-contained proof in the next Lemma.
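As a sanity check on the formula defining \(\Delta\): if the action \(\alpha\) is trivial, i.e. \(\alpha_{g}=\operatorname{id}\) for all \(g\in\Lambda\), the formula reduces to
\[\Delta(a\otimes\delta_{r})=\sum_{s\in\Lambda}\left[\Delta_{G}(a)\right]_{13}(1\otimes\delta_{s}\otimes 1\otimes\delta_{s^{-1}r}),\]
which is exactly the comultiplication of the direct product \(G\times\Lambda\) on \(C(G)\otimes C(\Lambda)\), the comultiplication of \(C(\Lambda)\) being \(\delta_{r}\mapsto\sum_{s\in\Lambda}\delta_{s}\otimes\delta_{s^{-1}r}\).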
**Lemma 2.16**.: _The one-dimensional unitary representations of \(G\rtimes\Lambda\) are of the form \(w\otimes v\in C(G\rtimes\Lambda)\), where \(w\in C(G)\) and \(v\in C(\Lambda)\) are one-dimensional unitary representations of \(G\) and \(\Lambda\) respectively and \(\alpha_{r}(w)=w\) for all \(r\in\Lambda\)._
Proof.: For this proof, we will view \(C(G\rtimes\Lambda)=C(\Lambda,C(G))\) and \(C(G\rtimes\Lambda)\otimes C(G\rtimes\Lambda)=C(\Lambda\times\Lambda,C(G)\otimes C(G))\). With this identification, the comultiplication becomes \(\Delta(u)(r,s)=(\operatorname{id}\otimes\alpha_{r})(\Delta_{G}(u(rs)))\) for all \(u\in C(G\rtimes\Lambda)\) and all \(r,s\in\Lambda\), and it is then easy to check that the unitaries of the form given in the Lemma are indeed one-dimensional unitary representations of \(G\rtimes\Lambda\). Conversely, if \(u\in C(G\rtimes\Lambda)\) is a unitary such that \(\Delta(u)=u\otimes u\), then \(u(r)\in C(G)\) is a unitary for all \(r\in\Lambda\) and, for all \(r,s\in\Lambda\), \((\operatorname{id}\otimes\alpha_{r})(\Delta_{G}(u(rs)))=u(r)\otimes u(s)\). It follows that \(w:=u(1)\) is a unitary in \(C(G)\) such that \(\Delta_{G}(w)=w\otimes w\). Moreover, since \(\alpha_{r}\) intertwines the comultiplication of \(G\) one has \(\varepsilon_{G}\circ\alpha_{r}=\varepsilon_{G}\), where \(\varepsilon_{G}\) is the counit of \(G\), and,
\[(\operatorname{id}\otimes\varepsilon_{G}\circ\alpha_{r})(\Delta_{G}(u(rs)))=( \operatorname{id}\otimes\varepsilon_{G})(\Delta_{G}(u(rs)))=u(rs)=u(r) \varepsilon_{G}(u(s)).\]
It follows that \(v:=(r\mapsto v_{r})\), where \(v_{r}:=\varepsilon_{G}(u(r))\), is a one-dimensional unitary representation of \(\Lambda\) and, for all \(s\in\Lambda\), \(u(s)=wv_{s}\), i.e. \(u=w\otimes v\in C(G)\otimes C(\Lambda)=C(G\rtimes\Lambda)\). Using that \(\Delta(u)=u\otimes u\), one checks easily that \(\alpha_{r}(w)=w\) for all \(r\in\Lambda\).
We collect in the following Proposition some easy observations about \(G\rtimes\Lambda\) that are not contained in [10]. We use the terminology introduced in the Introduction before the statement of Theorem A and we denote by \(\Lambda_{cb}(G)\) the Cowling-Haagerup constant of the von Neumann algebra \(\operatorname{L}^{\infty}(G)\).
**Proposition 2.17**.: _The following holds._
1. \(G\rtimes\Lambda\) _is co-amenable if and only if_ \(G\) _is co-amenable._
2. \(G\rtimes\Lambda\) _has the Haagerup property if and only if_ \(G\) _has the Haagerup property._
3. \(\Lambda_{cb}(G\rtimes\Lambda)=\Lambda_{cb}(G)\)_._
4. _The scaling group_ \(\tau_{t}\) _of_ \(G\rtimes\Lambda\) _is the one parameter group of_ \(\operatorname{L}^{\infty}(G\rtimes\Lambda)=\operatorname{L}^{\infty}(G)\otimes C (\Lambda)\) _defined by_ \(\tau_{t}=\tau_{t}^{G}\otimes\operatorname{id}\)_, where_ \(\tau_{t}^{G}\) _is the scaling group of_ \(G\)_._
5. _The Vaes'_ \(T\)_-invariant_ \(T(G\rtimes\Lambda)\) _is:_ \[\{t\in\mathbb{R}:\exists w\in\mathcal{U}(C(G)),\tau_{t}^{G}=\operatorname{Ad}( w),\Delta_{G}(w)=w\otimes w,\,\alpha_{r}(w)=w\,\forall r\in\Lambda\}.\]
6. \(Sd(G\rtimes\Lambda)=Sd(G)\) _and_ \(\tau(G\rtimes\Lambda)=\tau(G)\)_._
Proof.: (1) directly follows from the fact that \(\lambda=\lambda_{G}\otimes\mathrm{id}\), and (2) and (3) follow from \(\mathrm{L}^{\infty}(G\rtimes\Lambda)\simeq\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{K}\), where \(K=|\Lambda|\). To prove (4), one can easily check that the one parameter group \(\tau_{t}:=\tau_{t}^{G}\otimes\mathrm{id}\) satisfies \(\Delta\circ\sigma_{t}=(\tau_{t}\otimes\sigma_{t})\circ\Delta\). To prove (6), we note that since \(h=h_{G}\otimes\mathrm{tr}\), the modular operator \(\nabla\) of \(h\) is the positive operator on \(\mathrm{L}^{2}(G)\otimes l^{2}(\Lambda)\) given by \(\nabla_{G}\otimes\mathrm{id}\); the equality \(Sd(G\rtimes\Lambda)=Sd(G)\) follows, while the equality \(\tau(G\rtimes\Lambda)=\tau(G)\) is a direct consequence of \(\sigma_{t}=\sigma_{t}^{G}\otimes\mathrm{id}\). Finally, (5) is a consequence of (4) and Lemma 2.16.
### Quantum permutation group
For \(N\in\mathbb{N}^{*}\), we denote by \(S_{N}^{+}\) the quantum permutation group on \(N\) points. We recall that \(C(S_{N}^{+})\) is the universal unital C*-algebra generated by \(N^{2}\) orthogonal projections \(u_{ij}\), \(1\leq i,j\leq N\) with the relations \(\sum_{j=1}^{N}u_{ij}=1=\sum_{j=1}^{N}u_{ji}\) for all \(1\leq i\leq N\). In particular \(u=(u_{ij})_{ij}\in M_{N}(\mathbb{C})\otimes C(S_{N}^{+})\) is a unitary. The comultiplication on \(C(S_{N}^{+})\) is defined, using the universal property of \(C(S_{N}^{+})\), by the relation \(\Delta(u_{ij})=\sum_{k=1}^{N}u_{ik}\otimes u_{kj}\) for all \(1\leq i,j\leq N\). In particular, \(u\) is a unitary representation of \(S_{N}^{+}\), called the _fundamental representation_. For \(1\leq i\leq N\) we write \(L_{i}:=\mathrm{Span}\{u_{ij}\,:\,1\leq j\leq N\}\subset\mathrm{Pol}(S_{N}^{+})\). Since the family \((u_{ij})_{1\leq j\leq N}\) is a partition of unity, the vector subspace \(L_{i}\) is actually a unital \(*\)-subalgebra of \(\mathrm{Pol}(S_{N}^{+})\) and the map \(\mathbb{C}^{N}\to L_{i}\), \(e_{j}\mapsto u_{ij}\) is a unital \(*\)-isomorphism of \(*\)-algebras. Since \(L_{i}\) is finite dimensional, we may view \(L_{i}\subset C(S_{N}^{+})\) or \(L_{i}\subset C_{r}(S_{N}^{+})\) as an abelian finite dimensional C*-subalgebra and also \(L_{i}\subset\mathrm{L}^{\infty}(S_{N}^{+})\) as an abelian finite dimensional von Neumann subalgebra. We use the same symbol \(h\) to denote the Haar state of \(S_{N}^{+}\) on \(C(S_{N}^{+})\), \(C_{r}(S_{N}^{+})\) or \(\mathrm{L}^{\infty}(S_{N}^{+})\). We also recall that \(h(u_{ij})=\frac{1}{N}\) for all \(1\leq i,j\leq N\).
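For small \(N\) this well-known definition produces nothing quantum: for \(N=2\), the relations force the fundamental representation to be of the form
\[u=\left(\begin{array}{cc}p&1-p\\ 1-p&p\end{array}\right)\]
for a single orthogonal projection \(p\), so that \(C(S_{2}^{+})\simeq\mathbb{C}^{2}\simeq C(S_{2})\) is commutative, and similarly \(S_{3}^{+}=S_{3}\). On the contrary, for \(N\geq 4\), \(C(S_{N}^{+})\) is noncommutative and infinite-dimensional; these standard facts are used repeatedly below (see the proofs of Lemma 2.20 and Proposition 2.23).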
The elementary proof of the next Proposition is left to the reader.
**Proposition 2.18**.: _Let \(1\leq i\leq N\). The following holds_
1. _The unique trace preserving conditional expectation_ \(E_{i}\,:\,\mathrm{L}^{\infty}(S_{N}^{+})\to L_{i}\) _satisfies :_ \[E_{i}(x)=N\sum_{j=1}^{N}h(xu_{ij})u_{ij}\quad\text{for all }x\in\mathrm{L}^{\infty}(S_{N}^{+}).\]
2. _The map_ \(x\mapsto N\sum_{j=1}^{N}h(xu_{ij})u_{ij}\) _is a conditional expectation from_ \(C(S_{N}^{+})\) _(resp._ \(C_{r}(S_{N}^{+})\)_) onto_ \(L_{i}\)_. All these maps will be denoted by_ \(E_{i}\)_._
3. _The conditional expectation_ \(E_{i}\,:\,C_{r}(S_{N}^{+})\to L_{i}\) _is faithful._
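As an elementary sanity check of these formulas, note that for \(x=u_{ij}\in L_{i}\), using \(u_{ij}u_{it}=\delta_{jt}u_{ij}\) (the projections \((u_{it})_{1\leq t\leq N}\) sum to \(1\), hence are pairwise orthogonal) and \(h(u_{ij})=\frac{1}{N}\), one gets
\[E_{i}(u_{ij})=N\sum_{t=1}^{N}h(u_{ij}u_{it})u_{it}=Nh(u_{ij})u_{ij}=u_{ij},\]
so \(E_{i}\) indeed restricts to the identity on \(L_{i}\).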
We collect below some elementary computations concerning the conditional expectation onto \(L_{i}\).
**Lemma 2.19**.: _Let \(a\in C(S_{N}^{+})\) and \(1\leq i\leq N\) be such that \(E_{i}(a)=0\). Then,_
1. _For any_ \(1\leq s,j\leq N\)_, we have_ \((h\otimes id)(\Delta(a)(u_{is}\otimes u_{sj}))=0\)_._
2. _For all_ \(1\leq s\leq N\)_, we have_ \(\sum_{(1),(2)}h\left((a)_{(1)}u_{is}\right)(a)_{(2)}=0\)_._
Proof.: (1). We show that \(\forall\omega\in C(S_{N}^{+})^{*}\), \(1\leq s,j\leq N\), \((h\otimes\omega)(\Delta(a)(u_{is}\otimes u_{sj}))=0\). Let \(\omega,s,j\) be as above, and define \(\mu=\omega(\,\cdot\,u_{sj})\in C(S_{N}^{+})^{*}\). Using the Sweedler notation,
\[\Delta(au_{ij})\left(1\otimes u_{sj}\right)=\sum_{t=1}^{N}a_{(1)}u_{it} \otimes a_{(2)}u_{tj}u_{sj}=a_{(1)}u_{is}\otimes a_{(2)}u_{sj}=\Delta(a)\left( u_{is}\otimes u_{sj}\right).\]
Applying \((h\otimes\omega)\), we get
\[(h\otimes\omega)\left(\Delta(a)(u_{is}\otimes u_{sj})\right)=(h\otimes\omega) \left(\Delta(au_{ij})(1\otimes u_{sj})\right)=(h\otimes\mu)(\Delta(au_{ij}))= \mu(1)h(au_{ij})=0,\]
where we used the invariance of the Haar state \(h\) and the fact that \(h(au_{ij})=0\) for all \(1\leq j\leq N\), which follows from \(E_{i}(a)=0\) and the definition of \(E_{i}\) (the \(u_{ij}\), \(1\leq j\leq N\), being linearly independent).
(2). It suffices to sum the relation (1) for \(1\leq j\leq N\) and use \(\sum_{j=1}^{N}u_{sj}=1\ \forall s\).
We will use the following Lemma, which is an easy consequence of the factoriality of \(\mathrm{L}^{\infty}(S_{N}^{+})\) when \(N\geq 8\) [1]; however, we will need the result for all \(N\geq 4\).
**Lemma 2.20**.: _For all \(N\geq 4\) and all \(1\leq k\leq N\) one has \(\mathrm{L}^{\infty}(S_{N}^{+})^{\prime}\cap L_{k}=\mathbb{C}1\)._
Proof.: Since \(\mathrm{Pol}(S_{N}^{+})\) is weakly dense in \(\mathrm{L}^{\infty}(S_{N}^{+})\), one has \(\mathrm{L}^{\infty}(S_{N}^{+})^{\prime}\cap L_{k}=\mathrm{Pol}(S_{N}^{+})^{\prime}\cap L_{k}\). By the universal property of \(\mathrm{Pol}(S_{N}^{+})\), for all \(\sigma\in S_{N}\), there exist unique unital \(*\)-homomorphisms \(R_{\sigma},C_{\sigma}\,:\,\mathrm{Pol}(S_{N}^{+})\to\mathrm{Pol}(S_{N}^{+})\) such that \(R_{\sigma}(u_{ij})=u_{\sigma(i)j}\) and \(C_{\sigma}(u_{ij})=u_{i\sigma(j)}\), for all \(1\leq i,j\leq N\). Note that \(R_{\sigma}\) and \(C_{\sigma}\) are \(*\)-isomorphisms since \(R_{\sigma^{-1}}R_{\sigma}=R_{\sigma}R_{\sigma^{-1}}=\mathrm{id}\) and \(C_{\sigma^{-1}}C_{\sigma}=C_{\sigma}C_{\sigma^{-1}}=\mathrm{id}\). Denoting by \((1,k)\in S_{N}\) the transposition of \(1\) and \(k\), one has \(R_{(1,k)}(L_{k})=L_{1}\); hence it suffices to show that \(\mathrm{Pol}(S_{N}^{+})^{\prime}\cap L_{1}=\mathbb{C}1\).
Fix a Hilbert space \(H\) with two non-commuting orthogonal projections \(P,Q\in\mathcal{B}(H)\) and let us denote by \(A\in M_{4}(\mathcal{B}(H))\) the matrix
\[A:=\left(\begin{array}{cccc}P&1-P&0&0\\ 1-P&P&0&0\\ 0&0&Q&1-Q\\ 0&0&1-Q&Q\end{array}\right),\]
and by \(B\in M_{N}(\mathcal{B}(H))\) the block matrix \(B=\left(\begin{array}{cc}A&0\\ 0&I\end{array}\right)\), where \(I\) is the identity matrix. Note that \(B\) is a magic unitary: its entries are orthogonal projections and each row and each column sums to \(1\). Hence, writing \(B=(b_{ij})\), there exists a unique unital \(*\)-homomorphism \(\pi\,:\,\mathrm{Pol}(S_{N}^{+})\to\mathcal{B}(H)\) such that \(\pi(u_{ij})=b_{ij}\) for all \(1\leq i,j\leq N\).
To show that \(\mathrm{Pol}(S_{N}^{+})^{\prime}\cap L_{1}=\mathbb{C}1\), it suffices to show that the only orthogonal projections in \(L_{1}\) that commute with \(\mathrm{Pol}(S_{N}^{+})\) are \(0\) and \(1\). A projection \(p\in L_{1}\setminus\{0,1\}\) is of the form \(p=\sum_{j\in I}u_{1j}\) for \(I\subset\{1,\ldots,N\}\) with \(1\leq|I|\leq N-1\). Let \(\sigma\in S_{N}\) be a permutation such that \(\sigma(I)=\{1,\ldots,|I|+1\}\setminus\{2\}\) so that \(q:=C_{\sigma}(p)=u_{11}+u_{13}+u_{14}+\cdots+u_{1,|I|+1}\). It suffices to show that \(q\) does not commute with \(u_{33}\), and this follows from \(\pi(q)=P\) and \(\pi(u_{33})=Q\).
### Free wreath products
For a compact quantum group \(G\) and an integer \(N\), the _free wreath product_ of \(G\) by \(S_{N}^{+}\), as defined by Bichon in [1], is the CQG \(G\wr_{*}S_{N}^{+}\) with
\[C(G\wr_{*}S_{N}^{+})=C(G)^{*N}*C(S_{N}^{+})/I,\]
where we consider the full free product and \(I\) is the two-sided closed ideal generated by:
\[\{\nu_{i}(a)u_{ij}-u_{ij}\nu_{i}(a)\,:\,a\in C(G),\,1\leq i,j\leq N\} \tag{1}\]
and \(\nu_{i}\,:\,C(G)\to C(G)^{*N}\subset C(G)^{*N}*C(S_{N}^{+})\) is the unital \(*\)-homomorphism on the \(i^{th}\)-copy of \(C(G)\) in \(C(G)^{*N}\), \(u_{ij}\in C(S_{N}^{+})\) are the coefficients of the fundamental representation. If \(G\) has a _dual quantum subgroup_ \(H\), i.e. \(C(H)\subset C(G)\), then we define, following [10], the _free wreath product with amalgamation_ \(G\wr_{*,H}S_{N}^{+}\). This is the CQG with \(C(G\wr_{*,H}S_{N}^{+})=C(G)^{*_{H}N}*C(S_{N}^{+})/I\), where the full free product is taken amalgamated over \(C(H)\) and the ideal \(I\) is the same as in (1). It is easy to check (using both universal properties) that \(C(G\wr_{*,H}S_{N}^{+})=C(G\wr_{*}S_{N}^{+})/J\), where \(J\) is the closed two-sided ideal generated by \(\{\nu_{i}(a)-\nu_{j}(a)\,:\,a\in C(H),\,1\leq i,j\leq N\}\). Note that \(G\) always admits the trivial group \(\{e\}\) as a dual quantum subgroup, and we have, for \(H=\{e\}\), \(C(G\wr_{*,H}S_{N}^{+})\simeq C(G\wr_{*}S_{N}^{+})\).
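A guiding example, which reappears in Proposition 2.26 below: for \(G=\widehat{\mathbb{Z}_{s}}\) the dual of a cyclic group, so that \(C(G)=C^{*}(\mathbb{Z}_{s})\), the free wreath product \(\widehat{\mathbb{Z}_{s}}\wr_{*}S_{N}^{+}\) is the quantum reflection group \(H_{N}^{s+}\). Concretely, its algebra is generated by a copy of \(C(S_{N}^{+})\) together with the \(N\) unitaries \(a_{i}:=\nu_{i}(g)\) satisfying \(a_{i}^{s}=1\) (\(g\) denoting the generator of \(\mathbb{Z}_{s}\)), each \(a_{i}\) commuting with the \(i\)-th row \(u_{i1},\ldots,u_{iN}\) of the magic unitary.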
**Remark 2.21**.: The surjective \(*\)-homomorphism \(a\mapsto a+I\,:\,C(G)^{*N}*C(S_{N}^{+})\to C(G\wr_{*}S_{N}^{+})\) is injective when restricted to \(C(G)^{*N}\) as well as to \(C(S_{N}^{+})\). By the universal property, there exists a unique unital \(*\)-homomorphism \(\pi\,:\,C(G\wr_{*}S_{N}^{+})\to C(G)^{*N}\otimes C(S_{N}^{+})\) such that \(\pi(x+I)=x\otimes 1\) if \(x\in C(G)^{*N}\) and \(\pi(x+I)=1\otimes x\) if \(x\in C(S_{N}^{+})\). The composition of \(x\mapsto x+I\) and \(\pi\) is the map sending \(C(G)^{*N}\) and \(C(S_{N}^{+})\) to their respective copies in \(C(G)^{*N}\otimes C(S_{N}^{+})\), which is injective. The same holds for \(C(G)^{*_{H}N}\) and \(C(S_{N}^{+})\) in the amalgamated case.
Following the previous Remark, we will always view \(C(G)^{*_{H}N}\), \(C(S_{N}^{+})\subset C(G\wr_{*,H}S_{N}^{+})\).
We endow the unital \(C^{*}\)-algebra \(C(G\wr_{*,H}S_{N}^{+})\) with the unique unital \(*\)-homomorphism \(\Delta\,:\,C(G\wr_{*,H}S_{N}^{+})\to C(G\wr_{*,H}S_{N}^{+})\otimes C(G\wr_{*,H}S _{N}^{+})\) satisfying:
\[\Delta(\nu_{i}(a))=\sum_{j=1}^{N}(\nu_{i}\otimes\nu_{j})(\Delta_{G}(a))(u_{ij} \otimes 1)\text{ and }\Delta(u_{ij})=\sum_{k=1}^{N}u_{ik}\otimes u_{kj}.\]
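As an elementary sanity check of these formulas, denote by \(\varepsilon\) the counit of the maximal C*-algebra \(C(G\wr_{*,H}S_{N}^{+})\), which satisfies \(\varepsilon\circ\nu_{i}=\varepsilon_{G}\) and \(\varepsilon(u_{ij})=\delta_{ij}\). Then
\[(\operatorname{id}\otimes\varepsilon)(\Delta(\nu_{i}(a)))=\sum_{j=1}^{N}\nu_{i}\big((\operatorname{id}\otimes\varepsilon_{G})(\Delta_{G}(a))\big)u_{ij}=\nu_{i}(a)\sum_{j=1}^{N}u_{ij}=\nu_{i}(a),\]
as required by the counit axiom.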
**Remark 2.22**.:
1. Both \(G^{*_{H}N}\) and \(S_{N}^{+}\) are compact quantum subgroups of \(G\wr_{*,H}S_{N}^{+}\) via the maps \((\mathrm{id}\otimes\varepsilon_{S_{N}^{+}})\circ\pi\,:\,C(G\wr_{*,H}S_{N}^{+}) \to C(G)^{*_{H}N}\) and \((\varepsilon_{G^{*N}}\otimes\mathrm{id})\circ\pi\,:\,C(G\wr_{*,H}S_{N}^{+}) \to C(S_{N}^{+})\) (which obviously intertwine the comultiplications), where \(\pi\) is the map defined in Remark 2.21.
2. Let \(\nu\,:\,C(H)\to C(G\wr_{*,H}S_{N}^{+})\) be the common restriction of the maps \(\nu_{i}\) on \(C(H)\subset C(G)\), \(1\leq i\leq N\). Then, \(\nu\) is faithful and, since \(\sum_{j}u_{ij}=1\) we see that \(\nu\) intertwines the comultiplications. Hence, \(H\) is a dual compact subgroup of \(G\wr_{*,H}S_{N}^{+}\).
Let us now describe another specific compact quantum subgroup of \(G\wr_{*,H}S_{N}^{+}\). We could not locate a description of this quantum subgroup in the literature (except when \(N=2\) and \(G=\widehat{\Gamma}\) is the dual of a discrete group [1, Proposition 2.6]) but we believe it to be well known.
Fix \(\sigma\in S_{N}\). By the universal property of \(C(G)^{*_{H}N}\), there exists a unique unital \(*\)-homomorphism \(\alpha_{\sigma}\,:\,C(G)^{*_{H}N}\to C(G)^{*_{H}N}\) such that \(\alpha_{\sigma}\circ\nu_{i}=\nu_{\sigma(i)}\) for all \(1\leq i\leq N\). Since we clearly have \(\alpha_{\sigma}\alpha_{\tau}=\alpha_{\sigma\tau}\), for all \(\sigma,\tau\in S_{N}\), \(\alpha\) is an action of \(S_{N}\) on the C*-algebra \(C(G)^{*_{H}N}\) by unital \(*\)-isomorphisms. Moreover, it is easy to see that, for all \(\sigma\in S_{N}\), \((\alpha_{\sigma}\otimes\alpha_{\sigma})\circ\Delta_{G^{*N}}=\Delta_{G^{*N}}\circ\alpha_{\sigma}\), where \(\Delta_{G^{*N}}\) is the comultiplication on \(C(G)^{*_{H}N}\). Note that this action by quantum automorphisms is actually the restriction of the action of \(S_{N}^{+}\) on \(C(G)^{*_{H}N}\) to the compact subgroup \(S_{N}\) which was described in [1, Proposition 2.1]. Let us now consider the semi-direct product quantum group \(G^{*_{H}N}\rtimes S_{N}\) associated to this action, as described in Section 2.5. We show below that the quantum group \(G^{*_{H}N}\rtimes S_{N}\) is actually a compact quantum subgroup of \(G\wr_{*,H}S_{N}^{+}\).
**Proposition 2.23**.: _Given \(N\in\mathbb{N}^{*}\), there exists a unique unital \(*\)-homomorphism_
\[\pi\,:\,C(G\wr_{*,H}S_{N}^{+})\to C(G^{*_{H}N}\rtimes S_{N})=C(G)^{*_{H}N} \otimes C(S_{N})\text{ s.t. }\left\{\begin{array}{lcl}\pi(\nu_{i}(a))&=&\nu_{i}(a)\otimes 1\\ \pi(u_{ij})&=&1\otimes\chi_{ij}\end{array}\right.\]
_where \(\chi_{ij}\in C(S_{N})\) is the characteristic function of \(A_{i,j}:=\{\sigma\in S_{N}\,:\,\sigma(j)=i\}\). Moreover,_
1. \(\pi\) _is surjective and intertwines the comultiplications._
2. \(\pi\) _is an isomorphism if and only if_ \(N\in\{1,2\}\) _or if_ \(N=3\) _and the inclusion_ \(C(H)\hookrightarrow C(G)\) _is an isomorphism (a special case being when_ \(G\) _is the trivial group)._
Proof.: The existence of \(\pi\) is a direct consequence of the universal property of the C*-algebra \(C(G\wr_{*,H}S_{N}^{+})\) and the fact that the matrix \((\chi_{ij})_{ij}\in M_{N}(C(S_{N}))\) is a magic unitary.
(1). Since \(C(S_{N})\) is generated by the \(\chi_{ij}\) and \(C(G)^{*_{H}N}\) by the elements \(\nu_{i}(a)\), the surjectivity of \(\pi\) is clear. Let us check that \(\pi\) intertwines the comultiplications. Recall that the comultiplication on \(G\wr_{*,H}S_{N}^{+}\) is denoted by \(\Delta\), the one on \(G\) by \(\Delta_{G}\), the one on \(G^{*_{H}N}\) by \(\Delta_{G^{*N}}\) (it satisfies \(\Delta_{G^{*N}}\circ\nu_{i}=(\nu_{i}\otimes\nu_{i})\circ\Delta_{G}\)) and let us denote the one on \(C(G^{*_{H}N}\rtimes S_{N})\) by \(\Delta_{s}\). On the one hand, for all \(1\leq i,j\leq N\) and \(a\in C(G)\),
\[\Delta_{s}(\pi(u_{ij})) = \Delta_{s}(1\otimes\chi_{ij})=\sum_{\sigma\in S_{N},\,\sigma(j)=i }\Delta_{s}(1\otimes\delta_{\sigma})=\sum_{\sigma\in S_{N},\,\sigma(j)=i}\sum _{\tau\in S_{N}}(1\otimes\delta_{\tau}\otimes 1\otimes\delta_{\tau^{-1}\sigma}),\] \[\Delta_{s}(\pi(\nu_{i}(a))) = \sum_{\sigma\in S_{N}}\Delta_{s}(\nu_{i}(a)\otimes\delta_{ \sigma})=\sum_{\tau,\sigma\in S_{N}}\left[(\mathrm{id}\otimes\alpha_{\tau})( \Delta_{G^{*N}}(\nu_{i}(a)))\right]_{13}(1\otimes\delta_{\tau}\otimes 1 \otimes\delta_{\tau^{-1}\sigma})\] \[= \sum_{\tau\in S_{N}}\left[(\mathrm{id}\otimes\alpha_{\tau})((\nu _{i}\otimes\nu_{i})(\Delta_{G}(a)))\right]_{13}(1\otimes\delta_{\tau}\otimes 1 \otimes 1).\]
On the other hand, for all \(1\leq i,j\leq N\) and all \(a\in C(G)\),
\[(\pi\otimes\pi)(\Delta(\nu_{i}(a))) = \sum_{j=1}^{N}(\pi\otimes\pi)(\nu_{i}\otimes\nu_{j})(\Delta_{G}(a) )(\pi(u_{ij})\otimes 1)\] \[= \sum_{j=1}^{N}\left[(\nu_{i}\otimes\nu_{j})(\Delta_{G}(a))\right] _{13}(1\otimes\chi_{ij}\otimes 1\otimes 1)\] \[= \sum_{j=1}^{N}\sum_{\tau\in S_{N}\,,\,\tau(j)=i}\left[(\mathrm{id }\otimes\alpha_{\tau})(\nu_{i}\otimes\nu_{i})(\Delta_{G}(a))\right]_{13}(1 \otimes\delta_{\tau}\otimes 1\otimes 1)\] \[= \sum_{\tau\in S_{N}}\left[(\mathrm{id}\otimes\alpha_{\tau})(\nu _{i}\otimes\nu_{i})(\Delta_{G}(a))\right]_{13}(1\otimes\delta_{\tau}\otimes 1 \otimes 1)\]
where, in the last equality, we use the partition \(S_{N}=\bigsqcup_{j=1}^{N}\{\tau\in S_{N}\,:\,\tau(j)=i\}\). Moreover,
\[(\pi\otimes\pi)(\Delta(u_{ij})) = \sum_{k=1}^{N}\pi(u_{ik})\otimes\pi(u_{kj})=\sum_{k=1}^{N}1 \otimes\chi_{ik}\otimes 1\otimes\chi_{kj}\] \[= \sum_{\begin{subarray}{c}\sigma,\tau\in S_{N},\\ \tau(k)=i,\,\sigma(j)=k\end{subarray}}^{N}1\otimes\delta_{\tau}\otimes 1 \otimes\delta_{\sigma}=\sum_{\begin{subarray}{c}\sigma,\tau\in S_{N},\\ \sigma(j)=i\end{subarray}}1\otimes\delta_{\tau}\otimes 1\otimes\delta_{ \tau^{-1}\sigma},\]
where we use, in the last equality, the following partition:
\[\{(\tau,\tau^{-1}\sigma)\in S_{N}^{2}\,:\,\sigma(j)=i\}=\bigsqcup_{k=1}^{N}\{ (\tau,\sigma)\in S_{N}^{2}\,:\,\tau(k)=i\text{ and }\sigma(j)=k\}.\]
(2). Suppose that \(N=2\). In that case, Bichon's proof of [1, Proposition 2.1] can be directly adapted. It is well known that \(C(S_{2}^{+})\) is commutative: the surjection \(C(S_{2}^{+})\to C(S_{2})\), \(u_{ij}\mapsto\chi_{ij}\) is an isomorphism. Hence, we view \(C(S_{2}^{+})=C(S_{2})\) and we have \(u=\left(\begin{array}{cc}\delta_{1}&\delta_{\tau}\\ \delta_{\tau}&\delta_{1}\end{array}\right)\), where \(\tau\) is the unique non-trivial element in \(S_{2}\). Now, for \(i\in\{1,2\}\) and \(a\in C(G)\), \(\nu_{i}(a)\in C(G\wr_{*,H}S_{2}^{+})\) commutes with row \(i\) of \(u\), hence also with the other row. Hence, \(C(G)^{*_{H}2}\) and \(C(S_{2}^{+})\) commute in \(C(G\wr_{*,H}S_{2}^{+})\) and it follows that \(\pi\) is actually injective.
Suppose that \(N\geq 4\). In that case, it is well known that \(C(S_{N}^{+})\) is infinite-dimensional (the classical argument is actually contained in the proof of Lemma 2.20). In particular, \(C(S_{N}^{+})\to C(S_{N})\), \(u_{ij}\mapsto\chi_{ij}\) is not injective, which implies that \(\pi\) itself is not injective.
The non-injectivity in the case \(N=3\) is a consequence of Theorem 3.2 (the proof of which does not rely on this isomorphism, even in the case \(N=3\)), using the fact that an isomorphism between these quantum groups would intertwine the Haar measures. It is then enough, when \(G\) is non-trivial, to show that there is a _reduced_ operator in \(C(G\wr_{*,H}S_{3}^{+})\), hence of Haar measure \(0\), which is sent to an element of \(C(G)^{*_{H}3}\rtimes S_{3}\simeq C(G)^{*_{H}3}\otimes\mathbb{C}^{6}\) of non-zero Haar measure. Since \(H\) is a strict (not necessarily proper) dual quantum subgroup of \(G\), \(\mathrm{Rep}(H)\) is a full subcategory of \(\mathrm{Rep}(G)\), with a non-trivial irreducible representation \(v\) of \(G\) which is not a representation of \(H\), and \(a,b\in C(G)\setminus C(H)\) non-trivial coefficients of \(v\) and \(\overline{v}\) respectively such that \(ab\) has non-zero Haar measure (using the unique intertwiner between the trivial representation and \(v\otimes\overline{v}\)). Then we can consider the element \(\nu_{1}(a)u_{22}\nu_{1}(b)\in C(G\wr_{*,H}S_{3}^{+})\), which is sent to \(\nu_{1}(a)\nu_{1}(b)\otimes u_{22}=\nu_{1}(ab)\otimes u_{22}\in C(G)^{*_{H}3}\otimes\mathbb{C}^{6}\). However, the word \(\nu_{1}(a)u_{22}\nu_{1}(b)\) is reduced in the sense of Theorem 3.2, because \(a\) and \(b\) are coefficients of a non-trivial irreducible representation, and therefore of Haar measure \(0\), and the element \(u_{22}\) is such that \(E_{1}(u_{22})\) is equal to \(0\) if \(N\geq 3\) and to \(u_{22}\) if \(N\in\{1,2\}\), while the element \(\nu_{1}(ab)\otimes u_{22}\) is of Haar measure \(h_{G}(ab)/3\neq 0\), because the Haar measure on
\(C(G)^{*_{H}3}\rtimes S_{3}\simeq C(G)^{*_{H}3}\otimes\mathbb{C}^{6}\) is the tensor product of the Haar measures on \(C(G)^{*_{H}3}\) and on \(C(S_{3})\).
If \(G\) is the trivial group then the surjection is the canonical morphism \(C(S_{N}^{+})\twoheadrightarrow C(S_{N})\) which is an isomorphism if and only if \(N\leq 3\).
**Remark 2.24**.: The condition that \(C(H)\hookrightarrow C(G)\) is not an isomorphism is equivalent to the index \([G:H]\) being at least \(2\), as defined in 2.7.
**Remark 2.25**.: For many quantum groups \(G\) we can also use K-theory to distinguish between the free wreath product and the semi-direct product. Indeed, the K-theory of the algebra of the semi-direct product \(G^{*3}\rtimes S_{3}\) is just the K-theory of the tensor product \(C(G)^{*3}\otimes\mathbb{C}^{6}\) which is
\[K_{0}(C(G)^{*3}\otimes\mathbb{C}^{6})\simeq(K_{0}(C(G))^{3}/\mathbb{Z}^{2})^{6},\text{ and }K_{1}(C(G)^{*3}\otimes\mathbb{C}^{6})\simeq K_{1}(C(G))^{18},\]
which can only coincide with the K-theory of the free wreath product algebra if \(K_{0}(C(G))\simeq\mathbb{Z}\) and \(K_{1}(C(G))=0\), using the computations of Theorem D, which are independent of the last proposition.
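For instance, for \(G=\widehat{\mathbb{Z}_{s}}\) with \(s\geq 2\) one has \(C(G)=C^{*}(\mathbb{Z}_{s})\simeq\mathbb{C}^{s}\), so \(K_{0}(C(G))\simeq\mathbb{Z}^{s}\not\simeq\mathbb{Z}\) and the K-theoretic criterion already separates the two constructions in this case.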
In [1, Section 3.4] it is mentioned that the question whether the von Neumann algebra of a quantum reflection group \(H_{N}^{s+}:=\widehat{\mathbb{Z}_{s}}\wr_{*}S_{N}^{+}\) is diffuse is open when \(N\leq 7\). We add the following Proposition in order to provide a complete answer to this question.
**Proposition 2.26**.: _The von Neumann algebra \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is diffuse if and only if at least one of the following conditions hold:_
* \(N\geq 4\)_,_
* \(\mathrm{Irr}(G)\) _is infinite,_
* \([G:H]\geq 2\) _(i.e._ \(C(H)\hookrightarrow C(G)\) _is not surjective) and_ \(N\geq 2\)_._
Proof.: If \(G\) is trivial, it directly follows from Lemma 2.4 since \(S_{N}^{+}\simeq S_{N}\) for \(N\leq 3\) and \(C(S_{N}^{+})\) is infinite-dimensional for \(N\geq 4\). Suppose that \(G\) is non-trivial and let \(N\geq 3\). Let us denote by \(u=(u_{ij})\) and \(v=(v_{ij})\) the fundamental representations of \(S_{N}^{+}\) and \(S_{N-1}^{+}\) respectively. By the universal property, there exists a unique unital \(*\)-homomorphism \(\rho:C(G\wr_{*,H}S_{N}^{+})\to C(G\wr_{*,H}S_{N-1}^{+})\) such that \(\rho(u_{ij})=v_{ij}\) if \(1\leq i,j\leq N-1\), \(\rho(u_{ij})=\delta_{i,j}\) if \(i\) or \(j\) is equal to \(N\) and \(\rho(\nu_{i}(a))=\nu_{i}(a)\) if \(1\leq i\leq N-1\), \(\rho(\nu_{N}(a))=\nu_{N-1}(a)\) for all \(a\in C(G)\). Since \(\rho\) is clearly surjective, a direct induction implies that, for all \(N\geq 2\), \(C(G\wr_{*,H}S_{N}^{+})\) is infinite-dimensional (hence \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is diffuse by Lemma 2.4) whenever \(C(G\wr_{*,H}S_{2}^{+})\) is. By Proposition 2.23, \(C(G\wr_{*,H}S_{2}^{+})\simeq C(G\ast_{H}G)\otimes\mathbb{C}^{2}\). Hence, it is infinite-dimensional as soon as \(C(G\ast_{H}G)\) is. There is an embedding \(C(G)\hookrightarrow C(G\ast_{H}G)\), so the case \(|\mathrm{Irr}(G)|=\infty\) is a direct consequence of Lemma 2.4. If \(C(H)\hookrightarrow C(G)\) is not surjective, then there is a non-zero element \(a\in C(G)\) such that \(E_{H}(a)=0\) (for example, take a coefficient of an irreducible representation of \(G\) which is not in the representation category of \(H\)). Then the words \(b_{k}=\nu_{1}(a)\nu_{2}(a)\dots\nu_{\bar{k}}(a)\), obtained by taking an alternating product of \(k\) factors, where \(\bar{k}\) is \(1\) if \(k\) is odd and \(2\) if \(k\) is even, form a family of linearly independent reduced elements in \(C(G\ast_{H}G)\), which is therefore infinite-dimensional.
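In particular, for the quantum reflection groups \(H_{N}^{s+}=\widehat{\mathbb{Z}_{s}}\wr_{*}S_{N}^{+}\) with \(s\geq 2\), the amalgamation is over the trivial group and \(G=\widehat{\mathbb{Z}_{s}}\) is non-trivial, so the third condition of Proposition 2.26 holds: \(\mathrm{L}^{\infty}(H_{N}^{s+})\) is diffuse for all \(N\geq 2\), which in particular answers the question of [1, Section 3.4] in the remaining range \(N\leq 7\).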
### Graphs of operator algebras
We recall below some notions and results from [13, 14]. If \(\mathcal{G}\) is a graph in the sense of [13, Def 2.1], its vertex set will be denoted \(V(\mathcal{G})\) and its edge set will be denoted \(E(\mathcal{G})\). We will always assume that \(\mathcal{G}\) is at most countable. For \(e\in E(\mathcal{G})\) we denote by \(s(e)\) and \(r(e)\) respectively the source and range of \(e\) and by \(\overline{e}\) the inverse edge of \(e\). An _orientation_ of \(\mathcal{G}\) is a partition \(E(\mathcal{G})=E^{+}(\mathcal{G})\sqcup E^{-}(\mathcal{G})\) such that \(e\in E^{+}(\mathcal{G})\Leftrightarrow\overline{e}\in E^{-}(\mathcal{G})\).
The data \((\mathcal{G},(A_{q})_{q\in V(\mathcal{G})},(B_{e})_{e\in E(\mathcal{G})},(s_{e} )_{e\in E(\mathcal{G})})\) will be called a graph of C*-algebras if:
* \(\mathcal{G}\) is a connected graph.
* For every \(q\in V(\mathcal{G})\) and every \(e\in E(\mathcal{G})\), \(A_{q}\) and \(B_{e}\) are unital C*-algebras.
* For every \(e\in E(\mathcal{G})\), \(B_{\overline{e}}=B_{e}\).
* For every \(e\in E(\mathcal{G})\), \(s_{e}:B_{e}\to A_{s(e)}\) is a unital faithful \(*\)-homomorphism.
Define \(r_{e}=s_{\overline{e}}:B_{e}\to A_{r(e)}\).
Fix a maximal subtree \(\mathcal{T}\subset\mathcal{G}\) and define \(P=\pi_{1}(\mathcal{G},A_{q},B_{e},\mathcal{T})\) to be the maximal fundamental C*-algebra of the graph of C*-algebras \((\mathcal{G},A_{q},B_{e},s_{e})\) with respect to the maximal subtree \(\mathcal{T}\). This means that \(P\) is the universal unital C*-algebra generated by \(A_{q}\), for \(q\in V(\mathcal{G})\) and unitaries \(u_{e}\), for \(e\in E(\mathcal{G})\) with the following relations:
* For every \(e\in E(\mathcal{G})\), \(u_{\overline{e}}=u_{e}^{*}\).
* For every \(e\in E(\mathcal{G})\) and every \(b\in B_{e}\), \(u_{\overline{e}}s_{e}(b)u_{e}=r_{e}(b)\).
* For every \(e\in E(\mathcal{T})\), \(u_{e}=1\).
It is known that \(P\) is non-zero and that the canonical unital \(*\)-homomorphisms \(A_{q}\to P\) are all faithful [12, Remark 2.2]. Hence, we will always view \(A_{q}\subset P\) for all \(q\in V(\mathcal{G})\).
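Before going further, let us recall two standard special cases of this construction, which serve as the quantum analogue of the classical Bass-Serre picture: if \(\mathcal{G}\) consists of two vertices joined by a single pair of mutually inverse edges \(\{e,\overline{e}\}\) (so that \(\mathcal{T}=\mathcal{G}\) and \(u_{e}=1\)), then \(P\simeq A_{s(e)}\underset{B_{e}}{*}A_{r(e)}\) is the full amalgamated free product, while a single vertex carrying one loop \(e\) yields the HNN extension of \(A_{s(e)}\) over \(B_{e}\), with stable letter \(u_{e}\) and relation \(u_{e}^{*}s_{e}(b)u_{e}=r_{e}(b)\).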
Assume now that there exists, for all \(e\in E(\mathcal{G})\), a conditional expectation \(E_{e}^{s}\,:\,A_{s(e)}\to s_{e}(B_{e})\). For \(p\in V(\mathcal{G})\), an element \(a\in P\) will be called a _reduced operator_ from \(p\) to \(p\) if it is of the form \(a=a_{0}u_{e_{1}}a_{1}\ldots u_{e_{n}}a_{n}\) where \(n\geq 1\), \((e_{1},\ldots,e_{n})\) is a path in \(\mathcal{G}\) from \(p\) to \(p\) (i.e. \(e_{k}\in E(\mathcal{G})\) are such that \(r(e_{k})=s(e_{k+1})\) and \(s(e_{1})=r(e_{n})=p\)), \(a_{0}\in A_{p}\), \(a_{k}\in A_{r(e_{k})}\) and, for all \(1\leq k\leq n-1\), if \(e_{k+1}=\overline{e}_{k}\) then \(E_{e_{k+1}}^{s}(a_{k})=0\). Then one can construct [12] the unital C*-algebra \(P_{r}\) called the _vertex reduced fundamental algebra_ which is the unique (up to canonical isomorphism) quotient \(\lambda\,:\,P\to P_{r}\) of \(P\) satisfying the following:
1. There exists, for all \(p\in V(\mathcal{G})\), a ucp map \(E_{p}\,:\,P_{r}\to A_{p}\) such that \(E_{p}(\lambda(a))=a\) for all \(a\in A_{p}\) and \(E_{p}(\lambda(a))=0\) for every reduced operator \(a\in P\) from \(p\) to \(p\). Moreover, the family \(\{E_{p}\,:\,p\in V(\mathcal{G})\}\) is GNS-faithful.
2. For any unital C*-algebra \(C\) with a surjective unital \(*\)-homomorphism \(\rho\,:\,P\to C\) and a GNS-faithful family of ucp maps \(\varphi_{p}\,:\,C\to A_{p}\), \(p\in V(\mathcal{G})\), such that \(\varphi_{p}(\rho(a))=a\) for all \(a\in A_{p}\) and \(\varphi_{p}(\rho(a))=0\) for every reduced operator \(a\in P\) from \(p\) to \(p\), there exists a unique unital \(*\)-isomorphism \(\nu\,:\,P_{r}\to C\) such that \(\nu\circ\lambda=\rho\).
We recall that, given unital C*-algebras \(A,B_{i}\), \(i\in I\), a family of ucp maps \(\varphi_{i}\,:\,A\to B_{i}\) is called _GNS-faithful_ if \(\bigcap_{i\in I}\operatorname{Ker}(\pi_{i})=\{0\}\), where \((H_{i},\pi_{i},\xi_{i})\) is the GNS-construction of \(\varphi_{i}\).
Note that a ucp map satisfying (1) is necessarily unique and property (1) implies that \(\lambda\,:\,P\to P_{r}\) is faithful on \(A_{p}\), for all \(p\in V(\mathcal{G})\) so that we may and will view \(A_{p}\subset P_{r}\) for all \(p\in V(\mathcal{G})\) and the ucp maps \(E_{p}\,:\,P_{r}\to A_{p}\) become conditional expectations under this identification. If all the ucp maps \(E_{e}^{s}\) are supposed to be GNS-faithful then the vertex reduced fundamental algebra \(P_{r}\) is the same as the reduced fundamental algebra constructed in [12] and the conditional expectations \(E_{p}\,:\,P_{r}\to A_{p}\) are all GNS-faithful.
## 3. Free wreath products as fundamental algebras
Let \(N\in\mathbb{N}^{*}\) and \(\mathcal{T}_{N}\) be the rooted tree with \(N+1\) vertices \(p_{0},\ldots,p_{N}\), \(p_{0}\) being the root and \(2N\) edges \(v_{1},\ldots,v_{N},\overline{v}_{1},\ldots\overline{v}_{N}\), source maps \(s(v_{k})=p_{0}\) and range maps \(r(v_{k})=p_{k}\) for all \(1\leq k\leq N\).
### The full version
Consider the graph of C*-algebras over \(\mathcal{T}_{N}\) given by \(\mathcal{A}_{p_{0}}=C(H)\otimes C(S_{N}^{+})\), \(\mathcal{A}_{p_{k}}:=C(G)\otimes\mathbb{C}^{N}\), \(\mathcal{B}_{v_{k}}=\mathcal{B}_{\overline{v}_{k}}=C(H)\otimes\mathbb{C}^{N}\) (\(1\leq k\leq N\)) with source map \(s_{v_{k}}\,:\,C(H)\otimes\mathbb{C}^{N}\to C(H)\otimes L_{k}\subset C(H)\otimes C(S_{N}^{+})\), \(h\otimes e_{j}\mapsto h\otimes u_{kj}\) and range map \(r_{v_{k}}\,:\,C(H)\otimes\mathbb{C}^{N}\to C(G)\otimes\mathbb{C}^{N}\) being the canonical inclusion. Note that our graph of C*-algebras has conditional expectations \(\operatorname{id}\otimes E_{k}\,:\,C(H)\otimes C(S_{N}^{+})\to C(H)\otimes L_{k}\) (Proposition 2.18), which can be non-GNS-faithful, and \(E_{H}\otimes\operatorname{id}\,:\,C(G)\otimes\mathbb{C}^{N}\to C(H)\otimes\mathbb{C}^{N}\), coming from Proposition 2.5, which can also be non-GNS-faithful.
Let us denote by \(\mathcal{A}\) the maximal fundamental C*-algebra of this graph of C*-algebras relative to the unique maximal subtree \(\mathcal{T}_{N}\) itself, so that \(\mathcal{A}\) is the universal unital C*-algebra generated by \(\mathcal{A}_{p_{k}}\) for \(0\leq k\leq N\) with the relations \(s_{v_{k}}(a)=r_{v_{k}}(a)\) for all \(a\in C(H)\otimes\mathbb{C}^{N}\) and all \(1\leq k\leq N\). Recall that \(\nu_{i}\,:\,C(G)\to C(G)^{*_{H}N}\subset C(G\wr_{*,H}S_{N}^{+})\) denotes the \(i^{th}\)-copy of \(C(G)\) in \(C(G)^{*_{H}N}\); we also denote by \(\nu\) the common restriction of the \(\nu_{i}\)'s to \(C(H)\).
**Proposition 3.1**.: _There is a unique isomorphism \(\pi\,:\,\mathcal{A}\to C(G\wr_{*,H}S_{N}^{+})\) such that \(\pi(h\otimes u_{ij})=\nu(h)u_{ij}\) (\(h\otimes u_{ij}\in\mathcal{A}_{p_{0}}=C(H)\otimes C(S_{N}^{+})\)) and \(\pi(a\otimes e_{j})=\nu_{i}(a)u_{ij}\) for \(a\otimes e_{j}\in\mathcal{A}_{p_{i}}=C(G)\otimes\mathbb{C}^{N}\), \(1\leq i,j\leq N\)._
Proof.: Let us show the existence of \(\pi\). Note first that, since \(\nu_{i}(a)\) and \(u_{ij}\) commute in \(C(G\wr_{*,H}S_{N}^{+})\), there exists a unique unital \(*\)-homomorphism \(\pi_{i}\,:\,\mathcal{A}_{p_{i}}=C(G)\otimes\mathbb{C}^{N}\to C(G\wr_{*,H}S_{N}^{+})\) such that \(\pi_{i}(a\otimes e_{j})=\nu_{i}(a)u_{ij}\), for all \(a\in C(G)\) and \(1\leq j\leq N\). We also define \(\pi_{0}\,:\,\mathcal{A}_{p_{0}}=C(H)\otimes C(S_{N}^{+})\to C(G\wr_{*,H}S_{N}^{+})\), \(\pi_{0}(h\otimes u_{ij})=\nu(h)u_{ij}\). Next, for all \(1\leq i,j\leq N\), and \(h\in C(H)\), one has \(\pi_{0}(s_{v_{i}}(h\otimes e_{j}))=\pi_{0}(h\otimes u_{ij})=\nu(h)u_{ij}\) and \(\pi_{i}(r_{v_{i}}(h\otimes e_{j}))=\pi_{i}(h\otimes e_{j})=\nu(h)u_{ij}\). By the universal property of \(\mathcal{A}\), there exists a unique unital \(*\)-homomorphism \(\pi\,:\,\mathcal{A}\to C(G\wr_{*,H}S_{N}^{+})\) satisfying the properties of the proposition. Moreover, since the image of \(\pi\) contains \(u_{ij}\) for all \(i,j\) and \(\sum_{j=1}^{N}\pi_{i}(a\otimes e_{j})=\sum_{j=1}^{N}\nu_{i}(a)u_{ij}=\nu_{i}(a)\) for all \(a\in C(G)\) and \(1\leq i\leq N\), it follows that \(\pi\) is surjective.
To show that it is an isomorphism, we will give an inverse, using this time the universal property of \(C(G\wr_{*,H}S_{N}^{+})\). By the universal property of the full free product, there exists a unique unital \(*\)-homomorphism \(\mu\,:\,C(G)^{*N}*C(S_{N}^{+})\to\mathcal{A}\) such that, for all \(1\leq i\leq N\), \(a\in C(G)\), \(b\in C(S_{N}^{+})\),
\[\mu(\nu_{i}(a))=a\otimes 1\in C(G)\otimes\mathbb{C}^{N}=\mathcal{A}_{p_{i}} \subset\mathcal{A}\text{ and }\mu(b)=1\otimes b\in C(H)\otimes C(S_{N}^{+})= \mathcal{A}_{p_{0}}\subset\mathcal{A}.\]
Recall that \(C(G\wr_{*,H}S_{N}^{+})=C(G)^{*_{H}N}*C(S_{N}^{+})/I\), where \(I\) is defined in Equation (1). Note that
\[\mu(\nu_{i}(a)u_{ij}) = (a\otimes 1)(1\otimes u_{ij})=\mu(\nu_{i}(a))s_{v_{i}}(1\otimes e _{j})=\mu(\nu_{i}(a))r_{v_{i}}(1\otimes e_{j})=(a\otimes 1)(1\otimes e_{j})\] \[= (1\otimes e_{j})(a\otimes 1)=r_{v_{i}}(1\otimes e_{j})\mu(\nu_{i} (a))=s_{v_{i}}(1\otimes e_{j})\mu(\nu_{i}(a))\] \[= (1\otimes u_{ij})\mu(\nu_{i}(a))=\mu(u_{ij}\nu_{i}(a)).\]
Moreover, we have that for every \(h\in C(H)\), and \(1\leq i,j\leq N\),
\[\mu(\nu_{i}(\iota(h))) = (\iota(h)\otimes 1)=r_{v_{i}}(h\otimes 1)=s_{v_{i}}(h\otimes 1)=(h\otimes 1)\] \[= s_{v_{j}}(h\otimes 1)=r_{v_{j}}(h\otimes 1)=(\iota(h)\otimes 1)=\mu(\nu_{j}(\iota(h))),\]
Hence, the images of \(C(H)\) through \(\mu\) coincide and \(I\subset\ker(\mu)\) so there exists a unique unital \(*\)-homomorphism \(\rho\,:\,C(G\wr_{*,H}S_{N}^{+})\to\mathcal{A}\) which factorizes \(\mu\). It is clear that \(\rho\) is surjective and it is easy to check that \(\rho\) is the inverse of \(\pi\).
### The reduced version
Consider the graph of C*-algebras over \(\mathcal{T}_{N}\) given by \(A_{p_{0}}=C_{r}(H)\otimes C_{r}(S_{N}^{+})\), \(A_{p_{k}}:=C_{r}(G)\otimes\mathbb{C}^{N}\), \(B_{v_{k}}=B_{\overline{v_{k}}}=C_{r}(H)\otimes\mathbb{C}^{N}\) for all \(1\leq k\leq N\) with source map \(s_{v_{k}}\,:\,C_{r}(H)\otimes\mathbb{C}^{N}\to C_{r}(H)\otimes L_{k}\subset C_{r}(H)\otimes C_{r}(S_{N}^{+})\), \(h\otimes e_{j}\mapsto h\otimes u_{kj}\) and range map \(r_{v_{k}}\,:\,C_{r}(H)\otimes\mathbb{C}^{N}\to C_{r}(G)\otimes\mathbb{C}^{N}\) being the canonical inclusion. Note that our graph of C*-algebras has faithful conditional expectations \(\operatorname{id}\otimes E_{k}\,:\,C_{r}(H)\otimes C_{r}(S_{N}^{+})\to C_{r}(H)\otimes L_{k}\) (Proposition 2.18) and \(E_{H}\otimes\operatorname{id}\,:\,C_{r}(G)\otimes\mathbb{C}^{N}\to C_{r}(H)\otimes\mathbb{C}^{N}\), thanks to Proposition 2.5. Let \(A\) be the vertex reduced fundamental C*-algebra of this graph of C*-algebras with faithful conditional expectations and view \(A_{p_{k}}\subset A\) for all \(0\leq k\leq N\). By the universal property of \(\mathcal{A}\) defined in Section 3.1, there exists a unique unital \(*\)-homomorphism (which is surjective) \(\lambda^{\prime}\,:\,\mathcal{A}\to A\) such that \(\lambda^{\prime}|_{\mathcal{A}_{p_{0}}}=\lambda_{H}\otimes\lambda_{S_{N}^{+}}\) and \(\lambda^{\prime}|_{\mathcal{A}_{p_{k}}}=\lambda_{G}\otimes\operatorname{id}_{\mathbb{C}^{N}}\) for all \(1\leq k\leq N\). Let \(E\,:\,A\to C_{r}(H)\otimes C_{r}(S_{N}^{+})\) be the GNS-faithful conditional expectation and define \(\omega:=(h_{H}\otimes h_{S_{N}^{+}})\circ E\) so that \(\omega|_{A_{p_{0}}}=h_{H}\otimes h_{S_{N}^{+}}\) and \(\omega(c)=0\) for \(c\in A\) a reduced operator. The state \(\omega\in A^{*}\) is called the _fundamental state_. Let \(\lambda\,:\,C(G\wr_{*,H}S_{N}^{+})\to C_{r}(G\wr_{*,H}S_{N}^{+})\) be the canonical surjection.
**Theorem 3.2**.: _The Haar state \(h\in C(G\wr_{*,H}S_{N}^{+})^{*}\) is the unique state such that:_
\[h\,(a_{0}\nu_{i_{1}}(b_{1})a_{1}\nu_{i_{2}}(b_{2})\ldots\nu_{i_{n}}(b_{n})a_{n})=0\]
_whenever \(a_{k}\in C(S_{N}^{+})\subset C(G\wr_{*,H}S_{N}^{+})\) and \(b_{k}\in C(G)\) are such that \(E_{H}(b_{k})=0\) for all \(k\), and if \(i_{k}=i_{k+1}\), then \(E_{i_{k}}(a_{k})=0\)._
_There exists a unique unital \(*\)-isomorphism \(\pi_{r}\,:\,A\to C_{r}(G\wr_{*,H}S_{N}^{+})\) such that \(\lambda\circ\pi=\pi_{r}\circ\lambda^{\prime}\), where \(\pi\,:\,\mathcal{A}\to C(G\wr_{*,H}S_{N}^{+})\) is the isomorphism of Proposition 3.1. Moreover, \(\pi_{r}\) intertwines the Haar state on \(C_{r}(G\wr_{*,H}S_{N}^{+})\) and the fundamental state \(\omega\in A^{*}\)._
An element of the form \(a_{0}\nu_{i_{1}}(b_{1})a_{1}\nu_{i_{2}}(b_{2})\ldots\nu_{i_{n}}(b_{n})a_{n}\in C(G \wr_{*,H}S_{N}^{+})\) and satisfying the conditions of Theorem 3.2 will be called a _reduced operator_.
Proof.: Consider the state \(\widetilde{\omega}:=\omega\circ\lambda^{\prime}\circ\mu=(h_{H}\otimes h_{S^{+}_{N}})\circ E\circ\lambda^{\prime}\circ\mu\in C(G\wr_{*,H}S^{+}_{N})^{*}\), where \(\mu:=\pi^{-1}:C(G\wr_{*,H}S^{+}_{N})\rightarrow\mathcal{A}\) has been constructed in the proof of Proposition 3.1. Let \(\mathcal{C}\subset C(G\wr_{*,H}S^{+}_{N})\) be the linear span of \(C(S^{+}_{N})\), \(\nu(C(H))\) and all reduced operators in \(C(G\wr_{*,H}S^{+}_{N})\). Note that \(\mathcal{C}\) is dense in \(C(G\wr_{*,H}S^{+}_{N})\). By construction, the state \(\widetilde{\omega}\) satisfies \(\widetilde{\omega}|_{C(S^{+}_{N})}=h_{S^{+}_{N}}\), \(\widetilde{\omega}\circ\nu=h_{H}\) and \(\widetilde{\omega}(c)=0\) for \(c\in C(G\wr_{*,H}S^{+}_{N})\) a reduced operator, so \(\widetilde{\omega}\) satisfies the property of the state \(h\) stated in the Theorem; hence \(h=\widetilde{\omega}\) by density of \(\mathcal{C}\). We will show that \(\widetilde{\omega}\) is \(\Delta\)-invariant, which will imply that it is the Haar state, and thus (since \(E\) is GNS-faithful and \(h_{H}\otimes h_{S^{+}_{N}}\) is faithful on \(C_{r}(H)\otimes C_{r}(S^{+}_{N})\)) that \(A\) is isomorphic to the algebra obtained through the GNS-construction applied to the Haar state, namely the reduced C*-algebra \(C_{r}(G\wr_{*,H}S^{+}_{N})\), which will complete the proof of the Theorem. It is enough to show that \(\widetilde{\omega}\) is \(\Delta\)-invariant when restricted to \(\mathcal{C}\). An element in \(\mathcal{C}\) is the sum of an element \(x_{0}\) in the linear span of \(\nu(C(H))C(S^{+}_{N})\) and of elements of the form \(x=a_{0}\nu_{i_{1}}(b_{1})a_{1}\ldots a_{n-1}\nu_{i_{n}}(b_{n})a_{n}\) with \(n\geq 1\), \(a_{k}\in C(S^{+}_{N})\) for \(0\leq k\leq n\), \(b_{k}\in C(G)^{\circ}\) for \(1\leq k\leq n\), where \(C(G)^{\circ}:=\{b\in C(G)\,:\,E_{H}(b)=0\}\), and such that if there is \(k\) with \(i_{k}=i_{k+1}\), then \(E_{i_{k}}(a_{k})=0\). It suffices to show \(\Delta(x)\in\mathcal{C}\odot C(G\wr_{*,H}S^{+}_{N})\) (where \(\odot\) is the algebraic tensor product). Using the Sweedler notation,
\[\Delta(x) = \sum(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1}) _{(1)}\nu_{i_{2}}((b_{2})_{(1)})u_{i_{2},s_{2}}\ldots(a_{n})_{(1)}\] \[\otimes(a_{0})_{(2)}\nu_{s_{1}}((b_{1})_{(2)})(a_{1})_{(1)}\nu_{s _{2}}((b_{2})_{(2)})\ldots(a_{n})_{(2)}.\]
By Remark 2.6 one has \(E_{H}((b_{k})_{(1)})=0\); hence the left leg of every term of the sum is a reduced operator whenever there is no \(k\) such that \(i_{k}=i_{k+1}\) and \(E_{i_{k}}((a_{k})_{(1)})\neq 0\), so we only need to take care of the remaining cases. Assume that we have \(k\) such that \(i_{k}=i_{k+1}\) and \(E_{i_{k}}((a_{k})_{(1)})\neq 0\), with \(k\) being the smallest such integer. Then we can write \((a_{k})_{(1)}=E_{i_{k}}((a_{k})_{(1)})+(a_{k})^{\circ}_{(1)}\), with \(E_{i_{k}}((a_{k})^{\circ}_{(1)})=0\). Fixing \(s=(s_{1},s_{2},\ldots,s_{n})\in\{1,\ldots,N\}^{n}\), we can write
\[(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1 )}\ldots(a_{n})_{(1)}\otimes(a_{0})_{(2)}\nu_{s_{1}}((b_{1})_{(2)})(a_{1})_{(1 )}\nu_{s_{2}}((b_{2})_{(2)})\ldots(a_{n})_{(2)}\] \[= (a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1 )}\ldots(a_{k})^{\circ}_{(1)}\ldots(a_{n})_{(1)}\otimes(a_{0})_{(2)}\ldots(a_ {n})_{(2)}\] \[+(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1 )}\ldots E_{i_{k}}((a_{k})_{(1)})\ldots(a_{n})_{(1)}\otimes(a_{0})_{(2)}\ldots( a_{n})_{(2)},\]
we see that the left leg of the first term of the decomposition is itself reduced up to the \(k\)-th letter, and we can iterate the process with the next index \(k^{\prime}\) such that \(i_{k^{\prime}}=i_{k^{\prime}+1}\). In the end we get a sum of elementary tensors whose left legs are reduced, plus terms of the second type, and we only have to show that the latter vanish. The second term is always of the form
\[\alpha_{k,s}=(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1 )}\ldots E_{i_{k}}((a_{k})_{(1)})\ldots(a_{n})_{(1)}\otimes(a_{0})_{(2)} \ldots(a_{n})_{(2)},\]
with maybe other conditional expectations appearing after rank \(k\). The definition of the conditional expectations \((E_{i})\) gives \(E_{i_{k}}((a_{k})_{(1)})=N\sum_{t=1}^{N}h((a_{k})_{(1)}u_{i_{k}t})u_{i_{k}t}\) hence,
\[\alpha_{k,s} = N\sum_{t=1}^{N}h((a_{k})_{(1)}u_{i_{k}t})(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1)}\ldots\nu_{i_{k}}((b_{k})_{(1)})u_{i_{k},t}\nu_{i_{k}}((b_{k+1})_{(1)})\ldots(a_{n})_{(1)}\] \[\otimes(a_{0})_{(2)}\ldots(a_{n})_{(2)}\] \[= N\sum_{t=1}^{N}h((a_{k})_{(1)}u_{i_{k}t})(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1)}\ldots\nu_{i_{k}}((b_{k})_{(1)}(b_{k+1})_{(1)})u_{i_{k},t}\ldots(a_{n})_{(1)}\] \[\otimes(a_{0})_{(2)}\ldots(a_{n})_{(2)}\] \[= N\sum_{t=1}^{N}h((a_{k})_{(1)^{\prime}}u_{i_{k}t})(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1)}\ldots\nu_{i_{k}}((b_{k})_{(1)}(b_{k+1})_{(1)})u_{i_{k},t}\ldots(a_{n})_{(1)}\] \[\otimes(a_{0})_{(2)}\ldots(a_{k})_{(2)^{\prime}}\ldots(a_{n})_{(2)}\]
The symbols \((1)^{\prime}\) and \((2)^{\prime}\) are used to differentiate between the summation indices of the scalars that we will now move to the right side of the tensor product and the other summation indices:
\[\alpha_{k,s}= N\sum_{t=1}^{N}(a_{0})_{(1)}\nu_{i_{1}}((b_{1})_{(1)})u_{i_{1},s_{1}}(a_{1})_{(1)}\dots\nu_{i_{k}}((b_{k})_{(1)}(b_{k+1})_{(1)})u_{i_{k},t}\dots(a_{n})_{(1)}\] \[\otimes(a_{0})_{(2)}\dots h((a_{k})_{(1)^{\prime}}u_{i_{k}t})(a_{k})_{(2)^{\prime}}\dots(a_{n})_{(2)}.\]
Now, by Lemma 2.19 (which applies since \(E_{i_{k}}(a_{k})=0\), the word \(x\) being reduced), we get \(\sum_{(1)^{\prime},(2)^{\prime}}h((a_{k})_{(1)^{\prime}}u_{i_{k}t})(a_{k})_{(2)^{\prime}}=0\). This is true for every value of \(t\), hence \(\alpha_{k,s}\) vanishes for any such \(k\) and \(s\). This proves that \(\widetilde{\omega}\) is \(\Delta\)-invariant.
The explicit computation of the Haar state can be used to show the following property of dual quantum subgroups in free wreath products.
**Proposition 3.3**.: _Let \(C(H)\subset C(G)\) be a dual quantum subgroup and consider, by the universal property of \(C(H\wr_{*}S_{N}^{+})\), the unique unital \(*\)-homomorphism \(\iota\,:\,C(H\wr_{*}S_{N}^{+})\to C(G\wr_{*}S_{N}^{+})\) such that \(\iota\circ\nu_{i}=\nu_{i}\,:\,C(H)\to C(G\wr_{*}S_{N}^{+})\) for all \(1\leq i\leq N\) and \(\iota|_{C(S_{N}^{+})}\) is the inclusion \(C(S_{N}^{+})\subset C(G\wr_{*}S_{N}^{+})\). Then, \(\iota\) is faithful so \(H\wr_{*}S_{N}^{+}\) is a dual quantum subgroup of \(G\wr_{*}S_{N}^{+}\)._
Proof.: It is clear that \(\iota\) intertwines the comultiplications and, by Theorem 3.2, it also intertwines the Haar states, so the restriction \(\iota|_{\operatorname{Pol}(H\wr_{*}S_{N}^{+})}:\operatorname{Pol}(H\wr_{*}S_{N}^{+})\to\operatorname{Pol}(G\wr_{*}S_{N}^{+})\) is injective (by faithfulness of the Haar states) and still intertwines the comultiplications. Hence, \(H\wr_{*}S_{N}^{+}\) is a dual quantum subgroup of \(G\wr_{*}S_{N}^{+}\) and \(\iota\) itself is faithful.
### The von Neumann version
Consider the graph of von Neumann algebras over \(\mathcal{T}_{N}\) given by \(M_{p_{0}}=\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\), \(M_{p_{k}}:=\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\), \(N_{e_{k}}=N_{\overline{e}_{k}}=\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\) for all \(1\leq k\leq N\) with source map \(s_{e_{k}}\,:\,\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\to\operatorname{L}^{\infty}(H)\otimes L_{k}\subset\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\), \(h\otimes e_{j}\mapsto h\otimes u_{kj}\) and range map \(r_{e_{k}}\,:\,\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\to\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) being the canonical inclusion. We also consider the family of faithful normal states \(\omega_{p_{0}}=h_{H}\otimes h_{S_{N}^{+}}\), \(\omega_{p_{k}}=h_{G}\otimes\operatorname{tr}\) (where \(\operatorname{tr}\) is the normalized uniform trace on \(\mathbb{C}^{N}\) i.e. \(\operatorname{tr}(e_{j})=\frac{1}{N}\)) and \(\omega_{e_{k}}=h_{H}\otimes\operatorname{tr}\) for all \(1\leq k\leq N\). Hence, we get a graph of von Neumann algebras as defined in [17, Definition A.1]. Let us denote by \(M\) the fundamental von Neumann algebra at \(p_{0}\) and view \(M_{p_{0}}=\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\subset M\). Let \(\varphi\) be the associated fundamental normal faithful state, which is a trace if and only if \(G\) is Kac by the results of [17], as \(H\) is automatically Kac in that case. As a direct consequence of Theorem 3.2 and [17, Section A.5] we have the following. Note that, by Theorem 3.2, the unital faithful \(*\)-homomorphism \(\nu_{i}\,:\,C(G)\to C(G\wr_{*,H}S_{N}^{+})\) preserves the Haar states, hence it induces a unital normal faithful \(*\)-homomorphism \(\nu_{i}\,:\,\operatorname{L}^{\infty}(G)\to\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\).
**Proposition 3.4**.: _There exists a unique unital normal \(*\)-isomorphism \(\pi_{r}^{\prime\prime}\,:\,M\to\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) extending the isomorphism \(\pi_{r}\,:\,A\to C_{r}(G\wr_{*,H}S_{N}^{+})\) from Theorem 3.2. Moreover, \(h\circ\pi_{r}^{\prime\prime}=\varphi\), where \(h\) also denotes the Haar state on \(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\)._
Recall that \(\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{\infty}(G),h _{G})=\{\beta\in\operatorname{Aut}(\operatorname{L}^{\infty}(G),h_{G}),\,\beta( \operatorname{L}^{\infty}(H))=\operatorname{L}^{\infty}(H)\}\).
**Proposition 3.5**.: _For all \(\alpha\in\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{ \infty}(G),h_{G})\) there exists a unique \(h\)-preserving automorphism \(\psi(\alpha)\) of \(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) such that_
\[\psi(\alpha)|_{\operatorname{L}^{\infty}(S_{N}^{+})}=\operatorname{id}\quad \text{and}\quad\psi(\alpha)\circ\nu_{i}=\nu_{i}\circ\alpha\quad\forall 1\leq i\leq N. \tag{2}\]
_Moreover, \(\psi\,:\,\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{ \infty}(G),h_{G})\to\operatorname{Aut}(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N} ^{+}),h)\) is a continuous group homomorphism._
Proof.: Let \(\alpha\in\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{\infty}(G),h_{G})\) and write \((\operatorname{L}^{2}(G\wr_{*,H}S_{N}^{+}),\lambda,\xi)\) for the GNS construction of the Haar state. To show that \(\psi(\alpha)\) exists, it suffices to show that there exists a well defined unitary \(U_{\alpha}\in\mathcal{B}(\operatorname{L}^{2}(G\wr_{*,H}S_{N}^{+}))\) such that, for every reduced operator \(a_{0}\nu_{i_{1}}(b_{1})\dots\nu_{i_{n}}(b_{n})a_{n}\in\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\),
\[U_{\alpha}(a_{0}\nu_{i_{1}}(b_{1})\dots\nu_{i_{n}}(b_{n})a_{n}\xi)=a_{0}\nu_{i_{1}}(\alpha(b_{1}))\dots\nu_{i_{n}}(\alpha(b_{n}))a_{n}\xi.\]
Indeed, if such a unitary is constructed, then \(\psi(\alpha)(x):=U_{\alpha}xU_{\alpha}^{*}\) does the job. To show that such a \(U_{\alpha}\) exists it suffices to check that, for any reduced operator \(x=a_{0}\nu_{i_{1}}(b_{1})\ldots\nu_{i_{n}}(b_{n})a_{n}\in\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) one has \(h(x^{*}x)=h(y^{*}y)\), where \(y:=a_{0}\nu_{i_{1}}(\alpha(b_{1}))\ldots\nu_{i_{n}}(\alpha(b_{n}))a_{n}\). This can be shown by induction on \(n\), by first observing that \(E_{H}\circ\alpha=\alpha\circ E_{H}\), where \(E_{H}\::\:\mathrm{L}^{\infty}(G)\to\mathrm{L}^{\infty}(H)\) is the canonical normal faithful conditional expectation. Indeed, since \(\alpha^{-1}\circ E_{H}\circ\alpha\) is a normal \(h_{G}\)-preserving conditional expectation onto \(\mathrm{L}^{\infty}(H)\), we have, by uniqueness in Takesaki's Theorem, that \(\alpha^{-1}\circ E_{H}\circ\alpha=E_{H}\). It follows that the element \(y\) is reduced whenever \(x\) is. Then, one can easily use the Haar state formula in Theorem 3.2 to prove our claim by induction. The uniqueness of \(\psi(\alpha)\) and the fact that it preserves the Haar state are clear.
The fact that it is a group homomorphism follows by uniqueness. To prove continuity, it suffices to check that the map \(\psi_{x}\::\:\mathrm{Aut}_{\mathrm{L}^{\infty}(H)}(\mathrm{L}^{\infty}(G),h_{G})\to\mathrm{L}^{2}(G\wr_{*,H}S_{N}^{+},h)\), \((\alpha\mapsto\psi(\alpha)(x)\xi)\), is continuous for all \(x\in\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\), where \(\xi\in\mathrm{L}^{2}(G\wr_{*,H}S_{N}^{+},h)\) is the canonical cyclic vector. Define \(\mathcal{C}\subset\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) to be the linear span of \(\mathrm{L}^{\infty}(S_{N}^{+})\) and the reduced operators and observe that \(\mathcal{C}\) is a \(\sigma\)-strongly dense unital \(*\)-subalgebra of \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\). Note that the subset:
\[\{x\in\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\::\:\psi_{x}\text{ is continuous }\}\subseteq\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\]
is clearly a subspace and it is \(\sigma\)-strongly closed since, \(\psi(\alpha)\) being \(h\)-preserving, we have,
\[\|\psi_{x}(\alpha)-\psi_{y}(\alpha)\|^{2}=\|\psi(\alpha)(x)\xi-\psi(\alpha)(y )\xi\|^{2}=\|\psi(\alpha)(x-y)\xi\|^{2}=h((x-y)^{*}(x-y))\]
for all \(\alpha\in\mathrm{Aut}_{\mathrm{L}^{\infty}(H)}(\mathrm{L}^{\infty}(G),h_{G})\) and \(x,y\in\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\). Hence, it suffices to show that \(\psi_{x}\) is continuous for all \(x\in\mathrm{L}^{\infty}(S_{N}^{+})\) and for all reduced operators \(x\). When \(x\in\mathrm{L}^{\infty}(S_{N}^{+})\) the map \(\psi_{x}\) is the constant map equal to \(x\xi\). When \(x=a_{0}\nu_{i_{1}}(b_{1})\ldots\nu_{i_{n}}(b_{n})a_{n}\in\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a reduced operator, we have, by induction on \(n\), \(\psi_{x}(\alpha)-\psi_{x}(\mathrm{id})=\sum_{k=1}^{n}\varphi_{k}(\alpha)\), where:
\[\varphi_{k}(\alpha):=a_{0}\nu_{i_{1}}(b_{1})a_{1}\ldots\nu_{i_{k-1}}(b_{k-1})a_{k-1}\,\nu_{i_{k}}(\alpha(b_{k})-b_{k})\,a_{k}\,\nu_{i_{k+1}}(\alpha(b_{k+1}))a_{k+1}\ldots\nu_{i_{n}}(\alpha(b_{n}))a_{n}\xi\]
with the natural conventions so that \(\varphi_{1}(\alpha)\) and \(\varphi_{n}(\alpha)\) make sense. Using the formula for the Haar state given in Theorem 3.2 and the fact that \(\alpha\) is \(h_{G}\)-preserving, a direct computation gives:
\[\|\varphi_{k}(\alpha)\|^{2}=\left(\prod_{l=0}^{n}\|a_{l}\|_{2,S_{N}^{+}}^{2} \right)\left(\prod_{1\leq l\leq n,l\neq k}\|b_{l}\|_{2,G}^{2}\right)\|\alpha(b_ {k})-b_{k}\|_{2,G}^{2}\]
where, for a CQG \(G\), \(\|\cdot\|_{2,G}\) is the \(2\)-norm given by the Haar state on \(\mathrm{L}^{\infty}(G)\). It follows that if \(\alpha_{s}\to_{s}\mathrm{id}\) in \(\mathrm{Aut}_{\mathrm{L}^{\infty}(H)}(\mathrm{L}^{\infty}(G),h_{G})\) then, for all \(1\leq k\leq n\), \(\varphi_{k}(\alpha_{s})\to_{s}0\) in norm in \(\mathrm{L}^{2}(G\wr_{*,H}S_{N}^{+})\). Hence, \(\psi_{x}\) is continuous at \(\mathrm{id}\) for every \(x\), and since \(\psi\) is a group homomorphism, \(\psi\) is continuous.
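To illustrate the estimate in the simplest case \(n=1\): for a reduced operator \(x=a_{0}\nu_{i_{1}}(b_{1})a_{1}\), the telescoping sum has a single term and the formula above reads
\[\|\psi_{x}(\alpha)-\psi_{x}(\mathrm{id})\|^{2}=\|\varphi_{1}(\alpha)\|^{2}=\|a_{0}\|_{2,S_{N}^{+}}^{2}\,\|a_{1}\|_{2,S_{N}^{+}}^{2}\,\|\alpha(b_{1})-b_{1}\|_{2,G}^{2},\]
so the continuity of \(\psi_{x}\) at \(\mathrm{id}\) is controlled directly by \(\|\alpha(b_{1})-b_{1}\|_{2,G}\).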
### An inductive construction for free wreath product algebras
We define inductively the C*-algebras \(\mathcal{A}_{i}\) (\(0\leq i\leq N\)) containing \(C(H)\otimes C(S_{N}^{+})\), with states \(\omega_{i}\in\mathcal{A}_{i}^{*}\), by \(\mathcal{A}_{0}=C(H)\otimes C(S_{N}^{+})\) and \(\omega_{0}=h_{H}\otimes h_{S_{N}^{+}}\) the Haar state on \(\mathcal{A}_{0}\). Let \(A_{0}:=C_{r}(H)\otimes C_{r}(S_{N}^{+})\) be the GNS-construction of \(\omega_{0}\) and \(M_{0}:=\mathrm{L}^{\infty}(H)\otimes\mathrm{L}^{\infty}(S_{N}^{+})=A_{0}^{\prime\prime}\) the von Neumann algebra generated in the GNS construction. Write \(\lambda_{0}\::\:\mathcal{A}_{0}\to A_{0}\) for the GNS morphism. We still denote by \(\omega_{0}\) the associated faithful state on \(A_{0}\) (resp. faithful normal state on \(M_{0}\)). Suppose that \((\mathcal{A}_{i},\omega_{i})\) is constructed and let \(A_{i}\) be the C*-algebra of the GNS construction of \(\omega_{i}\) with GNS morphism \(\lambda_{i}\::\:\mathcal{A}_{i}\to A_{i}\) and \(M_{i}=A_{i}^{\prime\prime}\) the von Neumann algebra generated in the GNS construction. Define the full free product \(\mathcal{A}_{i+1}:=\left(C(G)\otimes\mathbb{C}^{N}\right)\underset{C(H)\otimes\mathbb{C}^{N}}{*}\mathcal{A}_{i}\), where the amalgamation is with respect to the faithful unital \(*\)-homomorphisms \(r_{e_{i+1}}\::\:C(H)\otimes\mathbb{C}^{N}\to C(G)\otimes\mathbb{C}^{N}\) and \(s_{e_{i+1}}\::\:C(H)\otimes\mathbb{C}^{N}\to C(H)\otimes L_{i+1}\subset C(H)\otimes C(S_{N}^{+})\subset\mathcal{A}_{i}\). Let \(A_{i+1}:=\left(C_{r}(G)\otimes\mathbb{C}^{N},h_{G}\otimes\mathrm{tr}\right)\underset{C_{r}(H)\otimes\mathbb{C}^{N}}{*}(A_{i},\omega_{i})\) be the reduced free product with amalgamation with respect to the same maps but at the reduced level. Let \(\lambda_{i+1}\::\:\mathcal{A}_{i+1}\to A_{i+1}\) be the unique surjective unital \(*\)-homomorphism such that \(\lambda_{i+1}(a\otimes x)=\lambda_{G}(a)\otimes x\) for all \(a\in C(G)\), \(x\in\mathbb{C}^{N}\) and \(\lambda_{i+1}|_{\mathcal{A}_{i}}=\lambda_{i}\). Define \(\omega_{i+1}:=\left(\left(h_{G}\otimes\mathrm{tr}\right)*\omega_{i}\right)\circ\lambda_{i+1}\) and note that \(A_{i+1}\) is the C*-algebra of the GNS construction of \(\omega_{i+1}\) and \(\lambda_{i+1}\::\:\mathcal{A}_{i+1}\to A_{i+1}\) is the GNS morphism. It follows that \(M_{i+1}=A_{i+1}^{\prime\prime}=\left(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\right)\underset{\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{*}M_{i}\), where the amalgamated free product von Neumann
algebra is taken with respect to the faithful normal unital \(*\)-homomorphisms \(r_{e_{i+1}}:\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\to\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) and \(s_{e_{i+1}}:\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\to\operatorname{L}^{\infty}(H)\otimes L_{i+1}\subset\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\subset M_{i}\) and with respect to the faithful normal states \(h_{G}\otimes\operatorname{tr}\) and \(\omega_{i}\). We still denote by \(\omega_{i+1}\) the faithful free product state on \(A_{i+1}\) and the faithful normal free product state on \(M_{i+1}\).
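To fix ideas, here is the construction unrolled in the smallest case \(N=2\), where it stops after two steps:
\[\mathcal{A}_{0}=C(H)\otimes C(S_{2}^{+}),\qquad\mathcal{A}_{i+1}=\left(C(G)\otimes\mathbb{C}^{2}\right)\underset{C(H)\otimes\mathbb{C}^{2}=C(H)\otimes L_{i+1}}{*}\mathcal{A}_{i}\quad(i=0,1),\]
and Proposition 3.6 below identifies \((\mathcal{A}_{2},\omega_{2})\) with \((C(G\wr_{*,H}S_{2}^{+}),h)\).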
**Proposition 3.6**.: _There exist unique state-preserving \(*\)-isomorphisms_
\[\rho\,:\,(C(G\wr_{*,H}S_{N}^{+}),h)\to(\mathcal{A}_{N},\omega_{N})\quad\text{and}\quad\rho_{r}\,:\,(C_{r}(G\wr_{*,H}S_{N}^{+}),h)\to(A_{N},\omega_{N})\]
_and a normal \(*\)-isomorphism \(\rho_{r}^{\prime\prime}:\,(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+}),h)\to(M_{N},\omega_{N})\), where \(h\) is the Haar state on \(G\wr_{*,H}S_{N}^{+}\), such that, writing \(\mathcal{A}_{N}=\cup_{i}^{\uparrow}\mathcal{A}_{i}\), \(\rho\) maps \(C(S_{N}^{+})\) into \(\mathcal{A}_{0}=C(H)\otimes C(S_{N}^{+})\) via \(x\mapsto 1\otimes x\) and, for all \(1\leq i\leq N\), \(\rho(\nu_{i}(a))=a\otimes 1\in C(G)\otimes\mathbb{C}^{N}\subset\mathcal{A}_{i}=(C(G)\otimes\mathbb{C}^{N})\underset{C(H)\otimes\mathbb{C}^{N}=C(H)\otimes L_{i}}{*}\mathcal{A}_{i-1}\), and \(\rho_{r}\circ\lambda=\lambda_{N}\circ\rho\), with \(\rho_{r}^{\prime\prime}\) the unique normal extension of \(\rho_{r}\), where \(\lambda:\,C(G\wr_{*,H}S_{N}^{+})\to C_{r}(G\wr_{*,H}S_{N}^{+})\) is the canonical surjection._
Proof.: Uniqueness being obvious, it suffices to show the existence and that \(\rho\) intertwines the states. The existence of \(\rho_{r}\) and \(\rho_{r}^{\prime\prime}\) follows from that. It is easy to see, by the universal property of \(C(G\wr_{*,H}S_{N}^{+})=C(G)^{*_{H}N}*C(S_{N}^{+})/I\), that there exists a unital \(*\)-homomorphism \(\rho\) satisfying the conditions of the statement. To show that \(\rho\) is an isomorphism, we construct an inverse \(\rho^{\prime}:\mathcal{A}_{N}\to C(G\wr_{*,H}S_{N}^{+})\). To do so, we construct inductively unital \(*\)-homomorphisms \(\rho_{i}^{\prime}:\mathcal{A}_{i}\to C(G\wr_{*,H}S_{N}^{+})\) by letting \(\rho_{0}^{\prime}:\mathcal{A}_{0}=C(H)\otimes C(S_{N}^{+})\to C(G\wr_{*,H}S_{N}^{+})\), \(a\otimes b\mapsto\nu(a)b\) and, if \(\rho_{i-1}^{\prime}:\mathcal{A}_{i-1}\to C(G\wr_{*,H}S_{N}^{+})\) is defined, we use the universal property of the full amalgamated free product \(\mathcal{A}_{i}=(C(G)\otimes\mathbb{C}^{N})\underset{C(H)\otimes\mathbb{C}^{N}=C(H)\otimes L_{i}}{*}\mathcal{A}_{i-1}\) to define \(\rho_{i}^{\prime}:\mathcal{A}_{i}\to C(G\wr_{*,H}S_{N}^{+})\) to be the unique unital \(*\)-homomorphism such that \(\rho_{i}^{\prime}|_{\mathcal{A}_{i-1}}=\rho_{i-1}^{\prime}\) and \(\rho_{i}^{\prime}(a\otimes e_{j})=\nu_{i}(a)u_{ij}\) for all \(a\otimes e_{j}\in C(G)\otimes\mathbb{C}^{N}\), where \((e_{j})_{1\leq j\leq N}\) is the canonical orthonormal basis of \(\mathbb{C}^{N}\). Then, since \(\rho_{i}^{\prime}|_{\mathcal{A}_{i-1}}=\rho_{i-1}^{\prime}\), there exists a unique unital \(*\)-homomorphism \(\rho^{\prime}:\mathcal{A}_{N}=\cup_{i}^{\uparrow}\mathcal{A}_{i}\to C(G\wr_{*,H}S_{N}^{+})\) such that \(\rho^{\prime}|_{\mathcal{A}_{i}}=\rho_{i}^{\prime}\). It is then easy to check that \(\rho^{\prime}\) is the inverse of \(\rho\). It remains to show that \(\rho\) intertwines the states. By the uniqueness property of the state \(h\) stated in Theorem 3.2, it suffices to show that \(\omega_{N}(\rho(c))=0\) for \(c\in C(G\wr_{*,H}S_{N}^{+})\) a reduced operator in the sense of Theorem 3.2. So let \(c=a_{0}\nu_{i_{1}}(b_{1})a_{1}\dots\nu_{i_{n}}(b_{n})a_{n}\in C(G\wr_{*,H}S_{N}^{+})\) be a reduced operator and note that, if at least one of the \(i_{k}\) is equal to \(N\), then \(\rho(c)\) is a word with letters alternating between \(C(G)\otimes\mathbb{C}^{N}\ominus C(H)\otimes\mathbb{C}^{N}\) and \(\mathcal{A}_{N-1}\ominus C(H)\otimes L_{N}\) (where, for an inclusion \(C\subset B\) with conditional expectation, \(B\ominus C\) denotes the kernel of the conditional expectation onto \(C\)), so, by definition of the free product state, \(\omega_{N}(\rho(c))=0\). If \(i_{k}\leq N-1\) for all \(k\), then we can view \(\rho(c)\in\mathcal{A}_{N-1}\) and, since \(\omega_{N}|_{\mathcal{A}_{N-1}}=\omega_{N-1}\), we can repeat the argument inductively and eventually deduce that \(\omega_{N}(\rho(c))=0\).
In the setting of von Neumann algebras, the inductive construction implies the following.
**Proposition 3.7**.: _The following holds for \(G\wr_{*,H}S_{N}^{+}\), with \(\psi\) defined in Equation (2)._
1. _The modular group of the Haar state_ \(h\) _is_ \(\sigma_{t}:=\psi(\sigma_{t}^{G})\)_, for all_ \(t\in\mathbb{R}\)_, where_ \((\sigma_{t}^{G})_{t\in\mathbb{R}}\) _is the modular group of the Haar state of_ \(G\) _on_ \(\operatorname{L}^{\infty}(G)\)_._
2. _The scaling group is_ \(\tau_{t}:=\psi(\tau_{t}^{G})\)_,_ \(\forall t\in\mathbb{R}\)_, where_ \(\tau_{t}^{G}\) _is the scaling group of_ \(G\)_._
3. \(T(G\wr_{*}S_{N}^{+})=\{t\in\mathbb{R}:\,\tau_{t}^{G}=\operatorname{id}\}\) _for all_ \(N\geq 2\) _with_ \(N\neq 3\)_, and_ \(\tau(G\wr_{*,H}S_{N}^{+})=\tau(G)\) _for all_ \(N\in\mathbb{N}^{*}\)_._
Proof.: (1). We first note that, for all \(t\in\mathbb{R}\), \(\sigma_{t}^{G}\in\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{\infty}(G),h_{G})\). From [11, Theorem 2.6], the modular group of the free product state \(\omega_{1}\) on the amalgamated free product
\[M_{1}=(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N})\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}=\operatorname{L}^{\infty}(H)\otimes L_{1}}{*}(\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+}))\]
satisfies \(\sigma_{t}^{\omega_{1}}|_{\operatorname{L}^{\infty}(S_{N}^{+})}=\sigma_{t}^{h_{S_{N}^{+}}}=\operatorname{id}\) and \(\sigma_{t}^{\omega_{1}}(x\otimes 1)=\sigma_{t}^{G}(x)\otimes 1\) for all \(x\in\operatorname{L}^{\infty}(G)\). Identifying \((M_{N},\omega_{N})\simeq(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+}),h)\) with the isomorphism of Proposition 3.6 and applying inductively [11, Theorem 2.6] to our finite sequence of von Neumann algebras \((M_{i})_{1\leq i\leq N}\) gives the result.
(2). Note that, for all \(t\in\mathbb{R}\), \(\tau_{t}^{G}\in\operatorname{Aut}_{\operatorname{L}^{\infty}(H)}(\operatorname{L}^{\infty}(G),h_{G})\), so that \(\tau_{t}:=\psi(\tau_{t}^{G})\) is well defined; recall from (1) that \(\sigma_{t}=\psi(\sigma_{t}^{G})\). It suffices to check the characterizing relation \(\Delta\circ\sigma_{t}=(\tau_{t}\otimes\sigma_{t})\circ\Delta\) on generators. By
definition of \(\Delta\) we have, for all \(1\leq i\leq N\) and all \(a\in\operatorname{Pol}(G)\),
\[\Delta(\sigma_{t}(\nu_{i}(a))) = \Delta(\nu_{i}(\sigma_{t}^{G}(a)))=\sum_{j=1}^{N}(\nu_{i}\otimes \nu_{j})(\Delta_{G}(\sigma_{t}^{G}(a)))(u_{ij}\otimes 1)\] \[= \sum_{j=1}^{N}(\nu_{i}\otimes\nu_{j})((\tau_{t}^{G}\otimes\sigma_ {t}^{G})\Delta_{G}(a))(u_{ij}\otimes 1)\] \[= \sum_{j=1}^{N}(\tau_{t}\otimes\sigma_{t})(\nu_{i}\otimes\nu_{j}) (\Delta_{G}(a))(u_{ij}\otimes 1)=(\tau_{t}\otimes\sigma_{t})\Delta(\nu_{i}(a)),\]
where, in the last equality, we used that \(\tau_{t}(u_{ij})=u_{ij}\). Since the modular group and scaling group act as the identity on \(\operatorname{L}^{\infty}(S_{N}^{+})\), the equality \(\Delta\sigma_{t}(u_{ij})=(\tau_{t}\otimes\sigma_{t})\Delta(u_{ij})\) is clear.
(3). Let us compute the \(T\)-invariant in the non-amalgamated case. For \(N\geq 4\), the irreducible representations of \(G\wr_{*}S_{N}^{+}\) are completely classified in [11] and it follows that the only one-dimensional irreducible representation is the trivial one. Hence, \(T(G\wr_{*}S_{N}^{+})=\{t\in\mathbb{R}\,:\,\tau_{t}=\operatorname{id}\}\) and we conclude the proof using assertion (2). For \(N=2\), we know from Proposition 2.23 that \(G\wr_{*}S_{2}^{+}\simeq G^{*2}\rtimes S_{2}\) and the \(T\)-invariant of such quantum groups is computed in Proposition 2.17. Using also Proposition 2.8 concludes the computation.
To compute the \(\tau\)-invariant, we may and will assume that \(N\geq 2\). Since \(\psi\) is continuous, one has \(\tau(G\wr_{*,H}S_{N}^{+})\subseteq\tau(G)\). Let us show that \(\tau(G)\subseteq\tau(G\wr_{*,H}S_{N}^{+})\). Applying Remark 2.1 with \(M=\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})=\left(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\right)\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}=\operatorname{L}^{\infty}(H)\otimes L_{N}}{*}M_{N-1}\), \(\omega\) the Haar state and \(A=\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\), we see that the map \(\mathbb{R}\to\operatorname{Aut}(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N})\), \(t\mapsto\sigma_{t}|_{A}=\sigma_{t}^{G}\otimes\operatorname{id}\), is \(\tau(G\wr_{*,H}S_{N}^{+})\)-continuous. Hence, \((t\mapsto\sigma_{t}^{G})\) is \(\tau(G\wr_{*,H}S_{N}^{+})\)-continuous so \(\tau(G)\subseteq\tau(G\wr_{*,H}S_{N}^{+})\).
## 4. Approximation properties
Theorem A is a consequence of the following more general statement.
**Theorem 4.1**.: _For \(G\) a CQG with \(H\) a dual quantum subgroup and \(N\geq 1\), the following holds_
1. \(G\) _is exact if and only if_ \(G\wr_{*,H}S_{N}^{+}\) _is exact._
2. _If_ \(\operatorname{Irr}(H)\) _is finite, then_ \(G\) _has the Haagerup property if and only if_ \(G\wr_{*,H}S_{N}^{+}\) _has the Haagerup property._
3. _If_ \(G\) _is Kac and_ \(H\) _is co-amenable then_ \(G\) _is hyperlinear if and only if_ \(G\wr_{*,H}S_{N}^{+}\) _is hyperlinear._
4. \(\bullet\) _If_ \(N\geq 5\)_,_ \(G\wr_{*,H}S_{N}^{+}\) _is not co-amenable for all_ \(G\) _and all_ \(H\)_._ \(\bullet\) _If_ \(N\in\{3,4\}\) _and_ \(H\) _is a proper dual quantum subgroup of_ \(G\) _(see 2.11), then_ \(G\wr_{*,H}S_{N}^{+}\) _is not co-amenable whenever_ \(G\) _is Kac._
5. \(G\wr_{*,H}S_{2}^{+}\) _is co-amenable if and only if_ \(G^{*_{H}2}\) _is co-amenable._
6. \(G\) _is_ \(K\)_-amenable if and only if_ \(G\wr_{*,H}S_{N}^{+}\) _is_ \(K\)_-amenable._
Proof.: (1). It is known from [1] that \(C_{r}(S_{N}^{+})\) is exact. Hence, the result follows from [11, Corollary 3.30].
(2). Note that, for \(N\leq 4\), \(S_{N}^{+}\) is a co-amenable Kac CQG [10] so that \(\operatorname{L}^{\infty}(S_{N}^{+})\) has the Haagerup property and, for \(N\geq 5\), \(\operatorname{L}^{\infty}(S_{N}^{+})\) has the Haagerup property by [1]. We show the result by using the inductive construction of \(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})=M_{N}\), with \(M_{0}=\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\) and \(M_{i+1}=(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N})\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{\ast}M_{i}\). For \(i=0\), \(M_{0}=\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\) has the Haagerup property, because \(\operatorname{L}^{\infty}(H)\) is finite-dimensional. Assume that \(M_{i}\) has the Haagerup property. It follows from [10, Corollary 8.2] that \(M_{i+1}=(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N})\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{\ast}M_{i}\) also has the Haagerup property whenever \(G\) has the Haagerup property. By induction, \(M_{N}=\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) has the Haagerup property. Suppose now that \(G\wr_{*,H}S_{N}^{+}\) has the Haagerup property. It is easy to check, using the definition of the Haagerup property of [10] with respect to a faithful normal
state (f.n.s.) that if \((M,\omega)\), \(\omega\in M_{*}\) a f.n.s., has the Haagerup property and \(P\subset M\) is a von Neumann subalgebra with a normal and state-invariant conditional expectation \(E\,:\,M\to P\), then \((P,\omega|_{P})\) also has the Haagerup property. Hence, since by construction of amalgamated free products we do have such conditional expectations onto each leg of the amalgamated free product, we deduce from the state-preserving isomorphism given by the inductive construction
\[(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+}),h)\simeq\left((\mathrm{L}^{\infty}(G) \otimes\mathbb{C}^{N})\underset{\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{ \ast}M_{N-1},\omega_{N}\right)\]
that \(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) has the Haagerup property which implies that \(\mathrm{L}^{\infty}(G)\) also has it.
(3). By [13, Corollary 3.7], \(S_{N}^{+}\) is hyperlinear for all \(N\in\mathbb{N}^{*}\). Assuming that \(G\) is Kac (so that \(G\wr_{*,H}S_{N}^{+}\) also is), and that \(H\) is co-amenable so that \(\mathrm{L}^{\infty}(H)\) is amenable, we can apply the strategy of the proof of (2), using [11, Corollary 4.5], to deduce that if \(G\) is hyperlinear then so is \(G\wr_{*,H}S_{N}^{+}\). The converse is clear.
(4) and (5). The case \(N=2\) is a consequence of Propositions 2.23 and 2.17. Suppose that \(N\geq 3\) and \(G\wr_{*,H}S_{N}^{+}\) is co-amenable. Since \(G^{*_{H}N}\) and \(S_{N}^{+}\) are both compact quantum subgroups of \(G\wr_{*,H}S_{N}^{+}\) by Remark 2.22, it follows that \(G^{*_{H}N}\) and \(S_{N}^{+}\) are both co-amenable [10]. It implies that \(N\leq 4\), since \(S_{N}^{+}\) is not co-amenable for all \(N\geq 5\) [1]. Moreover, since \(N\geq 3\), \(G^{*_{H}N}\) is not co-amenable whenever \(H\) is proper by Proposition 2.13, since \(H\) has infinite index in \(G*_{H}G\).
(6). By [10], \(S_{N}^{+}\) is \(K\)-amenable; hence, if \(G\) is \(K\)-amenable, so is \(H\), and a direct application of [14, Theorems 5.1 and 5.2] implies that \(G\wr_{*,H}S_{N}^{+}\) is \(K\)-amenable. Let us prove the converse. To ease the notations we write \(\mathcal{A}=C(G\wr_{*,H}S_{N}^{+})\) and \(A=C_{r}(G\wr_{*,H}S_{N}^{+})\) during this proof. Let \(\lambda\,:\,\mathcal{A}\to A\) be the canonical surjection and assume that \(G\wr_{*,H}S_{N}^{+}\) is \(K\)-amenable, i.e. there exists \(\alpha\in KK(A,\mathbb{C})\) such that \([\lambda]\otimes\alpha=[\varepsilon]\in KK(\mathcal{A},\mathbb{C})\), where \(\otimes\) denotes here the Kasparov product and \(\varepsilon\,:\,\mathcal{A}\to\mathbb{C}\) the counit of the free wreath product. Let us show that \(G^{*_{H}N}\) is \(K\)-amenable (and so \(G\) also is). Consider the canonical inclusion \(\pi\,:\,C(G)^{*_{H}N}\to\mathcal{A}\). Note that \(\pi\) does not intertwine the comultiplications so that we cannot deduce the \(K\)-amenability of \(G^{*_{H}N}\) by using the stability of \(K\)-amenability by dual quantum subgroup. However, by Theorem 3.2, the canonical inclusion \(\pi\,:\,C(G)^{*_{H}N}\to\mathcal{A}\) intertwines the Haar states hence, there exists a unital \(*\)-homomorphism \(\pi_{r}\,:\,C_{r}(G^{*_{H}N})\to A\) such that \(\pi_{r}\circ\lambda_{G}=\lambda\circ\pi\), where \(\lambda_{G}\,:\,C(G^{*_{H}N})\to C_{r}(G^{*_{H}N})\) is the canonical surjection. Define \(\beta:=[\pi_{r}]\otimes\alpha\in KK(C_{r}(G^{*_{H}N}),\mathbb{C})\). One has:
\[[\lambda_{G}]\otimes\beta = [\lambda_{G}]\otimes[\pi_{r}]\otimes\alpha=[\pi_{r}\circ\lambda_ {G}]\otimes\alpha=[\lambda\circ\pi]\otimes\alpha\] \[= [\pi]\otimes[\lambda]\otimes\alpha=[\pi]\otimes[\varepsilon]=[ \varepsilon\circ\pi]=[\varepsilon_{G}].\]
This shows that \(G^{*_{H}N}\) is \(K\)-amenable.
**Remark 4.2**.: To deduce Theorem A it remains to deduce the co-amenability statement for \(N=2\) which follows from Proposition 2.8.
**Remark 4.3**.: As explained in the Introduction, except for \(N=2\), the stability of the Haagerup property under the free wreath product construction was open. In the case where \(G=\widehat{\Gamma}\) is the dual of a discrete group, and \(H\) is a finite subgroup, then the stability of the Haagerup property and of the weak amenability with constant \(1\) has been proved in [11]. In the general but non-amalgamated case, only the stability of the central ACPAP, which is a strengthening of both the Haagerup property and the weak amenability with constant \(1\) was known. However, it follows from Proposition 2.17 that \(\Lambda_{cb}(G\wr_{*}S_{2}^{+})=\Lambda_{cb}(G)\). For \(N\geq 3\), we believe that weak amenability with constant \(1\) is also stable under the free wreath product construction and one obvious way to do the proof would be by induction, using our inductive construction of the von Neumann algebra as well as an amalgamated version (over a finite dimensional subalgebra) of the result of Xu-Ricard [15]. Let us mention that for classical wreath products it has been proved by Ozawa [16] that for any non-trivial discrete group \(\Lambda\) and any non-amenable discrete group \(\Gamma\), the classical wreath product \(\Lambda\wr\Gamma\) is not weakly amenable.
## 5. The von Neumann algebra of a free wreath product
This section contains the proofs of Theorems B and C. Recall that \((\sigma_{t}^{G})_{t\in\mathbb{R}}\) denotes the modular group of the Haar state on \(\mathrm{L}^{\infty}(G)\).
Theorem B is a direct consequence of the following more general statement.
**Theorem 5.1**.: _Suppose that \(\mathrm{Irr}(G)\) is infinite, \(\mathrm{Irr}(H)\) is finite, \(\mathrm{L}^{\infty}(G)^{\prime}\cap\mathrm{L}^{\infty}(H)=\mathbb{C}1\) and \(N\geq 4\). Then \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a non-amenable full and prime factor without any Cartan subalgebra and the following holds._
* _If_ \(G\) _is Kac then_ \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) _is a type_ \(\mathrm{II}_{1}\) _factor._
* _If_ \(G\) _is not Kac then_ \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) _is a type_ \(\mathrm{III}_{\lambda}\) _factor for some_ \(\lambda\neq 0\) _and:_ \[T(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+}))=\{t\in\mathbb{R}\,:\exists u\in\mathcal{U}(\mathrm{L}^{\infty}(H)),\ \sigma_{t}^{G}=\mathrm{Ad}(u)\}.\]
_Moreover, in the non-amalgamated case, we have \(\tau(\mathrm{L}^{\infty}(G\wr_{*}S_{N}^{+}))=\tau(G)\)._
Proof of Theorem 5.1.: First observe that, by Lemma 2.4, both \(\mathrm{L}^{\infty}(G)\) and \(\mathrm{L}^{\infty}(S_{N}^{+})\) are diffuse (since \(N\geq 4\)). Hence, \(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) and \(\mathrm{L}^{\infty}(H)\otimes\mathrm{L}^{\infty}(S_{N}^{+})\) are also diffuse and the inclusions \(\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\subset\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) and \(\mathrm{L}^{\infty}(H)\otimes L_{k}\subset\mathrm{L}^{\infty}(H)\otimes\mathrm{L}^{\infty}(S_{N}^{+})\) are without trivial corner for all \(1\leq k\leq N\), in the sense of [13] (since \(\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\) is finite dimensional hence purely atomic, see [13, Lemma 5.2]).
We show the result by using the inductive construction of \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})=M_{N}\), with \(M_{0}=\mathrm{L}^{\infty}(H)\otimes\mathrm{L}^{\infty}(S_{N}^{+})\) and \(M_{i+1}=(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N})\underset{\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{*}M_{i}\). For \(i=0\), we know that \(M_{0}\) and \(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) are diffuse, so the inclusions \(B=\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\subset\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) and \(B=\mathrm{L}^{\infty}(H)\otimes L_{1}\subset M_{0}\) are without trivial corner. Moreover, it follows from Lemma 2.20 that:
\[M_{0}^{\prime}\cap(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N})^{\prime}\cap B \subset(\mathrm{L}^{\infty}(G)^{\prime}\cap\mathrm{L}^{\infty}(H))\otimes( \mathrm{L}^{\infty}(S_{N}^{+})^{\prime}\cap L_{1})=\mathbb{C}.\]
Hence, we may apply [13, Theorem E] to deduce that \(M_{1}\) is a non-amenable prime factor. In particular, \(M_{1}\) is diffuse and a direct induction shows that \(M_{k}\) is a non-amenable prime factor for all \(1\leq k\leq N\); in particular, \(M_{N}=\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a non-amenable prime factor.
Suppose that \(A\subset M:=\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a Cartan subalgebra.
By the previous discussion, \(M_{N-1}\) is a non-amenable factor. Hence, it has no amenable direct summand and we may apply [1, Theorem B] to deduce that \(A\prec_{M}\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\). Since \(\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\) is finite dimensional, this contradicts the fact that \(A\), being maximal abelian in the diffuse von Neumann algebra \(M\), is itself diffuse.
To see that \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is full, we may now apply [1, Theorem 4.10] (since \(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) is diffuse and \(\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\) is finite dimensional so the inclusion \(\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\subset\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) is entirely non-trivial and, as shown before, \(M_{N-1}\) is diffuse).
When \(G\) is Kac, \(G\wr_{*,H}S_{N}^{+}\) is also Kac; hence, \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a \(\mathrm{II}_{1}\)-factor. When \(G\) is not Kac, \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is a non-amenable factor by the first part of the proof but not a finite factor by [14, Theorem 8]. Hence, it is a type \(\mathrm{III}_{\lambda}\) factor with \(\lambda\neq 0\) since it is a full factor [14, Proposition 3.9]. To compute the \(T\)-invariant, we view again \(M:=\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})=\bigl(\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}\bigr)\underset{\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}}{*}M_{N-1}\) and we also note that, since \(M_{N-1}\) is diffuse and the amalgam \(B:=\mathrm{L}^{\infty}(H)\otimes L_{N}=\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\) is finite dimensional, we have \(M_{N-1}\not\prec_{M}B\). We can now use [10] to deduce that \(T:=T(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+}))=\{t\in\mathbb{R}\,:\,\exists u\in\mathcal{U}(B)\,\,\sigma_{t}=\mathrm{Ad}(u)\}\), where \((\sigma_{t})_{t\in\mathbb{R}}\) is the modular group of the Haar state on \(\mathrm{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\), which coincides with the free product state. In particular one has \(\sigma_{t}|_{\mathrm{L}^{\infty}(G)\otimes\mathbb{C}^{N}}=\sigma_{t}^{G}\otimes\mathrm{id}\). Let \(T^{\prime}:=\{t\in\mathbb{R}\,:\,\exists u\in\mathcal{U}(\mathrm{L}^{\infty}(H)),\ \sigma_{t}^{G}=\mathrm{Ad}(u)\}\). It is clear that \(T^{\prime}\subseteq T\). Let \(t\in T\) and \(u\in\mathcal{U}(B)\) such that \(\sigma_{t}=\mathrm{Ad}(u)\). Write \(u=\sum_{k=1}^{N}u_{k}\otimes e_{k}\in\mathrm{L}^{\infty}(H)\otimes\mathbb{C}^{N}\), where \(u_{k}\in\mathrm{L}^{\infty}(H)\) is unitary for all \(1\leq k\leq N\). For all \(b\in\mathrm{L}^{\infty}(G)\) and all \(1\leq k\leq N\) we find \(\sigma_{t}(b\otimes e_{k})=\sigma_{t}^{G}(b)\otimes e_{k}=u(b\otimes e_{k})u^{\ast}=u_{k}bu_{k}^{\ast}\otimes e_{k}\). Hence, \(\sigma_{t}^{G}=\mathrm{Ad}(u_{1})\) which implies that \(t\in T^{\prime}\). Note that we could also have computed the \(T\)-invariant by using the
results of [10]. Let us now compute Connes' \(\tau\)-invariant in the non-amalgamated case. It suffices to show that, for any sequence \((t_{n})_{n}\) of real numbers, one has \(t_{n}\to 0\) for \(\tau(G\wr_{*}S_{N}^{+})\) if and only if \(t_{n}\to 0\) for \(\tau(G)\). Assume that \(t_{n}\to 0\) for \(\tau(\operatorname{L}^{\infty}(G\wr_{*}S_{N}^{+}))\). Then, viewing again \(\operatorname{L}^{\infty}(G\wr_{*}S_{N}^{+})=\big(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\big)\underset{B}{\ast}M_{N-1}\), where this time \(B=\mathbb{C}^{N}=L_{N}\), we may apply [11, Theorem B] to deduce that there exists a sequence of unitaries \(v_{n}\in\mathcal{U}(B)\) such that \(\alpha_{n}:=\operatorname{Ad}(v_{n})\circ\sigma_{t_{n}}\to\operatorname{id}\). To conclude the proof, we apply the restriction argument from Remark 2.1 to deduce that \(\alpha_{n}|_{\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}}\to\operatorname{id}\) in \(\operatorname{Aut}(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N})\). Note that, since \(B\) is in the center of \(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\), one has \(\alpha_{n}|_{\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}}=\sigma_{t_{n}}^{G}\otimes\operatorname{id}\) and we deduce that \(\sigma_{t_{n}}^{G}\to\operatorname{id}\), i.e. \(t_{n}\to 0\) in \(\tau(G)\). The converse statement follows from the continuity of the group homomorphism \(\psi\) defined by Equation (2).
**Remark 5.2**.: The assumption \(N\neq 2\) in Theorem B is necessary. Indeed, when \(N=2\), we know from Proposition 2.23 and Section 2.5 that \(\operatorname{L}^{\infty}(G\wr_{*}S_{2}^{+})\simeq\operatorname{L}^{\infty}(G^{*2})\otimes\mathbb{C}^{2}\). In particular, \(\operatorname{L}^{\infty}(G\wr_{*}S_{2}^{+})\) is never a factor. For \(N=3\), our proof does not work (\(\operatorname{L}^{\infty}(S_{3}^{+})\simeq\mathbb{C}^{6}\) is not diffuse) but we don't know if \(\operatorname{L}^{\infty}(G\wr_{*}S_{3}^{+})\) can nonetheless be a factor. However, whenever \(|\operatorname{Irr}(G)|\geq 3\) if \(N=2\) or whenever \(G\) is non-trivial if \(N=3\), Proposition 2.8 shows that \(\operatorname{L}^{\infty}(G^{*N})\) is a full and prime factor without any Cartan subalgebras and of type II\({}_{1}\) if \(G\) is Kac or of type III\({}_{\lambda}\), \(\lambda\neq 0\), with \(T\)-invariant given by \(\{t\in\mathbb{R}\,:\,\sigma_{t}^{G}=\operatorname{id}\}\), if \(G\) is not Kac. Finally, let us mention that the assumption \(|\operatorname{Irr}(G)|=\infty\) in Theorem B does not seem necessary but is useful in our proof.
Theorem C is a direct consequence of the following.
**Theorem 5.3**.: _Suppose that \(\operatorname{Irr}(G)\) is infinite, \(\operatorname{Irr}(H)\) is finite and \(N\geq 2\). The following holds._
1. _If_ \(\operatorname{L}^{\infty}(G)\) _is amenable then,_ \(\forall 1\leq i\leq N\)_, the von Neumann subalgebra of_ \(\operatorname{L}^{\infty}(G\wr_{*H}S_{N}^{+})\) _generated by_ \(\{\nu_{i}(a)u_{ij}\,:\,a\in\operatorname{L}^{\infty}(G),\,1\leq j\leq N\}\) _is maximal amenable with expectation._
2. _If_ \(G\) _is Kac then_ \(\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{4}^{+})\simeq\big(\nu(\operatorname{L}^{\infty}(H))\cup\operatorname{L}^{\infty}(S_{4}^{+})\big)^{\prime\prime}\subset\operatorname{L}^{\infty}(G\wr_{*,H}S_{4}^{+})\) _is maximal amenable._
Proof.: (1). We use Serre's _devissage_ in the following way. Fix \(1\leq i\leq N\) and let \(\mathcal{T}_{N}^{\prime}\) be the graph obtained from \(\mathcal{T}_{N}\) by removing the edge \(v_{i}\) as well as its inverse edge \(\overline{v}_{i}\). This graph is still connected. Let us denote by \(M_{2}\) the fundamental von Neumann algebra of our graph of von Neumann algebras restricted to \(\mathcal{T}_{N}^{\prime}\), so that we have \(\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\subset M_{2}\). It follows from [12] that \(\operatorname{L}^{\infty}(G\wr_{*,H}S_{N}^{+})\) is canonically isomorphic to \(M_{1}\underset{B}{\ast}M_{2}\), where the amalgamation is \(B=\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{N}\subset M_{1}:=\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\) and \(B=\operatorname{L}^{\infty}(H)\otimes L_{i}\subset\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{N}^{+})\subset M_{2}\). Note that the von Neumann algebra generated by \(\{\nu_{i}(a)u_{ij}\,:\,a\in\operatorname{L}^{\infty}(G),\,1\leq j\leq N\}\) is then identified with \(M_{1}\subset M_{1}\underset{B}{\ast}M_{2}\) and \(M_{1}\simeq\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{N}\simeq\operatorname{L}^{\infty}(G\times\overline{\mathbb{Z}/N\mathbb{Z}})\) is diffuse, by Lemma 2.4. Hence \(M_{1}\not\prec_{M_{1}}B\) and, since \(M_{1}\) is amenable, we may apply [11, Main Theorem] to deduce that \(M_{1}\) is maximal amenable.
(2). Consider the inductive construction \(\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{4}^{+})=M_{0}\subset M_{1}\subset\cdots\subset M_{4}=\operatorname{L}^{\infty}(G\wr_{*,H}S_{4}^{+})\), where \(M_{k+1}=\big(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{4}\big)\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{4}}{\ast}M_{k}\). Recall that \(\operatorname{L}^{\infty}(S_{4}^{+})\) is diffuse and amenable, and so is \(M_{0}=\operatorname{L}^{\infty}(H)\otimes\operatorname{L}^{\infty}(S_{4}^{+})\). Let \(M_{0}\subset Q\subset M_{4}\) be an amenable von Neumann algebra. Then \(M_{0}\subset Q\cap M_{3}\), so \(Q\cap M_{3}\not\prec_{M_{3}}\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{4}\) since \(M_{0}\) is diffuse. Hence, we can apply [11, Main Theorem] to the amalgamated free product \(M_{4}=\big(\operatorname{L}^{\infty}(G)\otimes\mathbb{C}^{4}\big)\underset{\operatorname{L}^{\infty}(H)\otimes\mathbb{C}^{4}}{\ast}M_{3}\) and deduce that \(Q\subset M_{3}\). By a direct induction we find that \(Q\subset M_{0}\). Note that the Kac assumption on \(G\) is there to ensure that each \(M_{k}\) is finite, so the inclusion \(Q\cap M_{k}\subset M_{k}\) is always with expectation. We do not know if this result still holds in the non-Kac case.
## 6. Free wreath product of a fundamental quantum group
Let \((\mathcal{G},(G_{q})_{q\in V(\mathcal{G})},(G_{e})_{e\in E(\mathcal{G})},(s_{e})_{e \in E(\mathcal{G})})\) be _a graph of CQG_ over the connected graph \(\mathcal{G}\) i.e.:
* For every \(q\in V(\mathcal{G})\) and every \(e\in E(\mathcal{G})\), \(G_{q}\) and \(G_{e}\) are CQG.
* For all \(e\in E(\mathcal{G})\), \(G_{\overline{e}}=G_{e}\).
* For every \(e\in E(\mathcal{G})\), \(s_{e}\,:\,C(G_{e})\to C(G_{s(e)})\) is a faithful unital \(*\)-homomorphism intertwining the comultiplications (so \(G_{e}\) is a dual quantum subgroup of \(G_{s(e)}\)).
Consider the graph of \(C^{*}\)-algebras \((\mathcal{G},C(G_{q}),C(G_{e}),s_{e})\) and fix a maximal subtree \(\mathcal{T}\subset\mathcal{G}\). Define \(C(G)\) as the maximal fundamental C*-algebra of the graph of C*-algebras \((\mathcal{G},C(G_{q}),C(G_{e}),s_{e})\) relative to the maximal subtree \(\mathcal{T}\). By the universal property of \(C(G)\), there exists a unique unital \(*\)-homomorphism \(\Delta\,:\,C(G)\to C(G)\otimes C(G)\) such that \(\Delta|_{C(G_{q})}=\Delta_{C(G_{q})}\) and \(\Delta(u_{e})=u_{e}\otimes u_{e}\) for all \(q\in V(\mathcal{G})\) and all \(e\in E(\mathcal{G})\). The pair \(G:=(C(G),\Delta)\) is a CQG, known as the fundamental quantum group and studied in [10]. We will denote this CQG by \(G=\pi_{1}(\mathcal{G},G_{p},G_{e},\mathcal{T})\). It is known from [10] that \(C(G)\) is indeed the full C*-algebra of \(G\). Let us note that, since \(s_{e}\) identifies \(G_{e}\) as a dual quantum subgroup of \(G_{s(e)}\), we have conditional expectations \(E_{e}^{s}:C(G_{s(e)})\to s_{e}(C(G_{e}))\) which are not necessarily GNS-faithful.
Since \(G_{e}\subset G_{s(e)}\) is a dual quantum subgroup, \(s_{e}\) factorizes to a faithful unital \(*\)-homomorphism \(s_{e}\,:\,C_{r}(G_{e})\to C_{r}(G_{s(e)})\) intertwining the comultiplications and we have Haar-state-preserving faithful conditional expectations \(C_{r}(G_{s(e)})\to s_{e}(C_{r}(G_{e}))\). Hence we get a graph of C*-algebras \((\mathcal{G},C_{r}(G_{q}),C_{r}(G_{e}),s_{e})\) with faithful conditional expectations whose vertex-reduced fundamental C*-algebra is the reduced C*-algebra \(C_{r}(G)\) of \(G\) [10].
**Definition 6.1**.: The _loop subgroup_ of \(G\) is the group \(\Gamma\subset\mathcal{U}(C(G))\) generated by \(\{u_{e}\,:\,e\in E(\mathcal{G})\}\).
Note that the inclusion \(\Gamma\subset\mathcal{U}(C(G))\) extends to a unital \(*\)-homomorphism \(\pi\,:\,C^{*}(\Gamma)\to C(G)\).
**Proposition 6.2**.: _For \(G=\pi_{1}(\mathcal{G},G_{p},G_{e},\mathcal{T})\) with loop subgroup \(\Gamma\) the following holds._
1. \(\pi\,:\,C^{*}(\Gamma)\to C(G)\) _is faithful and intertwines the comultiplications._
2. _For all_ \(e\in E(\mathcal{G})\)_,_ \(N\in\mathbb{N}^{*}\)_, there exists a unique unital_ \(*\)_-homomorphism_ \[s_{e}^{N}\,:\,C\left(G_{e}\wr_{*}S_{N}^{+}\right)\to C\left(G_{s(e)}\wr_{*}S_{N}^{+}\right)\ \,\text{such that}\,\ s_{e}^{N}\circ\nu_{i}=\nu_{i}\circ s_{e},\,\ s_{e}^{N}|_{C(S_{N}^{+})}=\mathrm{id},\,\ \forall 1\leq i\leq N.\]
_Moreover, \(s_{e}^{N}\) intertwines the comultiplications and restricts to a faithful map \(\mathrm{Pol}(G_{e}\wr_{*}S_{N}^{+})\hookrightarrow\mathrm{Pol}(G_{s(e)}\wr_{ *}S_{N}^{+})\) i.e. \(G_{e}\wr_{*}S_{N}^{+}\) is a dual quantum subgroup of \(G_{s(e)}\wr_{*}S_{N}^{+}\)._
Proof.: (1). Let \(C_{\Gamma}\) be the image of \(\pi\), i.e. the C*-algebra generated by \(\{u_{e}\,:\,e\in E(\mathcal{G})\}\) in \(C(G)\). To show that \(\pi\) is faithful, it suffices to check that \(C_{\Gamma}\) satisfies the universal property of \(C^{*}(\Gamma)\). Recall that \(\varepsilon_{s(e)}\circ s_{e}(b)=\varepsilon_{e}(b)\), where, for \(p\in V(\mathcal{G})\), \(\varepsilon_{p}\) is the counit on \(C(G_{p})\) and, for \(e\in E(\mathcal{G})\), \(\varepsilon_{e}\) is the counit on \(C(G_{e})\). Let now \(\rho\,:\,\Gamma\to\mathcal{U}(H)\) be a unitary representation of \(\Gamma\). By the universal property of \(C(G)=\pi_{1}(\mathcal{G},C(G_{p}),C(G_{e}),\mathcal{T})\), there exists a unique unital \(*\)-homomorphism \(\widetilde{\rho}\,:\,C(G)\to\mathcal{B}(H)\) such that \(\widetilde{\rho}(u_{e})=\rho(u_{e})\) and \(\widetilde{\rho}(a)=\varepsilon_{p}(a)\mathrm{id}_{H}\), for all \(a\in C(G_{p})\) and all \(p\in V(\mathcal{G})\). Hence, \(\widetilde{\rho}|_{C_{\Gamma}}\,:\,C_{\Gamma}\to\mathcal{B}(H)\) is a unital \(*\)-homomorphism such that \(\widetilde{\rho}(g)=\rho(g)\), for all \(g\in\Gamma\). It follows that \(\pi\) is faithful. The fact that \(\pi\) intertwines the comultiplications follows from the equality \(\Delta(u_{e})=u_{e}\otimes u_{e}\) for all \(e\in E(\mathcal{G})\).
(2). The existence and the properties of the morphism \(s_{e}^{N}\) follow from Proposition 3.3.
By the previous Proposition, we will always view \(C^{*}(\Gamma)=C(\widehat{\Gamma})\subset C(G)\) as the C*-algebra generated by \(\{u_{e}\,:\,e\in E(\mathcal{G})\}\) and such that the inclusion intertwines the comultiplications. In particular, \(\widehat{\Gamma}\) is a dual quantum subgroup of \(G\). Moreover, the previous Proposition shows that \((\mathcal{G},(G_{q}\wr_{*}S_{N}^{+})_{q\in V(\mathcal{G})},(G_{e}\wr_{*}S_{N} ^{+})_{e\in E(\mathcal{G})},(s_{e}^{N})_{e\in E(\mathcal{G})})\) is a graph of quantum groups. Its fundamental quantum group is determined in the following Theorem.
**Theorem 6.3**.: _If \(G=\pi_{1}(\mathcal{G},G_{p},G_{e},\mathcal{T})\) with loop subgroup \(\Gamma\) then_
\[G\wr_{*,\widehat{\Gamma}}S_{N}^{+}\simeq\pi_{1}\left(\mathcal{G},G_{p}\wr_{*}S _{N}^{+},G_{e}\wr_{*}S_{N}^{+},\mathcal{T}\right).\]
Proof.: Define \(\mathcal{A}:=C\left(\pi_{1}\left(\mathcal{G},G_{p}\wr_{*}S_{N}^{+},G_{e}\wr_{ *}S_{N}^{+},\mathcal{T}\right)\right)\) and \(\mathcal{B}:=C\left(G\wr_{*,\widehat{\Gamma}}S_{N}^{+}\right)\). For all \(p\in V(\mathcal{G})\), we have a unital \(*\)-homomorphism \(\varphi_{p}\,:\,C(G_{p}\wr_{*}S_{N}^{+})\to\mathcal{B}\) coming from the inclusion \(C(G_{p})\subset C(G)\). Moreover, \(\forall e\in E(\mathcal{G})\setminus E(\mathcal{T})\), there is a unitary \(u_{e}\in\mathcal{B}\) such that
\[u_{e}^{*}\varphi_{s(e)}\circ s_{e}(a)u_{e}=\varphi_{r(e)}\circ r_{e}(a)\quad \forall a\in C(G_{e}\wr_{*}S_{N}^{+}),\]
the unitary being the one in \(C^{*}(\Gamma)\), which is unique because of the amalgamation and works for every copy of \(C(G_{e})\) in \(C(G_{e}\wr_{*}S_{N}^{+})\) by construction of \(G\). Hence, the universal property of \(\mathcal{A}\) gives a morphism \(\pi\,:\,\mathcal{A}\to\mathcal{B}\) which clearly intertwines the comultiplications. We will give an inverse of this map, using the universal property of \(\mathcal{B}\). For \(1\leq i\leq N\), there is for any \(p\in V(\mathcal{G})\) a map \(\nu_{i}\,:\,C(G_{p})\to\mathcal{A}\) sending \(C(G_{p})\) onto its \(i\)-th copy in the free product defining \(C(G_{p}\wr_{*}S_{N}^{+})\) in \(\mathcal{A}\). There is also for every edge \(e\in E(\mathcal{G})\setminus E(\mathcal{T})\) a unitary \(u_{e}\in\mathcal{A}\), which satisfies the relations of the universal property of \(\mathcal{B}\), and is the same for every one of the \(N\) copies of \(C(G_{e})\). Therefore, we get a map \(\psi_{0}\,:\,C(G\wr_{*}S_{N}^{+})\to\mathcal{A}\) which factors through \(\mathcal{B}\): the unitaries \(u_{e}\) are independent of the copy of \(C(G_{e})\), so we can factor the map through the quotient amalgamating the algebras generated by the \(u_{e}^{i}\) for \(1\leq i\leq N\), which is exactly \(\mathcal{B}\). We get a morphism \(\psi:\mathcal{B}\to\mathcal{A}\), and it is easy to see that it intertwines the comultiplications. The maps are inverses of each other, because they send the \(N\) copies of \(C(G)\) to the \(N\) corresponding copies of \(C(G)\) and respect the unitaries from the fundamental algebra construction.
**Example 6.4**.: Suppose that \(\mathcal{G}\) has two edges \(e\) and \(\overline{e}\). We have two cases.
1. If \(s(e)\neq r(e)\) then \(\mathcal{G}\) is a tree so \(\mathcal{T}=\mathcal{G}\) and the loop subgroup is trivial. We are in the situation of an amalgamated free product \(G=G_{1}\underset{H}{*}G_{2}\) and, by Theorem 6.3, \[G\wr_{*}S_{N}^{+}\simeq G_{1}\wr_{*}S_{N}^{+}\underset{H\wr_{*}S_{N}^{+}}{*}G_{2}\wr_{*}S_{N}^{+}.\]
2. If \(s(e)=r(e)\) then \(\mathcal{T}\) has no edges and the loop subgroup is \(\Gamma=\langle u_{e}\rangle\simeq\mathbb{Z}\). We are in the situation of an HNN extension: if \(H\) and \(\Sigma\) are CQG such that \(C(\Sigma)\subset C(H)\) is a dual quantum subgroup and \(\theta\,:\,C(\Sigma)\hookrightarrow C(H)\) is a faithful unital \(*\)-homomorphism intertwining the comultiplications, then the HNN extension \(G:=\operatorname{HNN}(H,\Sigma,\theta)\) is the CQG with \(C(G)\) the universal unital C*-algebra generated by \(C(H)\) and a unitary \(u\in C(G)\) with the relations \(\theta(b)=ubu^{*}\) for all \(b\in C(\Sigma)\subset C(H)\), and comultiplication given by \(\Delta(u)=u\otimes u\) and \(\Delta|_{C(H)}=\Delta_{H}\). The loop subgroup \(\Gamma=\mathbb{Z}\) satisfies \(C^{*}(\Gamma)=\langle u\rangle\subset C(G)\) and, by Theorem 6.3, \(\operatorname{HNN}(H,\Sigma,\theta)\wr_{*,\widehat{\Gamma}}S_{N}^{+}\simeq\operatorname{HNN}(H\wr_{*}S_{N}^{+},\Sigma\wr_{*}S_{N}^{+},\hat{\theta})\).
## 7. K-theory
In [10], 6-term exact sequences are obtained for the KK-theory of the reduced and the full fundamental algebras of any graph of \(C^{*}\)-algebras \((\mathcal{G},A_{p},B_{e},s_{e})\) with (not necessarily GNS-faithful) conditional expectations. It is shown in [10] that the canonical surjection from the full fundamental C*-algebra \(P\) to the vertex-reduced fundamental C*-algebra \(P_{r}\) is a KK-equivalence and, denoting by \(P_{\bullet}\) either \(P\) or \(P_{r}\), one has the following exact sequences, for any C*-algebra \(C\).
\[\begin{CD}
\bigoplus_{e\in E^{+}}KK^{0}(C,B_{e})@>{\sum s_{e*}-r_{e*}}>>\bigoplus_{p\in V}KK^{0}(C,A_{p})@>>>KK^{0}(C,P_{\bullet})\\
@AAA @. @VVV\\
KK^{1}(C,P_{\bullet})@<<<\bigoplus_{p\in V}KK^{1}(C,A_{p})@<{\sum s_{e*}-r_{e*}}<<\bigoplus_{e\in E^{+}}KK^{1}(C,B_{e})
\end{CD}\tag{3}\]

\[\begin{CD}
\bigoplus_{e\in E^{+}}KK^{0}(B_{e},C)@<{\sum s_{e}^{*}-r_{e}^{*}}<<\bigoplus_{p\in V}KK^{0}(A_{p},C)@<<<KK^{0}(P_{\bullet},C)\\
@VVV @. @AAA\\
KK^{1}(P_{\bullet},C)@>>>\bigoplus_{p\in V}KK^{1}(A_{p},C)@>{\sum s_{e}^{*}-r_{e}^{*}}>>\bigoplus_{e\in E^{+}}KK^{1}(B_{e},C)
\end{CD}\tag{4}\]
Recall that, given a CQG \(G\), \(C_{\bullet}(G)\) denotes either \(C(G)\) or \(C_{r}(G)\).
Proof of Theorem D.: We can use the exact sequence (3) with \(C=\mathbb{C}\) and with the graph of C*-algebras over \(\mathcal{T}_{N}\) from Section 3.1 in the case of \(C(G)\) and from Section 3.2 in the case of \(C_{r}(G)\)
since both graphs of \(C^{*}\)-algebras have injective connecting maps and conditional expectations.
\[\begin{CD}
\bigoplus_{v\in E^{+}}K_{0}(C_{\bullet}(H)\otimes\mathbb{C}^{N})@>{\sum s_{v*}-r_{v*}}>>\bigoplus_{p\in V\setminus\{p_{0}\}}K_{0}(C_{\bullet}(G)\otimes\mathbb{C}^{N})\oplus K_{0}(C_{\bullet}(S_{N}^{+}))@>>>K_{0}(C_{\bullet}(G\wr_{*,H}S_{N}^{+}))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}(G\wr_{*,H}S_{N}^{+}))@<<<\bigoplus_{p\in V\setminus\{p_{0}\}}K_{1}(C_{\bullet}(G)\otimes\mathbb{C}^{N})\oplus K_{1}(C_{\bullet}(S_{N}^{+}))@<{\sum s_{v*}-r_{v*}}<<\bigoplus_{v\in E^{+}}K_{1}(C_{\bullet}(H)\otimes\mathbb{C}^{N}).
\end{CD}\]
But the K-theory of \(\mathbb{C}^{N}\) is given by \(K_{0}(\mathbb{C}^{N})\simeq\mathbb{Z}^{N}\) and \(K_{1}(\mathbb{C}^{N})\simeq 0\) so, using the Künneth formula, as \(\mathbb{C}^{N}\) is a commutative algebra, we have \(K_{i}(B\otimes\mathbb{C}^{N})\simeq K_{i}(B)\otimes\mathbb{Z}^{N}\) for any C*-algebra \(B\) appearing above; summing over the \(N\) positive edges and the \(N\) vertices carrying \(C_{\bullet}(G)\otimes\mathbb{C}^{N}\), we get:
\[\begin{CD}
K_{0}(C_{\bullet}(H))\otimes\mathbb{Z}^{N^{2}}@>>>\left(K_{0}(C_{\bullet}(G))\otimes\mathbb{Z}^{N^{2}}\right)\oplus K_{0}(C_{\bullet}(S_{N}^{+}))@>>>K_{0}(C_{\bullet}(G\wr_{*,H}S_{N}^{+}))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}(G\wr_{*,H}S_{N}^{+}))@<<<\left(K_{1}(C_{\bullet}(G))\otimes\mathbb{Z}^{N^{2}}\right)\oplus K_{1}(C_{\bullet}(S_{N}^{+}))@<<<K_{1}(C_{\bullet}(H))\otimes\mathbb{Z}^{N^{2}}.
\end{CD}\]
To apply this to the quantum
reflection groups \(H^{s+}_{N}=\widehat{\mathbb{Z}_{s}}\wr_{*}S^{+}_{N}\), for \(1\leq s\leq+\infty\), which are K-amenable, note that, for \(s<\infty\), since \(C^{*}(\mathbb{Z}_{s})\simeq C(\widehat{\mathbb{Z}_{s}})\simeq\mathbb{C}^{s}\), we have \(K_{0}(C^{*}(\mathbb{Z}_{s}))\simeq\mathbb{Z}^{s}\) and \(K_{1}(C^{*}(\mathbb{Z}_{s}))\simeq\{0\}\). Hence,
\[K_{0}(C_{\bullet}(H^{s+}_{N}))\simeq\left(K_{0}(C^{*}(\mathbb{Z}_{s}))\otimes \mathbb{Z}^{N^{2}}\right)/([1]\otimes\mathbb{Z}^{2N-2})\simeq(\mathbb{Z}^{s} \otimes\mathbb{Z}^{N^{2}})/([1]\otimes\mathbb{Z}^{2N-2})\simeq\mathbb{Z}^{sN^{2 }-2N+2}.\]
The same computation works when \(s=+\infty\), with \(K_{0}(C^{*}(\mathbb{Z}))\simeq\mathbb{Z}\), for the \(K_{0}\)-group; for \(K_{1}\), as \(K_{1}(C^{*}(\mathbb{Z}))\simeq\mathbb{Z}\), the edge and vertex terms now also contribute in odd degree and one gets \(K_{1}(C_{\bullet}(H^{\infty+}_{N}))\simeq\mathbb{Z}^{N^{2}+1}\) (in accordance with Corollary 7.2 below for \(m=1\), since \(H^{\infty+}_{N}=\widehat{\mathbb{F}}_{1}\wr_{*}S^{+}_{N}\)).
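As a consistency check, specializing the formula to \(s=1\), where \(\mathbb{Z}_{1}\) is trivial and \(H^{1+}_{N}=S^{+}_{N}\), gives
\[K_{0}(C_{\bullet}(H^{1+}_{N}))\simeq\mathbb{Z}^{N^{2}-2N+2},\]
matching the generator count used in the proof of Corollary 7.1 below: the class of the unit together with the \(N^{2}-2N+1\) classes \([u_{ij}]\), \(1\leq i,j\leq N-1\).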
We are grateful to Adam Skalski for suggesting to us to use the uniqueness of the trace from [12] to deduce the following from Theorem D.
**Corollary 7.1**.: _If \(M,N\geq 8\) and \(s,t\geq 1\), then \(C_{r}(H^{s+}_{N})\simeq C_{r}(H^{t+}_{M})\Leftrightarrow(N,s)=(M,t)\)._
Proof.: The dimensions of the K-theory groups may happen to be equal for different pairs of integers, so we need to go a bit further to differentiate between them. We use the value of the Haar state applied to the generators of \(K_{0}(C_{r}(H^{s+}_{N}))\simeq\mathbb{Z}^{sN^{2}-2N+2}\). Thanks to the computation, we know that there are, in addition to the class of the unit \([1]\), \(N^{2}-2N+1\) generators coming from the ones of \(K_{0}(C_{r}(S^{+}_{N}))\), namely \([1\otimes u_{ij}]\) for \(1\leq i,j\leq N-1\); they are equal to \([u_{ij}]\) in \(K_{0}(C_{r}(H^{s+}_{N}))\). The trace of such elements is the same as the trace of the corresponding element in \(C_{r}(S^{+}_{N})\), thanks to Theorem 3.2, which is equal to \(1/N\). The \((s-1)N^{2}\) remaining generators are the ones coming from the \(N\) copies of \(K_{0}(C^{*}(\mathbb{Z}_{s})\otimes\mathbb{C}^{N})\) and are of the form \([\delta_{k}\otimes e_{j}]\), for \(k\in\mathbb{Z}_{s}\), \(k\neq 0\), and \(1\leq j\leq N\). Such a class, in the \(i\)-th copy, is sent to \([\nu_{i}(\delta_{k})u_{ij}]\), which is of trace \(1/(sN)\). Thus, using the uniqueness of the trace on these algebras, as proved in [12], we get that the pair \((N,s)\) can be retrieved from the data of the K-theory, which allows us to distinguish the algebras of the different quantum reflection groups.
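To make the recovery of \((N,s)\) explicit, here is the arithmetic behind the argument above (for \(s\geq 2\), so that \(1/(sN)\) is the smallest non-unit trace value): from the rank \(r=sN^{2}-2N+2\) and the trace value \(t=1/(sN)\) one gets
\[N=\frac{r-2}{t^{-1}-2},\qquad s=\frac{t^{-1}}{N}.\]
For instance, for \((N,s)=(8,2)\) one has \(r=2\cdot 64-16+2=114\) and \(t^{-1}=16\), and indeed \(N=112/14=8\) and \(s=16/8=2\).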
**Corollary 7.2**.: _For all \(m\geq 1\), and \(N\geq 4\), \(\widehat{\mathbb{F}}_{m}\wr_{*}S^{+}_{N}\) is K-amenable and we have:_
\[K_{0}(C_{\bullet}(\widehat{\mathbb{F}}_{m}\wr_{*}S^{+}_{N}))\simeq\mathbb{Z} ^{N^{2}-2N+2}\quad\text{and}\quad K_{1}(C_{\bullet}(\widehat{\mathbb{F}}_{m} \wr_{*}S^{+}_{N}))\simeq\mathbb{Z}^{N^{2}m+1}.\]
_If \(1\leq N\leq 3\), then \(\widehat{\mathbb{F}}_{m}\wr_{*}S^{+}_{N}\) is also K-amenable, with_
\[K_{0}(C_{\bullet}(\widehat{\mathbb{F}}_{m}\wr_{*}S^{+}_{N}))\simeq\mathbb{Z} ^{N!}\quad\text{and}\quad K_{1}(C_{\bullet}(\widehat{\mathbb{F}}_{m}\wr_{*}S^{ +}_{N}))\simeq\mathbb{Z}^{N^{2}m}.\]
_In particular, for all \(n,m\geq 1\) and \(N,M\geq 1\), \(C_{\bullet}(\widehat{\mathbb{F}}_{n}\wr_{*}S^{+}_{N})\simeq C_{\bullet}( \widehat{\mathbb{F}}_{m}\wr_{*}S^{+}_{M})\Leftrightarrow(n,N)=(m,M)\)._
Proof.: K-amenability of free groups has been proved by Cuntz in [13], when he introduced the notion of K-amenability for discrete groups. The K-theory for the maximal C*-algebra was initially computed in [13] and for the reduced C*-algebra in [14]. The result is \(K_{0}(C^{*}(\mathbb{F}_{m}))\simeq\mathbb{Z}\) and \(K_{1}(C^{*}(\mathbb{F}_{m}))\simeq\mathbb{Z}^{m}\). Using this and Theorem D, we get the first result. For the last statement, we first use equality of the \(K_{0}\)-groups to deduce that \(N=M\) and then, equality of the \(K_{1}\)-groups to deduce that \(n=m\).
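As an illustration of the last statement, take \((m,N)=(2,4)\): the formulas give
\[K_{0}\simeq\mathbb{Z}^{4^{2}-2\cdot 4+2}=\mathbb{Z}^{10}\quad\text{and}\quad K_{1}\simeq\mathbb{Z}^{4^{2}\cdot 2+1}=\mathbb{Z}^{33},\]
and conversely the \(K_{0}\)-rank determines \(N\) (here \(N^{2}-2N+2=10\) forces \(N=4\)) while the \(K_{1}\)-rank then determines \(m\) (here \(N^{2}m+1=33\) forces \(m=2\)).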
We can use the results of Section 6 to compute the KK-theory of \(C^{*}\)-algebras of a free wreath product of a fundamental quantum group of a graph of CQG. The main theorem is the following, using the notations of Section 6 and denoting by \(E^{+}\) and \(V\) the positive edges and vertices of the connected graph \(\mathcal{G}\).
**Theorem 7.3**.: _If \(G=\pi_{1}(\mathcal{G},G_{p},G_{e},\mathcal{T})\) then, for any \(C^{*}\)-algebra \(A\), there are cyclic exact sequences:_
\[\begin{CD}
\bigoplus_{e\in E^{+}}KK^{0}(A,C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}))@>{\sum s_{e*}^{N}-r_{e*}^{N}}>>\bigoplus_{p\in V}KK^{0}(A,C_{\bullet}(G_{p}\wr_{*}S_{N}^{+}))@>>>KK^{0}(A,C_{\bullet}(G\wr_{*,\widehat{\Gamma}}S_{N}^{+}))\\
@AAA @. @VVV\\
KK^{1}(A,C_{\bullet}(G\wr_{*,\widehat{\Gamma}}S_{N}^{+}))@<<<\bigoplus_{p\in V}KK^{1}(A,C_{\bullet}(G_{p}\wr_{*}S_{N}^{+}))@<{\sum s_{e*}^{N}-r_{e*}^{N}}<<\bigoplus_{e\in E^{+}}KK^{1}(A,C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}))
\end{CD}\]
\[\begin{CD}
\bigoplus_{e\in E^{+}}KK^{0}(C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}),A)@<{\sum (s_{e}^{N})^{*}-(r_{e}^{N})^{*}}<<\bigoplus_{p\in V}KK^{0}(C_{\bullet}(G_{p}\wr_{*}S_{N}^{+}),A)@<<<KK^{0}(C_{\bullet}(G\wr_{*,\widehat{\Gamma}}S_{N}^{+}),A)\\
@VVV @. @AAA\\
KK^{1}(C_{\bullet}(G\wr_{*,\widehat{\Gamma}}S_{N}^{+}),A)@>>>\bigoplus_{p\in V}KK^{1}(C_{\bullet}(G_{p}\wr_{*}S_{N}^{+}),A)@>{\sum (s_{e}^{N})^{*}-(r_{e}^{N})^{*}}>>\bigoplus_{e\in E^{+}}KK^{1}(C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}),A)
\end{CD}\]
Proof.: As observed in Section 6 we have, at the level of full as well as reduced C*-algebras, a graph of C*-algebras \((\mathcal{G},C_{\bullet}(G_{q}\wr_{*}S_{N}^{+}),C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}),s_{e}^{N})\) with conditional expectations (which are GNS-faithful only at the reduced level). By Theorem 6.3 the full/reduced C*-algebra \(C_{\bullet}(G\wr_{*,\widehat{\Gamma}}S_{N}^{+})\) is the full/vertex-reduced fundamental algebra of \((\mathcal{G},C_{\bullet}(G_{q}\wr_{*}S_{N}^{+}),C_{\bullet}(G_{e}\wr_{*}S_{N}^{+}),s_{e}^{N})\). Hence, we may apply the exact sequences (3) and (4).
We now compute K-theory groups of some free wreath products of an amalgamated free product with \(S_{N}^{+}\).
**Corollary 7.4**.: _If \(G=G_{1}*_{H}G_{2}\) then there is a cyclic sequence of K-theory groups:_
\[\begin{CD}
K_{0}(C_{\bullet}(H\wr_{*}S_{N}^{+}))@>>>K_{0}(C_{\bullet}(G_{1}\wr_{*}S_{N}^{+}))\oplus K_{0}(C_{\bullet}(G_{2}\wr_{*}S_{N}^{+}))@>>>K_{0}(C_{\bullet}(G\wr_{*}S_{N}^{+}))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}(G\wr_{*}S_{N}^{+}))@<<<K_{1}(C_{\bullet}(G_{1}\wr_{*}S_{N}^{+}))\oplus K_{1}(C_{\bullet}(G_{2}\wr_{*}S_{N}^{+}))@<<<K_{1}(C_{\bullet}(H\wr_{*}S_{N}^{+})).
\end{CD}\]
Proof.: As observed in Example 6.4 the loop subgroup is trivial in the case of an amalgamated free product. The proof follows from the first exact sequence in Theorem 7.3 with \(A=\mathbb{C}\).
**Corollary 7.5**.: _For the K-amenable quantum group \(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}\) we have for \(N\geq 4\):_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+})) \simeq\mathbb{Z}^{8N^{2}-2N+2}\quad\text{and}\quad K_{1}(C_{\bullet}(\widehat{ \mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}))\simeq\mathbb{Z}.\]
_For \(1\leq N\leq 3\), we have:_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+})) \simeq\left\{\begin{array}{ll}\mathbb{Z}^{69}&\text{if}\quad N=3,\\ \mathbb{Z}^{8N^{2}-2N+2}&\text{if}\quad N\in\{1,2\}.\end{array}\right.\quad \text{and}\quad K_{1}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{ *}S_{N}^{+}))\simeq 0.\]
Proof.: We use the well-known isomorphism \(\mathrm{SL}_{2}(\mathbb{Z})\simeq\mathbb{Z}_{6}*_{\mathbb{Z}_{2}}\mathbb{Z}_{4}\) which implies the K-amenability of \(\mathrm{SL}_{2}(\mathbb{Z})\), hence of \(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}\) as well, by Theorem A. Moreover, by Example 6.4, we have \(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}\simeq H_{N}^{6+}*_{H_{N}^{2+}}H_{N}^{4+}\). We may use the K-theory of the quantum reflection groups computed in Theorem D to deduce the one for the group \(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}\). Applying Corollary 7.4 and \(K\)-amenability, we get the following cyclic exact sequence:
\[\begin{CD}
K_{0}(C_{\bullet}(H_{N}^{2+}))@>{\psi}>>K_{0}(C_{\bullet}(H_{N}^{6+}))\oplus K_{0}(C_{\bullet}(H_{N}^{4+}))@>>>K_{0}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}))@<<<K_{1}(C_{\bullet}(H_{N}^{6+}))\oplus K_{1}(C_{\bullet}(H_{N}^{4+}))@<<<K_{1}(C_{\bullet}(H_{N}^{2+}))
\end{CD}\]
where \(K_{1}(C_{\bullet}(H_{N}^{s+}))\simeq\mathbb{Z}\) for \(s<\infty\) by the computation above.
The maps in the bottom line are the diagonal embedding and the map \((x,y)\mapsto x-y\), which are respectively injective and surjective maps \(\mathbb{Z}\to\mathbb{Z}\oplus\mathbb{Z}\) and \(\mathbb{Z}\oplus\mathbb{Z}\to\mathbb{Z}\). Hence, the top line is a short exact sequence, the map \(\psi\) is injective, and we have
\[K_{0}(C_{\bullet}(\widehat{\mathrm{SL}_{2}(\mathbb{Z})}\wr_{*}S_{N}^{+}))\simeq\left(\mathbb{Z}^{6N^{2}-2N+2}\oplus\mathbb{Z}^{4N^{2}-2N+2}\right)/\mathrm{Im}(\psi).\]
The map \(\psi\) sends the generators of \(K_{0}(C_{\bullet}(H_{N}^{2+}))\), coming from the classes of the form \([v\otimes e_{j}]\) in \(K_{0}(C^{*}(\mathbb{Z}_{2})\otimes\mathbb{C}^{N})\), to the corresponding classes \([\beta^{*}v\otimes e_{j}]-[\alpha^{*}v\otimes e_{j}]\), where \(\beta:\mathbb{Z}_{2}\to\mathbb{Z}_{6}\) and \(\alpha:\mathbb{Z}_{2}\to\mathbb{Z}_{4}\) are the canonical embeddings. Hence, the quotient by \(\mathrm{Im}(\psi)\) identifies free copies of \(\mathbb{Z}\), and the group is isomorphic to \(\mathbb{Z}^{8N^{2}-2N+2}\).
**Remark 7.6**.: We can use Corollary 7.4 to give another proof of Corollary 7.2. It is also possible to first compute the K-theory of the amalgamated free products and then apply Theorem D to recover the results of Corollary 7.4.
Let us now do a \(K\)-theory computation in the context of an HNN extension. Recall that the Baumslag-Solitar group \(\mathrm{BS}(n,m)\), for \(n,m\in\mathbb{Z}^{*}\), is defined by generators and relations:
\[\mathrm{BS}(n,m):=\langle a,b\,|\,ab^{n}a^{-1}=b^{m}\rangle.\]
Let \(\langle a\rangle<\mathrm{BS}(n,m)\) be the subgroup generated by \(a\) and view \(\widehat{\langle a\rangle}\) as a dual quantum subgroup of the compact quantum group \(\widehat{\mathrm{BS}(n,m)}\).
**Proposition 7.7**.: \(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}\) _is K-amenable and, \(\forall n,m\in\mathbb{Z}^{*}\), \(N\geq 4\), if \(n=m\), then_

\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{2N^{2}-2N+3}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}-2N+3},\]

_and if \(n\neq m\),_

\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}-2N+3},\ K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}-2N+3}\oplus(\mathbb{Z}_{|n-m|})^{N^{2}}.\]

_For \(1\leq N\leq 3\), if \(n=m\), then_

\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}+N!}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}+N!},\]

_and if \(n\neq m\),_

\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N!}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+}))\simeq\mathbb{Z}^{N!}\oplus(\mathbb{Z}_{|n-m|})^{N^{2}}.\]
Proof.: Note that \(\mathrm{BS}(n,m)\) is the HNN-extension \(\mathrm{BS}(n,m)=\mathrm{HNN}(\mathbb{Z},\mathbb{Z},\theta_{n},\theta_{m})\), where \(\theta_{l}:\mathbb{Z}\to\mathbb{Z}\) is the multiplication by \(l\in\{n,m\}\). In particular, \(\mathrm{BS}(n,m)\) is K-amenable and the CQG \(\widehat{\mathrm{BS}(n,m)}\) is also an HNN-extension. We still denote by \(\theta_{l}:C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})\to C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})\) the map defined by \(\theta_{l}(\nu_{i}(k)u_{ij})=\nu_{i}(lk)u_{ij}\) for all \(l\in\{n,m\}\), \(k\in\mathbb{Z}\) and \(1\leq i,j\leq N\). By Example 6.4, the loop subgroup is \(\Gamma=\langle a\rangle\) and, by using Theorem 7.3 (with \(A=\mathbb{C}\)), we get the following six-term exact sequence:
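Since the underlying graph has a single vertex and a single edge, both with associated algebra \(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})\), this sequence reads:

\[\begin{CD}
K_{0}(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})) @>{\theta_{n}^{*}-\theta_{m}^{*}}>> K_{0}(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})) @>>> K_{0}(\mathcal{B})\\
@AAA @. @VVV\\
K_{1}(\mathcal{B}) @<<< K_{1}(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+})) @<{\theta_{n}^{*}-\theta_{m}^{*}}<< K_{1}(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+}))
\end{CD}\]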
where \(\mathcal{B}:=C_{\bullet}(\widehat{\mathrm{BS}(n,m)}\wr_{*,\widehat{\langle a\rangle}}S_{N}^{+})\). As the group \(K_{0}(C_{\bullet}(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+}))\) is generated by the image of \(K_{0}(C_{\bullet}(S_{N}^{+}))\) in it, and since we have, for \(l\in\{n,m\}\), \(\theta_{l}^{*}([u_{ij}])=[\theta_{l}(u_{ij})]=[u_{ij}]\) and \(\theta_{l}^{*}([1])=[\theta_{l}(1)]=[1]\), where \([u_{ij}]\) and \([1]\) are the generators of the \(K_{0}\) group, the map \(\theta_{n}^{*}-\theta_{m}^{*}\) is trivial at the \(K_{0}\) level. It is, however, nontrivial at the \(K_{1}\) level: \(K_{1}(C(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+}))\) is generated by the images of the generator \([1]\) of \(K_{1}(C^{*}(\mathbb{Z}))\) together with the image of the generator of \(K_{1}(C(S_{N}^{+}))\). The action of \(\theta_{n}\) at the K-theory level is thus given by multiplication by \(n\) on the generators coming from \(K_{1}(C^{*}(\mathbb{Z}))\), and is trivial on the one coming from \(K_{1}(C(S_{N}^{+}))\). Thus the sequence splits differently depending on whether \(m\) and \(n\) are equal or not. If \(n=m\), then the maps are trivial and the sequence splits into two short exact sequences as follows:
\[0\to\mathbb{Z}^{N^{2}-2N+2}\to K_{0}(\mathcal{B})\to\mathbb{Z}^{N^{2}+1}\to 0\quad\text{and}\quad 0\to\mathbb{Z}^{N^{2}+1}\to K_{1}(\mathcal{B})\to\mathbb{Z}^{N^{2}-2N+2}\to 0,\]
giving the first part of the result. If \(m\neq n\), then the map is still trivial at the \(K_{0}\)-level, but acts as multiplication by \(m-n\) on the first \(N^{2}\) generators of \(K_{1}(C(\widehat{\mathbb{Z}}\wr_{*}S_{N}^{+}))\) and as \(0\) on the last one. The sequence splits into two short exact sequences:
\[0\to\mathbb{Z}^{N^{2}-2N+2}\to K_{0}(\mathcal{B})\to\mathbb{Z}\to 0\quad\text{ and}\quad 0\to\mathbb{Z}^{N^{2}}\overset{\psi}{\to}\mathbb{Z}^{N^{2}+1}\to K_{1}( \mathcal{B})\to\mathbb{Z}^{N^{2}-2N+2}\to 0,\]
the map \(\psi\) in the second being multiplication by \((n-m)\) on each of the first \(N^{2}\) terms of the sum. The second sequence then becomes \(0\to(\mathbb{Z}_{|n-m|})^{N^{2}}\oplus\mathbb{Z}\to K_{1}(\mathcal{B})\to\mathbb{Z}^{N^{2}-2N+2}\to 0\), which splits because the group on the right-hand side is free, thus giving the result in the remaining case. The proof for the cases \(1\leq N\leq 3\) works in the same way.
**Remark 7.8**.: The computation in this case can also be carried out thanks to the six-term exact sequence for the free wreath product with amalgamation of Theorem D, written at the beginning of the proof in Section 7. The main point in the use of this exact sequence is that the maps \(\sum s_{e}^{*}-r_{e}^{*}\) appearing in the sequence are injective and explicit in this special case, allowing an easy computation.
We can compare this with the non-amalgamated case, using the following proposition about the K-theory of the Baumslag–Solitar groups \(\mathrm{BS}(m,n)=\mathrm{HNN}\left(\mathbb{Z},\mathbb{Z},\theta_{m},\theta_{n}\right)\). We include a proof, as we could not find the result stated in the literature except in the solvable case [11].
**Proposition 7.9**.: _Let \(m\) and \(n\) be nonzero integers. Then the Baumslag–Solitar group \(\mathrm{BS}(m,n)\) is K-amenable and its K-theory is given, if \(n=m\), by_
\[K_{0}(C_{\bullet}^{*}(\mathrm{BS}(m,m)))\simeq\mathbb{Z}^{2}\text{ and }K_{1}(C_{\bullet}^{*}(\mathrm{BS}(m,m)))=\mathbb{Z}^{2},\]
_and if \(n\neq m\), by_
\[K_{0}(C_{\bullet}^{*}(\mathrm{BS}(m,n)))\simeq\mathbb{Z}\text{ and }K_{1}(C_{\bullet}^{*}( \mathrm{BS}(m,n)))=\mathbb{Z}\oplus\mathbb{Z}_{|n-m|}.\]
Proof.: The Baumslag–Solitar group \(\mathrm{BS}(m,n)\) is the fundamental group of the graph with only one vertex and one edge, both carrying a copy of \(\mathbb{Z}\), the edge copy of \(\mathbb{Z}\) being sent to the subgroups \(m\mathbb{Z}\) and \(n\mathbb{Z}\) by multiplication by \(m\) and \(n\), respectively. Thus, the corresponding C*-algebras are the full and vertex-reduced fundamental C*-algebras of this graph with \(C^{*}(\mathbb{Z})\) on the vertex and the edge and with the maps induced by these multiplications; they are K-equivalent. The exact sequence in K-theory is then
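\[\begin{CD}
K_{0}(C^{*}(\mathbb{Z})) @>{\theta_{m}^{*}-\theta_{n}^{*}}>> K_{0}(C^{*}(\mathbb{Z})) @>>> K_{0}(C_{\bullet}^{*}(\mathrm{BS}(m,n)))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}^{*}(\mathrm{BS}(m,n))) @<<< K_{1}(C^{*}(\mathbb{Z})) @<{\theta_{m}^{*}-\theta_{n}^{*}}<< K_{1}(C^{*}(\mathbb{Z}))
\end{CD}\]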
which becomes
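\[\begin{CD}
\mathbb{Z} @>{0}>> \mathbb{Z} @>>> K_{0}(C_{\bullet}^{*}(\mathrm{BS}(m,n)))\\
@AAA @. @VVV\\
K_{1}(C_{\bullet}^{*}(\mathrm{BS}(m,n))) @<<< \mathbb{Z} @<{m-n}<< \mathbb{Z}
\end{CD}\]

(here \(\theta_{l}^{*}\) is the identity on \(K_{0}(C^{*}(\mathbb{Z}))\simeq\mathbb{Z}\) and multiplication by \(l\) on \(K_{1}(C^{*}(\mathbb{Z}))\simeq\mathbb{Z}\)),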
giving the result, as the multiplication-by-\((m-n)\) map is injective if \(m\neq n\) and zero if \(m=n\).
From this we can compute the K-theory of the free wreath products using Theorem D.
**Proposition 7.10**.: _Let \(m\) and \(n\) be nonzero integers and \(N\geq 4\). Then the quantum group \(\widehat{\mathrm{BS}(n,m)}\wr_{*}S_{N}^{+}\) is K-amenable and, if \(n=m\), then_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{ Z}^{2N^{2}-2N+2}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))=\mathbb{Z}^{2N^ {2}+1}.\]
_If \(n\neq m\),_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{ Z}^{N^{2}-2N+2}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))=\mathbb{Z}^{N^ {2}+1}\oplus(\mathbb{Z}_{|n-m|})^{N^{2}}.\]
_For \(1\leq N\leq 3\), if \(n=m\), then_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}+N!}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{Z}^{2N^{2}}.\]
_If \(n\neq m\),_
\[K_{0}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{Z}^{N!}\text{ and }K_{1}(C_{\bullet}(\widehat{\mathrm{BS}(m,n)}\wr_{*}S_{N}^{+}))\simeq\mathbb{Z}^{N^{2}}\oplus(\mathbb{Z}_{|n-m|})^{N^{2}}.\]
## Acknowledgements
The authors would like to thank Kenny De Commer, Adam Skalski, Roland Vergnioux and Makoto Yamashita for their help and suggestions during the preparation of this article. |
2307.15943 | Learning a Common Dictionary for CSI Feedback in FDD Massive
MU-MIMO-OFDM Systems | In a transmit preprocessing aided frequency division duplex (FDD) massive
multi-user (MU) multiple-input multiple-output (MIMO) scheme assisted
orthogonal frequency-division multiplexing (OFDM) system, it is required to
feed back the frequency domain channel transfer function (FDCHTF) of each
subcarrier at the user equipment (UE) to the base station (BS). The amount of
channel state information (CSI) to be fed back to the BS increases linearly
with the number of antennas and subcarriers, which may become excessive. Hence
we propose a novel CSI feedback compression algorithm based on compressive
sensing (CS) by designing a common dictionary (CD) to reduce the CSI feedback
of existing algorithms. Most of the prior work on CSI feedback compression
considered single-UE systems. Explicitly, we propose a common dictionary
learning (CDL) framework for practical frequency-selective channels and design
a CD suitable for both single-UE and multi-UE systems. A set of two methods is
proposed. Specifically, the first one is the CDL-K singular value decomposition
(KSVD) method, which uses the K-SVD algorithm. The second one is the
CDL-orthogonal Procrustes (OP) method, which relies on solving the orthogonal
Procrustes problem. The CD conceived for exploiting the spatial correlation of
channels across all the subcarriers and UEs compresses the CSI at each UE, and
upon reception, reconstructs it at the BS. Our simulation results show that
the proposed dictionary's estimated channel vectors have lower normalized
mean-squared error (NMSE) than the traditional fixed Discrete Fourier Transform
(DFT) based dictionary. The CSI feedback is reduced by 50%, and the memory
reduction at both the UE and BS starts from 50% and increases with the number
of subcarriers. | Pavan Kumar Gadamsetty, K. V. S. Hari, Lajos Hanzo | 2023-07-29T09:33:27Z | http://arxiv.org/abs/2307.15943v1 | # Learning a Common Dictionary for CSI Feedback in FDD Massive MU-MIMO-OFDM Systems
###### Abstract
In a transmit preprocessing aided frequency division duplex (FDD) massive multi-user (MU) multiple-input multiple-output (MIMO) scheme assisted orthogonal frequency-division multiplexing (OFDM) system, it is required to feed back the frequency domain channel transfer function (FDCHTF) of each subcarrier at the user equipment (UE) to the base station (BS). The amount of channel state information (CSI) to be fed back to the BS increases linearly with the number of antennas and subcarriers, which may become excessive. Hence we propose a novel CSI feedback compression algorithm based on compressive sensing (CS) by designing a common dictionary (CD) to reduce the CSI feedback of existing algorithms. Most of the prior work on CSI feedback compression considered single-UE systems. Explicitly, we propose a common dictionary learning (CDL) framework for practical frequency-selective channels and design a CD suitable for both single-UE and multi-UE systems. A set of two methods is proposed. Specifically, the first one is the CDL-K singular value decomposition (KSVD) method, which uses the K-SVD algorithm. The second one is the CDL-orthogonal Procrustes (OP) method, which relies on solving the orthogonal Procrustes problem. The CD conceived for exploiting the spatial correlation of channels across all the subcarriers and UEs compresses the CSI at each UE and, upon reception, reconstructs it at the BS. Our simulation results show that the proposed dictionary's estimated channel vectors have lower normalized mean-squared error (NMSE) than the traditional fixed Discrete Fourier Transform (DFT) based dictionary. The CSI feedback is reduced by 50%, and the memory reduction at both the UE and BS starts from 50% and increases with the number of subcarriers.
Wideband, frequency domain channel transfer function (FDCHTF), channel state information (CSI), compressive sensing (CS), massive MIMO, common dictionary learning (CDL), common dictionary (CD), orthogonal Procrustes (OP) problem, K-SVD algorithm.
## I Introduction
Massive multiple-input multiple-output (MIMO) systems constitute a promising enabling technique for 5G/6G cellular networks as a benefit of their substantial spatial multiplexing gain [1] in both time division duplex (TDD) and frequency division duplex (FDD) scenarios. At the base station (BS), combining the massive MIMO technology with orthogonal frequency-division multiplexing (OFDM) is capable of transmitting multiple data symbols to multiple UEs on the same time-frequency resource block, resulting in increased system throughput [2]. In a multi-user (MU) massive MIMO-OFDM system, the knowledge of CSI is needed at the BS to implement transmit precoding (TPC) for suppressing the co-channel interference (CCI) [3]. In FDD systems, due to the absence of channel reciprocity [4], the user equipment (UE) has to feed back the downlink (DL) frequency domain channel transfer function (FDCHTF) of each subcarrier to the BS. Feeding back the accurate CSI becomes more challenging with the increased number of antennas, subcarriers, and UEs [5].
The compression of the high-dimensional CSI is essential for reducing the CSI feedback. The wireless channels can be represented in a sparse form in the spatial-frequency domain using 'sparsifying' bases termed a dictionary [6]. In the compressive sensing (CS)-based feedback schemes [7], the original CSI is mapped to a sparse domain using a dictionary. The traditional choice of the dictionary is a fixed Discrete Fourier Transform (DFT) matrix. Then a random Gaussian measurement matrix is introduced to compress the sparse vector for feeding it back to the BS at a reduced rate. The sparse signal is then reconstructed at the BS using CS-based algorithms, such as the orthogonal matching pursuit (OMP) [8], basis pursuit (BP) [9] or covariance-assisted matching pursuit (CAMP) [10] procedures. The original CSI is then reconstructed by mapping the regenerated sparse signal back through the same dictionary used at the UE side.
The authors of [11] proposed a rotated version of the DFT basis to provide improved sparsity, resulting in a reduced CSI mean-squared error (MSE) for a narrowband multi-user system in which each UE has a single receive antenna. However, the proposed rotated basis does not exploit the antennas' spatial correlation. The massive MIMO-OFDM channel between a multi-antenna UE and the BS can be represented by a matrix [12]. In such systems, the UE has to feed back the FDCHTF of each subcarrier, which results in a huge feedback overhead. An FDCHTF feedback algorithm for a massive MIMO-OFDM single-UE system, based on multidimensional compressive sensing theory using Tucker's tensor decomposition model, was developed in [13]. Briefly, Tucker's tensor decomposition exploits the structure hidden in all the dimensions of the channel matrix and compresses it simultaneously in each dimension. The proposed scheme achieves a significant feedback reduction and hence improves the spectral efficiency. However, both the basis and the measurement matrices have to be learned. The authors of [14] introduced a recursive least squares dictionary learning algorithm (RLS-DLA) for CSI feedback. The proposed scheme achieves a substantial reduction in the feedback requirements; however, it requires the computation of large matrix inverses during the dictionary learning process.
Another line of work focused on designing non-dictionary
based methods for FDCHTF feedback [19]. In [20], an antenna grouping-based method was proposed for reducing the feedback overhead by grouping multiple correlated antenna elements into a single representative value. By considering a ray-based channel model, the authors of [21] and [22] designed an angle-of-departure (AoD) based adaptive subspace codebook for feedback compression. In [16] and [23] the authors exploited the low-rank characteristics of a large channel matrix for recovering the CSI at the BS.
Recent solutions include Deep Learning (DL) techniques conceived for CSI compression and recovery using the so-called Bi-LSTM [24], CsiNet-LSTM [25], DNNet [26], CS-ReNet [27], and DCRNet [28] frameworks2. Additionally, the application of deep unfolding techniques has also shown promising results, as demonstrated in [29, 30]. These techniques have better reconstruction performance than the conventional CS algorithms of [31], albeit at significantly increased computational complexity.
Footnote 2: For the expansion of these acronyms please refer to the relevant papers
Massive MIMO-OFDM channels tend to be individually sparse while simultaneously sharing a common support set, i.e., they typically exhibit joint sparsity in the time domain (TD) [32], which results in correlation among the subcarriers in the frequency domain (FD). Since the DFT dictionary does not exploit the spatial correlation across antenna arrays, we design a dictionary for massive MIMO-OFDM systems that can exploit this spatial correlation, hence achieving improved CSI reconstruction performance. The dictionary is generally learned from a training data set by relying on learning-based approaches [17, 18, 33, 34, 35]. The dictionaries learned have the potential to offer improved normalized mean squared error (NMSE) performance compared to fixed dictionaries, like the DFT-based one. In [15] a CS-based method was proposed, which exploited the spatial correlation among the antennas in a narrowband single-UE system using the K-SVD [36] algorithm. The method relies on learning the K-SVD dictionary from the training data set and on feeding back the dictionary learned at the UE to the BS. Using this K-SVD dictionary, the CSI is compressed at the UE and reconstructed at the BS. The motivation for this K-SVD based dictionary is not only to reduce the CSI feedback, but also to reduce the NMSE of the CSI reconstruction.
As the channel-induced dispersion is increased, the number of OFDM subcarriers also has to be increased to avoid an
\begin{table}
\begin{tabular}{|l|l|} \hline
**Acronym** & **Meaning** \\ \hline
AoD & Angle-of-Departure \\
BER & Bit Error Rate \\
BS & Base Station \\
CD & Common Dictionary \\
CDL & Common Dictionary Learning \\
CFR & Channel Frequency Response \\
CIR & Channel Impulse Response \\
CS & Compressive Sensing \\
CSI & Channel State Information \\
DFT & Discrete Fourier Transform \\
DL & Deep Learning \\
FD & Frequency Domain \\
FDD & Frequency Division Duplex \\
FDCHTF & Frequency Domain Channel Transfer Function \\
GR-SVD & Golub–Reinsch SVD \\
K-SVD & K Singular Value Decomposition \\
MIMO & Multiple-Input Multiple-Output \\
MP & Matching Pursuit \\
MSE & Mean Squared Error \\
MU & Multi-User \\
NMSE & Normalized Mean Squared Error \\
OF & Objective Function \\
OFDM & Orthogonal Frequency Division Multiplexing \\
OMP & Orthogonal Matching Pursuit \\
OP & Orthogonal Procrustes \\
QuaDRiGa & Quasi Deterministic Radio Channel Generator \\
RA & Receive Antenna \\
SU & Single User \\
TA & Transmit Antenna \\
TD & Time Domain \\
TDD & Time Division Duplex \\
TPC & Transmit Precoding \\
UE & User Equipment \\
VQC & Vector Quantization Codebook \\ \hline
\end{tabular}
\end{table} TABLE II: LIST OF ACRONYMS
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & [11] & [14] & [15] & [16] & [17] & [18] & Our \\ \hline Massive MIMO architecture & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Spatially correlated channels & ✓ & ✓ & ✓ & ✓ & & ✓ \\ \hline Sparsity & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Dictionary learning & & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Feedback savings & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Memory savings at UE & & & & ✓ & & ✓ \\ \hline Multi-user MIMO OFDM & & & & ✓ & & ✓ \\ \hline UE mobility & & & & & & ✓ \\ \hline \end{tabular}
\end{table} TABLE I: COMPARING OUR CONTRIBUTION TO THE EXISTING LITERATURE
excessive performance degradation. Hence, when using the K-SVD algorithm of [15], a separate dictionary has to be learned for each subcarrier, so the number of subcarrier K-SVD based dictionaries grows with the number of subcarriers. Handling this multitude of subcarrier K-SVD based dictionaries is cumbersome in terms of memory management and feedback load. To circumvent this problem, we propose a novel common dictionary learning (CDL) technique, which can replace the individual subcarrier K-SVD dictionaries, leading to the concept of a common dictionary (CD). The CD effectively captures the channel characteristics of all the subcarriers and UEs, making it the optimal sparsifying dictionary for representing the channel's sparsity in massive MIMO systems. Given the learned CD, compressive channel estimation techniques can be constructed for acquiring the CSI. A pair of methods having different pros and cons is proposed for CDL, namely the CDL-KSVD method and the CDL-orthogonal Procrustes (OP) [37] based method. These methods are detailed in Section III of the paper. Again, our primary motivation is to reduce the CSI feedback overhead on the uplink as well as the memory requirement at both the UE and the BS in FDD massive MU-MIMO-OFDM systems.
Main contributions of this article:
1. We proposed a novel CDL framework for learning a CD, mainly using the CDL-KSVD and the CDL-OP methods. In the CDL framework proposed for a multi-UE system, the CD conceived exploits the spatial correlation of the FD channels across all the subcarriers and the UEs. We demonstrate that this implementation improves the NMSE performance when compared to the existing methods.
2. In the CDL framework proposed for a single-UE system, the learning of CD is implemented at the UE. The UE sends only the CD to the BS in the uplink instead of all the subcarrier K-SVD dictionaries. This implementation reduces the dictionary feedback to the BS by a factor of \(N_{c}\) and also reduces the memory requirement by having a single CD at the UE and the BS.
3. We evaluate the proposed CD in the context of various system configurations and channel conditions in the face of UE mobility. The numerical results show a significant reduction in the NMSE of channel estimation and highlight the bit error rate (BER) performance of the channel estimates when using our learned dictionary. This corroborates the effectiveness of the CDL framework proposed over existing methods in wideband massive MIMO systems.
The remainder of this paper is organized as follows. Section II presents the system model, CS procedure, and the motivation. In Section III, the proposed methods are discussed. Then the application of the proposed methods in wideband systems is discussed in Section IV. Our simulation results are provided in Section V to show the NMSE performance of the proposed method compared to state-of-the-art methods. Finally, in Section VI, our conclusions are given.
Notations: We use lower-case (upper-case) bold letters to denote column vectors (matrices), and the superscripts \((.)^{-1},(.)^{*},(.)^{H}\) represent the inverse, complex conjugate and Hermitian transpose operators, respectively; \(\|.\|_{F}\) denotes the Frobenius norm of a matrix; \(\otimes\) denotes the Kronecker product; tr(.) is the trace of a matrix; the vec(.) operation returns a column vector by stacking all the columns of a matrix.
## II System Model
In this section, we first introduce the massive MU-MIMO-OFDM channel and the associated spatial correlation matrices at the BS and UEs. Furthermore, we conceive the CS-based channel reconstruction procedure of massive MIMO channels. Next, we highlight the dictionary learning algorithms available in the literature. Then, in the final sub-section we describe the motivation of the proposed CDL framework.
### _The Massive MU-MIMO-OFDM Channel_
We consider a massive MU-MIMO-OFDM system using a uniform linear array (ULA) of \(N_{t}\) TAs at the BS, \(N_{r}\) RAs at each of the \(K\) UEs, and \(N_{c}\) subcarriers. For the \(k\)-th UE (\(k=1\) to \(K\)), consider a frequency-selective channel having \(L\) taps in the TD. Let \(\mathbf{H}_{l,k}\) represent the FDCHTF of the \(l\)-th subcarrier of the \(k\)-th UE, given by
\[\mathbf{H}_{l,k}=\sum_{i=0}^{L-1}\bar{\mathbf{H}}_{i,k}e^{-j\frac{2\pi il}{N_{c}}}, \tag{1}\]
where \(\mathbf{\bar{H}}_{i,k}\in\mathbb{C}^{N_{r}\times N_{t}}\) is the \(i\)-th tap TD channel matrix. The tap coefficient \(\mathbf{\bar{H}}_{i,k}(p,q)\) represents the channel impulse response (CIR) of the link spanning from the \(q\)-th BS antenna to the \(p\)-th UE antenna.
The spatial correlation of massive MIMO channels can be modeled by a Kronecker structure having separable transmit and receive correlation matrices [31], with \(\bar{\mathbf{H}}_{i,k}\) given by
\[\bar{\mathbf{H}}_{i,k}=\frac{1}{\sqrt{\text{tr}(\mathbf{R}_{UE,k})}}\mathbf{R}_{UE,k}^{\frac{1}{2}}\breve{\mathbf{H}}_{i,k}\mathbf{R}_{BS}^{\frac{1}{2}}, \tag{2}\]

where \(\breve{\mathbf{H}}_{i,k}\) is an \(N_{r}\times N_{t}\) matrix whose elements are independent and identically distributed (i.i.d.) complex zero-mean, unit-variance Gaussian random variables. Furthermore, \(\mathbf{R}_{BS}\) and \(\mathbf{R}_{UE,k}\) are the spatial correlation matrices at the BS and the \(k\)-th UE, respectively.
The spatial correlation matrices are generated by Jakes' model, often used in the literature, whereby the \(uv\)-th element of \(\mathbf{R}_{BS}\) and \(\mathbf{R}_{UE,k}\) is given by \(r_{uv}=J_{0}(\,2\pi d_{uv}/\lambda)\), where \(d_{uv}\) is the distance between the antennas \(u\) and \(v\), \(\lambda\) is the carrier wavelength and \(J_{0}(.)\) denotes the zeroth-order Bessel function of the first kind [39].
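As an illustration, a minimal Python sketch for generating the correlated taps of (2) under Jakes' model is given below (the helper functions and constants are ours; the ULA spacing \(d=\lambda/15\) matches the simulation settings used later in Section IV):

```python
import numpy as np
from scipy.special import j0

def jakes_corr(n_ant, d_over_lambda):
    """Spatial correlation matrix with entries r_uv = J0(2*pi*d_uv/lambda)."""
    idx = np.arange(n_ant)
    d = np.abs(idx[:, None] - idx[None, :]) * d_over_lambda
    return j0(2.0 * np.pi * d)

def sqrtm_psd(R):
    """Square root of a symmetric PSD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(R)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def correlated_tap(Nr, Nt, R_ue, R_bs, rng):
    """One TD channel tap following the Kronecker model of eq. (2)."""
    Hw = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
    return sqrtm_psd(R_ue) @ Hw @ sqrtm_psd(R_bs) / np.sqrt(np.trace(R_ue))

rng = np.random.default_rng(0)
Nt, Nr, Nc, L, l = 64, 2, 32, 4, 0
R_bs, R_ue = jakes_corr(Nt, 1 / 15), jakes_corr(Nr, 1 / 15)  # spacing d = lambda/15
taps = [correlated_tap(Nr, Nt, R_ue, R_bs, rng) for _ in range(L)]
H_l = sum(taps[i] * np.exp(-2j * np.pi * i * l / Nc) for i in range(L))  # eq. (1)
```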
### _Compressive Sensing Based Channel Reconstruction_
Fig. 2 shows the basic schematic of the FDCHTF compression and reconstruction across the \(l\)-th subcarrier of the \(k\)-th UE. More specifically, observe in Fig. 3 that at the \(k\)-th UE we present the FDCHTF view across all the subcarriers and its sparsification using the existing as well as the proposed methods in parallel. The remainder of this section introduces each of the steps numbered in both figures.
1) Between the BS and the \(k\)-th UE, the complete channel frequency response matrix, which includes all the \(N_{c}\) subcarrier channels, is formed by stacking the channel matrices column-wise:
\[\mathbf{H}_{k}=[\mathbf{H}_{1,k},\ \ldots,\ \mathbf{H}_{l,k},\ \ldots,\ \mathbf{H}_{N_{c},k}]. \tag{3}\]
2) We assume that the \(k\)-th UE perfectly estimates its channel matrix \(\mathbf{H}_{k}\), which should be shared with the BS through feedback. Instead of sending the FDCHTF of each subcarrier directly, each matrix \(\mathbf{H}_{l,k}\) is first vectorized into an \(N_{r}N_{t}\times 1\) column vector using the vec(.) operation:
\[\mathbf{h}_{l,k}=\text{vec}(\mathbf{H}_{l,k}). \tag{4}\]
3)–4) In practical systems, the UE has to compress the estimated channel vector \(\mathbf{h}_{l,k}\in\mathbb{C}^{N_{r}N_{t}\times 1}\) to avoid a high feedback load. The wireless channel vector \(\mathbf{h}_{l,k}\) can be represented by a sparse vector [31] after a transformation
\[\mathbf{h}_{l,k}=\mathbf{\Psi}\mathbf{\tilde{h}}_{l,k}, \tag{5}\]
where \(\mathbf{\tilde{h}}_{l,k}\) is the sparse representation of \(\mathbf{h}_{l,k}\). The number of non-zero components of a sparse channel vector is called the sparsity or sparsity index, denoted by \(S\), while \(\mathbf{\Psi}\) is an \(N_{r}N_{t}\times N_{r}N_{t}\) dictionary known to both the UE and the BS. A popular example of \(\mathbf{\Psi}\) is the DFT matrix. Next, we introduce the measurement (sensing) matrix \(\mathbf{\Phi}\), which plays a crucial role in compressive sensing: it defines the measurement process in CS, and hence influences the reconstruction quality and the efficiency of
\begin{table}
\begin{tabular}{|l|l|l|} \hline & Single-UE & Multi-UE \\ \hline \multirow{2}{*}{Narrowband} & \multirow{2}{*}{CS basis [14, 31]} & Rotated DFT [11] \\ & & K-SVD dictionary [15] \\ \multirow{4}{*}{Wideband} & Multidimensional CSI [17] & \\ & CsiNet [38] & \\ \cline{1-1} & CsiNet-LSTM [25] & CS-ReNet [27] \\ \cline{1-1} & DNN [26] & \\ \hline \end{tabular}
\end{table} TABLE IV: References for CSI compression techniques
Fig. 1: Overview of the considered massive MU-MIMO-OFDM system. Right: \(K\) UEs with \(N_{r}\) RAs each for \(k=1\) to \(K\); Left: massive MIMO base station with \(N_{t}\) TAs. \(\mathbf{H}_{k}\) represents the complete FDCHTF of the \(k\)-th UE and \(\mathbf{H}_{l,k}\) represents the FDCHTF across the \(l\)-th subcarrier of the \(k\)-th UE.
the signal recovery algorithm. It is responsible for mapping the original high-dimensional signal to a lower-dimensional signal.
5)–6) To compress the channel vector \(\mathbf{h}_{l,k}\), a measurement matrix \(\mathbf{\Phi}\in\mathbb{C}^{N_{g}\times N_{r}N_{t}}\) \((N_{g}\ll N_{r}N_{t})\) satisfying the Restricted Isometry Property (RIP) [8], which facilitates sparse vector recovery, is introduced, yielding:
\[\mathbf{h}_{c,l}^{k}=\mathbf{\Phi}\mathbf{\Psi}\tilde{\mathbf{h}}_{l,k}, \tag{6}\]
where \(\mathbf{h}_{c,l}^{k}\) is the compressed channel vector with dimension \(N_{g}\times 1\). Let us now define \(\mathbf{\Theta}=\mathbf{\Phi}\mathbf{\Psi}\).
7) Then the reconstruction of \(\mathbf{h}_{l,k}\) can be formulated as an \(\ell_{0}\)-norm minimization problem, and the sparse vector \(\tilde{\mathbf{h}}_{l,k}\) can be obtained by solving
\[\min_{\tilde{\mathbf{h}}_{l,k}}\|\tilde{\mathbf{h}}_{l,k}\|_{0}\quad s.t.\ \ \mathbf{h}_{c,l}^{k}=\mathbf{\Theta}\tilde{\mathbf{h}}_{l,k}. \tag{7}\]
Thus, instead of feeding back \(\mathbf{h}_{l,k}\), the UE sends the low-dimensional vector \(\mathbf{h}_{c,l}^{k}\) to the BS for reducing the FDCHTF feedback. The BS reconstructs \(\hat{\mathbf{h}}_{l,k}\) from \(\mathbf{h}_{c,l}^{k}\), where \(\hat{\mathbf{h}}_{l,k}\) represents the reconstructed \(\mathbf{h}_{l,k}\). The reconstructed channel vector \(\hat{\mathbf{h}}_{l,k}\) at the BS is utilized for precoding during the data transmission stage. The precoder matrices employed at the BS are denoted by \(\mathbf{W}^{g}\) and \(\mathbf{W}_{gp}^{g}\), which correspond to the beamforming weights obtained from the true channels and from the channels estimated using the CDL-OP dictionary, respectively, for a compression factor of \(g\). Here, the compression factor is defined as \(g=\frac{N_{r}N_{t}}{N_{g}}\), where \(N_{g}\times 1\) is the dimension of the compressed channel vector \(\mathbf{h}_{c,l}^{k}\).
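The complete compress-and-reconstruct cycle of steps 1)–7) can be illustrated by the following minimal, self-contained Python sketch (our illustration rather than the simulator of Section IV; the fixed DFT dictionary, the random Gaussian \(\mathbf{\Phi}\) and the greedy OMP solver are the standard choices described above, and all constants are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
Nr, Nt, S = 1, 64, 8            # antennas and sparsity index
n = Nr * Nt                     # length of h = vec(H_{l,k})
Ng = n // 2                     # compressed dimension, i.e., g = 2

# Dictionary (fixed DFT basis) and a synthetic S-sparse channel, eq. (5)
Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)
h_tilde = np.zeros(n, dtype=complex)
idx = rng.choice(n, S, replace=False)
h_tilde[idx] = rng.standard_normal(S) + 1j * rng.standard_normal(S)
h = Psi @ h_tilde

# UE side: compression with a random (complex) Gaussian measurement matrix, eq. (6)
Phi = (rng.standard_normal((Ng, n)) + 1j * rng.standard_normal((Ng, n))) / np.sqrt(2 * Ng)
Theta = Phi @ Psi
h_c = Phi @ h                   # low-dimensional vector fed back to the BS

def omp(Theta, y, S):
    """Greedy OMP for problem (7): pick the atom best correlated with the
    residual, re-fit on the enlarged support, and repeat S times."""
    support, r = [], y.copy()
    for _ in range(S):
        support.append(int(np.argmax(np.abs(Theta.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        r = y - Theta[:, support] @ coef
    x = np.zeros(Theta.shape[1], dtype=complex)
    x[support] = coef
    return x

# BS side: sparse recovery and channel reconstruction
h_hat = Psi @ omp(Theta, h_c, S)
print(np.linalg.norm(h_hat - h) / np.linalg.norm(h))  # small reconstruction error
```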
### _Motivation for the Common Dictionary Learning Framework_
In CS-based feedback schemes, the traditional choice of the dictionary is a fixed DFT matrix, which does not exploit the spatial correlation between the antennas. The authors of [11] proposed a rotated version of the DFT dictionary for better exploiting the sparsity, resulting in a reduced FDCHTF mean-squared error (MSE) for a narrowband multi-user system supporting single-antenna UEs. But this rotated basis still failed to exploit the antennas' spatial correlation, which could improve the MSE further.
The authors of [15] have shown that a dictionary can be learned using the K-SVD algorithm for narrowband FDD massive SU-MIMO systems. This learned K-SVD dictionary exploits the spatial correlation between the antennas, and its FDCHTF reconstruction performance is improved compared to the fixed DFT dictionary. The proposed method requires FDCHTF and dictionary feedback to the BS. However, in practical communication systems, the channels are frequency-selective, and OFDM is a ubiquitous technique for such systems. In a massive MU-MIMO-OFDM system, to extend the idea of dictionary learning, it is necessary to feed back the FDCHTF and the K-SVD based dictionary of each subcarrier of all UEs. Feeding back the entire FDCHTF \(\mathbf{H}_{k}\) of the \(k\)-th UE would be a huge burden in the uplink. Another important issue is that substantial memory is required for saving all the \(N_{c}\) subcarrier dictionaries at both the UE and the BS. The dimension of each subcarrier dictionary is \(N_{r}N_{t}\times N_{r}N_{t}\), hence the memory required to store \(N_{c}\) dictionaries is \(N_{c}(N_{r}N_{t})^{2}\).
To overcome these challenges, we propose the novel idea of a common dictionary, which can replace the requirement of individual subcarrier dictionaries. The CD is designed for exploiting the spatial correlation across all the subcarriers and UEs in the FD, hence improving the CSI reconstruction accuracy. The proposed CD reduces the CSI feedback load and memory requirement in both single and multi-UE systems. In particular, the feedback load is further reduced for a single
Fig. 2: Overview of the considered CSI feedback compression scheme in the massive MU-MIMO-OFDM system. The FDCHTF across the \(l\)-th subcarrier of the \(k\)-th UE is compressed at the UE and reconstructed at the BS.
UE system by sending only a single CD from the UE to the BS. Hence, the proposed CDL framework reduces the CSI feedback and memory requirements, and we also study the NMSE performance compared to the DFT and subcarrier K-SVD dictionaries.
## III Proposed Common Dictionary Learning Framework
In this section, we detail the CDL framework proposed for a multi-UE system, which constructs a dictionary from the estimated channel vectors and the K-SVD based dictionaries of the UEs. Before introducing the proposed framework, Fig. 4 provides a diagram showing the flow of the analysis described in the paper; this diagram guides the reader through the paper.
### _Common Dictionary Learning Framework_
The main goal of the proposed CDL framework is to construct a CD, denoted by \(\boldsymbol{\Psi}_{c}\), that can exploit the correlation of channels across all the UEs and BS, for improving the CSI reconstruction at the BS. The matrix of training channel vectors is denoted by \(\mathbf{H}^{\prime}\), which consists of \(M^{\prime}\) channel vectors collected for \(N\) different frames across \(N_{c}\) subcarriers and \(K\) UEs. Then we have \(M^{\prime}=N\times N_{c}\times K\).
To elaborate further, \(\mathbf{H}^{\prime}\) is structured as \(\mathbf{H}^{\prime}=[\mathbf{H}^{\prime}_{1},\ldots,\mathbf{H}^{\prime}_{k},\ldots,\mathbf{H}^{\prime}_{K}]\), and each sub-matrix in \(\mathbf{H}^{\prime}\) is represented as \(\mathbf{H}^{\prime}_{k}=[\mathbf{H}^{\prime}_{1,k},\ldots,\mathbf{H}^{\prime}_{l,k},\ldots,\mathbf{H}^{\prime}_{N_{c},k}]\), \(\forall k\in\{1,2,\ldots,K\}\), \(\forall l\in\{1,2,\ldots,N_{c}\}\).

**Step 1.** Main objective: obtain the common dictionary \(\boldsymbol{\Psi}_{c}\) that results in a low NMSE.

**Step 2.** Optimization problem: find the optimal CD that minimizes the objective function \(\{\|\mathbf{H}^{\prime}-\boldsymbol{\Psi}\tilde{\mathbf{H}}^{\prime}\|_{F}^{2}\}\) and ultimately minimizes the NMSE.

**Step 3.** Designing the CD: we first initialize \(\boldsymbol{\Psi}\) with a DFT matrix and then find the \(\boldsymbol{\Psi}_{c}\) for which the product \(\boldsymbol{\Psi}\tilde{\mathbf{H}}^{\prime}\) approaches \(\mathbf{H}^{\prime}\).

**Step 4.** Strategy for solving the objective function: since the objective function is non-convex, iterative algorithms constitute one of the solutions, as shown in Section III-A.1.

**Step 5.** Proposed solutions: two solutions, the CDL-OP and the CDL-KSVD methods, are proposed in Section III to obtain \(\boldsymbol{\Psi}_{c}\).

Fig. 4: Flow of the mathematical analysis.
Fig. 3: Overview of the massive MU-MIMO-OFDM system having a \(k\)-th UE with \(N_{r}\) RAs and a BS with \(N_{t}\) TAs. \(\mathbf{H}_{k}\) represents the complete FDCHTF at the \(k\)-th UE and \(\mathbf{H}_{l,k}\) represents the FDCHTF across the \(l\)-th subcarrier of the \(k\)-th UE. The FDCHTFs are vectorized and then undergo sparse transformation using the dictionary obtained from the existing methods 1, 2 of [15] and the proposed framework. The circled numbers and the notations are the same as in Fig.2.
Similarly, \(\mathbf{H}^{\prime}_{l,k}\) is defined as \(\mathbf{H}^{\prime}_{l,k}=[\mathbf{h}_{l_{1},k},\ldots,\mathbf{h}_{l_{n},k},\ldots,\mathbf{h}_{l_{N},k}]\), where \(\mathbf{h}_{l_{n},k}\) represents the channel vector transformation of \(\mathbf{H}_{l_{n},k}\) at the \(n\)-th MU-MIMO-OFDM frame (time instant). We assume that the channel envelope remains constant over an OFDM frame and then changes from frame to frame according to the vehicular velocity; hence, consecutive frames are correlated.
The sparse representation of the matrix \(\mathbf{H}^{\prime}\) is denoted as \(\mathbf{\tilde{H}}^{\prime}=[\mathbf{\tilde{H}}^{\prime}_{1},\ldots,\mathbf{ \tilde{H}}^{\prime}_{k},\ldots,\mathbf{\tilde{H}}^{\prime}_{K}]\), and each sub-matrix in \(\mathbf{\tilde{H}}^{\prime}\) is represented as \(\mathbf{\tilde{H}}^{\prime}_{k}=[\mathbf{\tilde{H}}^{\prime}_{1,k},\ldots, \mathbf{\tilde{H}}^{\prime}_{l,k},\ldots,\mathbf{\tilde{H}}^{\prime}_{N_{c},k}]\)\(\forall k\in\{1,2,\ldots,K\}\), \(\forall l\in\{1,2,\ldots,N_{c}\}\). Similarly \(\mathbf{\tilde{H}}^{\prime}_{l,k}=[\mathbf{\tilde{h}}_{l_{1},k},\ldots, \mathbf{\tilde{h}}_{l_{n},k},\ldots,\mathbf{\tilde{h}}_{l_{N},k}]\), where \(\mathbf{\tilde{h}}_{l_{n},k}\) denotes the sparse representation of the channel vector \(\mathbf{h}_{l_{n},k}\).
The CDL optimization problem is formulated as:
\[\min_{\mathbf{\Psi},\mathbf{\tilde{H}}^{\prime}}\{\|\mathbf{H}^{ \prime}-\mathbf{\Psi}\mathbf{\tilde{H}}^{\prime}\|_{F}^{2}\}\] \[s.t.\ \|\mathbf{\tilde{h}}_{l_{n},k}\|_{0}\leq S,\ \forall\ n\in\{1,2,\ldots,N\},\] \[\forall\ l\in\{1,2,\ldots,N_{c}\},\ \forall\ k\in\{1,2,\ldots,K\}. \tag{8}\]
To solve the optimization problem in (8) we propose the following methods.
#### Iii-A1 CDL-KSVD method
In the CDL-KSVD method the training set \(\mathbf{H}^{\prime}\) consists of the channel vectors of all the UEs. The training set is employed to learn the dictionary \(\mathbf{\Psi}_{c}\) using the K-SVD algorithm [36]. The K-SVD algorithm has two stages: the sparse coding stage and the dictionary update stage. In the sparse coding stage, each column of \(\mathbf{H}^{\prime}\) is sparsely represented using a dictionary. The dictionary update stage involves updating each column of \(\mathbf{\Psi}\) with a dominant singular vector. As a result, the learned dictionary \(\mathbf{\Psi}_{c}\) has unit-norm columns.
* _Sparse coding stage_: In the first stage of the K-SVD algorithm, the optimization problem is formulated as: \[\min_{\mathbf{\tilde{H}}^{\prime}}\{\|\mathbf{H}^{\prime}- \mathbf{\Psi}\mathbf{\tilde{H}}^{\prime}\|_{F}^{2}\}\] \[s.t.\ \|\mathbf{\tilde{h}}_{l_{n},k}\|_{0}\leq S,\ \forall\ n\in\{1,2,\ldots,N\},\] \[\forall\ l\in\{1,2,\ldots,N_{c}\},\ \forall\ k\in\{1,2,\ldots,K\}.\] (9) To solve (III-A1), we begin by initializing the matrix \(\mathbf{\Psi}\) with a DFT dictionary. The next step involves finding the matrix \(\mathbf{H}^{\prime}\) having a sparse representation, which is an \(\ell_{0}\) problem and it is carried out by using the OMP algorithm [8]. The objective of the OMP algorithm is to find a sparse representation of \(\mathbf{H}^{\prime}\) using a small number of non-zero elements in the matrix \(\mathbf{\tilde{H}}^{\prime}\).
* _Dictionary update stage_: In the second stage of the K-SVD algorithm, the optimization problem is formulated as: \[\min_{\mathbf{\Psi}}\{\|\mathbf{H}^{\prime}-\mathbf{\Psi}\mathbf{\tilde{H}}^{ \prime}\|_{F}^{2}\}.\] (10) The solution to the problem posed in (10) is obtained by updating each column of the dictionary by computing a partial SVD of a matrix [6]. After the dictionary update stage, the dictionary \(\mathbf{\Psi}\) gets updated to \(\mathbf{\Psi}_{c}\). It is to be noted that this method updates only the columns corresponding to sparse coefficients of the channel matrix \(\mathbf{\tilde{H}}^{\prime}\).
* Repeat sparse coding and dictionary update stages until the stopping criterion is met.
The main advantage of the CDL-KSVD method is that it captures the spatial correlation of the channel vectors. But its drawback is that it requires a partial SVD operation for each column update in the dictionary update stage, which is computationally expensive.
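For concreteness, a minimal sketch of the CDL-KSVD iteration, alternating the sparse coding stage (9) and the per-atom partial-SVD dictionary update (10), is given below (our simplified rendition of the K-SVD algorithm of [36]; the DFT initialization follows the text, and the `omp` helper is the one from our earlier sketch in Section II-B):

```python
import numpy as np

def omp(D, y, S):
    # Greedy OMP, as in the earlier sketch of Section II-B
    support, r = [], y.copy()
    for _ in range(S):
        support.append(int(np.argmax(np.abs(D.conj().T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef
    x = np.zeros(D.shape[1], dtype=complex)
    x[support] = coef
    return x

def cdl_ksvd(H_train, S, n_iter=10):
    """Alternate the sparse coding stage (9) and the per-atom
    rank-1 (partial-SVD) dictionary update stage (10)."""
    n = H_train.shape[0]
    Psi = np.fft.fft(np.eye(n)) / np.sqrt(n)        # DFT initialization
    for _ in range(n_iter):
        # Sparse coding stage: one OMP run per training channel vector
        X = np.column_stack([omp(Psi, y, S) for y in H_train.T])
        # Dictionary update stage: refresh only the atoms that are actually used
        for k in range(n):
            used = np.nonzero(np.abs(X[k]) > 1e-12)[0]
            if used.size == 0:
                continue
            X[k, used] = 0.0
            E = H_train[:, used] - Psi @ X[:, used]  # residual without atom k
            U, s, Vh = np.linalg.svd(E, full_matrices=False)
            Psi[:, k] = U[:, 0]                      # dominant (unit-norm) singular vector
            X[k, used] = s[0] * Vh[0]
    return Psi
```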
#### Iii-A2 CDL-OP method
The CDL-OP method stands for common dictionary learning - orthogonal Procrustes method. In this method we solve the orthogonal Procrustes problem to learn the CD [37]. The CD obtained is a square matrix with dimensions \(N_{r}N_{t}\times N_{r}N_{t}\). The constraint imposed is that the columns of the dictionary should be orthogonal. The optimization problem is formulated as:
\[\min_{\mathbf{\Psi}}\{\|\mathbf{H}^{\prime}-\mathbf{\Psi}\mathbf{\tilde{H}}^{ \prime}\|_{F}^{2}\}\quad s.t.\ \ \mathbf{\Psi}^{H}\mathbf{\Psi}=\mathbf{I}, \tag{11}\]
where \(\mathbf{\Psi}\) in (11) may be found explicitly by singular value decomposition (SVD).
\[\text{Let} \mathbf{C}=\mathbf{\tilde{H}}^{\prime}\mathbf{H}^{\prime H}\] \[[\mathbf{U},\mathbf{\Sigma},\mathbf{V}]=\text{SVD}(\mathbf{C})\] \[\mathbf{\Psi}=\mathbf{V}\mathbf{U}^{H}. \tag{12}\]
The resulting dictionary \(\mathbf{\Psi}\) obtained in (12) is the CD (\(\mathbf{\Psi}_{c}\)). The main advantage of the CDL-OP method is that it captures the spatial correlation of the channel vectors, but at the cost of an SVD operation.
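The closed-form update (12) is a one-liner in practice; the following sketch (our notation, with the training channels \(\mathbf{H}^{\prime}\) and their current sparse codes \(\tilde{\mathbf{H}}^{\prime}\) stored column-wise) makes the SVD-based solution explicit:

```python
import numpy as np

def cdl_op(H_train, H_sparse):
    """Closed-form CDL-OP update of eq. (12): the unitary Psi_c minimizing
    ||H' - Psi @ H_tilde'||_F over all Psi with Psi^H Psi = I."""
    C = H_sparse @ H_train.conj().T        # C = H_tilde' (H')^H
    U, _, Vh = np.linalg.svd(C)            # C = U Sigma V^H
    return Vh.conj().T @ U.conj().T        # Psi_c = V U^H
```

In a complete CDL-OP procedure, one would typically alternate this update with OMP-based sparse coding, analogously to the CDL-KSVD loop above.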
### _Common Dictionary for Wideband Multi-UE System_
A wideband channel has a broader signal bandwidth than the coherence bandwidth. In this section, we first discuss the CDL framework conceived for a wideband system supporting \(K\) UEs, and then highlight the simplified scenario, where a single UE is present. Next, we will quantify the memory savings of storing only a single CD. Then in the final sub-section, we elaborate on the dictionary feedback reduction by sending only a single CD in a single-UE system.
In the multi-UE system, since no communication takes place among the \(K\) UEs, CDL is impossible at any UE. Consequently, the CDL is only feasible at the BS. For the CDL at the BS, we require the subcarrier dictionaries and the reconstructed channels (used as training channel vectors). Let the reconstructed sparse channel matrix at the BS be represented by \(\mathbf{\hat{\tilde{H}}^{\prime}}=[\mathbf{\hat{\tilde{H}}}^{\prime}_{1},\ldots, \mathbf{\hat{\tilde{H}}}^{\prime}_{k},\ldots,\mathbf{\hat{\tilde{H}}}^{\prime }_{K}]\), where each sub-matrix in \(\mathbf{\hat{\tilde{H}}^{\prime}}\) is represented as \(\mathbf{\hat{\tilde{H}}}^{\prime}_{k}=[\mathbf{\hat{\tilde{H}}}^{\prime}_{1,k}, \ldots,\mathbf{\hat{\tilde{H}}}^{\prime}_{l,k},\ldots,\mathbf{\hat{\tilde{H}}}^{ \prime}_{N_{c},k}]\ \forall k\in\{1,2,\ldots,K\}\), with \(\mathbf{\hat{\tilde{H}}}^{\prime}_{l,k}=[\mathbf{\hat{\tilde{h}}}_{l_{1},k}, \ldots,\mathbf{\hat{\tilde{h}}}_{l_{n},k},\ldots,\mathbf{\hat{\tilde{h}}}_{l_{N},k}]\), containing \(N\) reconstructed sparse channel vectors of each sub-carrier, i.e., \(\forall l\in\{1,2,\ldots,N_{c}\}\).
Let the reconstructed matrix of training channel vectors at the BS be represented by \(\mathbf{\hat{H}}^{\prime}=[\mathbf{\hat{H}}^{\prime}_{1},\ldots,\mathbf{\hat{ \tilde{H}}}^{\prime}_{k},\ldots,\mathbf{\hat{\tilde{H}}}^{\prime}_{K}]\), where each sub-matrix in \(\mathbf{\hat{H}}^{\prime}\) is represented as
\([\hat{\mathbf{H}}^{\prime}_{1,k},\ldots,\hat{\mathbf{H}}^{\prime}_{l,k},\ldots,\hat{\mathbf{H}}^{\prime}_{N_{c},k}]\)\(\forall k\in\{1,2,\ldots,K\}\), with \(\hat{\mathbf{H}}^{\prime}_{l,k}=[\hat{\mathbf{h}}_{l_{1},k},\ldots,\hat{\mathbf{h}}_{l_{n},k},\ldots,\hat{\mathbf{h}}_{l_{N},k}]\) when considering \(N\) reconstructed channel vectors for each subcarrier, i.e., \(\forall l\in\{1,2,\ldots,N_{c}\}\). The total number of reconstructed training channel vectors in \(\hat{\mathbf{H}}^{\prime}\) is \(M^{\prime}=N\times N_{c}\times K\).
Importantly, at this stage we have to consider \(\hat{\mathbf{H}}^{\prime}\) instead of \(\mathbf{H}^{\prime}\) and \(\hat{\mathbf{H}}^{\prime}\) instead of \(\hat{\mathbf{H}}^{\prime}\) in (8). The optimization problem of finding \(\mathbf{\Psi}_{c}\) in the multi-UE system is formulated as follows:
\[\min_{\mathbf{\Psi},\hat{\mathbf{H}}^{\prime}}\{\|\hat{\mathbf{H }}^{\prime}-\mathbf{\Psi}\hat{\mathbf{H}}^{\prime}\|_{F}^{2}\}\] \[s.t. \|\hat{\hat{\mathbf{h}}}_{l_{n},k}\|_{0}\leq S,\ \forall\ n\in\{1,2,\ldots,N\},\] \[\forall\ l\in\{1,2,\ldots,N_{c}\},\ \forall\ k\in\{1,2,\ldots,K\}. \tag{13}\]
The single-UE system (\(K=1\)) is a special case of the multi-UE system. In this case, in contrast to the multi-UE system of Fig. 5, there is no need to send the subcarrier dictionaries (\(\mathbf{\Psi}_{ksvd,l}^{k}\), \(\forall l\in\{1,2,\ldots,N_{c}\}\)) from the UEs to the BS. The CD is learned at the UE itself using one of the two proposed methods, and the UE then sends \(\mathbf{\Psi}_{c}\) to the BS. From this point on, both the BS and the UE use \(\mathbf{\Psi}_{c}\). The total number of training channel vectors in \(\mathbf{H}^{\prime}\) is \(M^{\prime}=N\times N_{c}\).
The optimization problem of finding \(\mathbf{\Psi}_{c}\) is as follows:
\[\min_{\mathbf{\Psi},\hat{\mathbf{H}}^{\prime}}\{\|\mathbf{H}^{ \prime}-\mathbf{\Psi}\bar{\mathbf{H}}^{\prime}\|_{F}^{2}\}\] \[s.t. \|\hat{\mathbf{h}}_{l_{n}}\|_{0}\leq S,\ \forall\ n\in\{1,2,\ldots,N\},\] \[\forall\ l\in\{1,2,\ldots,N_{c}\}. \tag{14}\]
The CDL framework of our multi-UE system is outlined in Algorithm 1 and its block diagram is shown in Fig. 5. More specifically, in Algorithm 1 we present the step-by-step procedure of the CDL framework. The remainder of this subsection introduces each of the steps numbered in Fig. 5 and the corresponding steps in the algorithm.
1. The BS sends pilots to all the \(K\) UEs in the system; each UE estimates its channels, follows steps 4 to 10 of the algorithm for \(N\) frames, and learns the K-SVD based subcarrier dictionaries.
2. Each UE sends the \(N_{c}\) K-SVD dictionaries to the BS. For the next \(N\) frames, each UE compresses the \(l\)-th subcarrier FDCHTF using the K-SVD dictionary \((\mathbf{\Psi}_{ksvd,l}^{k})\) and sends it to the BS. Then the BS reconstructs the \(l\)-th subcarrier FDCHTF with the aid of the same K-SVD dictionary. Using the reconstructed FDCHTFs and the K-SVD based subcarrier dictionaries of all the \(K\) UEs, the BS learns \(\mathbf{\Psi}_{c}\). This procedure corresponds to steps 14 to 19 of the algorithm.
3. After learning the CD at the BS, the BS sends the CD to all the \(K\) UEs; subsequently, the UEs and the BS follow steps 22 to 25 of the algorithm for FDCHTF compression and reconstruction using \(\mathbf{\Psi}_{c}\).
#### Iii-B1 Memory Reduction Calculation
* The memory required to store each subcarrier dictionary \(\mathbf{\Psi}_{ksvd,l}^{k}\) is \((N_{r}N_{t})^{2}\), \(\forall l\in\{1,2,\ldots,N_{c}\},\forall k\in\{1,2,\ldots,K\}\).
* The total memory required to store \(N_{c}\) subcarrier dictionaries is \(N_{c}(N_{r}N_{t})^{2}\).
* The memory required to store \(\mathbf{\Psi}_{c}\) is \((N_{r}N_{t})^{2}\).
* Total memory storage reduction for a \(K\)-UE system \(=K\times\) ([Memory required to store \(N_{c}\) subcarrier dictionaries] \(-\) [Memory required to store \(\mathbf{\Psi}_{c}\)]), which is formulated as: \[\Delta_{saved}=K(N_{c}-1)(N_{r}N_{t})^{2}.\] (15)
* The total memory storage reduction for a single-UE is \[\Delta_{saved}=(N_{c}-1)(N_{r}N_{t})^{2}.\] (16)
Fig. 5: Overview of the CDL framework for FDD massive MU-MIMO-OFDM system. Right: \(K\) UEs with \(N_{r}\) receive-antennas each; Left: massive MIMO base station with \(N_{t}\) transmit antennas; Center: For simplicity, only dictionary feedback is shown.
#### Iii-B2 Dictionary Feedback Reduction Calculation by Sending a CD in a Single-UE System
* The dimension of each subcarrier K-SVD dictionary \(\boldsymbol{\Psi}_{ksvd,l}\) is \(N_{r}N_{t}\times N_{r}N_{t}\), \(\forall l\in\{1,2,\ldots,N_{c}\}\).
* The total dimension of \(N_{c}\) subcarrier dictionaries is \(N_{c}\times N_{r}N_{t}\times N_{r}N_{t}\).
* The dimension of \(\boldsymbol{\Psi}_{c}\) is \(N_{r}N_{t}\times N_{r}N_{t}\).
* The dictionary feedback saving is given by [the feedback required for sending the \(N_{c}\) subcarrier K-SVD dictionaries (\(\mathcal{T}_{ksvd}\))] \(-\) [the feedback required for sending \(\boldsymbol{\Psi}_{c}\) (\(\mathcal{T}_{com}\))], where we have \[\mathcal{T}_{ksvd}=N_{c}(N_{r}N_{t})^{2}\] \[\mathcal{T}_{com}=(N_{r}N_{t})^{2}\] \[\mathcal{T}_{saved}=\mathcal{T}_{ksvd}-\mathcal{T}_{com}=(N_{c}-1)(N_{r}N_{t})^{2}.\]
* _Reduction in dictionary feedback_: We define the feedback reduction factor by (\(\Upsilon\)): \[\Upsilon=\frac{\mathcal{T}_{com}}{\mathcal{T}_{ksvd}}=\frac{1}{N_{c}}\] (17)
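These counts are easily checked numerically; the small script below (our choice of constants, mirroring the later simulation settings with \(N_{c}=32\), \(N_{r}=1\), \(N_{t}=64\) and \(K=3\)) reproduces the second row of Table V:

```python
# Settings mirroring the later simulations: Nc = 32 subcarriers, Nr = 1, Nt = 64, K = 3 UEs
Nc, Nr, Nt, K = 32, 1, 64, 3

dict_size = (Nr * Nt) ** 2               # entries of one (NrNt x NrNt) dictionary
delta_saved = K * (Nc - 1) * dict_size   # memory saving of eq. (15)
T_ksvd = Nc * dict_size                  # feedback for the Nc subcarrier dictionaries
T_com = dict_size                        # feedback for the single CD
Upsilon = T_com / T_ksvd                 # reduction factor of eq. (17)

print(delta_saved, T_ksvd, T_com, Upsilon)   # 380928 131072 4096 0.03125
```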
#### Iii-B3 Computational Complexity of the Algorithm
We calculate the computational complexity of the dictionary learning stage for both the CDL-OP and the CDL-KSVD methods.
_a) CDL-OP method_:
* In (12), the SVD operation requires all the eigenvectors, resulting in a full SVD operation.
* The computation of a full SVD operation, specifically using the Golub-Reinsch SVD (GR-SVD) method, requires \(21(N_{r}N_{t})^{3}\) floating-point operations (FLOPS) [6]. On the other hand, the Chan-SVD (R-SVD) method requires \(26(N_{r}N_{t})^{3}\) FLOPS [6].
* Therefore, to update the dictionary, we require a computational complexity of \(\mathcal{O}[(N_{r}N_{t})^{3}]\).
_b) CDL-KSVD method_:
* In (10), updating each column of the dictionary requires an SVD operation. This SVD operation only requires the dominant eigenvector, resulting in a partial SVD computation.
* The GR-SVD method requires \(14N_{r}N_{t}N_{c}^{\prime 2}+9N_{c}^{\prime 3}\) FLOPS [6], while the R-SVD method requires \(6N_{r}N_{t}N_{c}^{\prime 2}+20N_{c}^{\prime 3}\) FLOPS [6]. Here, \(N_{c}^{\prime}\) represents the number of non-zero coefficients corresponding to the \(k\)-th row in \(\tilde{\mathbf{H}}^{\prime}\), and \(N_{c}^{\prime}\) ranges from 0 to \(M^{\prime}\).
* Therefore, to update the complete dictionary, a total of \(N_{r}N_{t}\) partial SVD operations are required.
c) For example, let us consider \(N_{t}=64\), \(N_{r}=1\), \(M^{\prime}=1600\), and an average value of \(N_{c}^{\prime}=M^{\prime}/4\). The computational complexity in FLOPS is provided in Table VI. We represent the number of FLOPS required for updating a single column in the dictionary by CDL-KSVD (min), and that imposed by updating all columns in the dictionary using CDL-KSVD (max).
\begin{table}
\begin{tabular}{c c c c c c} \hline
\(N_{t}\) & \(N_{r}\) & \(N_{c}\) & \(\mathcal{T}_{ksvd}\) & \(\mathcal{T}_{com}\) & \(\Upsilon\) \\ \hline
64 & 1 & 4 & 16384 & 4096 & \(1/4\) \\
64 & 1 & 32 & 131072 & 4096 & \(1/32\) \\ \hline
\end{tabular}
\end{table} TABLE V: Dictionary feedback reduction factor in a single-UE system.
#### Iii-B4 CSI Feedback Case Study in a Single UE
The dimension of the compressed channel vector \(\mathbf{h}_{c,l}^{k}\) (\(\in\mathbb{C}^{N_{g}\times 1}\)) sent from the UE in the uplink can be varied by adjusting the compression factor \(g\). Specifically, the dimension is given by \(N_{g}=\frac{N_{r}N_{t}}{g}\). By tuning the value of \(g\), we can beneficially reduce the amount of CSI feedback required.
To quantify the feedback requirements, we introduce the variables \(\gamma_{u}\) and \(\gamma_{c}\) to represent the feedback for the non-dictionary and dictionary-based methods, respectively. In a non-dictionary based method without compression, the CSI fed back from the UE corresponds to \(N_{c}N_{r}N_{t}\) elements for one frame. For \(N^{\prime}\) frames the CSI feedback will be \(\gamma_{u}\) = \(N^{\prime}N_{c}N_{r}N_{t}\). However, in a dictionary-based method associated with compression, the feedback is constituted by the CSI information having \(\frac{N_{c}N_{r}N_{t}}{g}\) elements for one frame, along with a one-time transmission of a dictionary with \((N_{r}N_{t})^{2}\) elements. So, for a total of \(N^{\prime}\) frames the feedback is given by \(\gamma_{c}\) = \((N_{r}N_{t})^{2}\) + \(\frac{N^{\prime}N_{c}N_{r}N_{t}}{g}\).
We define \(\Gamma\) as the CSI feedback ratio, which is calculated as the ratio between \(\gamma_{c}\) and \(\gamma_{u}\). If \(\Gamma<1\), it indicates that the value of \(\gamma_{c}\) is lower than \(\gamma_{u}\), resulting in a saving in CSI feedback.
\[\Gamma=\frac{\gamma_{c}}{\gamma_{u}}=\frac{(N_{r}N_{t})^{2}+\frac{N^{\prime}N_ {c}N_{r}N_{t}}{g}}{N^{\prime}N_{c}N_{r}N_{t}}=\frac{N_{r}N_{t}}{N^{\prime}N_{c }}+\frac{1}{g} \tag{18}\]
For example, in Table VII we consider a scenario associated with \(N^{\prime}=2^{10}\) and vary the values of \(N_{c}\), \(N_{r}\), and \(g\). Using the formula given in (18), we demonstrate significant reductions in the CSI feedback. The level of compression applied to the channel vector \(\mathbf{h}_{c,l}^{k}\) depends on the sparsity parameter \(S\): as per the lower bound, the number of elements in \(\mathbf{h}_{c,l}^{k}\) must satisfy \(N_{g}>2S\) [9]. In Fig. 13, we illustrate the impact of the compression factor \(g\) on the BER vs. signal-to-noise ratio (SNR) performance for both the non-dictionary and dictionary-based methods.
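Equation (18) is easily evaluated numerically; the following snippet (ours) reproduces Table VII up to rounding:

```python
def csi_feedback_ratio(g, Nc, Nr, Nt, n_frames):
    """CSI feedback ratio Gamma of eq. (18)."""
    return (Nr * Nt) / (n_frames * Nc) + 1.0 / g

# The four configurations of Table VII with N' = 2**10 frames
for g, Nc, Nt, Nr in [(2, 32, 64, 1), (2, 64, 64, 2), (4, 32, 64, 1), (4, 64, 64, 2)]:
    print(g, Nc, Nt, Nr, round(csi_feedback_ratio(g, Nc, Nr, Nt, 2 ** 10), 3))
# prints ratios of about 0.502, 0.502, 0.252 and 0.252, i.e., 50% to 75% savings
```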
## IV Numerical Results
In this section, we provide the simulation results for characterising the NMSE performances of the DFT dictionary (\(\mathbf{\Psi}_{DFT}\)), of the individual K-SVD based subcarrier dictionaries (\(\mathbf{\Psi}_{ksvd,l}^{k}\)) and of the proposed CD (\(\mathbf{\Psi}_{c}\)). Using the DFT dictionary as the initial reference dictionary, we obtain the individual K-SVD based subcarrier dictionaries. Then \(\mathbf{\Psi}_{c}\) is learned by the proposed methods. For NMSE calculations, each dictionary is used for reconstructing the \(P\) channel vectors of each subcarrier at the BS.
The NMSE of the reconstructed channel is used as a performance metric defined as
\[\mathbf{NMSE}=\frac{1}{P}\sum_{i=1}^{P}\frac{\|\hat{\mathbf{h}}_{l_{n},k}- \mathbf{h}_{l_{n},k}\|_{2}^{2}}{\|\mathbf{h}_{l_{n},k}\|_{2}^{2}}, \tag{19}\]
where \(\hat{\mathbf{h}}_{l_{n},k}\) is the reconstructed channel vector and \(\mathbf{h}_{l_{n},k}\) is the original one.
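In code, this metric may be computed as follows (a small helper sketch of ours, with the \(P\) vector pairs stored column-wise):

```python
import numpy as np

def nmse(h_hat, h):
    """NMSE of eq. (19); the P reconstructed/true vectors are the columns."""
    num = np.sum(np.abs(h_hat - h) ** 2, axis=0)
    den = np.sum(np.abs(h) ** 2, axis=0)
    return float(np.mean(num / den))
```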
### _Simulation Settings_
The simulations are carried out for a massive MIMO-OFDM system having \(N_{t}=64\), antenna spacing of \(d=\lambda/15\), carrier wavelength \(\lambda\), operating at a carrier frequency of \(f_{c}\) = 2 GHz. Furthermore, we have a communication bandwidth of \(B\) = \(20\) MHz, \(K=1\) and \(K=3\) UEs, \(N=50\), \(P=500\) test channel vectors, \(N_{c}=32\) and \(M=N_{r}N_{t}/2\). For \(M=32\), all the existing and the proposed dictionaries reduce the CSI feedback by 50% in the uplink. For all the experiments, the subcarrier K-SVD based dictionaries are learned for \(N=50\) from each subcarrier.
For experiments in the multi-UE system, the channels are generated using a Quasi Deterministic Radio Channel Generator (QuaDRiGa) [40, 41] for the three UEs having velocities of \(V=10\), \(15\) and \(20\) kmph. For experiments in the single-UE system, the channels are taken from the UE of \(V=20\) kmph. The channel update rate (CUR) considered to generate channels in QuaDRiGa is \(10\) ms. The QuaDRiGa simulation platform is recommended by 3GPP (3rd Generation Partnership Project) for designing and simulating wireless communications systems.
The main motivation for the CDL is not only to reduce the CSI feedback but also to reduce the CSI reconstruction NMSE. The benefit of the proposed methods in terms of the NMSE performance has to be studied. Hence we have conducted experiments for determining which of the proposed methods will best replace the DFT dictionary and subcarrier K-SVD based dictionaries in both single-UE and multi-UE systems. Since we have considered \(N_{c}=32\), it is not feasible to show the CD performance across all the subcarriers. In the single-UE system, we have considered subcarriers \(l=1\) or \(8\), since the channel gains of these two subcarriers are relatively low over the period of time. In the multi-UE system, we have considered the first subcarrier \(l=1\) of all three UEs to evaluate the NMSE performance as a function of sparsity and compared the proposed methods' CD performance to the DFT and K-SVD dictionaries in the literature.
In OFDM systems, the need for subcarrier K-SVD based dictionaries increases with the number of subcarriers. The FDCHTFs of each subcarrier are considered to be independent in a wideband OFDM system, but this is only realistic for extremely long CIRs. Hence in the proposed system we assume realistic correlation among the subcarriers in the FD. This correlation among the subcarriers is captured by the CD using one of the two proposed methods. Our CDL procedure may also be extended to larger \(N_{c}\).
\begin{table}
\begin{tabular}{l c c} \hline Method & GR-SVD & Chan-SVD \\ \hline CDL-OF & \(5.505*10^{6}\) & \(6.8157*10^{6}\) \\ \hline CDL-KSVD (min) & \(7.1936*10^{8}\) & \(1.3414*10^{9}\) \\ \hline CDL-KSVD (max) & \(4.6039*10^{10}\) & \(8.5852*10^{10}\) \\ \hline \end{tabular}
\end{table} TABLE VI: Computational complexity in FLOPS
\begin{table}
\begin{tabular}{c c c c c} \hline \(g\) & \(N_{c}\) & \(N_{t}\) & \(N_{r}\) & \(\Gamma\) \\ \hline
2 & 32 & 64 & 1 & \(0.5\) \\ \hline
2 & 64 & 64 & 2 & \(0.5\) \\ \hline
4 & 32 & 64 & 1 & \(0.252\) \\ \hline
4 & 64 & 64 & 2 & \(0.252\) \\ \hline \end{tabular}
\end{table} TABLE VII: CSI feedback savings comparison table for \(N^{\prime}\) = \(2^{10}\).
In all the simulation results, for each subcarrier it can be observed that the NMSE decreases as the sparsity increases. This is because, in the sparse vector transformation (5), the sparse vector underlying \(\hat{\mathbf{h}}\) picks more columns of the dictionary at higher sparsities, which in turn helps the optimization problem (7) to minimize the distance between \(\mathbf{h}\) and \(\hat{\mathbf{h}}\). Hence, higher sparsity improves the reconstruction performance.
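To make this dependence concrete, the following minimal sketch runs a textbook orthogonal matching pursuit (OMP) against a DFT dictionary; the dictionary choice, noise level, and the helper `omp` are our own illustration rather than the simulation code used for the figures, but the printed NMSE falls with \(S\) in the same way.

```python
import numpy as np

def omp(Psi: np.ndarray, h: np.ndarray, S: int) -> np.ndarray:
    """Greedy orthogonal matching pursuit: h ~ Psi @ x with at most S nonzeros."""
    residual, support = h.astype(complex), []
    for _ in range(S):
        corr = np.abs(Psi.conj().T @ residual)
        corr[support] = 0.0                      # do not re-pick chosen atoms
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(Psi[:, support], h, rcond=None)
        residual = h - Psi[:, support] @ x_s     # re-fit and update residual
    x = np.zeros(Psi.shape[1], dtype=complex)
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
N_t = 64
Psi = np.fft.fft(np.eye(N_t), norm="ortho")      # unitary DFT dictionary
coeff = rng.standard_normal(6) + 1j * rng.standard_normal(6)
h = Psi[:, rng.choice(N_t, 6, replace=False)] @ coeff
h += 0.05 * (rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t))
for S in (2, 4, 8, 16):
    h_hat = Psi @ omp(Psi, h, S)
    print(S, np.sum(np.abs(h - h_hat) ** 2) / np.sum(np.abs(h) ** 2))
```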
**Experiment 1**.: _In this single-UE experiment, we study the NMSE performance across a particular subcarrier of the UE using the CD learned from the proposed methods and existing methods as a function of sparsity. The proposed methods' CD is learned across all the subcarriers of the UE. For this experiment we consider a massive SU-MIMO-OFDM system having \(N_{c}=32\) subcarriers and a UE having \(N_{r}=1\) RA and moving with a velocity of 20 kmph._
In Fig. 6, we consider the subcarrier \(l=1\) to study the NMSE performance of all the dictionaries. For a particular sparsity index of \(S=8\), the NMSE of the CDL-KSVD dictionary is \(1.7\times 10^{-2}\), of the CDL-OP dictionary is \(1.3\times 10^{-2}\), of the subcarrier K-SVD based dictionary is \(1.8\times 10^{-2}\), and of the DFT dictionary is \(3.3\times 10^{-2}\). The CDL-OP dictionary has the lowest NMSE at \(S=8\). For all other sparsities, the CDL-OP and the CDL-KSVD method dictionaries perform similarly, and the NMSE values are close to those of the subcarrier K-SVD dictionary. All the proposed methods' CD exhibit better performance than the DFT dictionary.
In Fig. 7, we consider a scenario where the BS has a different number of antennas, namely \(N_{t}\) = 16, 32, and 64. We consider the subcarrier \(l=1\) to study the NMSE performance of the CDL-OP dictionaries learned for different \(N_{t}\) values. We observe that the NMSE value increases with the number of antennas at the BS.
**Experiment 2**.: _In this single-UE experiment, we study the NMSE performance across different subcarriers of the UE employing the CD learned by the proposed methods and the individual subcarrier K-SVD based dictionaries as a function of sparsity. For this experiment we consider a massive SU-MIMO-OFDM system having \(N_{c}=32\) subcarriers. The UE is equipped with \(N_{r}=1\) (and \(2\)) RAs and is moving at a velocity of 20 kmph._
In Fig. 8, the \(\boldsymbol{\Psi}_{c}\) employed for NMSE characterization is learned from the CDL-KSVD method, and in Fig. 9, the \(\boldsymbol{\Psi}_{c}\) employed for NMSE characterization is learned from the CDL-OP method. Observe from Figs. 8 and 9 that for subcarriers \(l=1\) and \(8\), at all sparsity index values, the NMSE values are close to each other, but the CDL-OP method attains the best NMSE performance among the three methods. Both the CDL-KSVD and CDL-OP methods rely on the SVD operation and learn the CD from the channels estimated at the UE. Consequently, the NMSE reconstruction results shown in Figs. 8 and 9 exhibit a high degree of similarity.
We have also carried out a simulation for the scenario of \(N_{r}>1\), i.e., multiple receive antennas, with the results presented in Fig. 10; this ensures that our analysis is not limited to a single receive antenna. It is observed that all the proposed methods' CDs exhibit better NMSE performance than the DFT dictionary.
Fig. 6: A single-UE system is considered, where the NMSE performance comparison among all the dictionaries is carried out for \(N_{t}\) = 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarrier 1 is considered for comparison and the UE velocity is 20 kmph.
Fig. 7: A single-UE system is considered, where the NMSE performance comparison among the CDL-OP dictionaries is carried out for \(N_{t}\) = 16, 32 and 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarrier 1 is considered for comparison and the UE velocity is 20 kmph.
**Experiment 3**.: _In this multi-UE experiment, we initially study the NMSE performance of the dictionary generated using the CDL-OP method and existing methods for a particular UE's subcarrier as a function of sparsity. Then we study the NMSE performance of the CDL-OP dictionary across different UEs for a particular subcarrier. The CDL-OP dictionary is learned across all the subcarriers and all the \(K\) UEs. For the experiment, we consider a massive MU-MIMO-OFDM system having \(N_{c}=32\) subcarriers and \(K=3\) UEs each associated with \(N_{r}=1\) RA._
The wireless channels are generated for three UEs having velocities of \(V=\) 10, 15, and 20 kmph using the QuaDRiGa simulator. If a UE changes its velocity to that of another UE, but experiences different channel characteristics, there is no need to learn a new dictionary for that particular UE: the CD has already captured the channel characteristics of all three UEs, and this procedure can be extended to larger \(K\) and \(N_{c}\) values.
Fig. 11: A multi-UE system is considered, where the NMSE performance comparison of the DFT, the K-SVD and the CDL-OP methods is carried out for \(K\) = 3, \(N_{t}\) = 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarrier 1 of UE\({}_{1}\) is considered for comparison. The velocities of the three UEs are 10, 15, and 20 kmph.
Fig. 8: A single-UE system is considered, where the NMSE performance of the K-SVD dictionary and the CDL-KSVD dictionary are used for \(N_{t}\) = 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarriers 1 and 8 are considered for comparison and the UE velocity is 20 kmph.
Fig. 10: A single-UE system with multiple UE antennas is considered, where the NMSE performance comparison among all the dictionaries is carried out for \(N_{t}\) = 64, \(N_{r}\) = 2 and \(N_{c}\) = 32. Subcarrier 1 is considered for comparison and the UE velocity is 20 kmph.
Fig. 9: A single-UE system is considered, where the NMSE performance of the K-SVD dictionary and the CDL-OP dictionary are used for \(N_{t}\) = 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarriers 1 and 8 are considered for comparison and the UE velocity is 20 kmph.
In Fig. 11, we consider the first UE and subcarrier \(l=1\) to study the NMSE performance of the dictionaries from the full set of \(N_{c}=32\) subcarriers and \(K=3\) UEs. For a particular sparsity of \(S=8\), the NMSE value of the CDL-OP dictionary is \(1.3\times 10^{-2}\), of the subcarrier K-SVD based dictionary is \(1.8\times 10^{-2}\), and of the DFT dictionary is \(3.3\times 10^{-2}\). We observe that all the proposed methods' CD have better NMSE performance than the DFT dictionary. Among the proposed methods, the CDL-KSVD method exhibits the poorest NMSE performance. Therefore, we are not pursuing this method any further in the simulations.
In Fig. 12, the \(\mathbf{\Psi}_{c}\) employed for NMSE characterization is learned from the CDL-OP method. Observe from Fig. 12 that for subcarrier \(l=1\) of UE\({}_{2}\) and UE\({}_{3}\), at almost all sparsity index values, the NMSE of the proposed CDL-OP method's dictionary is better than that of the subcarrier K-SVD based dictionaries, because the CDL-OP dictionary is learned from the channels estimated using the K-SVD based dictionaries.
**Experiment 4**.: _In this single-UE experiment, we study the BER performance at the UE as a function of the SNR. The data symbols are transmitted from the BS to the UE in two ways: a) Using the true channel estimates without compression and b) Using the channel estimates with compression that are obtained from the CDL-OP dictionary._
To elaborate further on the performance of our proposed framework, we analyze the BER in a downlink scenario. In Fig. 13 we evaluate the BER using two sets of channels: the true uncompressed channels and the channels estimated using the CDL-OP dictionary on subcarrier \(1\). The precoder matrices employed at the BS are denoted as \(\mathbf{W}^{g}=\mathrm{diag}(\frac{\mathbf{h}_{1}^{H}}{|\mathbf{h}_{1}|})\) and \(\mathbf{W}_{op}^{g}=\mathrm{diag}(\frac{\mathbf{h}_{1,op}^{H}}{|\mathbf{h}_{1,op}|})\), which correspond to the weights obtained from the true channels and from the channels estimated using the CDL-OP dictionary, respectively. Let \(\mathbf{x}\) represent the modulated symbol vector. Explicitly, in our BER analysis, we harness a half-rate convolutional encoder having the generator sequences of G = [101, 111] and the resultant bits are 16-PSK modulated for generating the symbol vector \(\mathbf{x}\). The received signal in the case of true uncompressed channels at the UE can be represented as \(y=\mathbf{h}_{1}^{T}\mathbf{W}^{g}\mathbf{x}+n\), where \(n\) is the additive noise. Similarly, the received signal for channels estimated using the CDL-OP dictionary can be represented as \(y_{op}=\mathbf{h}_{1}^{T}\mathbf{W}_{op}^{g}\mathbf{x}+n\).
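A heavily simplified sketch of this precoding chain is shown below. To keep it short, it replaces the half-rate convolutional code and 16-PSK by uncoded QPSK, models the CDL-OP reconstruction error as an exaggerated random perturbation of the true channel, and assumes the UE knows the effective scalar channel from pilots; it only illustrates how precoder mismatch erodes the BER, not the exact system of Fig. 13.

```python
import numpy as np

rng = np.random.default_rng(2)
N_t, n_sym, snr_db = 64, 100_000, 10

h = (rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t)) / np.sqrt(2)
# Stand-in for the CDL-OP reconstruction error: an exaggerated 30% perturbation
# so the BER gap is visible at this symbol count.
h_op = h + 0.3 * (rng.standard_normal(N_t) + 1j * rng.standard_normal(N_t))

def weights(h_vec):
    # Per-antenna conjugate weights, i.e. the diagonal of W^g = diag(h^H/|h|).
    return h_vec.conj() / np.linalg.norm(h_vec)

bits = rng.integers(0, 2, (n_sym, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)  # QPSK

sigma = 10.0 ** (-snr_db / 20.0)
noise = sigma * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)
for label, w in (("true CSI", weights(h)), ("CDL-OP CSI", weights(h_op))):
    g = h @ w / np.sqrt(N_t)   # effective scalar channel, unit transmit power
    z = (g * x + noise) / g    # assume the UE knows g from pilots
    bits_hat = np.stack([z.real < 0, z.imag < 0], axis=1).astype(int)
    print(f"{label}: BER = {np.mean(bits_hat != bits):.4f}")
```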
The CDL-OP dictionaries used in this experiment are learned at a sparsity of 16 for \(g=2\) and at a sparsity of 8 for \(g=4\). As we increase the compression factor \(g\) to reduce the CSI feedback from the UE, we observe a BER performance erosion. The lower bound shown in Fig. 13 represents the BER performance when the BS employs uncompressed CSI. Notably, as depicted in Fig. 13, the BER of the UE, recorded for \(g=2\) when utilizing the CDL-OP dictionary approaches the lower bound.
## V Summary and Conclusions
We proposed a novel CDL framework for reducing the FDCHTF feedback and memory requirement of the UE and BS. The framework is more beneficial for the UE, which is usually resource-constrained, and the savings can be significant. For the simulations, the channels are generated using the QuaDRiGa simulator.
Fig. 12: In a multi-UE system, the NMSE performance of the K-SVD dictionary and CDL-OP method’s dictionary for \(K\) = 3, \(N_{t}\) = 64, \(N_{r}\) = 1 and \(N_{c}\) = 32. Subcarrier 1 of UE\({}_{2}\) and UE\({}_{3}\) are considered for comparison. Velocities of three UEs are 10, 15, and 20 kmph.
Fig. 13: In a single-UE system with 64 TAs at the BS and a UE with 1 RA, we examine the BER performance of channel estimates using the CDL-OP methods’ dictionary. We consider two scenarios: \(N_{g}=N_{r}N_{t}/2\) and \(N_{g}=N_{r}N_{t}/4\). For comparison, we assume uncompressed channel estimates for subcarrier 1. The UE has a velocity of 20 kmph.
In a multi-UE system of three UEs and in a single-UE system, all the proposed methods' dictionaries have better NMSE performance than the DFT dictionary. The CDL-OP dictionary performs better than the CDL-KSVD dictionary at all sparsities. Hence the CDL-OP method can be beneficially employed for CDL to compress the CSI and improve the CSI reconstruction performance. In terms of computational complexity, the CDL-OP method requires only a single SVD operation to learn the CD, while the CDL-KSVD method requires an SVD operation for learning each column of the CD; the CDL-OP method therefore has lower computational complexity than the CDL-KSVD method. To minimize the impact imposed on the BER performance, it is important to choose an appropriate compression ratio \(g\); in our case, a compression ratio of \(g=2\) has only a modest impact on the BER. By selecting a suitable compression ratio, the system can strike a balance between reducing the amount of feedback and maintaining a satisfactory BER performance.
We conclude by highlighting the differences between the multi-UE and single-UE systems as follows:
1. In the multi-UE system, the memory is reduced by a factor of \(N_{c}\) at the UE and \(KN_{c}\) at the BS by having only a single CD instead of multiple subcarrier dictionaries. For \(N_{t}=64\) and \(N_{c}=32\), the memory required for storing the CD at each UE is reduced by a factor of 32 and the CSI feedback is reduced by a factor of two.
2. In the single-UE system, the memory is reduced by a factor of \(N_{c}\) at both the UE and BS, the dictionary feedback is also reduced by a factor of \(N_{c}\) in the uplink. This is achieved by sending only a single CD instead of \(N_{c}\) subcarrier dictionaries to the BS. For \(N_{t}=64\) and \(N_{c}=32\), the memory is reduced by a factor of 32 for storing only a CD, and the dictionary feedback is also reduced by a factor of 32. Finally, the CSI feedback is reduced by a factor of two.
3. In the multi-UE system, the CD (\(\mathbf{\Psi}_{c}\)) generated using the CDL-OP method has a better NMSE performance than that of the individual subcarrier K-SVD dictionaries \(\mathbf{\Psi}_{ksvd,l}^{k}\)\(\forall l\in[1,N_{c}],\forall k\in[1,K]\).
4. In the single-UE system, the CD (\(\mathbf{\Psi}_{c}\)) generated using the CDL-OP method has a better NMSE performance than that of the individual subcarrier K-SVD dictionaries \(\mathbf{\Psi}_{ksvd,l}^{k}\)\(\forall l\in[1,N_{c}]\).
|
2303.06576 | Implications on Cosmology from Dirac Neutrino Magnetic Moments | The mechanism for generating neutrino masses remains a puzzle in particle
physics. If neutrino masses follow from a Dirac mass term, then neutrino states
exist with opposite chirality compared to their weakly-interacting
counterparts. These inactive states do not interact with their active
counterparts at measurable scales in the standard model. However, the existence
of these states can have implications for cosmology as they contribute to the
radiation energy density at early times, and the matter energy density at late
times. How Dirac neutrinos may populate thermal states via an anomalous
magnetic moment operator is the focus of this work. A class of models where all
neutrinos have a magnetic moment independent of flavor or chirality is
considered. Subsequently, the cross sections for neutrinos scattering on
background plasma particles are calculated so that the relic inactive neutrino
energy is derived as a function of plasma temperature. To do so, one needs
cross sections for scattering on all electrically charged standard-model
particles. Therefore, the scattering cross section between a neutrino and
$W$-boson via the magnetic moment vertex is derived. Current measurements put a
constraint on the size of the neutrino magnetic moment from the cosmological
parameter $N_{\rm eff}$ and light-element primordial abundances. Finally, how
the extra Dirac states contribute to the matter energy density at late times is
investigated by examining neutrino free-streaming. | E. Grohs, A. B. Balantekin | 2023-03-12T05:17:39Z | http://arxiv.org/abs/2303.06576v3 | # Implications on Cosmology from Dirac Neutrino Magnetic Moments
###### Abstract
The mechanism for generating neutrino masses remains a puzzle in particle physics. If neutrino masses follow from a Dirac mass term, then neutrino states exist with opposite chirality compared to their weakly-interacting counterparts. These inactive states do not interact with their active counterparts at measurable scales in the standard model. However, the existence of these states can have implications for cosmology as they contribute to the radiation energy density at early times, and the matter energy density at late times. How Dirac neutrinos may populate thermal states via an anomalous magnetic moment operator is the focus of this work. A class of models where all neutrinos have a magnetic moment independent of flavor or chirality is considered. Subsequently, the cross sections for neutrinos scattering on background plasma particles are calculated so that the relic inactive neutrino energy is derived as a function of plasma temperature. To do so, one needs cross sections for scattering on all electrically charged standard-model particles. Therefore, the scattering cross section between a neutrino and \(W\)-boson via the magnetic moment vertex is derived. Current measurements put a constraint on the size of the neutrino magnetic moment from the cosmological parameter \(N_{\rm eff}\) and light-element primordial abundances. Finally, how the extra Dirac states contribute to the matter energy density at late times is investigated by examining neutrino free-streaming.
## I Introduction
In his "Dear Radioactive Ladies and Gentlemen" letter to the Tubingen meeting of the German Physical Society (reproduced in Ref. [1]) Wolfgang Pauli, in addition to proposing the existence of the neutrino itself, implied that the neutrino is massive and hence interacts via its magnetic dipole moment. Since Pauli did not explore the possibility of a new kind of interaction, i.e., the weak interaction, the magnetic moment he could deduce was too large. After the weak interaction was introduced by Enrico Fermi and it was realized that neutrinos could be massless, interest in the electromagnetic interactions of neutrinos waned, since symmetry considerations suggest that the neutrino magnetic moment would vanish for massless neutrinos. Early surveys found no experimental evidence for electromagnetic interactions of neutrinos [2]. However, as solar neutrino experiments found increasingly strong evidence for the presence of non-zero neutrino masses, papers exploring astrophysical and cosmological implications of a non-zero neutrino magnetic moment started to appear in the literature [3; 4; 5; 6; 7; 8; 9; 10; 11]. Indeed, one of the proposed solutions of the solar neutrino problem was to invoke interactions of neutrinos with solar magnetic fields [12].
Within the standard model, neutrinos are taken to be massless. If indeed neutrinos are massive, as the solar neutrino experiments suggested, the question arose as to how they obtain their masses in an extension of the standard model. Since they are neutral fermions, a neutrino mass term added to the standard-model Lagrangian can produce a discernible difference between Dirac and Majorana character. A Dirac neutrino is distinct from its antiparticle: Dirac neutrinos carry lepton number \(+1\) and Dirac antineutrinos carry lepton number \(-1\). Conversely, a Majorana neutrino is identical to its antiparticle and consequently there is no conserved lepton number with Majorana neutrinos. A free Dirac neutrino, like the charged fermions, is described by a spinor with four independent components. In contrast, since a Majorana neutrino is its own antiparticle (i.e., equal to its charge-conjugate up to a phase), its spinor has only two independent components. A direct consequence of this is that a Majorana neutrino cannot have a diagonal (i.e., connecting two mass eigenstates which are the same) magnetic moment, but magnetic moments connecting two different mass eigenstates are permitted. Dirac neutrinos have no such constraint.
Determining the Dirac versus Majorana character of neutrinos is a major area of research and there exist many terrestrial experiments dedicated to this search, e.g., neutrinoless double beta decay [13; 14; 15; 16; 17; 18; 19; 20]. Complementary to the terrestrial searches, there has been a long history of using cosmology to probe neutrino properties and interactions. Early work by Schramm and his collaborators [21; 22] connecting the number of neutrinos to cosmological parameters and observables brought those efforts to the forefront. This work particularly emphasized using the effective relativistic degrees of freedom, \(N_{\rm eff}\), and the neutrino mass density in the universe to constrain the neutrino parameters. The interplay of terrestrial experiments and cosmological observations continues to the present day [23] and this work follows in the same spirit.
Only left-handed neutrinos (and right-handed antineutrinos) 1, which are referred to as "active", take part in weak interactions. Any neutral fermion which does not participate in weak interactions is "sterile", although this term is more frequently used for those neutral fermions which mix with the active states and have different mass eigenvalues. To avoid confusion, in this work we will label the opposite-chirality Dirac states (right-handed neutrinos and left-handed antineutrinos) as "inactive." Additional interactions of neutrinos beyond the weak interactions (such as electromagnetic couplings) allow neutrinos to remain in thermal contact longer during the Big Bang Nucleosynthesis (BBN) epoch [3]. Imposing the condition that production of inactive neutrino states does not alter the primordial \({}^{4}\)He abundance, Morgan obtained a limit of \(\sim 10^{-11}\mu_{B}\)[4] on the neutrino magnetic moment, where \(\mu_{B}=e/(2m_{e})\) is the Bohr magneton. Further imposing the condition that inactive states do not increase the effective relativistic degrees of freedom in excess of one more neutrino species, Morgan's limit was relaxed by a factor of \(\sim 3\)[24]. This limit only applies to Dirac neutrinos since for Majorana neutrinos right-handed states are not additional neutrino states, but represent antineutrinos. The energy dependence of the reaction cross sections due to the contribution of the electromagnetic couplings of neutrinos is different from that of the usual weak interaction couplings. For Majorana neutrinos with transition (i.e., connecting two different mass eigenstates) magnetic moments, such reactions convert neutrinos into antineutrinos and vice versa. The resulting change in the reaction rates would alter the way neutrinos decouple from the plasma of electrons/positrons and photons. Such considerations can be used to limit magnetic moments of Majorana neutrinos, as was done in Ref. [25]. The purpose of this paper is to improve limits on the magnetic moments of Dirac neutrinos using a careful assessment of the physics of decoupling in the Early Universe, i.e., the epoch at which the scattering rates of inactive neutrinos become too small to maintain thermal equilibrium with the plasma of standard-model constituents.
Footnote 1: We note that these states should properly be called left-chiral or right-chiral, not left-handed or right-handed. Nevertheless, we adopt the nomenclature present in the literature when referring to the chiral states.
To determine the decoupling of the inactive neutrinos, we require a form for the electromagnetic interaction. We introduce the electromagnetic vertex function to characterize electromagnetic interactions below electroweak symmetry breaking [26]
\[F_{\alpha}(k)=f_{Q}(k^{2})\gamma_{\alpha}+f_{M}(k^{2})i\sigma_{\alpha\beta}k^{ \beta}-f_{E}(k^{2})\sigma_{\alpha\beta}k^{\beta}\gamma_{5}+f_{A}(k^{2})(k^{2} \gamma_{\alpha}-k_{\alpha}k\!\!\!/)\gamma_{5}. \tag{1}\]
In Eq. (1), we adopt the conventions \(\sigma_{\alpha\beta}=i[\gamma_{\alpha},\gamma_{\beta}]/2\), \(\gamma_{5}=-i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}\), and \(k\!\!\!/=\gamma_{\alpha}k^{\alpha}\). \(f_{Q}\), \(f_{M}\), \(f_{E}\), and \(f_{A}\) are the electric monopole, magnetic dipole, electric dipole, and magnetic form factors respectively, for momentum transfer \(k_{\alpha}\). Using the operator for the magnetic dipole interaction in Eq. (1), we will calculate scattering amplitudes, cross sections, and rates as a function of plasma temperature \(T\). We calculate both elastic scattering (\(\nu+c\leftrightarrow c+\nu\)) and annihilation (\(\nu+\overline{\nu}\leftrightarrow c+\overline{c}\)) processes between neutrinos and charged particles \(c\). As a result of considering the scattering interactions at times _after_ the ElectroWeak Transition (EWT), we take the Higgs and electroweak bosons as massive particles. A corollary of this treatment is the inclusion of electromagnetic interactions between \(W^{\pm}\) bosons and neutrinos, which we will show have profound effects on setting limits on the neutrino magnetic moment. We will loosen the restriction for the Quark-Hadron Transition (QHT) - where quark and gluon degrees of freedom disappear and are replaced by hadrons - and consider epochs before and after this transition. The QHT is included using the approximate treatment described in Appendix C and based off of Ref. [27].
The outline of this paper is as follows. We summarize neutrino magnetic moment interactions and the pertinent cosmology in Secs. II and III, respectively. Our results for the Early and Later Universe are given in Secs. IV and V. In Section VI we present our conclusions. Appendices A and B describe the differential cross sections with magnetic-moment vertices and the thermal averaging of the cross sections, respectively. Appendix C details our treatment of the QHT. Throughout this work, we use natural units where \(\hbar=c=k_{B}=1\).
## II Magnetic moments
Comprehensive reviews of neutrino electromagnetic interactions in the context of both the Standard Model and physics beyond the Standard Model are available in the literature [28; 29; 30]. The value of the neutrino magnetic moment in the minimally-extended (i.e., to include the neutrino mass) standard electroweak theory is very small. Using the expression for the one-loop electromagnetic vertex for fermions [31], it was calculated to be of order \(10^{-20}\mu_{B}\)[32]. Given our current knowledge of the neutrino masses and mixing angles, the updated prediction of the Standard Model, minimally extended to allow massive neutrinos, for the electron neutrino magnetic moment is even smaller [33]. In contrast, the most stringent laboratory limit on the neutrino magnetic moment obtained from electron scattering experiments is orders of magnitude larger: \(2.9\times 10^{-11}\mu_{B}\)[34]. Recently, excess electron recoil events at the XENON1T detector [35] were interpreted as a possible signature of the neutrino magnetic moment [36]. The PandaX
collaboration reports a neutrino magnetic moment limit of \(4.9\times 10^{-11}\mu_{B}\) using low-energy electron recoil events [37]. A recent analysis of the LUX-ZEPLIN data similarly limits the effective neutrino magnetic moment to be less than \(1.1\times 10^{-11}\mu_{B}\)[38]. Finally, a recent analysis of XENONnT data yields the most stringent limit for the electron-flavor neutrino magnetic moment of \(0.9\times 10^{-11}\mu_{B}\)[39]. All three limits would rule out the neutrino magnetic moment interpretation of the XENON1T data.
Large magnetic moments of neutrinos would have very interesting implications for astrophysics and cosmology. If there is an electromagnetic channel to produce neutrinos besides the usual weak one, then these additional neutrinos transfer more of the energy and entropy over large distances. It was remarked quite some time ago that the extra energy loss due to additional electromagnetic neutrino pair emission can limit the value of the neutrino magnetic moment [2]. Indeed, right after the observation of SN 1987A, it was shown that bounds on the flux of right-handed neutrinos from a core-collapse supernova can be translated into bounds on neutrino magnetic moments [40; 41; 42]. Perhaps the tightest astrophysical bound comes from red giant stars in globular clusters; the increased energy loss resulting from electromagnetic neutrino pair production near the helium flash could lead to an increased core mass [7]. The most recent such analysis yields a limit in the range of \((1.2-1.5)\times 10^{-12}\mu_{B}\)[43]. Other energy-loss arguments typically yield less stringent limits. For example, additional energy losses would eliminate the blue loops in the evolution of intermediate-mass stars; hence for Cepheid stars to exist, the neutrino magnetic moment should be smaller than a value in the range \(\sim 4\times 10^{-11}\mu_{B}-2\times 10^{-10}\mu_{B}\)[44]. Similarly, if the neutrino magnetic moment is of the order of \(10^{-12}\mu_{B}\), additional energy losses can explain the enhanced lithium abundance observed in red clump stars [45]. An examination of the pulsations [46] or the luminosity function of hot white dwarfs [47] gives similar limits. However, such limits are subject to large uncertainties, such as the rate of the \({}^{12}\)C reaction or the stellar metallicity. It was suggested that it is possible to evade such astrophysical limits [48] by invoking new interactions of the neutrino with a light scalar boson [49; 50; 51]. One can also use spin-flavor precession of neutrinos to assess the value of the neutrino magnetic moment; a recent analysis using this approach with ultra-high-energy neutrinos is consistent with a limit of \(1.2\times 10^{-11}\mu_{B}\)[52]. More recent work from astrophysics and cosmology considered transition magnetic moments between active and additional sterile neutrino states [53; 54; 55]. These limits are also of the order of \(10^{-11}\mu_{B}\).
Constraints on the neutrino magnetic moment, such as those listed in the previous paragraph, are obtained by considering neutrino electromagnetic scattering in a plasma of charged particles and antiparticles; in this work, the plasma of interest is that of the early universe. In such an environment, screening of photons needs to be taken into account. We adopt a static screening prescription; hence photons acquire an effective mass, which we denote by \(m_{\gamma}\). The inverse of this mass is the Debye screening length for electromagnetic interactions. It is given by
\[m_{\gamma}^{2}=\frac{1}{\lambda_{D}^{2}}=4\pi\alpha\sum_{i}q_{i}^{2}\frac{ \partial}{\partial\mu_{i}}[n_{i}^{(-)}-n_{i}^{(+)}]. \tag{2}\]
In Eq. (2), \(\alpha\) is the fine structure constant, \(n_{i}^{(\mp)}\) is the number density for particles (antiparticles), respectively. The partial derivative is with respect to the particle chemical potential and \(q_{i}\) is the charge-coefficient of the particle for each particle-antiparticle pair (e.g., for an electron-positron plasma \(q_{i}^{2}=1\)). Assuming thermal equilibrium and vanishing chemical potentials and masses for all particles, we obtain
\[m_{\gamma}^{2}\bigg{|}_{m_{i}=0,\mu_{i}=0}=\frac{2\pi\alpha}{3}T^{2}\sum_{i}q_ {i}^{2}g_{i}, \tag{3}\]
where \(T\) is the plasma temperature and \(g_{i}\) are the internal degrees of freedom from spin, color, etc. The effective photon mass is plotted in Fig. 1 as a function of the temperature. At very early times many particles are present in the plasma. As the universe evolves, the particle-antiparticle pairs annihilate one by one into lighter particles and no longer contribute to the effective photon mass in Eq. (2).
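For illustration, Eq. (3) can be evaluated with a step-function threshold standing in for the full Boltzmann suppression of Eq. (2); the particle table is standard, but the sharp cutoff is our own simplification (in particular, free quarks are retained down to their bare masses, so the sketch overcounts below the QHT scale).

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant

# Charged particle-antiparticle pairs: (mass in GeV, charge q_i, dof g_i).
# Quarks carry 2 spin x 3 color dof; charged leptons 2; the W boson 3.
CHARGED = [
    (5.11e-4, 1.0, 2), (0.1057, 1.0, 2), (1.777, 1.0, 2),   # e, mu, tau
    (2.2e-3, 2/3, 6), (4.7e-3, 1/3, 6), (0.095, 1/3, 6),    # u, d, s
    (1.27, 2/3, 6), (4.18, 1/3, 6), (173.0, 2/3, 6),        # c, b, t
    (80.4, 1.0, 3),                                         # W
]

def m_gamma(T: float) -> float:
    """Effective photon mass from Eq. (3); a pair contributes only while
    T > m, a step-function stand-in for the suppression in Eq. (2)."""
    g_sum = sum(q * q * g for m, q, g in CHARGED if T > m)
    return np.sqrt(2.0 * np.pi * ALPHA / 3.0 * g_sum) * T

for T in (1e-2, 1.0, 1e2, 1e3):  # GeV
    print(f"T = {T:8.2e} GeV -> m_gamma = {m_gamma(T):.3e} GeV")
```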
One can also explore dynamic effects on neutrinos coming from the plasma background [56; 57]. The authors of Refs. [58; 59] explored such effects and found changes in \(N_{\rm eff}\) comparable to what we describe later in this article. This is perhaps not unexpected: for example, in Ref. [60] it was reported that the light-element abundances in BBN do not change when one considers dynamic screening.
In our analysis we will use only the magnetic-moment term of the neutrino electromagnetic vertex function in Eq. (1), namely \(f_{M}(k^{2})i\sigma_{\alpha\beta}k^{\beta}\). We adopt the symbol \(\kappa\) for the neutrino magnetic moment, defined as the magnetic dipole form factor in the forward-scattering limit, i.e., \(\kappa\equiv f_{M}(k^{2}=0)\), which we scale to the Bohr magneton as
\[\kappa=\mu\frac{e}{2m_{e}}, \tag{4}\]
using the dimensionless parameter \(\mu\), which is not to be confused with the chemical potentials in Eq. (2). We will include the effective photon mass from Eq. (2) when calculating scattering amplitudes and the resultant cross sections and
rates. For example: including in-medium effects modifies the well-known differential cross-section expression for elastic scattering from charged fermions to the following
\[\left(\frac{d\sigma}{dt}\right)_{\nu f}=\frac{\pi q_{f}^{2}\alpha^{2}}{m_{e}^{2} }\mu^{2}\frac{t}{(t-m_{\gamma}^{2})^{2}}\frac{s+t-m_{f}^{2}}{s-m_{f}^{2}} \tag{5}\]
for each charged fermion with a mass \(m_{f}\) and charge-coefficient \(q_{f}\). In Eq. (5), the magnetic moment \(\mu\) is given in units of Bohr magneton, hence \(m_{e}\) in the prefactor is the same for all fermions since it comes from the definition of \(\mu_{B}\). \(s\) and \(t\) are the usual Mandelstam variables.
We list the differential cross sections used in this work in Appendix A. To obtain integrated cross sections, these expressions need to be integrated from \(t_{\rm min}=-(s-m_{i}^{2})^{2}/s\) to \(t_{\rm max}=0\) to give the cross section as a function of \(s\) and \(T\), for each target mass \(m_{i}\). Returning to the example for the differential cross section in Eq. (5), integrating over \(t\) gives
\[\sigma_{\nu f}(s) = \frac{\pi q_{f}^{2}\alpha^{2}}{m_{e}^{2}}\mu^{2}\left[\left(1+ \frac{2m_{\gamma}^{2}}{s-m_{f}^{2}}\right)\log\left(1+\frac{(s-m_{f}^{2})^{2} }{sm_{\gamma}^{2}}\right)\right. \tag{6}\] \[- \left.\frac{s-m_{f}^{2}}{s}-1+\frac{m_{\gamma}^{2}m_{f}^{2}}{sm_ {\gamma}^{2}+(s-m_{f}^{2})^{2}}\right].\]
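For reference, Eq. (6) transcribes directly into the following sketch; the sample values of \(s\), \(m_{\gamma}\), and \(\mu\) are illustrative only.

```python
import numpy as np

ALPHA = 1.0 / 137.036
M_E = 5.11e-4  # electron mass in GeV (sets the Bohr-magneton prefactor)

def sigma_elastic(s, m_f, m_gamma, mu, q_f=1.0):
    """Integrated nu-fermion elastic cross section of Eq. (6), in GeV^-2;
    mu is the magnetic moment in units of the Bohr magneton."""
    d = s - m_f**2
    pre = np.pi * q_f**2 * ALPHA**2 * mu**2 / M_E**2
    return pre * ((1.0 + 2.0 * m_gamma**2 / d)
                  * np.log(1.0 + d**2 / (s * m_gamma**2))
                  - d / s - 1.0
                  + m_gamma**2 * m_f**2 / (s * m_gamma**2 + d**2))

# Example: scattering on electrons at s = 1 GeV^2 with m_gamma = 0.05 GeV.
print(sigma_elastic(s=1.0, m_f=M_E, m_gamma=0.05, mu=1e-12))
```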
We have explicitly given the cross section for elastic scattering off of fermions in Eq. (6), but for all the other cross sections we numerically integrate the differential cross sections. After obtaining a cross section like the one in Eq. (6), we can calculate the thermal average of the cross section multiplied by the Møller speed [61], namely
\[\langle\sigma_{k}v_{\rm Mol}\rangle=\frac{\frac{g_{1}g_{2}}{(2\pi)^{6}}\int d ^{3}p_{1}\frac{1}{e^{E_{1}/T}+1}\int d^{3}p_{2}\,\sigma_{k}v_{\rm Mol}\frac{1} {e^{E_{2}/T}\pm 1}}{\frac{g_{1}g_{2}}{(2\pi)^{6}}\int d^{3}p_{1}\frac{1}{e^{E_{1}/T} +1}\int d^{3}p_{2}\frac{1}{e^{E_{2}/T}\pm 1}}, \tag{7}\]
where the \(k\) subscript indicates a specific scattering target and process. In writing Eq. (7), we have assumed equilibrium distributions for the incoming neutrino (labeled as particle 1) and the scattering target (particle 2) and ignored the Pauli blocking/Bose enhancement of the products. The \(\pm 1\) in the distribution function for particle 2 corresponds to either fermions (\(+\)) or bosons (\(-\)). For elastic scattering, the target particle is charged, whereas for annihilation it is an antineutrino. Appendix B gives simplified expressions for Eq. (7) in the case of scattering off of either fermions or bosons, and the annihilation process.
The last step in the procedure is to calculate the scattering rate using the thermally-averaged \(\langle\sigma_{k}v_{\rm Mol}\rangle\) and the number density of incoming neutrinos, \(n_{\nu}\):
\[\Gamma_{k}=n_{\nu}\langle\sigma_{k}v_{\rm Mol}\rangle. \tag{8}\]
Figure 1: Effective in-medium photon mass plotted as a function of plasma temperature [see Eq. (2)].
Including the example specifically given in Eq. (5), we calculate the individual rates for the processes of elastic scattering and annihilation for each charged particle target or product. Summing over all of the individual rates gives us a total scattering rate \(\Gamma_{\nu}\) as a function solely of temperature and \(\mu\).
Returning again to the example in Eq. (6), one can see that to leading order \(\sigma\) does not scale with temperature. In fact, \(\langle\sigma v_{\rm Mol}\rangle\) also does not scale with \(T\) to leading order. Only the number density \(n_{\nu}\) in Eq. (8) provides a nontrivial \(T^{3}\) scaling. The Hubble expansion rate scales as \(T^{2}\), implying that the magnetic moment interaction keeps the inactive neutrino states thermally populated at high temperatures, but becomes ineffective at lower temperatures.
## III Cosmology
The scattering and annihilation rates via the magnetic-moment vertex all scale as \(\mu^{2}\), implying that increasing \(\mu\) will increase the interaction rate and postpone the point when the inactive states decouple from the plasma. In principle, decoupling could occur at low temperatures when the matter energy density comprises a significant fraction of the total energy density. As we will show in Sec. IV, current cosmological bounds imply that inactive neutrinos must decouple at early times, when the universe is dominated by radiation.
For radiation-dominated conditions, we will parameterize the energy density in two different ways. When doing a calculation to determine decoupling, we use the parameter \(g_{\star}\) as an effective spin statistic constant [62]
\[\rho=\frac{\pi^{2}}{30}g_{\star}T^{4}. \tag{9}\]
When showing results on extra radiation energy density, we use the effective number of degrees of freedom, \(N_{\rm eff}\), to parameterize the radiation energy density. We delay discussion of \(N_{\rm eff}\) until Sec. IV. \(g_{\star}\) contains contributions from massless and massive particles. To determine \(g_{\star}\), we first calculate the total energy density using the appropriate Bose-Einstein or Fermi-Dirac (FD) equilibrium distribution function
\[\rho=\sum_{i}g_{i}\int\frac{d^{3}p}{(2\pi)^{3}}Ef_{i}(E), \tag{10}\]
where the energy \(E\) is related to the rest mass \(m_{i}\) through \(E=\sqrt{p^{2}+m_{i}^{2}}\). We equate Eqs. (9) and (10) and solve for \(g_{\star}\) as a function of temperature. Figure 2 shows the relation between \(g_{\star}\) and plasma temperature employed in our decoupling calculations. At the TeV scale, the entire standard model is present with ultra-relativistic kinematics. As the universe expands and the temperature decreases, the equilibrium abundances of massive particles become Boltzmann suppressed and their respective degrees of freedom vanish. The "plateau-hill" pattern in Fig. 2 shows multiple instances of vanishing degrees of freedom. When the temperature reaches \(10\,{\rm MeV}\), only photons, electrons, and neutrinos contribute to \(g_{\star}\). Included in the calculation of \(g_{\star}\) are the six inactive neutrino states at all temperatures.
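A minimal numerical version of this procedure is sketched below; the abridged species table (with pions standing in for the full hadronic content) is a simplification of our full treatment, so the resulting \(g_{\star}\) only approximates Fig. 2 at low temperatures.

```python
import numpy as np
from scipy.integrate import quad

def rho_species(T, m, g, fermion):
    """Energy density of one species from Eq. (10), equilibrium and mu = 0."""
    sgn = 1.0 if fermion else -1.0
    def integrand(p):
        E = np.sqrt(p * p + m * m)
        x = E / T
        return 0.0 if x > 700.0 else p * p * E / (np.exp(x) + sgn)
    # Start just above p = 0 to avoid the removable 0/0 for massless bosons.
    val, _ = quad(integrand, 1e-12, 40.0 * T + 20.0 * m)
    return g / (2.0 * np.pi**2) * val

# Abridged content: photon, 6 active + 6 inactive nu dof, e+-, mu+-, pions
# (a crude stand-in for the hadronic phase): (mass GeV, dof, is_fermion).
SPECIES = [(0.0, 2, False), (0.0, 12, True), (5.11e-4, 4, True),
           (0.1057, 4, True), (0.137, 3, False)]

def g_star(T):
    rho = sum(rho_species(T, m, g, f) for m, g, f in SPECIES)
    return rho / (np.pi**2 / 30.0 * T**4)   # invert Eq. (9)

for T in (1e-3, 1e-2, 0.1):  # GeV
    print(f"T = {T:.0e} GeV: g_star = {g_star(T):.2f}")
```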
To construct the plot in Fig. 2 we make a number of simplifying assumptions. First, we take the masses of the Higgs, vector bosons, and the top quark to be constant and equal to their vacuum values at all times. In reality, these and other particles acquire their masses during the EWT which occurs at \(\sim 140\,{\rm GeV}\)[63]. As a result, our values for \(g_{\star}\) for \(T\sim 140\,{\rm GeV}\) are an underestimate in Fig. 2. Second, we use a fitting function to model the dynamics of the QHT centered at \(T\sim 170\,{\rm MeV}\). The fitting procedure produces a local maximum at \(T\sim 200\,{\rm MeV}\) when bound hadronic states coexist with free quarks and gluons. Despite the local maximum, the energy density monotonically decreases with decreasing temperature at all times during the transition. Appendix C gives details on the fitting procedure adopted from Ref. [27].
Equation (9) ignores any contribution to the total energy density from matter or vacuum, appropriate for our purposes of inactive neutrino decoupling in the radiation-dominated regime. We can calculate the Hubble expansion rate with \(g_{\star}\) to yield
\[H=\sqrt{\frac{8\pi}{3m_{\rm pl}^{2}}\rho}=\sqrt{\frac{4\pi^{3}}{45}g_{\star}}\frac{T^{2}}{m_{\rm pl}} \tag{11}\]
where \(m_{\rm pl}=1.2\times 10^{19}\,{\rm GeV}\) is the Planck mass. Equation (11) shows that the Hubble expansion rate scales as \(T^{2}\). The previous section showed that the magnetic-moment interaction rates scale as \(\sim T^{3}\). As a result, inactive neutrinos will maintain thermal equilibrium with the plasma at high temperatures, and eventually freeze-out and free-stream at lower temperatures. Figure 3 shows the total magnetic-moment interaction rate (solid blue) and Hubble expansion rate (dashed green) each as a function of temperature. For this particular example, the magnetic-moment strength is
taken to be \(\mu=10^{-13}\). For our purposes, we approximate decoupling as an instantaneous event when the interaction rate falls below the Hubble expansion rate
\[\Gamma_{\nu}<H\implies\text{decoupled}. \tag{12}\]
For the example in Fig. 3 we estimate the decoupling temperature as \(T_{\text{dec}}\simeq 200\,\text{GeV}\). The magnetic-moment interaction rate scales as \(T^{3}\) at low temperatures. Once \(W^{\pm}\) bosons are present in the plasma (\(T\sim 100\,\text{GeV}\)), the interaction rate increases dramatically, scaling as \(T^{7}\). This change in the scaling law is present in Fig. 3 at a temperature scale comparable to the \(W^{\pm}\) rest mass.
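The instantaneous condition of Eq. (12) can be illustrated with a toy rate that keeps only the low-temperature \(\Gamma\propto\mu^{2}T^{3}\) scaling. In the sketch below the prefactor is calibrated so that \(\mu=10^{-13}\) reproduces the \(T_{\rm dec}\simeq 200\,{\rm GeV}\) quoted above; the \(T^{7}\) \(W\)-boson contribution and the temperature dependence of \(g_{\star}\) are deliberately omitted, so the outputs are rough estimates only.

```python
import numpy as np
from scipy.optimize import brentq

M_PL = 1.2e19     # Planck mass, GeV
G_STAR = 112.0    # rough constant g_* above the EW scale, incl. inactive nus

def hubble(T):
    """Eq. (11): H = sqrt(4 pi^3 g_* / 45) T^2 / m_pl, radiation domination."""
    return np.sqrt(4.0 * np.pi**3 * G_STAR / 45.0) * T**2 / M_PL

# Toy interaction rate Gamma = K mu^2 T^3; calibrate K to the quoted example.
K = hubble(200.0) / (1e-13**2 * 200.0**3)

def T_dec(mu):
    """Solve Gamma(T) = H(T), the instantaneous condition of Eq. (12)."""
    return brentq(lambda T: K * mu**2 * T**3 - hubble(T), 1e-3, 1e4)

for mu in (1e-13, 1e-12, 1e-11):
    print(f"mu = {mu:.0e} -> T_dec ~ {T_dec(mu):.3g} GeV")
```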
In our calculations, the magnetic-moment interaction rate is solely a function of the dynamical variable \(T\) and the model parameter \(\mu\). All interaction rates are proportional to \(\mu^{2}\), so an individual rate has the same temperature dependence as the blue curve in Fig. 3 with an overall scaling dependent on \(\mu\). As a result, we can fix a decoupling temperature \(T_{\text{dec}}\) and solve for the corresponding \(\mu\) by locating where the interaction rate falls below the Hubble expansion rate. Figure 4 shows the magnetic moment strength as a function of the decoupling temperature. The general behavior of the curve shows that increasing magnetic-moment strength delays decoupling. The shoulder at \(T\sim 100\,\text{GeV}\) is again due to the presence of \(W^{\pm}\) bosons in the plasma, akin to the behavior of the blue curve in Fig. 3.
Figure 3: Rates plotted against temperature. The inactive neutrino scattering rate (solid blue) is for a magnetic moment strength \(\mu=10^{-13}\). Also given is the Hubble expansion rate (dashed green).
Figure 2: Hubble expansion rate parameter \(g_{*}\) plotted as a function of temperature. Included in the calculation of \(g_{*}\) are the six degrees of freedom from the inactive neutrino states.
## IV Early Universe Results
With the presence of the inactive Dirac states, there exists more energy density in the neutrino sector. The inactive states have identical mass eigenvalues to those of the active neutrinos, and so their masses are small. At early times, before photon decoupling, all of the neutrinos are ultrarelativistic and their energy density contributes to radiation. At later times and the current epoch, the neutrino energy density contributes to matter. In this section we discuss the implications at early times for the Cosmic Microwave Background (CMB) and BBN.
During atomic recombination, the radiation energy density is composed of photons, active neutrinos, and inactive neutrinos. We assume that inactive-neutrino decoupling and active neutrino decoupling preserve the Fermi-Dirac spectra of the various neutrino species (see Refs. [64; 65; 66; 67; 68; 69; 70; 71; 72] among others on non-instantaneous decoupling). The implication is that we can use temperature-like variables for the three components of the radiation. Therefore, we write the radiation energy density during recombination as the following
\[\rho_{\rm rad}=\frac{\pi^{2}}{15}T^{4}+3\times\frac{7\pi^{2}}{120}T_{a}^{4}+3 \times\frac{7\pi^{2}}{120}T_{i}^{4}, \tag{13}\]
where \(T_{a}\) is the active neutrino temperature-like quantity, and \(T_{i}\) is a comparable quantity for the inactive neutrinos. Conservation of comoving entropy gives the familiar relation between \(T_{a}\) and \(T\)[62]
\[\frac{T_{a}}{T}=\left(\frac{4}{11}\right)^{1/3}. \tag{14}\]
The same principle applies for deducing the ratio \(T_{i}/T\), and we find
\[\frac{T_{i}}{T}=\left(\frac{43}{11}\frac{1}{g_{\star,S}^{\rm dec}}\right)^{1/3}, \tag{15}\]
where \(g_{\star,S}^{\rm dec}\) is the effective entropic degrees of freedom at inactive-neutrino decoupling \(T_{\rm dec}\) (see Fig. 4). \(g_{\star,S}^{\rm dec}\) is related to \(g_{\star}^{\rm dec}\) by subtracting off the inactive neutrino degrees of freedom
\[g_{\star,S}^{\rm dec}=g_{\star}^{\rm dec}-\frac{7}{8}\times 6. \tag{16}\]
Using the cosmological parameter \(N_{\rm eff}\) together with Eq. (13) and the temperature ratios in Eqs. (14) and (15), we can relate \(N_{\rm eff}\) to \(g_{\star,S}^{\rm dec}\)

\[N_{\rm eff}=3\left[1+\left(\frac{43}{4}\frac{1}{g_{\star,S}^{\rm dec}}\right)^{4/3}\right]. \tag{17}\]
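Numerically, Eq. (17) evaluates as follows; the sample values of \(g_{\star,S}^{\rm dec}\) are representative standard-model values (above the EW scale, just above the QHT, and at active-neutrino decoupling), not entries from our tables.

```python
def delta_neff(g_star_S_dec: float) -> float:
    """Delta N_eff = 3 (T_i/T_a)^4 = 3 [43/(4 g_*S^dec)]^(4/3), from
    Eqs. (14), (15), and (17)."""
    return 3.0 * (43.0 / (4.0 * g_star_S_dec)) ** (4.0 / 3.0)

# 10.75 (= 43/4) recovers Delta N_eff = 3, i.e. T_i = T_a.
for g in (106.75, 61.75, 10.75):
    print(f"g_*S^dec = {g:6.2f} -> Delta N_eff = {delta_neff(g):.3f}")
```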
Figure 4: Magnetic moment strength \(\mu\), corresponding to an interaction rate below the Hubble expansion rate, plotted as a function of decoupling temperature.
Figure 5 shows the change in \(N_{\rm eff}\) in the presence of the inactive neutrino states. The vertical axes are the change in \(N_{\rm eff}\) from 3, namely
\[\Delta N_{\rm eff}\equiv N_{\rm eff}-3=3\left(\frac{43}{4}\frac{1}{g_{\star,S}^{\rm dec}}\right)^{4/3}. \tag{18}\]
The horizontal axes give a range of \(\mu\). In the top panel, we show the entire range of \(\mu\) studied in this work, where the lower limit corresponds to \(T_{\rm dec}\sim 1\,{\rm TeV}\) and the upper limit to \(T_{\rm dec}\sim 1\,{\rm MeV}\). For the large magnetic-moment strengths, the inactive neutrinos decouple at the same time as the active neutrinos do, implying that \(T_{a}=T_{i}\) and both sectors contribute equally to \(N_{\rm eff}\).
The bottom panel of Fig. 5 shows a restricted range of \(\mu\), corresponding to decoupling before the QHT. We have inserted horizontal lines to show the \(1\sigma\) limits from the Planck mission [73] and projections from CMB Stage IV [74]. We observe that at the level of \(1\sigma\), \(\mu\simeq 5\times 10^{-12}\) would produce a value of \(N_{\rm eff}\) in tension with Planck. In the future, if CMB-S4 does not see any evidence of extra radiation energy density, then Dirac neutrinos could not have been in thermal equilibrium below the EWT at nearly the \(4\sigma\) level.
With an increase in radiation energy density, the Hubble expansion rate also increases, which leads to an earlier epoch of weak freeze-out and nuclear freeze-out during BBN [75] (see Ref. [76] for the present status of BBN observations).
Figure 5: Change in \(N_{\rm eff}\) versus magnetic moment strength. The top panel shows the entire range of magnetic moments explored in this work. The bottom panel is a narrow range for low values of \(\mu\). Also plotted is the \(1\sigma\) uncertainty from the Planck mission [73] and the proposed \(1\sigma\) uncertainty from CMB-S4 [74].
Figure 6 shows the relative differences in the helium-4 mass fraction, \(Y_{\rm P}\), and the ratio of deuterium to hydrogen, D/H, as solid blue and dashed green lines, respectively. The relative differences are computed by comparing to a baseline where there are no inactive Dirac states, and the active neutrinos decouple at a temperature of \(10\,{\rm MeV}\). The horizontal axis in Fig. 6 spans the same range in \(\mu\) as the bottom panel of Fig. 5. The shapes of the curves in Fig. 6 and of \(\Delta N_{\rm eff}\) in Fig. 5 are all similar to one another, as the abundances scale linearly with \(N_{\rm eff}\) in this range [77]. D/H is more sensitive to \(N_{\rm eff}\) and so has a larger deviation from the baseline value than \(Y_{\rm P}\). For \(\delta({\rm D/H})<1\%\), \(\mu\lesssim 4\times 10^{-12}\), in line with the \(1\sigma\) Planck limit from Fig. 5.
## V Later Universe Results
For the various epochs prior to photon decoupling, neutrino masses are small compared to the momenta, and so we approximated neutrinos as massless for calculations of energy density and interaction rates in Section IV. After neutrinos decouple from electrons and positrons (an epoch well before the photon-decoupling one), they continue to have ultra-relativistic kinematics and move at speeds nearly that of light. During these later epochs neutrino kinematics will become increasingly non-relativistic in an expanding universe. Accordingly, neutrino 3-momenta will redshift and asymptotically approach zero, implying the neutrino rest mass contribution evolves from a negligible to the dominant component of the neutrino energy density. As a result, we will need to discard our early-universe approximation of neutrinos being massless.
Typically, the dynamics of massive neutrinos is included in cosmology by extending the \(\Lambda\)CDM model to include the "sum of the light neutrino masses" parameter, denoted as \(\Sigma m_{\nu}\)[78; 79]. The presence of neutrino rest mass changes the growth of smaller versus larger-scale structure as neutrinos free stream during the initial stages of structure formation, but act as component of the total matter energy density at later stages. The difference in the structure growth rates yields a modified matter power spectrum, which can be elucidated by considering weak gravitational-lensing of the CMB convolved with matter distributions from cosmological surveys [80]. The transition between the small and large-scale regimes depends on \(\Sigma m_{\nu}\), but also depends on the spectrum of the neutrinos. If we introduce a non-thermal portion to the total cosmic neutrino spectrum via a low-energy contribution from the inactive Dirac states, we change the epoch when neutrinos become non-relativistic and hence the matter power spectrum and weak-lensing potential are appropriately altered.
A theoretical calculation of the lensing potential in the presence of anomalous magnetic moments for massive neutrinos is beyond the scope of our exploratory work. As an alternative to calculating the lensing potential, we will investigate how free-streaming depends on \(\Sigma m_{\nu}\) and \(\mu\). Neutrinos move at speeds less than the speed of light, implying that the more massive the neutrino, the earlier it becomes nonrelativistic over the history of the universe. This implies a smaller free-streaming scale, \(\lambda_{\rm fs}\). We will use the free-streaming wavenumber \(k_{\rm fs}=2\pi a/\lambda_{\rm fs}\) (at scale factor \(a\)) to evaluate the roles of neutrino rest mass and anomalous magnetic moment strength on late-time cosmology.
Neutrinos will begin to move at speeds appreciably less than the speed of light once their momenta become comparable to their masses. The results of \(N_{\rm eff}\) from the previous section showed that the inactive neutrinos must decouple
Figure 6: Relative changes in primordial abundances plotted as a function of \(\mu\). The solid blue curve gives the relative change in the helium mass fraction (\(Y_{\rm P}\)) and the dashed green curve gives the relative change in the deuterium abundance (D/H).
from the plasma prior to the active neutrinos, implying that the comoving temperature quantity for the inactive states, \(T_{i}\), is smaller than the counterpart quantity for the actives, \(T_{a}\). In fact, if \(T_{i}\ll T_{a}\), there is a possibility that the inactive states could become nonrelativistic in the early universe. For the range of magnetic moment strength we consider in this work, the inactive states do not become non-relativistic until well after photon decoupling.
We adopt the definition of \(k_{\rm fs}\) from Eq. (93) in Ref. [78]
\[k_{\rm fs}(t)=\sqrt{\frac{3}{2}}\frac{a(t)H(t)}{v_{\rm th}(t)} \tag{19}\]
where \(t\) is the time coordinate and \(v_{\rm th}\) is akin to the thermal speed with more explanation below. For our purposes, we will use the scale factor \(a\) as an independent variable. If we consider the current epoch where \(a=a_{0}\), we will denote the free-streaming wavenumber as \(k_{\rm fs,0}\). To incorporate the physics of anomalous magnetic moments, we will calculate the thermal speed using an ensemble average over the inactive and active states. We describe both the active and inactive neutrino states using FD distributions with their respective comoving temperature quantities. All mass eigenstates for the active neutrinos have the same distribution with \(T_{a}\), and similarly for the inactive states with \(T_{i}\). Due to \(T_{i}\neq T_{a}\), the thermal speed is not a true thermal average, but instead an ensemble average. Nevertheless, we adopt the nomenclature of thermal speed for consistency with the literature. The thermal speed in our cosmology with anomalous magnetic moments is
\[v_{\rm th}=\frac{2\times\sum\limits_{j=1}^{3}\int_{0}^{\infty}\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{e^{p/T_{a}}+1}\frac{p}{E_{j}}+2\times\sum\limits_{j=1}^{3}\int_{0}^{\infty}\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{e^{p/T_{i}}+1}\frac{p}{E_{j}}}{2\times\sum\limits_{j=1}^{3}\int_{0}^{\infty}\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{e^{p/T_{a}}+1}+2\times\sum\limits_{j=1}^{3}\int_{0}^{\infty}\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{e^{p/T_{i}}+1}}, \tag{20}\]
where \(E_{j}=\sqrt{p^{2}+m_{j}^{2}}\) and the summations over \(j\) are for the three separate mass eigenstates. Note that \(T_{i}\) and \(T_{a}\) redshift with scale factor, so \(v_{\rm th}\) also depends on scale factor. At high temperatures - equivalently low scale factor \(a\) - the neutrinos are ultrarelativistic and \(p/E\sim 1\) and so \(v_{\rm th}\sim 1\). Conversely, at high scale factor \(p/E<1\). We can simplify Eq. (20) to the following
\[v_{\rm th}=\frac{2}{9\zeta(3)[T_{a}^{3}+T_{i}^{3}]}\int_{0}^{ \infty}d\epsilon\frac{\epsilon^{2}}{e^{\epsilon}+1}\sum\limits_{j=1}^{3} \left[\frac{T_{a}^{3}}{\sqrt{1+\left(\frac{m_{j}}{\epsilon T_{a}}\right)^{2}} }+\frac{T_{i}^{3}}{\sqrt{1+\left(\frac{m_{j}}{\epsilon T_{i}}\right)^{2}}} \right], \tag{21}\]
where we have used \(\epsilon=p/T_{x}\) for either \(x=a,i\). Figure 7 shows the evolution of \(k_{\rm fs}\) as a function of the ratio \(a/a_{0}\). To plot \(k_{\rm fs}\), we need the following model input parameters: the ratios of temperature quantities \(T_{i}/T\) and \(T_{a}/T\); and the light neutrino mass eigenstates. \(T_{i}/T\) is a function of the magnetic moment strength \(\mu\) implied in Eq. (15), and \(T_{a}/T=(4/11)^{1/3}\). For the mass eigenstates, we use the parameter \(\Sigma m_{\nu}\) and specify an ordering, either normal or inverted, using the solar and atmospheric mass splitting values where appropriate [81]. For both curves in Fig. 7, we pick \(\mu=1.88\times 10^{-14}\). The solid blue curve uses \(\Sigma m_{\nu}=60.6\,{\rm meV}\) with a normal mass ordering, i.e., a smallest mass eigenvalue \(m_{1}=1\,{\rm meV}\). To show the effect of mass on \(k_{\rm fs}\), we also plot a dashed green curve using massless neutrinos, i.e., using \(v_{\rm th}=1\). The neutrino energy density differs between the two cosmologies. For the purposes of comparing the two models, we preserve the Hubble expansion rate at the current epoch by adjusting the vacuum energy density, i.e., we decrease \(\rho_{\Lambda}\) for increasing \(\Sigma m_{\nu}\). For \(a/a_{0}\gtrsim 10^{-3}\), we see that the blue curve diverges from the green curve due to massive neutrinos becoming nonrelativistic. The increase in \(k_{\rm fs}\) corresponds to a decrease in power on small scales at later times. The divergence increases to the current epoch, at which point \(k_{\rm fs}\) differs by an order of magnitude between the two cosmologies. For concreteness, we give the two values at the current epoch
\[k_{\rm fs,0}(\Sigma m_{\nu}=60.6\,{\rm meV}) =1.70\times 10^{-3}, \tag{22}\] \[k_{\rm fs,0}(\Sigma m_{\nu}=\phantom{-}0\,{\rm meV}) =2.74\times 10^{-4}. \tag{23}\]
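The sketch below evaluates Eq. (21) by quadrature and then Eq. (19) at the current epoch. The interpretation of Eqs. (22) and (23) in units of \({\rm Mpc}^{-1}\), the Planck-like value of \(H_{0}\), the TeV-scale decoupling ratio \(T_{i}/T_{a}\), and the individual mass eigenvalues are all assumptions made for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

H0_OVER_C = 67.4 / 299_792.458            # H_0/c in Mpc^-1 (assumed value)
T_A0 = (4.0 / 11.0) ** (1 / 3) * 0.2349   # active nu temperature today, meV
T_I0 = (43.0 / (4.0 * 106.75)) ** (1 / 3) * T_A0  # TeV-scale decoupling
MASSES = [1.0, 8.7, 50.9]  # meV; normal ordering, Sigma m_nu = 60.6 meV

def v_th(a_over_a0=1.0):
    """Ensemble-averaged speed of Eq. (21); temperatures redshift as 1/a."""
    Ta, Ti = T_A0 / a_over_a0, T_I0 / a_over_a0
    def integrand(eps):
        f = eps * eps / (np.exp(eps) + 1.0)
        return f * sum(Ta**3 / np.hypot(1.0, m / (eps * Ta)) +
                       Ti**3 / np.hypot(1.0, m / (eps * Ti)) for m in MASSES)
    val, _ = quad(integrand, 1e-9, 50.0)
    return 2.0 * val / (9.0 * zeta(3) * (Ta**3 + Ti**3))

# Eq. (19) at the current epoch (a = a_0):
print(f"v_th(today) = {v_th():.3f}")
print(f"k_fs,0      = {np.sqrt(1.5) * H0_OVER_C / v_th():.2e} Mpc^-1")
```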
We have employed a cosmology in Fig. 7 where the magnetic-moment interaction populates the inactive states resulting in a larger value of \(N_{\rm eff}\). At this point, we give a brief digression to discuss how neutrino rest mass affects \(k_{\rm fs}\) when magnetic moments are not present but \(\Delta N_{\rm eff}>0\). For a cosmology where \(\Delta N_{\rm eff}=0\) yet neutrinos have non-zero masses, we can compensate for the larger neutrino energy density by using a smaller vacuum energy density fraction, \(\Omega_{\Lambda}\), to preserve the Hubble expansion rate at the current epoch. We use the same prescription when \(\Delta N_{\rm eff}>0\), regardless of whether that extra radiation energy density is from neutrinos or some other undetermined particles. For
the base values of \(\Omega_{\Lambda}\) and the cold dark matter fraction we use in this work [73], a small decrease in \(\Omega_{\Lambda}\) implies a younger universe, and therefore a lower value of the free streaming length and larger value of \(k_{\rm fs}\). When we introduce the inactive Dirac states via \(\mu=1.88\times 10^{-14}\), \(k_{\rm fs,0}\) does not vary at all from the base cosmology if neutrinos were massless. This is a result of \(v_{\rm th}=1\) and our prescription of fixing the Hubble expansion rate at the current epoch to be the same in Eq. (19) regardless of the cosmological model. On the other hand, for massive neutrinos with \(\Sigma m_{\nu}=60.6\,{\rm meV}\) and \(v_{\rm th}\neq 1\), the value in Eq. (22) is 5% higher than the comparable cosmological model with massive neutrinos and unpopulated inactive states.
Figure 7 shows the impact on \(k_{\rm fs}\) when neutrinos become nonrelativistic. At the current epoch, the CMB photon temperature is \(T=2.726\,{\rm K}\), implying that the active neutrino temperature is \(T_{a}=0.17\,{\rm meV}\). For \(\Sigma m_{\nu}=60.6\,{\rm meV}\) in a normal ordering, the neutrinos with the two heavier mass eigenvalues are nonrelativistic and have been for much of the history of the universe. Decreasing \(\Sigma m_{\nu}\) below 60.6 meV to its absolute minimum of 59.6 meV only slightly changes the values of \(m_{2}\) and \(m_{3}\), but has a significant effect on \(k_{\rm fs}\). Although \(\Sigma m_{\nu}\) changes by less than 2%, \(k_{\rm fs}\) decreases by nearly 60%. The decrease is entirely due to the kinematics of the neutrinos with the smallest mass eigenvalue \(m_{1}\). Figure 8 shows the quantity \(k_{\rm fs,0}^{(1)}\) plotted against \(m_{1}\). \(k_{\rm fs,0}^{(1)}\) is the free-streaming wavenumber for only the neutrinos with \(m=m_{1}\). We calculate \(k_{\rm fs,0}^{(1)}\) by first replacing the summations in Eq. (20) for \(v_{\rm th}\) with single calculations where \(m_{j}=m_{1}\), and then calculating the free-streaming wavenumber with Eq. (19). For \(m_{1}=1\,{\rm meV}\), Fig. 8 shows that the distribution of lightest mass neutrinos is in transition from ultrarelativistic to nonrelativistic at the current epoch. Decreasing \(m_{1}\) below \(\sim 0.1\,{\rm meV}\) ensures the lightest neutrinos are ultrarelativistic for the entire history of the universe.
We have used a model where \(\mu=1.88\times 10^{-14}\) when plotting \(k_{\rm fs,0}^{(1)}\) in Fig. 8. For other values of \(\mu\), \(k_{\rm fs,0}^{(1)}\) vs. \(m_{1}\) would look qualitatively identical to Fig. 8. One of the quantitative differences for differing \(\mu\) models is the value of \(m_{1}\) where the kinematics of the neutrinos transition from relativistic (\(E=\sqrt{p^{2}+m^{2}}\)) to ultrarelativistic (\(E=p\)), represented by the ramp-up from the plateau of \(k_{\rm fs,0}^{(1)}\) in Fig. 8. There are two competing effects which alter the point of departure from the plateau. First, the inactive neutrino temperature is always smaller than the active neutrino temperature, and so the inactive neutrinos with a given mass depart from ultrarelativistic kinematics before the actives with that same mass in the history of the universe. Figure 4 implies that the ratio \(T_{i}/T_{a}\) decreases with decreasing \(\mu\), so models with smaller \(\mu\) have earlier points of departure in Fig. 8. However, in opposition to this first effect is the fact that smaller \(T_{i}\) implies a smaller number density for the inactive neutrinos. A smaller number density increases \(k_{\rm fs}\) (and by extension \(k_{\rm fs,0}^{(1)}\)) in Eq. (21), implying models with smaller \(\mu\) would have a later point of departure in Fig. 8.
Figure 7: \(k_{\rm fs}\) plotted as a function of the expansion parameter ratio \(a/a_{0}\). The solid blue line is for massive neutrinos as given in Eq. (21) with \(\Sigma m_{\nu}=60.6\,{\rm meV}\) in the normal mass ordering. The dashed green line is \(k_{\rm fs}\) if neutrinos were massless, i.e., \(m_{j}=0\) in Eq. (21). For both curves, we take the neutrino magnetic moment strength to be \(\mu=1.88\times 10^{-14}\).
To show the competition between temperature and number density, we consider how \(k_{\rm fs,0}\) varies with \(\mu\) for a fixed \(m_{1}\), or equivalently a fixed \(\Sigma m_{\nu}\). Figure 9 shows how \(k_{\rm fs,0}\) changes with \(\mu\) for \(\Sigma m_{\nu}=60.6\,{\rm meV}\) in a normal ordering. The higher values of \(\mu\) show the effect of \(T_{i}/T_{a}\) close to unity, whereas the lower values show the effects of a smaller number density. There is a global maximum for these models at \(\mu\lesssim 10^{-10}\), corresponding to a decoupling temperature \(T_{\rm dec}\simeq 100\,{\rm MeV}\). This epoch occurs in proximity to the QHT and therefore the exact value of the global maximum is dependent on the treatment of the QHT. The shape of the curve in Fig. 9 is a function of the temperature ratio \(T_{i}/T_{a}\). A larger value of \(\Sigma m_{\nu}\) acts to shift the curve down to smaller values of \(k_{\rm fs,0}\) while preserving \(T_{i}/T_{a}\) and the shape of the curve.
Finally, we show how \(k_{\rm fs,0}\) changes with \(\Sigma m_{\nu}\) for the normal (solid blue) and inverted (dashed green) orderings in Fig. 10. Both curves use a model where \(\mu=1.88\times 10^{-14}\). The apparent asymptotes at the lowest values of \(\Sigma m_{\nu}\) for each ordering are a result of neutrinos with mass eigenvalue \(m_{1}\) staying ultrarelativistic until the current epoch, analogous to the descent to the plateau in Fig. 8. \(k_{\rm fs,0}\) has a smaller minimum value for the normal ordering as a result of a smaller neutrino energy density and an older universe. We have plotted the \(2\sigma\) constraint on \(\Sigma m_{\nu}\) from the Planck mission [73] and a \(4\sigma\) forecast from CMB-S4 [74]. If CMB-S4 finds a nonzero value for \(\Sigma m_{\nu}\), scales such as the free-streaming length would differ between the two orderings.
Figure 8: Free-streaming wavenumber for the neutrinos with the lightest mass eigenvalue \(m_{1}\), plotted as a function of \(m_{1}\) (meV). \(k_{\rm fs,0}^{(1)}\) is calculated by restricting the summations in Eq. (20) for \(v_{\rm th}\) to \(j=1\), and includes active and inactive neutrinos for the model \(\mu=1.88\times 10^{-14}\).
Figure 9: \(k_{\rm fs,0}\) plotted as a function of magnetic moment strength \(\mu\). \(\Sigma m_{\nu}=60.6\,\)meV in the normal ordering.
## VI Conclusions
The mechanism which generates the neutrino mass has yet to be determined, and as a result it is unknown whether neutrinos are Majorana or Dirac. In this work, we have considered the cosmological implications of the existence of the inactive Dirac states and how they may be thermally populated at an early epoch in the history of the Universe. Both Dirac and Majorana neutrinos could impact \(N_{\rm eff}\) through undetected interactions. For example, electromagnetic scattering of electrons and positrons with Majorana neutrinos changes the spectra of the active component through a heat flow from the electromagnetic plasma into the neutrino seas [25]. In the models considered in this work, electromagnetic scattering channels of charged particles with Dirac neutrinos populate the inactive states while preserving the FD spectra of the active states.
Motivated by the search for phenomenological differences between the Majorana and Dirac natures of neutrinos, we studied a class of interactions between Dirac neutrinos and standard-model particles not mediated by the weak force. As a hypothesis, the model we employed utilizes anomalous neutrino magnetic moments and an associated electromagnetic vertex, namely \(i\kappa\sigma_{\alpha\beta}k^{\beta}\) in Eq. (1), to couple neutrinos to charged leptons, quarks, and \(W^{\pm}\) bosons. We calculated thermally-averaged cross sections for both the elastic scattering and the annihilation processes using the Debye screening length to reflect the bath of charged particles present in the plasma of the Early Universe. In addition, we calculated the two scattering cross sections between neutrinos and \(W^{\pm}\) for the first time using a magnetic moment vertex (see Appendix A). With these cross sections, we compared the associated scattering rates to the Hubble expansion rate to find a decoupling temperature as a function of the neutrino magnetic moment, parameterized using \(\mu\). Figure 4 shows the relation between the decoupling temperature and \(\mu\), where the scaling law changes around \(T\sim 100\,\)GeV due to the presence of the \(W^{\pm}\) bosons.
With the additional neutrino states populated, we were able to calculate changes to \(N_{\rm eff}\), the primordial abundances \(Y_{\rm P}\) and D/H, and the free-streaming wavenumber. Our strongest limits on \(\mu\) come from those on \(N_{\rm eff}\), using parameter estimations from the Planck mission [73]. Table 1 gives limits from experiment [82, 83, 84, 34, 85], from other astrophysical or cosmological sources [43, 44, 25, 45], and finally from our current work; these limits will be further tightened by upcoming CMB experiments. More specifically, the relation between \(N_{\rm eff}\) and \(\mu\) in Fig. 5 is sensitive to how one treats the EWT and QHT. In terms of computing the direct energy density, the QHT is obviously the more sensitive probe. However, if CMB-S4 pushes the limits on \(N_{\rm eff}\) to epochs preceding the QHT, the EWT becomes more pertinent. We caution though that the EWT is not well understood and the scaling relations between the rates and \(\mu\) may differ above electroweak symmetry breaking. Our results above \(\sim 100\,\)GeV should be taken as extrapolations.
We note two points about our work which are specific to Dirac neutrinos rather than to anomalous magnetic moments in the context of cosmology. The first is the fact that there must exist 3 eigenstates for the inactive states with mass eigenvalues identical to the active neutrinos. When considering the energy density of a new low-mass particle in the Early Universe, Dirac neutrinos come with a factor of 3 attached and as a result increase the energy density over that of a single neutral fermion [see Fig. (21) in Ref. [74]; and Ref. [87]]. Second, and related to the first, the inactive neutrinos have non-zero masses, negligible at early times but not at late times. Structure growth and neutrino free-streaming are indeed dependent on the population of the inactive neutrinos, although those states cannot be thermally populated. A corollary of this result is that the inactive states must have different temperatures or spectra than the actives. Together, the low mass and the extra states of Dirac neutrinos give two methods to probe the neutrino spectra and search for new physics in cosmology.
Figure 10: \(k_{\rm fs,0}\) from Eq. (21) plotted as a function of \(\Sigma m_{\nu}\) (meV). The solid blue line is for the normal ordering, and the dashed green line for the inverted ordering. The plot is for the model \(\mu=1.88\times 10^{-14}\). Also plotted are the current constraints on \(\Sigma m_{\nu}\) from the Planck mission at \(2\sigma\) (Ref. [73]) and a forecast from CMB-S4 at \(4\sigma\) (Ref. [74]).
We have examined the implications of Dirac neutrinos with anomalous magnetic moments on early- and late-time cosmology in this work, which is a follow-up to the Majorana case of Ref. [25]. In a standard seesaw mechanism [30; 88], the three active states are Majorana neutrinos; the three sterile states have mass eigenvalues much heavier than the active states and cannot be probed by current cosmological observations. However, there exists a possibility that those three sterile states could have mass eigenvalues nearly degenerate with the active ones. This possibility of neutrinos being "pseudo-Dirac" has been studied in the case of the diffuse supernova background [89] and mentioned in the case of early-time cosmology [90]. If this mass model holds and neutrinos have anomalous magnetic moments, then they are Majorana particles and the analysis of BBN in Ref. [25] would apply. In addition, the analyses of the early- and late-time cosmological energy density in this work would also be relevant. Although such a hybrid situation is intriguing, Ref. [25] showed that the magnetic moment needs to be \(\sim 10^{-10}\mu_{B}\) to influence the neutron-to-proton rates (and subsequent abundances) through altered neutrino spectra. If additional non-active or sterile states can be populated via an anomalous magnetic moment, then Fig. 5 shows that \(N_{\rm eff}\) would be nearly 6.0 and ruled out by current cosmological parameter estimation. For early-time cosmology, pseudo-Dirac neutrinos cannot be distinguished from uniquely Dirac ones via anomalous magnetic moments.
Where there might be a difference between these two mass models is in interpreting \(\Sigma m_{\nu}\) from large-scale-structure growth. If \(\kappa<10^{-10}\mu_{B}\) and the mass eigenvalues for the sterile states are nearly degenerate, then this situation closely reproduces the one studied in Sec. V. To be precise, we would need to slightly alter the summations over the sterile states in Eq. (20) to account for different masses, although this should not make a significant difference if the sterile mass eigenvalues are nearly degenerate with the active ones. If, however, the sterile masses are _smaller_ than the active ones, there would be a contribution to \(N_{\rm eff}\) but not to the dark matter at late times (see Fig. 8), thereby changing the free-streaming scale for the active neutrinos. In addition, lighter sterile states also introduce the possibility of active neutrino decays which alter the dynamics of late-time cosmology [91; 92; 93]. For this scenario of light sterile states and anomalous magnetic moments, pseudo-Dirac and uniquely Dirac neutrinos give different predictions in late-time cosmology.
Finally, we comment on an often-quoted result in the literature, colloquially referred to as the "Kayser confusion theorem" [94; 95]. The theorem points out that other neutrino properties could lead to effects similar to those of the magnetic moment in scattering processes. Specifically, a Dirac magnetic moment could be confused with a Majorana anapole moment [see the \(f_{M}\) and \(f_{A}\) terms in Eq. (1)]. In this connection it is worthwhile to point out that cosmology differs from particle-beam experiments in one key aspect. In brief, the cosmological parameters \(N_{\rm eff}\) and \(\Sigma m_{\nu}\) give a measure of the energy density, i.e., they indicate which states are populated by particles and have distinct manifestations for Majorana versus Dirac character. Conversely, particle-beam experiments measure cross sections and have difficulty discerning between the two characters, as discussed in Refs. [94; 95]. We do not advocate for using one method over the other. Rather, both should be pursued as they complement one another in probing the nature of neutrino interactions.
## Acknowledgements
The authors thank Volker Koch, Volodymyr Vovchenko, Nicole Vassh, George Fuller, James Kneller, Gail McLaughlin, and Amol Patwardhan for useful discussions. EG is supported in part by the Department of Energy Office of Nuclear Physics award DE-FG02-02ER41216, and by the National Science Foundation grant No. PHY-1430152 (Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements). ABB is supported in part by the National Science Foundation Grant PHY-2108339 at the University of Wisconsin-Madison. ABB and EG acknowledge support in part from the National Science Foundation Grants No. PHY-1630782 and PHY-2020275 (Network for Neutrino Astrophysics and Symmetries). This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001.
\begin{table}
\begin{tabular}{c c c} Method & Limit (in units of \(\mu_{B}\)) & Notes \\ \hline Reactor & \(2.9\times 10^{-11}\) & GEMMA [34] \\ Accelerator \(\nu_{e}\)-\(e^{-}\) & \(10^{-11}\) & LAMPF [82] \\ Accelerator \((\nu_{\mu},\overline{\nu}_{\mu})\)-\(e^{-}\) & \(6.8\times 10^{-10}\) & LSND [83] \\ Accelerator \((\nu_{\tau},\overline{\nu}_{\tau})\)-\(e^{-}\) & \(3.9\times 10^{-7}\) & DONUT [84] \\ Dark Matter Direct Detection & \(4.9\times 10^{-11}\) & PandaX [37] \\ Dark Matter Direct Detection \((\nu_{e})\) & \(0.9\times 10^{-11}\) & XENONnT [39] \\ Solar (\({}^{7}\)Be) & \(5.4\times 10^{-11}\) & Borexino [85] \\ Red Giant Stars & \(1.2\times 10^{-12}\) & Ref. [43] \\ Cepheid Stars & \(2\times 10^{-10}\) & Ref. [44] \\ Lithium in red clump stars & \(10^{-12}\) & Ref. [45] \\ Cosmology (Majorana) & \(10^{-10}\) & Ref. [25] \\ Cosmology (Dirac) & \(5\times 10^{-12}\) & This work (\(N_{\rm eff}\) limit from Planck [73]) \\ \end{tabular}
\end{table}
Table 1: Summary of limits on neutrino magnetic moments. Adapted from Table III of Ref. [86].
## Appendix A Differential Cross Sections with Magnetic Moment Vertex
Here we give differential cross sections for the various scattering and annihilation processes with fermions and bosons. The differential cross sections are functions of Mandelstam variable \(t=(p_{1}-p_{3})^{2}\) for the reaction \(1+2\leftrightarrow 3+4\), and depend on Mandelstam variable \(s=(p_{1}+p_{2})^{2}\), the neutrino magnetic moment strength \(\mu\), the effective in-medium photon mass \(m_{\gamma}\) from Eq. (3), and the vacuum mass of the charged boson/fermion. We calculate the integrated cross sections using
\[\sigma(s)=\int_{t_{\rm min}}^{t_{\rm max}}dt\,\frac{d\sigma}{dt}, \tag{10}\]
where the limits of integration are
\[t_{\rm min} =-\frac{(s-m_{i}^{2})^{2}}{s}, \tag{11}\] \[t_{\rm max} =0, \tag{12}\]
with \(m_{i}\) the mass of the massive particle in the reaction.
### Fermions
Equation (5) gave the differential cross section for the scattering of neutrinos from charged fermions of mass \(m_{f}\) and charge-coefficient \(q_{f}\)
\[\left(\frac{d\sigma}{dt}\right)_{vf}=\frac{\pi q_{f}^{2}\alpha^{2}}{m_{e}^{2} }\mu^{2}\frac{t}{(t-m_{\gamma}^{2})^{2}}\frac{s+t-m_{f}^{2}}{s-m_{f}^{2}}. \tag{13}\]
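For orientation, the total cross section follows by integrating this expression over \(t\in[t_{\rm min},0]\) as in Eqs. (10)-(12). The sketch below is an illustration only: the kinematic point, the in-medium photon mass, and the electron target are our own assumptions, and since \(t\leq 0\) over the whole physical range, the sign of the result is fixed by the conventions of Eq. (13) while its magnitude is the cross section.

```python
# Numerical sketch of sigma(s) = int_{t_min}^{0} dt (dsigma/dt) for the
# neutrino-fermion formula above.  All inputs below (s, m_gamma, the electron
# target, mu) are illustrative assumptions, not values from the text.
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.036   # fine-structure constant
M_E = 0.511e-3          # electron mass (GeV)

def dsigma_dt(t, s, mu, m_f, m_gamma, q_f=1.0):
    """dsigma/dt of Eq. (13); mu in units of mu_B, the rest in GeV."""
    pref = np.pi * q_f**2 * ALPHA**2 * mu**2 / M_E**2
    return pref * t / (t - m_gamma**2)**2 * (s + t - m_f**2) / (s - m_f**2)

def sigma(s, mu, m_f, m_gamma):
    t_min = -(s - m_f**2)**2 / s   # Eq. (11); t_max = 0 from Eq. (12)
    val, _ = quad(lambda t: dsigma_dt(t, s, mu, m_f, m_gamma),
                  t_min, 0.0, points=[-m_gamma**2])  # resolve the Debye peak
    return val

# Electron target, s = 1 GeV^2, m_gamma = 10 MeV, mu = 1.88e-14 (illustrative):
print(sigma(1.0, 1.88e-14, M_E, 0.01))   # units GeV^-2; sign set by t <= 0
```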
Figure 11 shows a contour plot of the total cross section in the \(m_{\gamma}\) versus \(s\) plane. The magnetic moment contribution to the neutrino-antineutrino annihilation differential cross section into charged fermion-antifermion pairs each with mass \(m_{f}\) is given by
\[\left(\frac{d\sigma}{dt}\right)_{f\overline{f}}=\left(\frac{2\pi q_{f}^{2} \alpha^{2}}{m_{e}^{2}}\right)\mu^{2}\frac{1}{s(s-m_{\gamma}^{2})^{2}}(t+s-m_ {f}^{2})(m_{f}^{2}-t). \tag{14}\]
### Bosons
The only boson we consider is the \(W^{\pm}\). The cross section for the magnetic moment contribution to the scattering of neutrinos from \(W\)-bosons with mass \(m_{W}\simeq 80.4\,\)GeV is given by
\[\left(\frac{d\sigma}{dt}\right)_{\nu W}=\frac{\pi\alpha^{2}}{m_{e}^{2}}\mu^{ 2}\frac{t}{(t-m_{\gamma}^{2})^{2}}\left[\left(1-\frac{t}{3m_{W}^{2}}+\frac{t^ {2}}{4m_{W}^{4}}\right)\frac{s+t-m_{W}^{2}}{s-m_{W}^{2}}+\left(-\frac{5}{12}- \frac{t}{16m_{W}^{2}}+\frac{t^{2}}{16m_{W}^{4}}\right)\frac{t^{2}}{(s-m_{W}^{ 2})^{2}}\right]. \tag{15}\]
Figure 12 gives contours of total cross section in the \(m_{\gamma}\) versus \(s\) plane. Finally, the magnetic moment contribution to the neutrino-antineutrino annihilation cross section into \(W^{+}\)-\(W^{-}\) pairs is given by
\[\left(\frac{d\sigma}{dt}\right)_{W^{+}W^{-}}=\frac{e^{2}\kappa^{2}}{16\pi} \frac{1}{s(s-m_{\gamma}^{2})^{2}}\left[(s+2t-m_{W}^{2})^{2}\left(3-\frac{s}{m _{W}^{2}}+\frac{s^{2}}{4m_{W}^{4}}\right)+8s\left(-s+\frac{s^{2}}{4m_{W}^{2}} \right)\right]. \tag{16}\]
In calculating the cross sections involving \(W\)-bosons we ignored the four-boson couplings since the associated amplitudes are suppressed by another order of the magnetic moment.
### Hadrons
The cross section for scattering on scalar charged hadrons with mass \(m_{h}\) is
\[\left(\frac{d\sigma}{dt}\right)_{\nu h}=\frac{e^{2}\kappa^{2}}{4\pi}\frac{t}{(t- m_{\gamma}^{2})^{2}}\left[1+\frac{t}{s-m_{h}^{2}}+\frac{t^{2}}{4(s-m_{h}^{2})^{2}} \right]. \tag{10}\]
Figure 13 gives contours of the total cross section in the \(m_{\gamma}\) versus \(m_{h}\) plane. We take the annihilation cross section into scalar hadron-antihadron pairs to be zero. In the case of charged vector hadrons, we ignore the contributions to the scattering rates, and so do not provide the scattering and annihilation differential cross sections. Although these rates would be non-zero, we estimate only a small error, as the vector hadrons have large masses and do not appear in appreciable numbers at temperatures below the QHT (see rows 7 and 9 in Table 2).
## Appendix B Thermally-averaged Cross Sections
### Elastic Scattering
We use the thermally-averaged product of \(\sigma\) and \(v_{\text{Mol}}\), denoted \(\langle\sigma v_{\text{Mol}}\rangle\), to calculate scattering rates between neutrinos and other particles via the magnetic-moment vertex. The formula for the thermal average is the following
\[\langle\sigma v_{\text{Mol}}\rangle =\frac{\frac{g_{1}g_{2}}{(2\pi)^{6}}\int d^{3}p_{1}\frac{1}{e^{E_{ 1}/T}+1}\int d^{3}p_{2}\,\sigma v_{\text{Mol}}\frac{1}{e^{E_{2}/T}\pm 1}}{ \frac{g_{1}g_{2}}{(2\pi)^{6}}\int d^{3}p_{1}\frac{1}{e^{E_{1}/T}+1}\int d^{3}p_{ 2}\,\frac{1}{e^{E_{2}/T}\pm 1}} \tag{15}\] \[=\frac{g_{1}g_{2}}{(2\pi)^{6}n_{1}n_{2}}\int d^{3}p_{1}\frac{1}{e ^{E_{1}/T}+1}\int d^{3}p_{2}\,\sigma v_{\text{Mol}}\frac{1}{e^{E_{2}/T}\pm 1}, \tag{16}\]
where we have assumed equilibrium distributions and ignored the Pauli blocking/Bose enhancement of the products. Particle 1 is the neutrino with zero rest mass, and particle 2 is the scattering target with rest mass \(m\). The \(\pm 1\) in the distribution function for the second particle corresponds to either fermions (\(+\)) or bosons (\(-\)). Both \(\sigma\) and \(v_{\text{Mol}}\) are given in terms of the Mandelstam variable \(s\), the particle 2 mass \(m\), and the in-medium photon mass \(m_{\gamma}\).
With a change in variables and using \(v_{\text{Mol}}=(s-m^{2})/2E_{1}E_{2}\)[61], we can reduce the expression in Eq. (16) to a double integral. For fermions, that expression is
\[\langle\sigma v_{\text{Mol}}\rangle_{\text{FD}}=\frac{2g_{1}g_{2}\pi^{2}T^{6}} {(2\pi)^{6}n_{1}n_{2}}\int\limits_{\epsilon_{m}^{2}}^{\infty}d\epsilon_{s}\left( \epsilon_{s}-\epsilon_{m}^{2}\right)\sigma\int\limits_{\sqrt{\epsilon_{s}}}^{ \infty}d\epsilon_{+}\,\frac{1}{e^{\epsilon_{+}}-1}\left\{\beta+\ln\left[\frac {1+2e^{(-\beta-\epsilon_{+})/2}\cosh\left(\frac{\alpha}{2}\right)+e^{-\beta- \epsilon_{+}}}{1+2e^{(\beta-\epsilon_{+})/2}\cosh\left(\frac{\alpha}{2} \right)+e^{\beta-\epsilon_{+}}}\right]\right\}, \tag{17}\]
where the \(\epsilon\) notation denotes an energy quantity normalized by the appropriate power of \(T\), namely \(\epsilon_{s}=s/T^{2}\), \(\epsilon_{+}=E_{+}/T\), \(\epsilon_{m}=m/T\), and \(\epsilon_{\gamma}=m_{\gamma}/T\). We rewrite \(\sigma\) as a function of \(\epsilon_{s}\), \(\epsilon_{m}\), \(\epsilon_{\gamma}\) and \(T\). In Eq. (17), we have also defined new quantities for ease in writing
\[\alpha=\epsilon_{+}\frac{\epsilon_{m}^{2}}{\epsilon_{s}},\quad\beta=\frac{ \epsilon_{s}-\epsilon_{m}^{2}}{\epsilon_{s}}\sqrt{\epsilon_{+}^{2}-\epsilon_ {s}}. \tag{18}\]
Our expression in Eq. (17) is the same as Eq. (14) in Ref. [25] where we have corrected a few typographical errors.
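The double integral in Eq. (17) is well suited to direct quadrature. The following sketch reflects our own packaging, not the authors' code: `sigma_fn` stands for the cross section written as a function of \(\epsilon_{s}\), the overall prefactor \(2g_{1}g_{2}\pi^{2}T^{6}/[(2\pi)^{6}n_{1}n_{2}]\) is left to the caller, and the exponentials are grouped so that every argument is non-positive, keeping the integrand overflow-safe.

```python
# Minimal sketch of the Fermi-Dirac thermal average of Eq. (17) (our packaging,
# not the authors' code).  sigma_fn(eps_s) is the cross section as a function
# of eps_s = s/T^2; prefactor carries 2 g1 g2 pi^2 T^6 / ((2 pi)^6 n1 n2).
import numpy as np
from scipy.integrate import quad

def thermal_average_fd(sigma_fn, eps_m, prefactor=1.0):
    def inner(eps_s):
        def f(eps_p):
            alpha = eps_p * eps_m**2 / eps_s
            beta = (eps_s - eps_m**2) / eps_s * np.sqrt(eps_p**2 - eps_s)
            # 2 e^{(-beta-eps_p)/2} cosh(alpha/2) expanded so that every
            # exponent is <= 0 (alpha <= eps_p and alpha + beta <= eps_p):
            num = (1.0 + np.exp((alpha - beta - eps_p) / 2)
                       + np.exp((-alpha - beta - eps_p) / 2)
                       + np.exp(-beta - eps_p))
            den = (1.0 + np.exp((alpha + beta - eps_p) / 2)
                       + np.exp((-alpha + beta - eps_p) / 2)
                       + np.exp(beta - eps_p))
            return (beta + np.log(num / den)) / (np.exp(eps_p) - 1.0)
        val, _ = quad(f, np.sqrt(eps_s), np.inf)
        return (eps_s - eps_m**2) * sigma_fn(eps_s) * val
    outer, _ = quad(inner, eps_m**2, np.inf)
    return prefactor * outer

# Toy usage with a constant cross section (illustration only):
print(thermal_average_fd(lambda eps_s: 1.0, eps_m=1.0))
```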
For bosons, the thermal average for elastic scattering is
\[\langle\sigma v_{\text{Mol}}\rangle_{\text{BE}}=\frac{2g_{1}g_{2}\pi^{2}T^{6}} {(2\pi)^{6}n_{1}n_{2}}\int\limits_{\epsilon_{m}^{2}}^{\infty}d\epsilon_{s} \left(\epsilon_{s}-\epsilon_{m}^{2}\right)\sigma\int\limits_{\sqrt{\epsilon_{ s}}}^{\infty}d\epsilon_{+}\,\frac{1}{e^{\epsilon_{+}}+1}\ln\left[\frac{\sinh \left(\frac{\alpha}{2}\right)+\sinh\left(\frac{\epsilon_{+}+\beta}{2}\right)} {\sinh\left(\frac{\alpha}{2}\right)+\sinh\left(\frac{\epsilon_{+}-\beta}{2} \right)}\right], \tag{19}\]
with the same notation as Eq. (17).
### Annihilation Scattering
For the annihilation channels, the expression for \(\langle\sigma v_{\rm Mol}\rangle\) is the same for either boson or fermion pairs with mass \(m\), as we average over the initial neutrino-antineutrino distributions. The result is the same expression as Eq. (10) except with a different threshold value of \(s\) and massless reactants
\[\langle\sigma v_{\rm Mol}\rangle_{\rm ann}=\frac{4g_{1}g_{2}\pi^{2}T^{6}}{(2 \pi)^{6}n_{1}n_{2}}\int\limits_{4\epsilon_{m}^{2}}^{\infty}d\epsilon_{s}\, \epsilon_{s}\,\sigma\int\limits_{\sqrt{\epsilon_{s}}}^{\infty}d\epsilon_{+}\, \frac{1}{e^{\epsilon_{+}}-1}\ln\left[\frac{\cosh\left(\frac{\epsilon_{+}+\beta}{ 4}\right)}{\cosh\left(\frac{\epsilon_{+}-\beta}{4}\right)}\right], \tag{12}\]
where \(\beta=\sqrt{\epsilon_{+}^{2}-\epsilon_{s}}\).
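A numerical sketch of the annihilation average, with the same packaging conventions as the elastic-scattering sketch above (the threshold moves to \(\epsilon_{s}\geq 4\epsilon_{m}^{2}\) and the kernel becomes the cosh ratio, written here in an overflow-safe form):

```python
# Minimal sketch of the annihilation average; the prefactor
# 4 g1 g2 pi^2 T^6 / ((2 pi)^6 n1 n2) is again left to the caller.
import numpy as np
from scipy.integrate import quad

def thermal_average_ann(sigma_fn, eps_m, prefactor=1.0):
    def inner(eps_s):
        def f(eps_p):
            beta = np.sqrt(eps_p**2 - eps_s)
            # ln[cosh((e+b)/4)/cosh((e-b)/4)] = b/2 + ln1p(e^{-(e+b)/2})
            #                                       - ln1p(e^{-(e-b)/2}):
            kern = (beta / 2.0
                    + np.log1p(np.exp(-(eps_p + beta) / 2.0))
                    - np.log1p(np.exp(-(eps_p - beta) / 2.0)))
            return kern / (np.exp(eps_p) - 1.0)
        val, _ = quad(f, np.sqrt(eps_s), np.inf)
        return eps_s * sigma_fn(eps_s) * val
    outer, _ = quad(inner, 4.0 * eps_m**2, np.inf)
    return prefactor * outer

print(thermal_average_ann(lambda eps_s: 1.0, eps_m=1.0))  # toy usage
```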
## Appendix C Treatment of Quark-Hadron Transition in the Early Universe
We have considered a range of models of anomalous magnetic moments which include decoupling of the inactive Dirac states in the \(T\sim 100\,\)MeV range. Decoupling in this range is complicated by the transition from free quarks and gluons to bound hadrons in an expanding and cooling universe. As a result, we model this epoch using a smooth crossover from a quark-gluon equation of state to one dominated by hadrons.
At high temperatures, we approximate the quark-gluon (\(qg\)) component as an ideal gas with negligible chemical potential. The \(qg\) component includes the six quark and gluon degrees of freedom with appropriate degeneracy factors. Conversely, at low temperatures, we approximate the hadron (\(h\)) component as an ideal gas with negligible chemical potential as well. We use the lightest hadrons shown in Table 2. The next heaviest hadrons after the \(K^{*0}(896)\) states are protons and neutrons. We have verified that excluding those baryons from the hadron component does not alter any of our results.
We use the combination of the \(qg\) and \(h\) components to calculate thermodynamic quantities and the Debye screening length. We denote the pressure, entropy density, number density, and energy density for the \(qg\) component as \(P_{qg}\), \(s_{qg}\), \(n_{qg}\) and \(\rho_{qg}\), respectively. For the hadrons, we replace the \(qg\) subscript with an \(h\). To weight the contributions from the two seas when both components are present, we use a switching function following a prescription from Ref. [27]
\[S(T,\mu) = \exp[-\theta(T,\mu)] \tag{13}\] \[\theta(T,\mu) = \left[\left(\frac{T}{T_{0}}\right)^{r}+\left(\frac{\mu}{\mu_{0}} \right)^{r}\right]^{-1}. \tag{14}\]
The switching function uses data calculated with lattice QCD [96] to fit the parameters \(T_{0},\mu_{0}\), and \(r\). We use \(r=4\), \(T_{0}=145.33\,\)MeV, and \(\mu_{0}=3\pi T_{0}\) from the first row of Table 1 in Ref. [27]. \(\mu\) is the chemical potential.
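A minimal implementation of the switching function with the quoted fit parameters shows the expected crossover behaviour; the sample temperatures below are our own choices for illustration.

```python
# A minimal sketch of the switching function above, with the fitted
# parameters quoted in the text: r = 4, T0 = 145.33 MeV, mu0 = 3*pi*T0.
import numpy as np

R_FIT, T0 = 4.0, 145.33            # MeV
MU0 = 3.0 * np.pi * T0             # MeV

def switching(T, mu=0.0):
    """S(T, mu) = exp(-theta), theta = [(T/T0)^r + (mu/mu0)^r]^(-1)."""
    theta = 1.0 / ((T / T0)**R_FIT + (mu / MU0)**R_FIT)
    return np.exp(-theta)

# S -> 0 well below T0 (hadrons dominate) and -> 1 above it (quarks/gluons):
for T in (50.0, 145.33, 300.0):    # MeV
    print(T, switching(T))         # ~1e-31, exp(-1) ~ 0.37, ~0.95
```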
According to the procedure in Ref. [27], we apply the switching function directly to the pressure
\[P_{qgh}=S(T,\mu)P_{qg}+[1-S(T,\mu)]P_{h}, \tag{15}\]
where \(P_{qgh}\) is the total pressure supplied by the quarks, gluons, and hadrons. Figure 14 shows \(P_{qgh}/T^{4}\) as a function of \(T\) for \(\mu=0\). The increase in \(P_{qgh}\) between \(T=100\,\)MeV and \(T=200\,\)MeV is due to the appearance of the \(qg\) degrees of freedom and the concomitant disappearance of the \(h\) degrees of freedom.
\begin{table}
\begin{tabular}{c c c c} Name & mass (MeV) & charge & degeneracy \\ \hline \(\pi^{0}\) & 135.0 & 0 & 1 \\ \(\pi^{+}\) & 140.0 & 1 & 1 \\ \(K^{+}\) & 494.0 & 1 & 1 \\ \(K^{0}\) & 498.0 & 0 & 2 \\ \(\eta^{0}\) & 548.0 & 0 & 1 \\ \(\rho^{0}\) & 775.0 & 0 & 3 \\ \(\rho^{+}\) & 775.0 & 1 & 3 \\ \(\omega^{0}\) & 783.0 & 0 & 3 \\ \(K^{*+}(892)\) & 892.0 & 1 & 3 \\ \(K^{*0}(896)\) & 896.0 & 0 & 6 \\ \end{tabular}
\end{table}
Table 2: Hadrons used in the early universe for this work. The first column is the name/symbol of the particle. The second, third, and fourth columns are the mass (MeV), charge, and degeneracy, respectively. All positively charged hadrons have negatively charged partners. All particles are bosons.
The expressions for \(s\), \(n\), and \(\rho\) follow from derivatives of the pressure
\[s_{qgh} =Ss_{qg}+(1-S)s_{h}+S\frac{r\theta^{2}}{T}\left(\frac{T}{T_{0}} \right)^{r}(P_{qg}-P_{h}), \tag{40}\] \[n_{qgh} =Sn_{qg}+(1-S)n_{h}+S\frac{r\theta^{2}}{\mu}\left(\frac{\mu}{\mu_{ 0}}\right)^{r}(P_{qg}-P_{h}),\] (41) \[\rho_{qgh} =Ts_{qgh}-P_{qgh}+\mu n_{qgh}. \tag{42}\]
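Equation (40) is simply the thermodynamic identity \(s=\partial P/\partial T\) applied to the blended pressure of Eq. (15), since \(dS/dT=S\,r\theta^{2}(T/T_{0})^{r}/T\) at \(\mu=0\). A quick numerical sketch with toy power-law component pressures (our own placeholders for the ideal-gas expressions) confirms the bookkeeping:

```python
# Numerical sketch checking that the blended entropy density of Eq. (40)
# equals dP_qgh/dT at mu = 0.  The power-law component pressures are toy
# placeholders (our assumption), standing in for the ideal-gas expressions.
import numpy as np

R_FIT, T0 = 4.0, 145.33                       # fit parameters (MeV)
theta = lambda T: (T / T0)**(-R_FIT)          # theta(T, mu=0)
S = lambda T: np.exp(-theta(T))               # switching function

P_qg, s_qg = lambda T: 5.0 * T**4, lambda T: 20.0 * T**3   # toy quark-gluon gas
P_h,  s_h  = lambda T: 0.5 * T**4, lambda T:  2.0 * T**3   # toy hadron gas

def P_blend(T):
    return S(T) * P_qg(T) + (1.0 - S(T)) * P_h(T)

def s_blend(T):   # Eq. (40) at mu = 0
    extra = S(T) * R_FIT * theta(T)**2 / T * (T / T0)**R_FIT * (P_qg(T) - P_h(T))
    return S(T) * s_qg(T) + (1.0 - S(T)) * s_h(T) + extra

T, h = 160.0, 1e-4
print(s_blend(T), (P_blend(T + h) - P_blend(T - h)) / (2 * h))  # agree
```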
When calculating the inverse square of the Debye length in Eq. (2), the relevant quantity is the derivative of the number density with respect to \(\mu\). We would need to take the derivative of Eq. (41) with respect to \(\mu\) to calculate \(m_{\gamma}^{2}\) during the QHT with the switching function. However, in the \(CP\)-symmetric conditions of the early universe, all derivatives of the switching function with respect to \(\mu\) are zero for \(\mu=0\). Hence, the expression for the contribution to \(m_{\gamma}^{2}\) from the quark-gluon-hadron components are
\[m_{\gamma,\,qgh}^{2}=4\pi\alpha\left\{S(T,\mu=0)\sum_{j}q_{j}^{2}\frac{\partial }{\partial\mu}[n_{j}^{(-)}-n_{j}^{(+)}]+[1-S(T,\mu=0)]\sum_{k}q_{k}^{2}\frac{ \partial}{\partial\mu}[n_{k}^{(-)}-n_{k}^{(+)}]\right\}, \tag{43}\]
where the first summation is over quark pairs and the second summation is over charged hadron pairs.
|
2307.14485 | The South African Software Industry as a Key Component of Economic
Development: Pipedream or Possibility | The Information and Communication sector has undoubtedly played a pivotal
role in changing the way people live nowadays. Almost every area of our lives
is affected by the presence and the use of the new information and
communication technologies. In this regard, many researchers' attention has
been attracted by the influence or the significant impact of these technologies
on economic growth and development. Although the history of South Africa has
had some drawbacks that could constitute a big obstacle to the emergence of a
successful economic environment, the actual status of the country regarding its
economy and the role that it plays in Africa towards the rest of the African
countries is a vital example of an emerging economic force in Africa. This
paper examines the crucial role that ICT has played and is still playing in the
South African economy growth and more specifically the significance of the
economic effects of the software industry. It makes use of the framework used
by Heavin et al. (2003) to investigate the Irish software industry in order to
analyze the impact of endogenous factors -- national, enterprise and individual
-- on the software industry and its implication on the economic growth in South
Africa. | Patrick Mukala | 2023-07-15T22:31:18Z | http://arxiv.org/abs/2307.14485v1 | The South African Software Industry as a Key Component of Economic Development: Pipedream or Possibility?
###### Abstract
The Information and Communication sector has undoubtedly played a pivotal role in changing the way people live nowadays. Almost every area of our lives is affected by the presence and the use of the new information and communication technologies. In this regard, many researchers' attention has been attracted by the significant impact of these technologies on economic growth and development. Although the history of South Africa has had some drawbacks that could constitute a big obstacle to the emergence of a successful economic environment, the actual status of the country regarding its economy, and the role that it plays in Africa towards the rest of the African countries, is a vital example of an emerging economic force in Africa. This paper examines the crucial role that ICT has played and is still playing in the South African economy's growth and, more specifically, the significance of the economic effects of the software industry. It makes use of the framework used by Heavin et al. (2003) to investigate the Irish software industry in order to analyze the impact of endogenous factors (national, enterprise and individual) on the software industry and its implication for economic growth in South Africa.
**Keywords**: South African Software Industry, economic growth, ICT sector, government intervention, exogenous (external) factors, endogenous (internal) factors.
## I Introduction
The survival of any country in times of crisis, and its drive toward global development, are critical concerns for any government in any nation worldwide. A country's economy constitutes an enormous part of its potential for development and for the well-being of its population. Supporting the economy, in turn, requires deep and careful management of the resources at the country's disposal, as well as contributions from other vital areas of the country's assets, such as industry. The South African economy constitutes the main focal point of this paper; the importance and diversity of the industrial means in place to sustain the country's economic strength for the well-being of its 44 million people are of considerable interest to many researchers.
The Information Technology and telecommunications sectors are nowadays regarded as potential cornerstones of economic growth. The software industry in particular has had an increasing impact on the development and spectacular growth of economic sectors in many countries worldwide. Considered one of the major industries in the world, the software industry has successfully contributed to the development and significant growth of the economies of an impressive and growing number of countries such as Singapore, China, India and most definitely Ireland. The changes in Ireland's economic development are an inspiring example from which this paper draws a perspective on how placing Information Technology, and especially the software industry, at the centre of the government's economic policies can significantly boost economic growth and the country's industrial development.
In their analysis of the software industry's successful influence on Irish economic growth, Heavin et al. (2003) argue that any other country wishing to strategically adopt the software industry as a key component of economic development should understand the interaction between policy and socio-cultural factors. This implies that countries aiming to achieve the same results, or to grow their economies, need a better comprehension of the vital factors that have critically influenced the
|
2301.08322 | Gravitational Baryogenesis: Problems and Possible Resolution | The coupling of baryonic current to the derivative of the curvature scalar,
$R$, inherent to gravitational baryogenesis (GBG), leads to a fourth order
differential equation of motion for $R$ instead of the algebraic one of General
Relativity (GR). The fourth-order differential equation is generically
unstable. We consider a possible mechanism of stabilization of GBG by
modification of gravity, introducing an $R^2$-term into the canonical action of
GR.
It is shown that this mechanism allows for stabilization of GBG with bosonic
and fermionic baryon currents. We have established the region of the model
parameters leading to stabilization of $R$. Still, the standard cosmology would
be noticeably modified. | E. Arbuzova, A. Dolgov, K. Dutta, R. Rangarajan | 2023-01-19T21:25:08Z | http://arxiv.org/abs/2301.08322v1 | # Gravitational Baryogenesis: Problems and Possible Resolution
###### Abstract
The coupling of baryonic current to the derivative of the curvature scalar, \(R\), inherent to gravitational baryogenesis (GBG), leads to a fourth order differential equation of motion for \(R\) instead of the algebraic one of General Relativity (GR). The fourth-order differential equation is generically unstable. We consider a possible mechanism of stabilization of GBG by modification of gravity, introducing an \(R^{2}\)-term into the canonical action of GR. It is shown that this mechanism allows for stabilization of GBG with bosonic and fermionic baryon currents. We have established the region of the model parameters leading to stabilization of \(R\). Still, the standard cosmology would be noticeably modified.
## 1 Introduction
An excess of matter over antimatter in our Universe is crucial for our very existence and is well supported by various observations. The local Universe is clearly matter dominated. The amount of antimatter is very small and can be explained as the result of high energy collisions in space. On the other hand, matter and antimatter seem to have similar properties; therefore, we could expect a matter-antimatter symmetric universe. The existence of large regions of antimatter in our neighbourhood would produce high energy radiation created by matter-antimatter annihilation on the boundaries between matter and antimatter domains, which is not observed. A satisfactory model of our Universe should be able to explain the origin of the matter-antimatter asymmetry. An initial asymmetry at inflation could not explain the observed excess of matter over antimatter, because the energy density associated with the observed non-zero baryonic number density would not allow for sufficiently long inflation.
The term baryogenesis is used to indicate the generation of the excess of matter (baryons) over antimatter (antibaryons) or vice versa.
In 1967 Andrey Sakharov formulated three conditions, today known as the Sakharov Principles [1], necessary to produce a matter-antimatter asymmetry in an initially symmetric universe. These conditions are:
1. Non-conservation of baryonic number;
2. Breaking of symmetry between particles and antiparticles;
3. Deviation from thermal equilibrium.
However, not all three Sakharov Principles are strictly necessary. For example, spontaneous baryogenesis (SBG) and gravitational baryogenesis (GBG) do not demand an explicit C and CP violation and can proceed in thermal equilibrium. Moreover, these mechanisms are usually most efficient in thermal equilibrium.
The statement that the cosmological baryon asymmetry can be created by spontaneous baryogenesis in thermal equilibrium was mentioned in the original paper by A. Cohen and D. Kaplan in 1987 [2] and in the subsequent papers by A. Cohen, D. Kaplan, and A. Nelson [3, 4] (for a review see [5, 6, 7, 8]).
The term "spontaneous" is related to spontaneous breaking of underlying symmetry of the theory, which ensures the conservation of the total baryonic number in the unbroken phase. This symmetry is supposed to be spontaneously broken and in the broken phase the Lagrangian density acquires the term
\[{\cal L}_{SBG}=(\partial_{\mu}\theta)J^{\mu}_{B}\,, \tag{1}\]
where \(\theta\) is a (pseudo) Goldstone field, and \(J^{\mu}_{B}\) is the baryonic current of matter fields, which becomes non-conserved as a result of the symmetry breaking.
For a spatially homogeneous field, \(\theta=\theta(t)\), the Lagrangian is reduced to a simple form
\[{\cal L}_{SBG}=\dot{\theta}\,n_{B}\,,\ n_{B}\equiv J^{0}_{B}. \tag{2}\]
Here \(n_{B}\) is the baryonic number density, so it is tempting to identify \(\dot{\theta}\) with the chemical potential, \(\mu_{B}\), of the corresponding system. However, such an identification is questionable [9, 10]. It depends upon the representation chosen for the fermionic fields and is heavily based on the assumption \(\dot{\theta}\approx const\). In Ref. [9] the assumption \(\dot{\theta}\approx const\) is relaxed.
Stimulated by spontaneous baryogenesis the idea of gravitational baryogenesis was put forward [11]. The scenario of SBG was modified by the introduction of the coupling of the baryonic current to the derivative of the curvature scalar \(R\):
\[\mathcal{S}_{GBG}=-\frac{1}{M^{2}}\int d^{4}x\sqrt{-g}\,(\partial_{\mu}R)J^{ \mu}_{B}\,, \tag{3}\]
where \(g\) is the determinant of the space-time metric tensor and the mass parameter \(M\) determines the energy scale of baryogenesis. There are a lot of articles on the subject, and a partial list of references is included in Refs. [12, 13, 14, 15, 16]. According to these papers, the GBG mechanism can successfully explain the magnitude of the cosmological baryon asymmetry of the universe.
However, it was argued in Refs. [17, 18], that the back reaction of the created non-zero baryonic density on the space-time curvature leads to strong instability of the cosmological evolution. In this paper we show that the problem of stability can be solved by adding to the Hilbert-Einstein action the quadratic in curvature term generated by quantum corrections [19, 20]. The underlying gravitational action has the form:
\[S_{Grav}=-\frac{M_{Pl}^{2}}{16\pi}\int d^{4}x\,\sqrt{-g}\left(R-\frac{R^{2}}{6 M_{R}^{2}}\right), \tag{4}\]
where \(M_{Pl}=1.22\cdot 10^{19}\) GeV is the Planck mass, and we use the metric signature \((+,-,-,-)\). As is known, the \(R^{2}\)-term leads to excitation of the scalar degree of freedom, named scalaron, and \(M_{R}\) is the scalaron mass. In the very early universe the \(R^{2}\)-term can generate inflation [21], and density perturbations. The amplitude of the observed density perturbations demands that \(M_{R}=3\cdot 10^{13}\) GeV [22] if the scalaron is the inflaton. Otherwise \(M_{R}>3\cdot 10^{13}\) GeV is allowed. Below we presume that the scalaron is the inflaton.
## 2 Instability problem of gravitational baryogenesis
The essential ingredient of spontaneous baryogenesis is the coupling of the baryonic current to the derivative of the curvature scalar, \(\partial_{\mu}R\), Eq. (3). Taken over the canonical cosmological Friedmann-Lemaitre-Robertson-Walker background, this interaction can successfully fulfil the task of generating the proper value of the baryon asymmetry of the universe.
However, any curvature dependent term in the Lagrangian of the theory would modify the equations of General Relativity (GR). The modified GR equations have been analysed in Refs. [9, 18]. Since interaction (3) is not simply a term linear in the curvature multiplied by a constant, it leads to higher order equations describing the evolution of gravitational fields. Higher order equations of motion are typically unstable with respect to small perturbations. According to the results of Refs. [9, 18], this indeed happens in the
framework of the SBG scenario, and the characteristic time of the exponential instability is much shorter than the cosmological time. This creates a serious problem for the realisation of the SBG mechanism.
In this work we consider a possible stabilisation of SBG and prove that it can be realised, but the resulting cosmological model suffers from too large a value of \(R\), much larger than that in the classical Friedmann cosmology. Possible ways to cure this shortcoming are mentioned.
## 3 Stabilisation of gravitational baryogenesis in modified gravity
### Bosonic case.
Let us first consider the case when baryonic number is carried by a complex scalar field \(\phi\)[17]. The total action has the form:
\[S_{tot}[\phi]=-\int d^{4}x\,\sqrt{-g}\left[\frac{M_{Pl}^{2}}{16 \pi}\left(R-\frac{R^{2}}{6M_{R}^{2}}\right)+\frac{1}{M^{2}}(\partial_{\mu}R)J _{(\phi)}^{\mu}-g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi^{*}+U(\phi, \phi^{*})\right]\] \[+S_{matt}\, \tag{5}\]
where \(U(\phi,\phi^{*})\) is the potential of field \(\phi\) and \(S_{matt}\) is the matter action which does not include the field \(\phi\). In Eq. (5) \(R(t)\) is the classical curvature field, while \(\phi(\vec{x},t)\) is the quantum operator of light scalar particles.
We assume that the potential \(U(\phi,\phi^{*})\) is not invariant with respect to phase transformation \(\phi\rightarrow\exp{(iq\beta)}\phi\) and thus the corresponding current
\[J_{(\phi)}^{\mu}=iq\,g^{\mu\nu}(\phi^{*}\partial_{\nu}\phi-\phi \partial_{\nu}\phi^{*}) \tag{6}\]
is not conserved. Here \(q\) is the baryonic number of field \(\phi\). The non-conservation of the current is necessary for the proper performance of the model, otherwise \(S_{GBG}\) in Eq. (3) can be integrated away by parts.
Varying action (5) over \(g^{\mu\nu}\) we come to the following equations:
\[\frac{M_{Pl}^{2}}{16\pi}\left[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R- \frac{1}{3M_{R}^{2}}\left(R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R+g_{\mu\nu}D^{2}-D_ {\mu}D_{\nu}\right)R\right]\] \[-\frac{1}{M^{2}}\left[\left(R_{\mu\nu}-(D_{\mu}D_{\nu}-g_{\mu\nu} D^{2})\right)D_{\alpha}J_{(\phi)}^{\alpha}+\frac{1}{2}g_{\mu\nu}J_{(\phi)}^{ \alpha}\,D_{\alpha}R-\frac{1}{2}\left(J_{(\phi)\nu}D_{\mu}R+J_{(\phi)\mu}D_{ \nu}R\right)\right]\] \[-\frac{1}{2}\left(D_{\mu}\phi\,D_{\nu}\phi^{*}+D_{\nu}\phi\,D_{ \mu}\phi^{*}\right)+\frac{1}{2}g_{\mu\nu}\left[D_{\alpha}\phi\,D^{\alpha} \phi^{*}-U(\phi)\right]-(D_{\mu}\phi)(D_{\nu}\phi^{*})\] \[=\frac{1}{2}\,T_{\mu\nu}^{(matt)}\,, \tag{7}\]
where \(D_{\mu}\) is the covariant derivative in metric \(g_{\mu\nu}\) (of course, for scalars \(D_{\mu}=\partial_{\mu}\)) and \(T_{\mu\nu}^{(matt)}\) is the energy-momentum tensor of matter obtained from action \(S_{matt}\).
Taking the trace of equation (7) with respect to \(\mu\) and \(\nu\) and changing sign we obtain:
\[\frac{M_{Pl}^{2}}{16\pi}\,\left(R+\frac{1}{M_{R}^{2}}D^{2}R\right)+ \frac{1}{M^{2}}\left[(R+3D^{2})D_{\alpha}J^{\alpha}_{(\phi)}+J^{\alpha}_{(\phi)} \,D_{\alpha}R\right]-D_{\alpha}\phi\,D^{\alpha}\phi^{*}+2U(\phi)\] \[=-\frac{1}{2}\,T^{(matt)} =0\,, \tag{8}\]
where \(T^{(matt)}=g^{\mu\nu}T^{(matt)}_{\mu\nu}\) is the trace of the energy-momentum tensor of matter. For the usual relativistic matter \(T^{(matt)}=0\), while for scalar field \(\phi\) the trace of the energy-momentum tensor is nonzero:
\[T^{\mu}_{\mu}(\phi)=-2D_{\alpha}\phi\,D^{\alpha}\phi^{*}+4U(\phi). \tag{9}\]
The equation of motion for field \(\phi\) is:
\[D^{2}\phi+\frac{\partial U}{\partial\phi^{*}}=-\frac{iq}{M^{2}}\left(2D_{\mu} R\,D^{\mu}\phi+\phi D^{2}R\right)\,. \tag{10}\]
According to definition (6) and Eq. (10), the current divergence is:
\[D_{\mu}J^{\mu}=\frac{2q^{2}}{M^{2}}\left[D_{\mu}R\,(\phi^{*}D^{\mu}\phi+\phi D ^{\mu}\phi^{*})+|\phi|^{2}D^{2}R\right]+iq\left(\phi\frac{\partial U}{\partial \phi}-\phi^{*}\frac{\partial U}{\partial\phi^{*}}\right)\,. \tag{11}\]
For homogeneous curvature scalar \(R(t)\) in spatially flat FLRW-metric
\[ds^{2}=dt^{2}-a^{2}(t)d{\bf r}^{2} \tag{12}\]
Eq. (8) is reduced to:
\[\frac{M_{Pl}^{2}}{16\pi}\,\left[R+\frac{1}{M_{R}^{2}}(\partial_{t }^{2}+3H\partial_{t})R\right]+\frac{1}{M^{2}}\left[(R+3\partial_{t}^{2}+9H \partial_{t})D_{\alpha}J^{\alpha}_{(\phi)}+\dot{R}\,J^{0}_{(\phi)}\right]\] \[+2U(\phi)-(D_{\alpha}\phi)(D^{\alpha}\phi^{*})=0. \tag{13}\]
where \(J^{0}_{(\phi)}\) is the baryonic number density of the \(\phi\)-field, \(H=\dot{a}/a\) is the Hubble parameter, and the divergence of the current is given by the expression:
\[D_{\alpha}J^{\alpha}_{(\phi)}=\frac{2q^{2}}{M^{2}}\left[\dot{R}\,(\phi^{*} \dot{\phi}+\phi\dot{\phi}^{*})+(\ddot{R}+3H\dot{R})\,\phi^{*}\phi\right]+iq \left(\phi\frac{\partial U}{\partial\phi}-\phi^{*}\frac{\partial U}{\partial \phi^{*}}\right)\,. \tag{14}\]
As we see in what follows, the last two terms in Eq. (13) do not have an essential impact on the cosmological instability found in Ref. [17] and will be disregarded below.
Let us note that the statement of exponential instability of \(R(t)\) [17] does not depend on the conservation or non-conservation of the current from the potential term \((\phi\partial U/\partial\phi-\phi^{*}\partial U/\partial\phi^{*})\) in Eq. (14). However, if the current from this term is conserved, then the baryon asymmetry is not generated. On the other hand, the term in square brackets in Eq. (14) does not lead to generation of the baryon asymmetry but leads to exponential instability of \(R(t)\). Below we ignore the last term of Eq. (14).
Performing thermal averaging of the normal ordered bilinear products of field \(\phi\) in the high temperature limit (see Appendix of Ref. [17]) in accordance with equations:
\[\langle\phi^{*}\phi\rangle=\frac{T^{2}}{12}\,,\quad\langle\phi^{*}\dot{\phi}+ \dot{\phi}^{*}\phi\rangle=0\,, \tag{15}\]
and using Eq. (14) we obtain the fourth order differential equation:
\[\frac{M_{Pl}^{2}}{16\pi}\,\left(R+\frac{1}{M_{R}^{2}}D^{2}R \right)+\frac{q^{2}}{6M^{4}}\left(R+3\partial_{t}^{2}+9H\partial_{t}\right) \left[\left(\ddot{R}+3H\dot{R}\right)T^{2}\right]+\frac{1}{M^{2}}\dot{R}\, \langle J^{0}_{(\phi)}\rangle\] \[=-2U(\phi)+(D_{\alpha}\phi)(D^{\alpha}\phi^{*}). \tag{16}\]
Here \(\langle J^{0}_{(\phi)}\rangle\) is the thermal average value of the baryonic number density of \(\phi\), which is supposed to vanish initially but is created through the process of gravitational baryogenesis. This term can be neglected because the baryon asymmetry is normally quite small. Even if it is not small, it does not have a considerable impact on the explosive rise of the curvature scalar. As we see in what follows, the evolution of \(R(t)\) proceeds much faster than the cosmological evolution, that is, \(\ddot{R}/\dot{R}\gg H\). Consequently, we neglect the terms proportional to \(R\) with respect to the terms proportional to the second derivative of \(R\), \(\ddot{R}\). We also consider terms of the type \(HR\) as small with respect to \(dR/dt\). We can check a posteriori that this presumption holds for the obtained solution \(R(t)\).
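As a quick consistency check, inserting the averages of Eq. (15) into Eq. (14) gives \(D_{\alpha}J^{\alpha}=q^{2}T^{2}(\ddot{R}+3H\dot{R})/(6M^{2})\); the additional factor \(1/M^{2}\) visible in Eq. (16) then comes from the overall \(1/M^{2}\) prefactor of the corresponding term in Eq. (13). A short symbolic sketch:

```python
# Symbolic sketch: <phi* phi> = T^2/12 (and <phi* phidot + phidot* phi> = 0)
# in Eq. (14) yields D_alpha J^alpha = q^2 T^2 (Rddot + 3 H Rdot)/(6 M^2).
import sympy as sp

q, M, T, H, Rdot, Rddot = sp.symbols('q M T H Rdot Rddot')
div_J = 2 * q**2 / M**2 * (Rddot + 3 * H * Rdot) * T**2 / 12
target = q**2 * T**2 / (6 * M**2) * (Rddot + 3 * H * Rdot)
print(sp.simplify(div_J - target))   # 0
```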
Keeping only the dominant terms we simplify the above equation to:
\[\frac{d^{4}R}{dt^{4}}+\frac{\kappa^{4}}{M_{R}^{2}}\frac{d^{2}R}{dt^{2}}+\kappa ^{4}R=-T_{\mu}^{\mu}(\phi)\frac{M^{4}}{q^{2}T^{2}}, \tag{17}\]
where
\[\kappa^{4}=\frac{M_{Pl}^{2}M^{4}}{8\pi q^{2}T^{2}}\,. \tag{18}\]
While studying the instability of the solution we do not take into account the r.h.s. of Eq. (17) which does not depend upon R. Looking for the solution of Eq. (17) in the form \(R=R_{in}\exp(\lambda t)\), we obtain the characteristic equation:
\[\lambda^{4}+\frac{\kappa^{4}}{M_{R}^{2}}\lambda^{2}+\kappa^{4}=0 \tag{19}\]
with the eigenvalues \(\lambda\) defined by the expression:
\[\lambda^{2}=-\frac{\kappa^{4}}{2M_{R}^{2}}\pm\kappa^{2}\sqrt{\frac{\kappa^{4 }}{4M_{R}^{4}}-1}. \tag{20}\]
There is no instability, if \(\lambda^{2}<0\) and Eq. (17) has only oscillating solutions. It is realised, if \(\kappa^{4}>4M_{R}^{4}\). Using the expression in Eq. (18) for \(\kappa^{4}\) and taking \(M_{R}=3\cdot 10^{13}\) GeV we find the stability condition:
\[M>3\cdot 10^{4}\,\mbox{GeV}\left(\frac{q\,T}{\mbox{GeV}}\right)^{1/2}, \tag{21}\]
which is fulfilled for all interesting values of \(M\).
The value of \(\lambda\) depends upon the relation between \(\kappa\) and \(M_{R}\). If \(\kappa\sim M_{R}\) then the frequency of the oscillations of curvature is of the order of \(M_{R}\) and \(|\lambda|\sim M_{R}\). If \(\kappa\gg M_{R}\) then there are two possible solutions, \(|\lambda|\sim M_{R}\) and \(|\lambda|\sim\kappa(\kappa/M_{R})\gg M_{R}\). High frequency oscillations of \(R\) would lead to efficient gravitational particle production and, as a result, to damping of the oscillations.
### Fermionic case
In this section we consider the case when baryonic number is carried by fermions. The gravitational part of the action has the form as in Eq. (4), while the fermionic part of the action is the same as in Refs. [10, 18]:
\[{\cal L}[Q,L] = \frac{i}{2}(\bar{Q}\gamma^{\mu}\nabla_{\mu}Q-\nabla_{\mu}\bar{Q} \,\gamma^{\mu}Q)-m_{Q}\bar{Q}\,Q \tag{22}\] \[+ \frac{i}{2}(\bar{L}\gamma^{\mu}\nabla_{\mu}L-\nabla_{\mu}\bar{L} \gamma^{\mu}L)-m_{L}\bar{L}\,L\] \[+ \frac{g}{m_{X}^{2}}\left[(\bar{Q}\,Q^{c})(\bar{Q}L)+(\bar{Q}^{c} Q)(\bar{L}Q)\right]+\frac{d}{M^{2}}(\partial_{\mu}R)J^{\mu}+{\cal L}_{matt}\,,\]
where \(Q\) is the quark-like field with non-zero baryonic number \(B_{Q}\), \(Q^{c}\) is the charge-conjugated quark operator, \(L\) is another fermionic field (lepton), and \(\nabla_{\mu}\) is the covariant derivative of the Dirac fermions in the tetrad formalism. The quark current is \(J^{\mu}=B_{Q}\bar{Q}\gamma^{\mu}Q\) with \(\gamma^{\mu}\) being the curved space gamma-matrices, and \({\cal L}_{matt}\) describes all other forms of matter. The four-fermion interaction between quarks and leptons is introduced to ensure the necessary non-conservation of the baryon number, with \(m_{X}\) being a constant parameter with dimension of mass and \(g\) a dimensionless coupling constant. In the term describing the interaction of the baryonic current of fermions with the derivative of the curvature scalar, \(M\) is a constant parameter with dimension of mass and \(d=\pm 1\) is a dimensionless coupling constant introduced to allow for an arbitrary sign of the above expression.
Gravitational equations of motion with an account of \(R^{2}/M_{R}^{2}\)-term in analogy with Eq. (7) take the form:
\[\frac{M_{Pl}^{2}}{8\pi}\left[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R- \frac{1}{3M_{R}^{2}}\left(R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R+g_{\mu\nu}D^{2}-D _{\mu}D_{\nu}\right)R\right] \tag{23}\] \[= \frac{g_{\mu\nu}}{2}\frac{g}{m_{X}^{2}}\left[(\bar{Q}\,Q^{c})( \bar{Q}L)+(\bar{Q}^{c}Q)(\bar{L}Q)\right]\] \[+ \frac{i}{4}\left[\bar{Q}(\gamma_{\mu}\nabla_{\nu}+\gamma_{\nu} \nabla_{\mu})Q-(\nabla_{\nu}\bar{Q}\,\gamma_{\mu}+\nabla_{\mu}\bar{Q}\,\gamma _{\nu})Q\right]\] \[+ \frac{i}{4}\left[\bar{L}(\gamma_{\mu}\nabla_{\nu}+\gamma_{\nu} \nabla_{\mu})L-(\nabla_{\nu}\bar{L}\,\gamma_{\mu}+\nabla_{\mu}\bar{L}\,\gamma _{\nu})L\right]\] \[- \frac{2d}{M^{2}}\left[R_{\mu\nu}+g_{\mu\nu}D^{2}-D_{\mu}D_{\nu} \right]D_{\alpha}J^{\alpha}+\frac{d}{2M^{2}}\left(J_{\mu}\partial_{\nu}R+J_{ \nu}\partial_{\mu}R\right)+T^{matt}_{\mu\nu}\,.\]
Taking the trace of Eq. (23) with an account of fermion equations of motion we obtain:
\[-\frac{M_{Pl}^{2}}{8\pi}\,\left(R+\frac{1}{M_{R}^{2}}D^{2}R\right)=m_ {Q}\bar{Q}Q+m_{L}\bar{L}L+\frac{2g}{m_{X}^{2}}\left[(\bar{Q}\,Q^{c})(\bar{Q}L)+( \bar{Q}^{c}Q)(\bar{L}Q)\right]\] \[-\frac{2d}{M^{2}}(R+3D^{2})D_{\alpha}J^{\alpha}+T_{matt}\,, \tag{24}\]
where \(T_{matt}\) is the trace of the energy momentum tensor of all other fields. In the early universe when various species are relativistic, we can take \(T_{matt}=0\). The average expectation value of the quark-lepton interaction term proportional to \(g\) is also small, so the contribution of all matter fields may be neglected and hence the only term which remains in the r.h.s. of Eq. (24) is that proportional to \(D_{\alpha}J^{\alpha}\).
A higher order differential equation for \(R\) is obtained after we substitute the current divergence, \(D_{\alpha}J^{\alpha}\), calculated from the kinetic equation in the external field \(R\)[18], into Eq. (24). For the spatially homogeneous case
\[D_{\alpha}J^{\alpha}=(\partial_{t}+3H)n_{B}=I_{B}^{coll}, \tag{25}\]
where the collision integral, \(I_{B}^{coll}\), in the lowest order of perturbation theory is equal to:
\[I_{B}^{coll}=-3B_{Q}(2\pi)^{4}\int\,d\nu_{q_{1},q_{2}}\,d\nu_{ \bar{q}_{3},l_{4}}\delta^{4}(q_{1}+q_{2}-q_{3}-l_{4})\] \[\left[|A(q_{1}+q_{2}\rightarrow\bar{q}_{3}+l_{4})|^{2}f_{q_{1}}f_ {q_{2}}-|A(\bar{q}_{3}+l_{4}\to q_{1}+q_{2})|^{2}f_{\bar{q}_{3}}f_{l_{4}} \right]. \tag{26}\]
Here \(A(a\to b)\) is the amplitude of the transition from state \(a\) to state \(b\), \(B_{Q}\) is the baryonic number of the quark, \(f_{a}\) is the phase space distribution (the occupation number), and
\[d\nu_{q_{1},q_{2}}=\frac{d^{3}q_{1}}{2E_{q_{1}}(2\pi)^{3}}\,\frac{d^{3}q_{2}}{ 2E_{q_{2}}(2\pi)^{3}}, \tag{27}\]
where \(E_{q}=\sqrt{q^{2}+m^{2}}\) is the energy of particle with three-momentum \(q\) and mass \(m\). The element of phase space of final particles, \(d\nu_{\bar{q}_{3},l_{4}}\), is defined analogously.
We choose a representation of the quark operator \(Q\) for which the interaction of the baryonic current with the derivative of the curvature scalar in Eq. (22) vanishes but reappears in the quark-lepton interaction term:
\[\frac{2g}{m_{X}^{2}}\left[e^{-3idB_{Q}R/M^{2}}\,(\bar{Q}\,Q^{c})(\bar{Q}L)+e^ {3idB_{Q}R/M^{2}}\,(\bar{Q}^{c}Q)(\bar{L}Q)\right]. \tag{28}\]
We make the simplifying assumption that the evolution of \(R\) can be approximately described by the law
\[R(t)\approx R(t_{0})+(t-t_{0})\dot{R}. \tag{29}\]
We assume that \(\dot{R}(t)\) changes slowly on the characteristic time scale of the reactions that contribute to the collision integral (26), and so we can approximately take \(\dot{R}\approx const\).
According to the rules of quantum field theory the reaction probability is given by the square of the integral over space and time of the amplitude of the corresponding process. In
the case of a time-independent interaction this leads to energy conservation, \(\Sigma E_{in}=\Sigma E_{fin}\). If the interaction depends upon time, the energy is evidently non-conserved and in our case, e.g. for the reaction \(q_{1}+q_{2}\rightarrow\bar{q}_{3}+l_{4}\), the energy balance has the form:
\[E(q_{1})+E(q_{2})=E(q_{3})+E(l_{4})+3dB_{Q}\dot{R}/M^{2}. \tag{30}\]
In kinetic equilibrium the phase space distribution of fermions has the form
\[f=\frac{1}{e^{(E/T-\xi)}+1}\approx e^{-E/T+\xi}, \tag{31}\]
where \(\xi=\mu/T\) is the dimensionless chemical potential, different for quarks, \(\xi_{q}\), and leptons, \(\xi_{l}\). In the case of thermal equilibrium, the condition of conservation of chemical potentials is fulfilled, that is \(\Sigma\,\xi_{in}=\Sigma\,\xi_{fin}\). In particular it demands that chemical potentials of particles and antiparticles are equal in magnitude and have opposite signs: \(\xi=-\bar{\xi}\), as follows e.g. from the consideration of particle-antiparticle annihilation into different numbers of photons. If energy is not conserved, due to the time-dependent \(R(t)\), the conservation of chemical potentials is also broken, as we see in what follows.
We assume that \(\xi\ll 1\) and hence distribution (31) turns into:
\[f\approx e^{-E/T}(1+\xi). \tag{32}\]
We also assume that \(3d\,B_{Q}\dot{R}/(M^{2}\,T)\ll 1\) and correspondingly the balance of chemical potentials in equilibrium for the reactions \(q_{1}+q_{2}\leftrightarrow\bar{q}_{3}+l_{4}\) leads to:
\[3\xi_{q}-\xi_{l}-\frac{3d\,B_{Q}\dot{R}(t)}{M^{2}\,T}=0. \tag{33}\]
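For the reader's convenience, condition (33) can be checked directly: it is detailed balance, \(f_{q_{1}}f_{q_{2}}=f_{\bar{q}_{3}}f_{l_{4}}\), evaluated with the distributions (31) and the shifted energy balance (30):

\[2\xi_{q}-\frac{E_{1}+E_{2}}{T}=-\xi_{q}+\xi_{l}-\frac{E_{3}+E_{4}}{T},\qquad E_{1}+E_{2}-E_{3}-E_{4}=\frac{3dB_{Q}\dot{R}}{M^{2}},\]

which reduces to Eq. (33) after dividing the energy shift by \(T\).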
Following Ref. [18], we express
\[n_{B}\approx\frac{g_{s}B_{Q}}{6}\xi_{q}T^{3}, \tag{34}\]
where \(g_{s}\) is the number of quark spin states. Since we are studying an instability of \(R\) whose timescale is presumed to be much shorter than the expansion time of the Universe, we approximate
\[D_{\alpha}J^{\alpha}\approx\dot{n}_{B} \approx\frac{g_{s}B_{Q}}{6}\dot{\xi}_{q}T^{3} \tag{35}\] \[\approx\frac{g_{s}B_{Q}}{6}\dot{\xi}_{q}^{eq}T^{3}, \tag{36}\]
where \(\xi_{q}^{eq}\) is obtained from Eq. (33), using the conservation of the sum of baryonic and leptonic numbers, which implies \(\xi_{l}=-\xi_{q}/3\). Then
\[\xi_{q}^{eq}=\frac{9d\,B_{Q}\dot{R}(t)}{10M^{2}\,T}\,. \tag{37}\]
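Indeed, substituting \(\xi_{l}=-\xi_{q}/3\) into Eq. (33) gives

\[3\xi_{q}+\frac{\xi_{q}}{3}=\frac{10}{3}\,\xi_{q}=\frac{3d\,B_{Q}\dot{R}(t)}{M^{2}\,T},\]

from which Eq. (37) immediately follows.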
Substituting Eq. (37) in Eq. (36) and neglecting the \(\dot{T}\)-term, Eq. (24) gives the following fourth order differential equation for the curvature scalar:
\[\frac{d^{4}R}{dt^{4}}+\frac{\kappa_{f}^{4}}{M_{R}^{2}}\frac{d^{2}R}{dt^{2}}+\kappa_{f}^{4}R=0, \tag{38}\]
where
\[\kappa_{f}^{4}=\frac{5M_{Pl}^{2}M^{4}}{36\pi g_{s}B_{Q}^{2}T^{2}}\,. \tag{39}\]
Once again, we consider terms containing \(R\) as small with respect to the terms containing \(\ddot{R}\). The value of \(\kappa_{f}\) is only slightly numerically different from \(\kappa\) in Eq. (18) and has the same dependence upon the essential parameters, so the solutions of Eqs. (17) and (38) practically coincide.
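To make the comparison explicit, one may look for solutions of Eq. (38) in the form \(R\propto e^{\lambda t}\), which gives the biquadratic characteristic equation

\[\lambda^{4}+\frac{\kappa_{f}^{4}}{M_{R}^{2}}\,\lambda^{2}+\kappa_{f}^{4}=0,\qquad\lambda^{2}=\frac{\kappa_{f}^{4}}{2M_{R}^{2}}\left[-1\pm\sqrt{1-\frac{4M_{R}^{4}}{\kappa_{f}^{4}}}\right],\]

whose roots determine the oscillating or exponentially rising behavior of \(R(t)\), in full analogy with Eq. (17).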
## 4 Discussion
We have shown that the exponential instability of the curvature scalar discovered in Refs. [17, 18], inherent to the mechanism of spontaneous baryogenesis, can be successfully cured in modified gravity. A special form of gravity modification, the introduction of an \(R^{2}\)-term into the canonical Hilbert-Einstein action of General Relativity, was explored as a workable mechanism.
However, the stabilized asymptotic value of \(R\) is extremely large and, together with possibly successful baryogenesis, would still strongly perturb canonical cosmology. Possible ways out of this problem could be either a more complicated model of \(F(R)\) gravity or a proper account of the particle production created by high-frequency oscillations of \(R(t)\). Both options open interesting possibilities for future research.
|
2303.09645 | Development of a Voice Controlled Robotic Arm | This paper describes a robotic arm with 5 degrees-of-freedom (DOF) which is
controlled by human voice and has been developed in the Mechatronics
Laboratory, CUET. This robotic arm is interfaced with a PC by serial
communication (RS-232). Users' voice command is captured by a microphone, and
this voice is processed by software which is made by Microsoft visual studio.
Then the specific signal (obtained by signal processing) is sent to control
unit. The main control unit that is used in the robotic arm is a
microcontroller whose model no. is PIC18f452. Then Control unit drives the
actuators, (Hitec HS-422, HS-81) according to the signal or signals to give
required motion of the robotic arm. At present the robotic arm can perform a
set action like pick & pull, gripping, holding & releasing, and some other
extra function like dance-like movement, and can turn according to the voice
commands. | Akkas U. Haque, Humayun Kabir, S. C. Banik, M. T. Islam | 2023-03-16T20:53:44Z | http://arxiv.org/abs/2303.09645v1 | # Development of a Voice Controlled Robotic Arm
###### Abstract
This paper describes a robotic arm with 5 degrees-of-freedom (DOF) which is controlled by human voice and has been developed in the Mechatronics Laboratory, CUET. The robotic arm is interfaced with a PC by serial communication (RS-232). The user's voice command is captured by a microphone and processed by software developed in Microsoft Visual Studio. The specific signal (obtained by signal processing) is then sent to the control unit. The main control unit used in the robotic arm is a PIC18F452 microcontroller. The control unit drives the actuators (Hitec HS-422, HS-81) according to the signal or signals to give the required motion of the robotic arm. At present the robotic arm can perform set actions like pick & pull, gripping, and holding & releasing, plus some extra functions like dance-like movement, and can turn according to voice commands.
**Keywords**: Speech recognition; Artificial Neural Networks; PWM; Serial communication; Microcontroller interfacing; SAPI.
## 1 Introduction
Nowadays industries, service centers (hospitals), shopping centers, and household work depend heavily on robotics and automation. For example, in a surgery there are very few people besides the surgeon who actually contribute to the surgery. Most of the other people are there just to hand different tools and instruments to the surgeon, who is the one who actually does the surgery. Or, take the example of a mechanic. A mechanic almost always encounters situations in which he is forced to use a helper or assistant to do different things, like holding two pieces of a machine together while welding. It is thus seen that an extra workforce is required, a workforce that could otherwise be set to other tasks. This is where the voice controlled robotic arm comes in. A robotic arm that is voice controlled enables the user to have more control over whatever task he is doing and also eliminates the need for an unnecessary workforce. Also, the amount of stability and precision offered by the robot is an added advantage over human assistants.
That is why we envisioned a robot controlled by human voice commands that is versatile and can be used in a variety of different atmospheres and scenarios. We have successfully completed the first step toward achieving our goal. That is, our robot can now listen to vocal commands given to it and respond accordingly. The final plan involves the robot understanding common phrases from natural speech and acquiring the ability to work seamlessly and in perfect coordination with a person.
## 2 The Robotic Arm
To investigate the feasibility of a robot that operates on the basis of natural speech processing, we built a robotic arm that responds to basic commands given to it.
### Working
A microphone receives the voice commands and feeds them to the computer. The SAPI engine detects the commands given and matches them against the dictionary created earlier. Once the commands are decoded, the necessary coordinates are fed to the function that calculates the angles required at each joint by the inverse kinematics method. These angles are then converted to the corresponding on-times required for the Pulse Width Modulation (PWM) of the servos. This information is then fed to the microcontroller via the USART mode of communication. The microcontroller used is an 8-bit microcontroller, PIC18F452, from the Microchip family.
The microcontroller then sends the required pulses to the servos attached to each of the joints of the robot.
### Flow of Control
### Arm Overview
The robotic arm we used in this exploratory research is a small hobbyist device called the Lynx 6 by Lynxmotion. The arm has a total of five degrees of freedom (DOF): shoulder rotation, shoulder bend, elbow bend, wrist rotate, and wrist bend. A simple 2-prong gripper at the end of the arm is used to hold small objects. Figure 2 shows these controllable features. Although the arm has no feedback or sensors, it is still sufficient as a prototype arm for use in this proof-of-concept exploration.
The circuit board provided along with the package was replaced by one that we designed specifically for our purpose.
### Speech Recognition
The Speech Application Programming Interface, or SAPI, is an API developed by Microsoft to allow the use of speech recognition and speech synthesis within Windows applications. The SAPI engine is incorporated as part of the Visual Basic program, developed by us, which is responsible for decoding the vocal commands and sending the coordinates to the microcontroller. The SAPI engine is also responsible for the computer responding by means of speech. The version used in our research is SAPI 4.0, whose features include Voice Command, Voice Dictation, Direct Speech Recognition, and Direct Text-To-Speech.
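For illustration, the sketch below shows a modern Python analogue of this command-decoding stage, assuming the third-party SpeechRecognition package with the offline pocketsphinx engine. This is only an analogue, not the authors' implementation: the original system was written in Visual Basic against SAPI 4.0, and the phrases and target coordinates below are hypothetical.

```python
# Modern Python analogue of the command-decoding stage (NOT the original
# VB + SAPI 4.0 implementation); phrases and coordinates are hypothetical.
import speech_recognition as sr

# Hypothetical command dictionary mapping phrases to (a, b, theta1) targets.
COMMANDS = {
    "pick": (10.0, 2.0, 0.0),
    "release": (12.0, 5.0, 30.0),
    "home": (8.0, 8.0, 0.0),
}

def listen_for_command(recognizer, microphone):
    """Capture one utterance and return matching target coordinates (or None)."""
    with microphone as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    phrase = recognizer.recognize_sphinx(audio).lower().strip()  # offline engine
    return COMMANDS.get(phrase)

if __name__ == "__main__":
    print(listen_for_command(sr.Recognizer(), sr.Microphone()))
```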
### Artificial Neural Network
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. Artificial neural networks are made up of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks.
Every neural network possesses knowledge which is contained in the values of the connection weights. Modifying the knowledge stored in the network as a function of experience implies a learning rule for changing the values of the weights.
Information is stored in the weight matrix W of a neural network. Learning is the determination of the weights. Following the way learning is performed, we can distinguish two major categories of neural networks:
Fig. 1: Flow of control
Fig. 3: Artificial Neural Network
Fig. 2: Lynxmotion Lynx 6
**Fixed networks** in which the weights cannot be changed. In such networks, the weights are fixed a priori according to the problem to solve.
**Adaptive networks** which are able to change their weights.
All learning methods used for adaptive neural networks can be classified into two major categories:
**Supervised learning** which incorporates an external teacher, so that each output unit is told what its desired response to input signals ought to be. During the learning process global information may be required. Paradigms of supervised learning include error-correction learning, reinforcement learning and stochastic learning.
**Unsupervised learning** uses no external teacher and is based upon only local information. It is also referred to as self-organization, in the sense that it self-organizes data presented to the network and detects their emergent collective properties. Paradigms of unsupervised learning are Hebbian learning and competitive learning.
The SAPI engine uses an adaptive ANN of the first learning category, i.e., supervised learning. The computer uses the engine to collect a large amount of speech data from the user and trains itself to recognize the sounds it receives and interpret the words correctly. The training generates a dictionary of words and phrases that is continually updated with more training. Once ample training is given to the SAPI engine, it is able to recognize the words or commands given to it easily. After training the engine, we wrote a program in Microsoft Visual Basic that uses this engine to communicate with the user. At present, our program can recognize around 100 different commands, respond vocally, and do the work it was designed to do, i.e., control the robotic arm.
### Serial Communication
We chose serial communication (RS-232) to interface our robotic arm with the computer. Serial communication was chosen because most computers have the necessary hardware and because it can be used over long distances without much loss in signal strength.
**Serial communication** is the process of sending data one bit at a time, sequentially, over a communication channel or computer bus. **RS-232** (Recommended Standard 232) is a standard for serial binary single-ended data and control signals connecting a _DTE_ (Data Terminal Equipment) and a _DCE_ (Data Circuit-terminating Equipment). It is commonly used in computer serial ports. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The signal levels of RS-232 are not suitable for use in TTL (**transistor-transistor logic**) compatible digital logic circuits, so a **MAX232** integrated circuit is used to convert RS-232 signals into signals suitable for TTL-compatible logic. The MAX232 is a dual driver/receiver and typically converts the RX, TX, CTS and RTS signals. It is helpful to understand what occurs to the voltage levels: when a MAX232 IC receives a TTL level to convert, it changes a TTL logic 0 to between +3 and +15 V and a TTL logic 1 to between -3 and -15 V, and vice versa when converting from RS-232 to TTL. RS-232 voltage levels thus lie between 3 V and 15 V in magnitude, with opposite polarities for the two logic states.
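A minimal sketch of the PC side of such a link is shown below, assuming the third-party pySerial package. The port name, baud rate, and the one-byte-per-joint framing are our illustrative assumptions; the paper specifies only RS-232 through a MAX232 level shifter.

```python
# Sketch of the PC-side serial link (assumes pySerial; framing is assumed).
import serial

def send_joint_angles(port, angles_deg):
    """Send one byte per joint angle (0-180 degrees) over RS-232."""
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        frame = bytes(int(a) % 256 for a in angles_deg)  # crude clamping
        link.write(frame)

if __name__ == "__main__":
    send_joint_angles("COM1", [90, 45, 120, 30, 90])  # 5-DOF arm
```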
### Pulse-Width Modulation
Pulse-width modulation (PWM) is a commonly used technique for controlling power to inertial electrical devices, made practical by modern electronic power switches. We used PWM to control the servo motors (HS-422 and HS-81). PWM, as the name suggests, is a method of controlling devices by varying the length of the pulse in a given time period.
Duty Cycle describes the proportion of 'on' time to the regular interval or 'period' of time; a low duty cycle corresponds to low power, because the power is off for most of the time. Duty cycle is expressed in percent, 100% being fully on.
Duty Cycle, \(D=\frac{t}{T}\)
Where,
t = on state or high state
T = the period of the function.
We chose the servos because they have the control circuits built in, have the highest torque-to-weight ratio, and can be precisely controlled by PWM. On the other hand, stepper motors and DC motors were both out of the question due to weight constraints.
In servo motor control, the period of the PWM signal is 20 ms, and the on-time varies from 1 ms for 0 degrees to 2 ms for 180 degrees.
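These numbers translate directly into a small helper; a minimal sketch, assuming a linear angle-to-pulse mapping:

```python
# Servo PWM mapping described above: 20 ms period, 1 ms on-time at
# 0 degrees, 2 ms on-time at 180 degrees.
def servo_pulse(angle_deg, period_ms=20.0):
    """Return (on_time_ms, duty_cycle) for a commanded joint angle."""
    on_time = 1.0 + angle_deg / 180.0  # linear map: 1 ms .. 2 ms
    return on_time, on_time / period_ms

print(servo_pulse(90))  # (1.5, 0.075): a 1.5 ms pulse, 7.5% duty cycle
```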
### Inverse Kinematics
The kinematics solution of any robot manipulator consists of two sub-problems, forward and inverse kinematics. Forward kinematics determines where the robot's manipulator hand will be if all joint variables are known, whereas inverse kinematics is used to find what each joint variable must be if the desired position and orientation of the end-effector is pre-determined.
The kinematics applied in this research is inverse kinematics (IK). As shown in the diagram, the x and y coordinates of the end effector on the plane along a specified angle of the base are known. The angles at each joint are then calculated from the equations derived by the IK method.
#### 2.8.1 Geometric Solution for IK Equations of the Arm
Let the coordinates of the end effector be (a, b, \(\theta_{1}\)), where a and b are the coordinates of the effector in the plane offset at an angle \(\theta_{1}\) from the xy plane.
From the figure,
\[\begin{array}{l}a=l_{1}\cos\theta_{2}+l_{2}\cos\varphi\\ b=l_{1}\sin\theta_{2}+l_{2}\sin\varphi\end{array}\]
Rearranging, we get,
\[\cos(\theta_{2}-\varphi)=\frac{\text{a}^{2}+\text{b}^{2}-({\text{l}_{1}}^{2}+ {\text{l}_{2}}^{2})}{2\text{l}_{1}\text{l}_{2}}\]
Taking \(\gamma=\theta_{2}-\varphi\), we get,
\[\gamma=\arccos\left(\frac{\text{a}^{2}+\text{b}^{2}-({\text{l}_{1}}^{2}+{ \text{l}_{2}}^{2})}{2\text{l}_{1}\text{l}_{2}}\right)\]
Taking,
\[\alpha=\arctan\left(\frac{\text{b}}{\text{a}}\right)\]
Now applying sine rule for the triangle OAB, we get,
Figure 5: Duty cycle
Figure 6: Servomotor control PWM diagram
Figure 7: Geometric Solution for IK equations
\[\frac{\sin\left(\theta_{2}-\alpha\right)}{\mathrm{l}_{2}}=\frac{\sin\left(\alpha- \varphi\right)}{\mathrm{l}_{1}}=\frac{\sin\left(\theta_{3}\right)}{\sqrt{\mathrm{a }^{2}+\mathrm{b}^{2}}}\]
Solving we get the values of \(\theta_{2},\theta_{3}\) and \(\theta_{4}\) as,
\[\theta_{2} =\alpha+\arcsin\left(\frac{\mathrm{l}_{2}\sin\ \gamma}{\sqrt{ \mathrm{a}^{2}+\mathrm{b}^{2}}}\right)\] \[\theta_{3} =\pi-\gamma\] \[\theta_{4} =\pi+\arcsin\left(\frac{\mathrm{l}_{1}\sin\ \gamma}{\sqrt{ \mathrm{a}^{2}+\mathrm{b}^{2}}}\right)-\alpha\]
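A direct transcription of these equations into Python is sketched below; the link lengths and target point in the usage line are illustrative, and only the reachability implicit in \(\arccos\) is checked.

```python
# Geometric IK solution derived above. Inputs: target (a, b) in the plane
# rotated by theta1 about the base, and link lengths l1, l2.
import math

def inverse_kinematics(a, b, l1, l2):
    """Return (theta2, theta3, theta4) in radians for end effector (a, b)."""
    r2 = a * a + b * b
    # math.acos raises ValueError when the target is out of reach
    gamma = math.acos((r2 - (l1 * l1 + l2 * l2)) / (2.0 * l1 * l2))
    alpha = math.atan2(b, a)
    r = math.sqrt(r2)
    theta2 = alpha + math.asin(l2 * math.sin(gamma) / r)
    theta3 = math.pi - gamma
    theta4 = math.pi + math.asin(l1 * math.sin(gamma) / r) - alpha
    return theta2, theta3, theta4

# Illustrative values only (cm and degrees are arbitrary here).
print([round(math.degrees(t), 1) for t in inverse_kinematics(10.0, 5.0, 8.0, 6.0)])
```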
## 3 Conclusions
The robotic arm was tested with different users and, with sufficient training, was able to respond to commands with an accuracy of almost 90 percent. Based on our study, we conclude that it is possible to create robots for industrial applications that can interact with humans verbally and help users do their required tasks quickly and efficiently. At present our robotic arm can perform gripping, holding and releasing objects, and dancing according to user voice commands. It can also carry on conversations with users. This robotic arm could also be developed without PC interfacing by using a DSP (digital signal processing) module or the HM-2007 and SPO-256 ICs.
|
2304.13862 | Microbial Corrosion Prevention by Citrobacter sp. Biofilms | Microbiologically influenced corrosion (MIC) compromises the integrity of
many technologically relevant metals. Protective coatings based on synthetic
materials pose potential environmental impacts. Here, we report a MIC resistant
coating based on a biofilm matrix of Citrobacter sp. strain MIC21 on underlying
copper (Cu) surfaces. Three identical corrosion cells varying in the type of
working electrode (annealed Cu, 29.5% coldworked, and 56.2% coldworked Cu) were
used. Graphite plate and Ag/AgCl served as counter and reference electrodes,
respectively. The working electrolyte was based on lactate-C media along with
an inocula consisting of Oleidesulfovibrio alaskensis strain G20 and
Citrobacter sp. strain MIC21. Passivating effect of the co-cultured biofilm
matrix was observed in the form of an ennoblement effect. Tests based on
sequencing, microscopy, and spectroscopy revealed the formation of a compact
biofilm matrix dominated by strain MIC21 cells, exopolymers, and insoluble
precipitates. This matrix displayed elastic modulus (a measure of rigidity) as
high as 0.8 Gpa and increased corrosion resistance by ~10-fold. Interestingly,
strain MIC21 has the capacity to inhibit the undesirable growth of aggressive
strain G20. Additional corrosion tests also substantiated the passivation
effects of strain MIC21. We provide mechanistic insight into the underlying
reasons responsible for corrosion prevention behavior of the biofilm matrix. | Pawan Sigdel, Ananth Kandadai, Kalimuthu Jawaharraj, Bharat Jasthi, Etienne Gnimpieba, Venkataramana Gadhamshetty | 2023-04-26T23:11:14Z | http://arxiv.org/abs/2304.13862v1 | # Microbial Corrosion Prevention by _Citrobacter_ sp. Biofilms
###### Abstract
**Microbiologically influenced corrosion (MIC) compromises the integrity of many technologically relevant metals. Protective coatings based on synthetic materials pose potential environmental impacts. Here, we report a MIC resistant coating based on a biofilm matrix of** _Citrobacter_ **sp. strain MIC21 on underlying copper (Cu) surfaces. Three identical corrosion cells varying in the type of working electrode (annealed Cu, 29.5% coldworked, and 56.2% coldworked Cu) were used. Graphite plate and Ag/AgCl served as counter and reference electrodes, respectively. The working electrolyte was based on lactate-C media along with an inocula consisting of** _Oleidesulfovibrio alaskensis_ **strain G20 and** _Citrobacter_ **sp. strain MIC21. Passivating effect of the co-cultured biofilm matrix was observed in the form of an** **ennoblement effect. Tests based on sequencing, microscopy, and spectroscopy revealed the formation of a compact biofilm matrix dominated by strain MIC21 cells, exopolymers, and insoluble precipitates. This matrix displayed elastic modulus (a measure of rigidity) as high as 0.8 Gpa and increased corrosion resistance by ~10-fold. Interestingly, strain MIC21 has the capacity to inhibit the undesirable growth of aggressive strain G20. Additional corrosion tests also substantiated the passivation effects of strain MIC21. We provide mechanistic insight into the underlying reasons responsible for corrosion prevention behavior of the biofilm matrix.**
**Keywords**: biofilm, copper, coating, passivation, EIS, biocorrosion, _Citrobacter_ **sp,** _Oleidesulfovibrio alaskensis_
## 1 Introduction
Microbiologically influenced corrosion (MIC), also known as microbial corrosion or biocorrosion, refers to the accelerated degradation of metals in the presence of microorganisms [1]. The US Air Force alone spends $1 billion annually to address MIC effects caused by sulfate reducing bacteria (SRB) [2]. Robust metals, including copper alloys that resist oxidation under abiotic conditions [3], tend to fail in microbial environments [4-6]. The reasons for these failures can be understood by reviewing the different types of MIC mechanisms (e.g., cathodic depolarization (1934), King's mechanism (1971), anodic depolarization (1984), the Romero mechanism (2005), biocatalytic cathodic sulfate reduction (2009)) [7] (see Table S1 for details on SRB). Any given mechanism will involve a series of redox reactions, for example, a thermodynamic coupling between lactate oxidation (Eqn 1) and sulfate reduction (Eqn 2) [7, 8] in the case of SRB.
\[2CH_{3}CHOHCOO^{-}+2H_{2}O\to 2CH_{3}COO^{-}+2CO_{2}+8H^{+}+8e^{-}\left(E^{o\prime}=-430~{}mV\right) \tag{1}\]
\[SO_{4}^{2-}+9H^{+}+8e^{-}\to HS^{-}+4H_{2}O\left(E^{o\prime}=-217~{}mV\right) \tag{2}\]
where \(E^{o\prime}=\) modified reduction potential at pH 7, 1 M solutes, or 1 bar gases at 25 \({}^{\circ}\)C.
The bisulfide (HS') from Eqn 2 combines with the hydrogen ions to generate hydrogen sulfide (H\({}_{2}\)S) (Eqn 3).
\[HS^{-}+H^{+}\to H_{2}S \tag{3}\]
The cell potential (\(\Delta E^{o\prime}=+213~{}mV\)) from Eqns (1-2) yields a negative Gibbs free energy change (\(\Delta G^{o\prime}=-164\) kJ/mol) under standard conditions [9], implying a favorable thermodynamic coupling. The \(\Delta G\) values were determined using the following equation [10]:
\[\Delta G^{0}=-nFE^{0}_{rxn} \tag{4}\]
where, \(n=\) number of electrons passed per atom, \(F=\) charge on a mole of electrons, and \(E^{0}_{rxn}=\) standard electromotive force (emf) of the cell reaction.
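As a numerical check of Eqn (4) applied to the couple in Eqns (1)-(2), a minimal sketch, assuming \(n=8\) electrons as written in the half-reactions:

```python
# Numerical check of Eqn (4) for the lactate/sulfate couple in Eqns (1)-(2).
F = 96485.0      # Faraday constant, C per mole of electrons
n = 8            # electrons transferred per reaction (from the half-reactions)
E_cell = 0.213   # V: E(cathode) - E(anode) = (-217) - (-430) mV

dG = -n * F * E_cell  # J/mol, Eqn (4)
print(f"dG0' = {dG / 1000:.0f} kJ/mol")  # ~ -164 kJ/mol, as quoted above
```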
Coupling between Cu oxidation (\(Cu^{+}/Cu\): \(E^{0\prime}=+520\ mV\); \(Cu^{2+}/Cu\): \(E^{0\prime}=+340\ mV\) at 25 \({}^{\circ}\)C, pH = 7) and sulfate reduction yields a negative cell potential. Based on this scenario, one may conclude that Cu metals are not vulnerable under ambient and neutral pH conditions. However, the bisulfide generated by the sulfate reducing bacteria (Eqn 2) reacts with Cu ions to generate copper sulfide (Cu\({}_{2}\)S) (\(\Delta G^{0\prime}=-62\ kJ/mol\)) [8], promoting copper corrosion under ambient conditions (Eqns 1, 2, 5), which is impossible in the absence of microorganisms.
\[2Cu(crystal)+HS^{-}+H^{+}\to Cu_{2}S(crystal)+H_{2}(g) \tag{5}\]
Despite the antimicrobial properties of Cu [11], SRB cells colonize Cu surfaces by forming biofilms, where they encapsulate themselves within a self-secreted extracellular polymeric substance (EPS) [12]. Such biofilms overcome Cu stress by _(i)_ using sulfide metabolites (Eqn 2) to reduce Cu ions; _(ii)_ restricting permeation of Cu ions from the outer and inner membranes, cytoplasm, and periplasm; _(iii)_ scavenging Cu ions using proteins; and _(iv)_ expelling Cu ions from the cells [11].
Protective coatings are typically used to delay the onset of corrosion in both abiotic and microbial environments. Although they can effectively resist the abiotic forms of corrosion, they are not necessarily suitable for controlling the adherence state and biofilm growth of detrimental microorganisms, including SRB cells. To improve fouling properties, such coatings are often modified with biocides (e.g., tributyltin) and antimicrobial particles (e.g., silver nanoparticles).
Owing to their toxic effects and potential environmental impacts when discharged into the ecosystem, these coatings are being banned in many countries [13; 14]. Furthermore, any protective coatings can exert influence only on the first layer of adhered bacteria, with much less influence on the invasion by other colonizing microorganisms. To address these issues, the scientific community is beginning to explore a microbiologically induced corrosion inhibition (MICI) method that involves the use of living microorganisms for corrosion prevention. The MICI effects have been attributed to the factors related to microbial respiration, EPS protection, mineralization, competition, and secretion of corrosion inhibitors [15]. For example, the axenic biofilms of _Pseudomonas fragi_ and _Escherichia coli_ DH5\(a\) have been reported to inhibit steel corrosion by forming a low-oxygen barrier, as reported by Jayaraman and his coworkers [16]. _S.Oneidensis_ MR-1, a facultative anaerbe, has also been reported to display passivation effects by creating low-oxygen atmosphere for inducing anaerobic respiration with reduction of Fe (III) to Fe (II) [17].
Drawing inspiration from the presence of _Citrobacter_ sp. in both human and animal intestinal tract, we use the term "commensal" to define beneficial bacteria that defend metal surfaces against the colonization of corrosive bacteria. As shown in Table S2, most of the prior studies focused only on using axenic cultures of bacteria for passivating corrosion. Here, we explore the use of _Citrobacter_ sp., a Gram-negative bacterium within the _Enterobacteriaceae_ family, and the _Gammaproteobacteria_ class as a commensal bacterium for defending copper surfaces against the corrosive effects of SRB. _Citrobacter_ sp. are generally considered commensal bacteria in a healthy human gut [18]. They have also been reported to be found in marine environments facing ship hulls. Although they co-exist with community members involved in the corrosion of ship hulls [19], they have been reported to play a neutral role towards corrosion.
They thrive in soil, sewage, and water environments [20, 21], as well as in engineered systems [22]. Unlike typical SRB, _Citrobacter_ facilitates dissimilatory sulfate reduction under micro-aerobic conditions (see Table S3 for prior studies with _Citrobacter_).
We explored the passivation behavior of _Citrobacter_ sp. strain MIC21 grown on different Cu substrates (annealed Cu, 29.5%, and 56.2% coldworked Cu). A co-culture of _Oleidesulfovibrio alaskensis_ strain G20 (a strict anaerobe) and _Citrobacter_ sp. strain MIC21 (a facultative anaerobe) was used as the inoculum. _O. alaskensis_ strain G20 was the lab-maintained model SRB. _Citrobacter_ sp. was isolated from a lab-cultivated SRB consortium. Strain MIC21 was found to outcompete strain G20 and evolve into a compact biofilm by the end of the corrosion tests. Results based on Open Circuit Potential (OCP), Electrochemical Impedance Spectroscopy (EIS), and an equivalent electrical circuit fitted to the EIS data were used to quantify the passivating behavior of strain MIC21. The corrosion prevention performance of the biofilm matrix was quantified in terms of corrosion resistance, pore resistance, and charge transfer resistance. To explore the underlying passivation mechanisms of the MIC21 biofilm, tests based on 16S rRNA sequencing, Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS), Confocal Laser Scanning Microscopy (CLSM), nanoindentation, and X-Ray Diffraction (XRD) were carried out. Additional MIC tests using individual cultures of _Citrobacter_ sp. strain MIC21 and _O. alaskensis_ strain G20 were used to further corroborate the passivation behavior of MIC21 compared to G20.
## 2 Materials and methods
### 2.1 Copper samples
Cu cylinders (99.95% purity, 1-inch diameter, Online Metals, USA) were sectioned into 1-inch discs (thickness =1.5 mm), polished manually using 340, 300, 400, and 600 mesh silicon
carbide (SiC) papers, and rinsed in acetone to remove any contaminants. The treated samples were annealed at 950 \({}^{\circ}\)C for 1 h in an argon atmosphere and cooled to room temperature at 5 \({}^{\circ}\)C/min. Two of the annealed samples were subjected to cold working (CW) by cold rolling to achieve thickness reductions of 29.5% and 56.2%, respectively. The CW process introduced stresses in the samples. CW of Cu by cold rolling is a common practice in copper manufacturing for marine industries and oil exploration. These samples were rinsed with distilled water and alcohol and air-dried prior to use in corrosion tests. The latter steps ensured consistent elemental composition in all the tested samples.
**2.2 Cultures of strain G20 and MIC21 for MIC studies**
Lactate-C media was used to grow individual cultures of the G20 and MIC21 strains as well as their co-culture. Lactate-C media consisted of sodium lactate (6.8 g/L), sodium sulfate (4.5 g/L), sodium citrate (0.3 g/L), dehydrated calcium chloride (0.06 g/L), ammonium chloride (1.0 g/L), magnesium sulfate (2.0 g/L), potassium phosphate monobasic (0.5 g/L), and yeast extract (1.0 g/L). _Citrobacter_ sp. strain MIC21, and _O. alaskensis_ strain G20 were grown separately in 150 mL serum bottles consisting of 63 mL media and 7 mL inocula. The introduction of the inocula was preceded by the following steps: (i) adjust the initial pH of the media to 7, (ii) seal the bottle along with the media using a rubber septum and crimp, (iii) purge with N\({}_{2}\) (95% v/v) and H\({}_{2}\) (5% v/v), respectively, followed by (iv) autoclaving at 121 \({}^{\circ}\)C for 20 minutes. The incubation was carried out at 30 \({}^{\circ}\)C. The optical density of both cultures was nearly 0.2 prior to the use. The doubling time was also investigated for the individual cultures of strain MIC21 under
both aerobic and anaerobic conditions along with individual culture of strain G20 under anaerobic conditions [23].
**2.3 Microbiology influenced corrosion (MIC) tests**
Three different MIC reactors varying in the type of working electrode, namely annealed Cu, 29.5% CW, and 56.2% CW, were set up. The working electrodes were mounted on stainless-steel sample brackets (Gamry part number 990/00254), and electroplating tape was used to expose 1 cm\({}^{2}\) of each electrode to the electrolyte. The reference and counter electrodes were Ag/AgCl and a graphite plate, respectively. The electrolyte consisted of 360 mL of lactate-C media and 40 mL of the co-culture (i.e., MIC21 and G20, 50% v/v). Additional corrosion tests were carried out using 40 mL of individual cultures of _Citrobacter_ sp. strain MIC21 and _O. alaskensis_ strain G20. These tests were based on annealed Cu as the working electrode. Gas chromatography (SRI Instruments, model 8610C, CA, USA) equipped with a thermal conductivity detector and a molecular sieve column (Restek Mole sieve 5A 80/100 1.83 m \(\times\) 38 mm \(\times\) 26 mm) was used to measure the composition of the headspace in the test reactors.
**2.4 Weight loss measurement**
The weight loss measurements were conducted using modified ASTM G 31 [24]. Three sets of immersion tests were carried out using annealed Cu and Lactate-C media. These tests varied in the type of inocula, which included (1) Co-culture of _Citrobacter_ sp. strain MIC21 and \(O\). _alaskensis_ strain G20 (2) Individual culture of strain MIC21, (3) Individual culture of strain G20. Duplicate test specimens were used in each test. Testing apparatus was based on serum bottles containing 70 mL culture and 80 mL headspace. These bottles were crimped with rubber septum to achieve a tight atmospheric seal. Nitrogen was purged to maintain anaerobic conditions for the
test (3). A circular Cu coupon of 12.7 mm diameter and 1 mm thick with a surface area of 10.94 cm\({}^{2}\) was used as the test specimen. The specimens were precleaned using acetone and methanol and were air-dried before measuring the initial weights. The serum bottles were maintained at 30 \({}^{\circ}\)C under stagnant conditions. At the end of the tests, which lasted for 15 days, the Cu coupons were cleaned using ASTM G1 standards [25] and air-dried before measuring the final weights.
**2.5 Electrochemical measurements**
A Gamry potentiostat (Interface 1010) was used to ascertain a steady-state open circuit potential (OCP) value prior to all the electrochemical impedance spectroscopy (EIS) tests. These tests were performed using a 10 mV AC signal within a frequency range of 100 kHz to 0.01 Hz. The EIS spectra were obtained in the form of Nyquist and Bode curves, respectively, and fitted to an appropriate electrical equivalent circuit (EEC) to determine the impedance between the working electrode and the reference electrode (Section 3.2). As shown in Figure 3a, \(R_{soln}\) is the resistance offered by the electrolyte, \(R_{po}\) is the pore resistance of the biofilm matrix, and \(R_{ct}\) is the charge transfer resistance. The constant phase element \(C_{po}\) is the capacitance of the porous biofilm, and \(C_{dl}\) is the double-layer capacitance at the underlying Cu surface. The \(R_{po}C_{po}\) pair represents the pore resistance offered by the biofilm, and the \(R_{ct}C_{dl}\) pair the corrosion process at the interface of the electrode and the biofilm. A constant phase element was used instead of a capacitance [26] and converted into capacitance as follows [2].
\[C=R^{(\frac{1-n}{n})}.Q^{\frac{1}{n}} \tag{6}\]
where C = capacitance,
Q = constant phase element parameter (as \(C_{dl}\) or \(C_{po}\)),
R = resistance, and
n = the exponent of the constant phase element.
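A minimal Python sketch of Eqn (6) follows; the numerical values in the usage line are illustrative only and are not taken from Table S4.

```python
# Implementation of Eqn (6): converting a constant phase element (Q, n)
# and its associated resistance R into an effective capacitance C.
def cpe_to_capacitance(Q, n, R):
    """C = R**((1 - n) / n) * Q**(1 / n)."""
    return R ** ((1.0 - n) / n) * Q ** (1.0 / n)

# Illustrative values only; not from the paper's tables.
print(cpe_to_capacitance(Q=2e-5, n=0.85, R=5000.0))  # effective C in farads
```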
Potentiodynamic polarization curves were measured with the potential sweep of \(\pm\)250 mV vs. Ag/AgCl. The measurements were carried out at the end of the tests for those based on the co-culture as well as individual cultures of _Citrobacter_ sp. and _O. alaskensis_.
**2.6 Genomic DNA extraction and molecular characterization**
The planktonic cells under the co-culture conditions were harvested from the corrosion cells at the end of the tests. The extracted cells were centrifuged at 8000 rpm for 15 min. The genomic DNA was extracted using a DNA extraction kit following the manufacturer's protocol (PureLink\({}^{\text{TM}}\) Microbiome DNA Purification Kit). Molecular characterization was performed using 16S rRNA gene amplification and gene sequencing techniques. The PCR tests were performed with 100 ng of genomic DNA template using 8F and 1492R as universal primers (Turner et al., 1999). The 16S rRNA conserved region was amplified using 5' - AGAGTTTGATCCTGGCTCAG - 3' and 5' - GGTTACCTTGTTACGACTT - 3' as forward and reverse primers, respectively. A 50 \(\upmu\)L reaction was performed using a 2X PCR master mix (Platinum\({}^{\text{TM}}\) II Hot-Start Green PCR Master Mix) along with 4 \(\upmu\)L of genomic DNA, 4 \(\upmu\)L of forward and reverse primers each (0.4 \(\upmu\)M), and 13 \(\upmu\)L of RT-PCR grade water (Invitrogen). PCR thermal cycler (MiniAmp Plus, Applied Biosystems) was programmed for the following PCR conditions: initial denaturation at 95 \({}^{\circ}\)C for 5 minutes, 35 cycles of denaturation at 95 \({}^{\circ}\)C for 1 minute, annealing at 60 \({}^{\circ}\)C for 1 minute, extension at 72 \({}^{\circ}\)C for 1 minute, and final extension at 72 \({}^{\circ}\)C for 10 minutes. Negative control was maintained using RT-PCR grade water as the PCR template. The amplified PCR products were separated using a 1.2% agarose gel electrophoresis for 30 min at 60 V in an electrophoretic chamber. The PCR amplicons were visualized and confirmed with ethidium bromide staining using a UV transilluminator. The positive PCR amplicons at ~1500 bp were excised and eluted using a gel purification kit (PureLink\({}^{\text{TM}}\) Quick
Gel Extraction Kit). The amplicon was sequenced using Sanger's dideoxy gene sequencing methods and subjected to BLAST analysis (www.ncbi.nlm.nih.gov). The homologous sequences were identified and compared using the e value and query coverage from the published 16S rRNA sequences. A phylogenetic tree was constructed using a neighbor-joining tree method of the phylogeny test. Here, 1000 bootstrap replications were used using a MEGA 11 software employing multiple sequence alignment (ClustalW).
### 2.7 Mechanical Properties of Biofilm
The co-cultured biofilms on the exposed Cu coupons (annealed Cu, 29.5% CW, and 56.2% CW) were removed from the corrosion cells at the end of the tests. The removed samples were stored under sterile conditions in a laminar air flow hood for 60 days. To analyze the intactness and durability of the biofilms, these samples were analyzed using an MTS Nano Indenter XP equipped with a Berkovich indenter with a tip radius of 20 nm. The samples were mounted onto a stub with a low-temperature adhesive and introduced into a closed chamber to reduce noise levels. The test parameters included a maximum indenter depth of 2100 nm, a peak hold time of 5 s, a strain rate of 0.05 s\({}^{-1}\), and a Poisson's ratio of 0.4 [27]. Fifteen indentations per sample were made, and the nanoindentation measurements were recorded in the form of load-displacement curves. The hardness and the elastic modulus were determined using TestWorks4 software.
### 2.8 Surface analysis and chemical composition
The images of the biofilm matrices on the exposed Cu samples, taken at the end of the corrosion tests, were acquired using a Zeiss Supra40 Scanning Electron Microscope (SEM) configured with a Type II secondary electron (SE2) detector. An accelerating voltage of 1 kV was used for the acquisition. The SEM was fitted with an Oxford Aztec Energy advanced system with the Aztec 5.1 program for energy dispersive spectroscopy (EDS) analysis of elemental composition; for the EDS measurements, the accelerating voltage was raised to 15 kV. The corrosion products were analyzed using an Ultima-Plus X-ray Diffractometer (XRD, Rigaku, Japan) with a CuK\(\alpha\) radiation configuration, a scintillation counter, and a graphite monochromator. The Jade 7.5 software was used to analyze the XRD peaks. The thicknesses of the air-dried biofilm matrices were determined using a VK-X250 confocal laser scanning microscope (CLSM) (Keyence Corp, Itasca, IL, USA). The CLSM images were obtained after the Cu coupons were removed from the corrosion cells and stored under sterile conditions for 60 days. The average thickness was calculated using a multi-line roughness profile (4 lines in each sample), specifically by comparing the Cu substrate and the biofilm using the VK-multifileanalyzer application software. A multi-line roughness profile was also used for analyzing the pitting tendency of annealed Cu substrates exposed to the cultures of strain MIC21 and strain G20.
**3. Results and Discussion**
**3.1 Ennoblement effect of the co-cultured biofilm**
The three test reactors (annealed Cu, 29.5% CW, and 56.2% CW) initially experienced a decrease in the open circuit potential (E\({}_{\mathrm{ocp}}\) vs. Ag/AgCl), which became more negative over time. This trend was restricted to the first 12-36 days (Figure 1), beyond which the E\({}_{\mathrm{ocp}}\) values increased for the remainder of the test duration. This ennoblement effect manifested the passivating behavior of the biofilm matrix [28]. The E\({}_{\mathrm{ocp}}\) values for annealed Cu decreased from -0.691 V to -0.799 V by day 12, after which the trend reversed and the values increased to -0.68 V by day 60. The E\({}_{\mathrm{ocp}}\) values for 29.5% CW decreased from -0.733 V (day 0) to -0.746 V (day 24) and later increased to -0.599 V (day 60). The E\({}_{\mathrm{ocp}}\) for 56.2% CW decreased from -0.741 V (day 0) to -0.788 V (day 36), followed by an increase to -0.635 V (day 60).
Figure 1: Open Circuit Potential profiles in the three corrosion cells.
The EIS results corroborated the ennoblement effect. The total impedance, represented by the diameter of the semicircle in the Nyquist plot, increased over time for annealed Cu (Figure 2a, shown for days 0, 30, and 60), 29.5% CW (Figure 2c), and 56.2% CW (Figure 2e), respectively. Higher impedance implies greater resistance to corrosion. Unlike the Nyquist plots on day 0, the semicircles on days 30 and 60 did not extend fully into the low-frequency region, which further explains the higher impedance at the later times compared to day 0. The Bode plots also corroborated the passivation behavior in the three biocorrosion cells. The absolute value of the impedance modulus at low frequency (\(|\text{Z}|_{0.01\text{Hz}}\)) on day 60 was 2.5 times higher than on day 0 for annealed Cu (Figure 2b), 6.3 times higher for 29.5% CW (Figure 2d), and 4.2 times higher for 56.2% CW (Figure 2f). The resistance observed in the high-frequency region is attributed to the formation of a compact biofilm matrix (discussed in Section 3.2.1), and that in the low-frequency region to the relevant charge transfer reactions (details in Section 3.2).
The passivation behavior came as a surprise, considering our finding that the culture of _O. alaskensis_ strain G20 aggravates the corrosion of underlying Cu surfaces. The additional test we performed with individual cultures of strain G20 corroborated the aggravation of corrosion on annealed Cu (Figure S1a) (section 3.3). In our earlier tests, we observed that strain G20 cells promote corrosion of Cu over time, irrespective of the protective coatings, which included polymer coatings and those modified with nanofillers [29] and nanoscale materials [2, 5, 30]. Prior studies also confirm that _Desulfovibrio_ sp. promote metallic corrosion under diverse environmental conditions [31]. A recently published review article by the authors' group provides comprehensive information on genes involved in biofilm formation and microbial corrosion that are shared across SRB genomes [32].
Limited studies have been done on the corrosion behavior of _Citrobacter_ sp. _Citrobacter_ sp. has been reported to promote MIC on X70 pipeline steel surfaces because of the metabolic activity of the bacteria in the early phase and the exfoliation and decomposition of the biofilm in the later phase of the exposure time [21]. However, the same study also elucidates the capability of _Citrobacter_ sp. to form a compact biofilm matrix, resulting in an ennoblement effect in OCP with an increase in charge transfer resistance and pore resistance during the mid-stage [21]. Zhang et al. also reported an ennoblement effect on OCP with microbiologically influenced corrosion inhibition when WC-Co was exposed to _Citrobacter_ sp. in a sterilized emulsion [26]. Our study also showed a temporal increase in the impedance, with an increase in the diameter of the Nyquist plot, when the individual culture of _Citrobacter_ sp. strain MIC21 was used under semi-aerobic conditions (Figure S1b) (section 3.3).
Figure 2: **Electrochemical Impedance Spectroscopy.** Nyquist and Bode plots for **(a)(b)** annealed copper, respectively, **(c)(d)** copper 29.5% CW, respectively, **(e)(f)** 56.2% CW copper, respectively.
### 3.2 Passivating mechanisms of _Citrobacter_ sp. strain MIC21 in the co-cultured biofilm matrix
After establishing corrosion resistance of the biofilm matrix, we turn our attention to the underlying reasons for the passivation behavior. An electrical equivalent circuit (EEC) based on a modified Randles circuit (Figure 3a) was used to analyze EIS data from Nyquist and Bode plots (Figure 2). The goodness of fit was observed from the Chi-square parameters that ranged from 10\({}^{-3}\) to 10\({}^{-4}\), indicating a reasonable fit of the original data [33] (Table S4). The EEC analysis suggests that the passivation effects were primarily due to the barrier properties of biofilm matrices (Figure 3a).
The \(R_{ct}\) values in the co-cultured test reactors decreased initially, with a corresponding increase in corrosion rates. This anticipated behavior is due to the active metabolism of the proliferating cells [34] and inhomogeneities within the extracellular polymeric substance (EPS) film during the early stages of biofilm growth. The inhomogeneities in the EPS films allow corrosive ions (HS\({}^{-}\)) to penetrate onto the Cu surfaces. Such penetration is evident from the formation of Cu\({}_{2}\)S, a corrosion product (Eqn 5) [35], as confirmed by the EDS and XRD analyses (see later sections).
A key finding here is regarding the passivation behavior, reflected in the form of increasing values of \(R_{ct}\) and \(R_{po}\) over time. The \(R_{ct}\) for annealed Cu on day 60 (5.39 \(\pm\)0.35 k\(\Omega\).cm\({}^{2}\)) was approximately 2.5-fold higher than day 0 (2.0\(\pm\)0.084 k\(\Omega\).cm\({}^{2}\)) (Table S4), and that for 29.5% CW on day 60 (9.84\(\pm\)0.29 k\(\Omega\).cm\({}^{2}\)) was approximately 5-fold greater than day 0 (1.86\(\pm\)0.025 k\(\Omega\).cm\({}^{2}\)) (Table S4). The \(R_{ct}\) for 56.2% CW showed approximately 2-fold higher resistance from
Figure 3: Electrical equivalent circuit (EEC) analysis of the biofilm matrix- (a) Electrical circuit with the corresponding physical model used for fitting the impedance spectra (b) Temporal profiles of total corrosion resistance for annealed Cu, 29.5% CW and 56.2% CW
day 0 (2.02\(\pm\)0.023 k\(\Omega\).cm\({}^{2}\)) to day 60 (6.57\(\pm\)0.48 k\(\Omega\).cm\({}^{2}\)). We attribute this phenomenon to the formation of a compact biofilm matrix that served as a barrier to the penetration of any corrosive species [26]. The dense compact nature of the biofilm matrix is reflected by the temporal increase in the pore resistance for all three tests. For instance, the \(R_{po}\) values increased by 18.5-fold on day 60 (1.85\(\pm\)0.086 k\(\Omega\).cm\({}^{2}\)) compared to day 0 (100\(\pm\)7.86 \(\Omega\).cm\({}^{2}\)) for annealed Cu. The \(R_{po}\) for 29.5% and 56.2% CW increased by 65-fold and 113-fold, respectively, from day 0 to day 60 (Table S4). The significant increase in the pore resistance in CW samples can be attributed to the higher stress induced in the Cu coupons due to cold rolling and hence higher bacterial attachment. The higher the \(R_{po}\), the greater is the resistance to penetration of HS'.
The total corrosion resistance (\(R_{corr}=R_{cr+}R_{po}\)) for annealed Cu on day-60 (7.24\(\pm\)0.43 k\(\Omega\).cm\({}^{2}\)) was nearly 3-fold higher than day 0 (2.1\(\pm\)0.092 k\(\Omega\).cm\({}^{2}\)) (Figure 3b). Similarly, the resistances in the CW samples increased by at least 10-fold (Figure 3b). The higher resistance in the CW samples compared to annealed Cu can be attributed to the greater resiliency to overcome the higher degrees of stresses induced by the coldworking process.
**3.2.1 Domination of _Citrobacter_ sp. strain MIC21 in the co-cultured biofilm matrices:** The gene sequencing studies revealed that _Citrobacter_ sp. strain MIC21 outcompeted _O.alaskensis_ strain G20 at the end of the co-cultured corrosion tests. The biofilm samples at the end of the tests showed 98.93% identity (with 99% query coverage) with _Citrobacter freundii_ FC18565 (Acc No: MK561018.1), based on molecular identification by 16S rRNA gene sequencing and BLAST analysis (Figure S2). These samples were quantified based on their 16S rRNA gene copies in log10 gene copies/\(\mu\)L. Phylogenetic analysis of 16S rRNA gene sequences of strain MIC21 showed the evolutionary relationship between similar species. The neighbor-joining method
showed the evolutionary relationship of MIC21 with other _Citrobacter_ sp. The 16S rRNA gene sequences were submitted to GenBank (accession number: OK144236). The SEM images revealed a rod-shaped bacterium (L = 2-3 \(\mu\)m; W \(\approx\) 500 nm) with a smooth cell wall (Figure 4a, b, c), reflecting the morphological features of a Gram-negative bacterial species.
Our gas chromatography tests revealed the presence of minimal levels of oxygen (\(\sim\)14%) in the headspace. The presence of this oxygen can provide a selective advantage for early colonization by _Citrobacter_ sp. strain MIC21 over _O. alaskensis_ strain G20. This situation allows MIC21 cells to use oxygen as an electron acceptor [36], allowing them to prolong their generation time and lag phase [37]. Upon depletion of the oxygen, strain MIC21 cells shift into a dissimilatory sulfate reduction pathway, which is the only possible pathway for the obligate anaerobe _O. alaskensis_ strain G20. Literature has shown _Citrobacter_ sp.'s ability to grow faster under aerobic conditions and to quickly recover to reduce sulfate when transferred from an aerobic to an anaerobic environment [38]. The dissimilatory reduction pathways generate corrosive metabolites such as HS\({}^{-}\) [39]. However, _Citrobacter_ sp. tolerates higher levels of toxic Cu species compared to _Desulfovibrio vulgaris_ [39].
**3.2.2 Compact nature of the _Citrobacter_ sp. strain MIC21 biofilm matrices under the co-cultured condition:**
The microscopy studies at the end of the corrosion tests revealed a compact biofilm comprising homogeneous layers of rod-shaped _Citrobacter_ sp. strain MIC21 cells. This compact nature was consistently observed on annealed Cu, 29.5% CW, and 56.2% CW (Figures 4a, 4b, and 4c, respectively). Their average biofilm thicknesses were 29.07\(\pm\)1.45 \(\upmu\)m, 43.40\(\pm\)2.88 \(\upmu\)m, and 26.24\(\pm\)1.79 \(\upmu\)m, respectively (Figures S3a-S3c), significantly greater than the typical thickness of _Oleidesulfovibrio alaskensis_ strain G20 biofilms (0-15 \(\upmu\)m) [5] and of other _Desulfovibrio_ sp. (\(\sim\)20 \(\upmu\)m) [40]. The morphological features of the biofilms observed in the current study were significantly different from those of _O. alaskensis_ strain G20 based corrosion systems in our current and earlier studies [2, 30]. The mean hardness of the biofilms for annealed Cu, 29.5% CW, and 56.2% CW (Figures 4d-4e) was as high as 12.7\(\pm\)2.4 MPa, 5.2\(\pm\)1.8 MPa, and 4.5\(\pm\)2.1 MPa, respectively. Their elastic modulus values ranged from 0.4-8 GPa (Figure 4e), \(\sim\)4000-fold higher than typical SRB biofilms [41]. A higher modulus indicates a stiffer, less elastic biofilm matrix. The indentation depth was kept below 10% of the average biofilm thickness for all three samples. The load/displacement curves (Figure 4d) show the change in the curve pattern for all three samples at different displacements. We did not observe any biofilm formation on the Cu samples before exposure to the co-cultured media (Figures S4a, S4c, S4e).
Figure 4: **Compact biofilm matrices and their mechanical properties.** SEM images of **(a)** annealed Cu, **(b)** 29.5% CW, and **(c)** 56.2% CW after 60 days of exposure. Nanoindentation tests **(d)** Load vs. displacement curves of biofilms on different percent CW samples, **(e)** Biofilm hardness/modulus with displacement at maximum load as a function of percent CW samples.
We analyzed the corrosion products in the biofilm matrices using the EDS and XRD tests. The EDS analysis revealed sharp Cu peaks (Figure S5) for pristine Cu samples; specifically, they displayed Cu (111), (200), (220), (311), and (222) orientations (Figures S4b, S4d, S4f). However, the exposed annealed samples developed signatures of carbon (C), sulfur (S), and oxygen (O) (Figure 5a) with compositions of 14.7%, 13.7%, and 11.5% w/w, respectively (Figure S6a). The C signatures are attributed to the EPS products, and S and O signatures to the corrosion products. The 29.5% CW (Figure 5c) showed 21.4% O, 13.2% C, and 11.4% S, respectively (Figure S6b). The 56.2% CW sample (Figure 5e) displayed 15.4% O, 13.5% S, and 12.2% C, respectively (Figure S6c).
The XRD analyses confirmed two key signatures of MIC, namely chalcocite (Cu\({}_{2}\)S) and copper oxide (CuO), in all three corrosion tests, namely annealed Cu (Figure 5b), 29.5% CW (Figure 5d), and 56.2% CW (Figure 5f). The EPS components captured a significant amount of Cu (annealed Cu: 57.4%; 29.5% CW: 44.4%; 56.2% CW: 56.4%), thus limiting mass transfer within the biofilm matrix [42]. The CuO and Cu\({}_{2}\)S signatures observed in the biofilm matrices (Figure 5) represent active constituents of passivating layers [43] that resist the diffusion of corrosive ions [44].
Figure 5: **Composition of biofilm matrix**. EDS data showing the elemental composition for **(a)** annealed Cu, **(c)** 29.5% CW, and **(e)** 56.2% CW; XRD data showing the composition of corrosion products for **(b)** annealed Cu, **(d)** 29.5% CW, and **(f)** 56.2% CW, respectively showing Cu\({}_{2}\)S and CuO.
### 3.3 Additional tests with individual cultures
The doubling time of strain MIC21 was found to be 13.10 h under aerobic conditions and 14.9 h under anaerobic conditions, while that of strain G20 under anaerobic conditions was 17.5 h (Figure S7). This explains the capability of strain MIC21 to colonize and form a biofilm faster than strain G20.
The passivation effect was further corroborated by comparing the electrochemical behavior of individual cultures of strain MIC21 and strain G20. The diameter of the semicircle in the Nyquist plot increased over time for strain MIC21 (Figure S1a), whereas strain G20 showed the opposite trend (Figure S1b). The total corrosion resistance (\(R_{corr}=R_{ct}+R_{po}\)) for strain MIC21 increased by approximately 4-fold from day 0 (15.62\(\pm\)0.49 k\(\Omega\).cm\({}^{2}\)) to day 15 (59.0\(\pm\)11.04 k\(\Omega\).cm\({}^{2}\)) (Table S5). As expected, _O. alaskensis_ strain G20 showed a reduction in \(R_{corr}\) of nearly 6-fold, from 5.50\(\pm\)1.3 k\(\Omega\).cm\({}^{2}\) on day 0 to 0.9\(\pm\)0.022 k\(\Omega\).cm\({}^{2}\) on day 15 (Table S5).
Direct evidence for corrosion resistance by _Citrobacter_ sp. strain MIC21 was obtained from Tafel analysis after 15 days of corrosion testing. Strain MIC21 yielded a corrosion rate of 0.5 mpy on annealed copper, approximately 2.5-fold lower than that obtained with strain G20 (1.18 mpy) (Figure S1c). The corrosion rate after 60 days of testing under the co-cultured condition was 0.399 mpy (Figure S1d) for annealed Cu. The weight loss measurements substantiated the results from the Tafel analysis (Figure S8). The corrosion rate of annealed Cu exposed to the co-culture (0.430\(\pm\)0.068 mpy) was marginally lower than that with the individual culture of strain MIC21 (0.445\(\pm\)0.005 mpy) and approximately 3-fold lower than that with the individual culture of strain G20 (1.371\(\pm\)0.098 mpy).
The pitting profiles also demonstrated the corrosion resistance behavior of strain MIC21 compared with strain G20. Annealed Cu exposed to strain G20 showed several pits (Figure S9), with an average pit depth of 0.41\(\pm\)0.10 \(\upmu\)m and a maximum depth of 0.54 \(\upmu\)m, implying severe corrosion. However, the pit density on Cu coupons exposed to strain MIC21 was significantly lower, with an average pit depth of 0.25\(\pm\)0.07 \(\upmu\)m and a maximum depth of 0.32 \(\upmu\)m.
**3.4 Outlook**: Members of the gamma-proteobacteria have been reported to outcompete their peers in response to stressful environmental conditions through resistance mechanisms. For instance, in a co-culture of _Pseudomonas aeruginosa_ and _Vibrio cholerae_ [45], _P. aeruginosa_ has been reported to mount an antibacterial attack on _V. cholerae_ in response to the toxic compounds secreted by the type 6 secretion system (T6SS) of _V. cholerae_. The latter process is referred to as the T6SS dueling effect. Along these lines, _Citrobacter_ sp. strain MIC21 outcompeted _Oleidesulfovibrio alaskensis_ strain G20 in response to the presence of toxic copper species from the dissimilatory sulfate reduction pathways of _O. alaskensis_ strain G20 [37]. The _Citrobacter_ sp. strains harbor plasmid-mediated quinolone resistance genes and beta-lactamase gene sets that display virulence mechanisms [46]. Exposure to heavy metals has also been reported to proliferate antibiotic resistance genes (ARGs). _Omics_ studies are warranted to understand the mechanisms of metal resistance, antimicrobial resistance, and ARGs that allowed the out-competition by _Citrobacter_ sp. Such omics studies can also reveal the roles of co-resistance and cross-resistance mechanisms in response to the selection pressure induced by metal exposure [47].
## 4 Conclusion
This study demonstrates the use of the biofilm matrix of _Citrobacter_ sp. as a protective coating for microbial corrosion prevention. The protective ability was demonstrated for annealed copper as well as copper modified with stresses. The MIC resistance of the biofilm matrix was manifested in the form of a compact and thick barrier film that prohibitively restricted the penetration of corrosive ions. For instance, such compact biofilms are known to yield cathodic inhibitors based on biopolymers from EPS that convert Cu ions into insoluble precipitates and block active sites on the underlying copper surfaces. The evolution of such a compact biofilm was attributed to the bacterial stress response that allowed _Citrobacter_ sp. (a facultative anaerobe) to outcompete its counterpart (the obligate anaerobe _O. alaskensis_). Omics studies are warranted to understand the specific stress resistance mechanisms of _Citrobacter_ sp. at a genetic level. The findings revealed in the current study can be used to design minimally invasive approaches for invoking _Citrobacter_ sp. metabolism and, in turn, alleviating the undesirable effects of sulfate reducing bacteria, especially obligate anaerobes, involved in the corrosion of metals. Further studies are warranted to reliably transfer large-area biofilm matrices of _Citrobacter_ sp. onto arbitrary metal substrates and test their long-term protection ability.
## 5 CRediT authorship contribution statement
**Pawan Sigdel**: Resources, Conceptualization, Visualization, and Validation. **Ananth Kandadai**: Validation. **Kalimuthu Jawaharraj**: Validation. **Bharat Jasthi**: Resources, Funding acquisition. **Venkataramana Gadhamshetty**: Resources, Conceptualization, Supervision, Validation, Project administration, Funding acquisition.
## 6 Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## 7 Acknowledgments
We acknowledge the funding support from National Science Foundation RII FEC awards (\(\#1849206,\#1920954\)) and support from the Department of Civil and Environmental Engineering at the South Dakota Mines.
## 8 Appendix A. Supplementary data
Supplementary data to this article can be found online.
## References
* [1] Alasvand Zarasvand, K. and V. Ravishankar Rai, _Identification of the traditional and non-traditional sulfate-reducing bacteria associated with corroded ship hull._ 3 Biotech, 2016. **6**(2): p. 197-197.
* [2] Chilkoor, G., et al., _Atomic Layers of Graphene for Microbial Corrosion Prevention._ ACS Nano, 2021. **15**(1): p. 447-454.
* [3] Chen, S., P. Wang, and D. Zhang, _Corrosion behavior of copper under biofilm of sulfate-reducing bacteria._ Corrosion Science, 2014. **87**: p. 407-415.
* [4] Licina, G.J. and D. Cubicciotti, _Microbial-induced corrosion in nuclear power plant materials._ JOM, 1989. **41**(12): p. 23-27.
* [5] Chilkoor, G., et al., _Hexagonal boron nitride for sulfur corrosion inhibition._ ACS nano, 2020. **14**(11): p. 14809-14819.
* [6] Huttunen-Saarivirta, E., P. Rajala, and L. Carpen, _Corrosion behaviour of copper under biotic and abiotic conditions in anoxic ground water: electrochemical study._ Electrochimica Acta, 2016. **203**: p. 350-365.
* [7] Kakooei, S., M.C. Ismail, and B. Ariwahjoedi, _Mechanisms of microbiologically influenced corrosion: a review._ World Appl. Sci. J, 2012. **17**(4): p. 524.
* [8] Gu, T., et al., _Toward a better understanding of microbiologically influenced corrosion caused by sulfate reducing bacteria._ Journal of materials science & technology, 2019. **35**(4): p. 631-636.
* [9] Li, Y., et al., _Anaerobic microbiologically influenced corrosion mechanisms interpreted using bioenergetics and bioelectrochemistry: a review._ Journal of Materials Science & Technology, 2018. **34**(10): p. 1713-1718.
* [10] Bard, A.J. and L.R. Faulkner, _Electrochemical Methods: Fundamentals and Applications._ Surface Technology, 1983. **20**(1): p. 91-92.
* [11] Grass, G., C. Rensing, and M. Solioz, _Metallic copper as an antimicrobial surface._ Applied and environmental microbiology, 2011. **77**(5): p. 1541-1547.
* [12] Chen, S. and D. Zhang, _Study of corrosion behavior of copper in 3.5 wt.% NaCl solution containing extracellular polymeric substances of an aerotolerant sulphate-reducing bacteria._ Corrosion Science, 2018. **136**: p. 275-284.
* [13] Qi, H., et al., _Bioinspired multifunctional protein coating for antifogging, self-cleaning, and antimicrobial properties._ ACS applied materials & interfaces, 2019. **11**(27): p. 24504-24511.
* [14] Makhlouf, A.S.H. and N.Y. Abu-Thabit, _Advances in smart coatings and thin films for future Industrial and Biomedical Engineering Applications_. 2019: Elsevier.
* [15] Lou, Y., et al., _Microbiologically influenced corrosion inhibition mechanisms in corrosion protection: A review._ Bioelectrochemistry, 2021. **141**: p. 107883.
* [16] Jayaraman, A., et al., _Axenic aerobic biofilms inhibit corrosion of SAE 1018 steel through oxygen depletion._ Applied microbiology and biotechnology, 1997. **48**(1): p. 11-17.
* [17] Dubiel, M., et al., _Microbial iron respiration can protect steel from corrosion._ Applied and environmental microbiology, 2002. **68**(3): p. 1440-1445.
* [18] Bai, L., et al., _Isolation and characterization of cytotoxic, aggregative Citrobacter freundii._ PLoS One, 2012. **7**(3): p. e33054.
* [19] Alasvand Zarasvand, K. and V. Ravishankar Rai, _Identification of the traditional and non-traditional sulfate-reducing bacteria associated with corroded ship hull._ 3 Biotech, 2016. **6**(2): p. 1-8.
* [20] Yan, J., et al., _Carbon metabolism and sulfate respiration by a non-conventional Citrobacter freundii strain SR10 with potential application in removal of metals and metalloids._ International Biodeterioration & Biodegradation, 2018. **133**: p. 238-246.
* [21] Shahryari, Z., K. Gheisari, and H. Motamedi, _Effect of sulfate reducing Citrobacter sp. strain on the corrosion behavior of API X70 microalloyed pipeline steel._ Materials Chemistry and Physics, 2019. **236**: p. 121799.
* [22] Zhao, C., et al., _Isolation of a sulfate reducing bacterium and its application in sulfate removal from tannery wastewater._ African Journal of Biotechnology, 2011. **10**(56): p. 11966-11971.
* [23] Hu, J., et al., _Increased excess intracellular cyclic di-AMP levels impair growth and virulence of Bacillus anthracis._ Journal of bacteriology, 2020. **202**(9): p. e00653-19.
* [24] NACE, A., _Standard guide for laboratory immersion corrosion testing of metals._ ASTM Int., 2012: p. 1-9.
* [25] G1, A. _Standard Practice for Preparing, Cleaning, and Evaluating Corrosion Test Specimens_. ASTM West Conshohocken, PA.
* [26] Zhang, Q., et al., _Corrosion behavior of WC-Co hardmetals in the oil-in-water emulsions containing sulfate reducing Citrobacter sp._ corrosion science, 2015. **94**: p. 48-60.
* [27] Kundukad, B., et al., _Mechanical properties of the superficial biofilm layer determine the architecture of biofilms._ Soft matter, 2016. **12**(26): p. 5718-5726.
* [28] George, R., et al., _Microbiologically influenced corrosion of AISI type 304 stainless steels under fresh water biofilms._ Materials and Corrosion, 2000. **51**(4): p. 213-218.
* [29] Chilkoor, G., et al., _Maleic anhydride-functionalized graphene nanofillers render epoxy coatings highly resistant to corrosion and microbial attack._ Carbon, 2020. **159**: p. 586-597.
* [30] Chilkoor, G., et al., _Hexagonal Boron Nitride: The Thinnest Insulating Barrier to Microbial Corrosion._ ACS Nano, 2018. **12**(3): p. 2242-2252.
* [31] Ilhan-Sungur, E., N. Cansever, and A. Cotuk, _Microbial corrosion of galvanized steel by a freshwater strain of sulphate reducing bacteria (Desulfovibrio sp.)._ Corrosion Science, 2007. **49**(3): p. 1097-1109.
* [32] Kumar Tripathi, A., et al., _Gene sets and mechanisms of sulfate-reducing bacteria biofilm formation and quorum sensing with impact on corrosion._ Frontiers in microbiology, 2021: p. 3120.
* [33] Unsal, T., et al., _Effects of Ag and Cu ions on the microbial corrosion of 316L stainless steel in the presence of Desulfovibrio sp._ Bioelectrochemistry, 2016. **110**: p. 91-99.
* [34] Guan, F., et al., _Influence of sulfate-reducing bacteria on the corrosion behavior of 5052 aluminum alloy._ Surface and Coatings Technology, 2017. **316**: p. 171-179.
* [35] Yuan, S., et al., _Surface chemistry and corrosion behaviour of 304 stainless steel in simulated seawater containing inorganic sulphide and sulphate-reducing bacteria._ Corrosion Science, 2013. **74**: p. 353-366.
* [36] Shanks, R.M., et al., _Isolation and identification of a bacteriocin with antibacterial and antibiofilm activity from Citrobacter freundii._ Archives of microbiology, 2012. **194**(7): p. 575-587.
* [37] Wang, X., et al., _Coupling heavy metal resistance and oxygen flexibility for bioremoval of copper ions by newly isolated Citrobacter freundii JPG1._ Journal of environmental management, 2018. **226**: p. 194-200.
* [38] Liu, Z.-h., et al., _Sulfate-reducing bacteria in anaerobic bioprocesses: basic properties of pure isolates, molecular quantification, and controlling strategies._ Environmental Technology Reviews, 2018. **7**(1): p. 46-72.
* [39] Qiu, R., et al., _Sulfate reduction and copper precipitation by a Citrobacter sp. isolated from a mining area._ Journal of Hazardous Materials, 2009. **164**(2-3): p. 1310-1315.
* [40] Chen, L., B. Wei, and X. Xu, _Effect of sulfate-reducing bacteria (SRB) on the corrosion of buried pipe steel in acidic soil solution._ Coatings, 2021. **11**(6): p. 625.
* [41] Huang, C., et al., _Effect of Nonphosphorus Corrosion Inhibitors on Biofilm Pore Structure and Mechanical Properties._ Environmental science & technology, 2020. **54**(22): p. 14716-14724.
* [42] Vargas, I.T., et al., _Copper corrosion and biocorrosion events in premise plumbing._ Materials, 2017. **10**(9): p. 1036.
* [43] Videla, H.A. and L.K. Herrera, _Understanding microbial inhibition of corrosion. A comprehensive overview._ International Biodeterioration & Biodegradation, 2009. **63**(7): p. 896-900.
* [44] Hernandez, G., et al., _Corrosion inhibition of steel by bacteria._ Corrosion, 1994. **50**(8): p. 603-608.
* [45] Basler, M., B. Ho, and J. Mekalanos, _Tit-for-tat: type VI secretion system counterattack during bacterial cell-cell interactions._ Cell, 2013. **152**(4): p. 884-894.
* [46] Liu, L., et al., _Antimicrobial resistance and cytotoxicity of Citrobacter spp. in Maanshan Anhui Province, China._ Frontiers in microbiology, 2017. **8**: p. 1357.
* [47] Baker-Austin, C., et al., _Co-selection of antibiotic and metal resistance._ Trends in microbiology, 2006. **14**(4): p. 176-182.
**Supplementary Information**
**Microbial Corrosion Prevention by _Citrobacter_ sp. Biofilms
Pawan Sigdel\({}^{1,3}\), Ananth Kandadai\({}^{2,3}\), Kalimuthu Jawaharraj\({}^{1,3,4}\), Bharat Jasthi\({}^{2,3,4}\), Etienne Gnimpieba\({}^{3,4,5}\), Venkataramana Gadhamshetty\({}^{1,3,4}\)\({}^{*}\)
\({}^{1}\)Civil and Environmental Engineering, South Dakota School of Mines and Technology, 501 E. St. Joseph Street, Rapid City, SD, 57701, USA
\({}^{2}\)Materials and Metallurgical Engineering, South Dakota School of Mines and Technology, 501 E. St. Joseph Street, Rapid City, SD, 57701, USA
\({}^{3}\)2D-materials for Biofilm Engineering, Science and Technology (2DBEST) Center, South Dakota School of Mines and Technology, 501 E. St. Joseph Street, Rapid City, SD, 57701, USA
\({}^{4}\)Data-Driven Materials Discovery for Bioengineering Innovation Center, South Dakota Mines, 501 E. St. Joseph Street, Rapid City, SD, 57701, USA
\({}^{5}\)Biomedical Engineering, University of South Dakota, 4800 N Career Ave, Sioux Falls, SD 57107, USA
\({}^{*}\)**Corresponding author.**
_E-mail address:_ [email protected] (V. Gadhamshetty).
**Table S1.** An overview of MIC studies on copper by SRB (NP: not provided)
\begin{tabular}{l l l}
\hline
**Cu substrate (SRB name)** & **MIC mechanism** & **References** \\
\hline
Cu(110) (Cu \(>\) 99.9\%, \(\leq\) 0.04\% O, remaining trace elements, mass\%) (_Desulfovibrio vulgaris_) & MIC caused by sulfide secreted by the SRB & [1] \\
\hline
Cu (\(>\) 99.9\% mass\%; diameter \(=\) 10 mm, thickness \(=\) 4 mm) (NP) & Biofilm was formed by the SRB, with cuprous corrosion products; corrosion rate directly affected by SRB growth & [2] \\
\hline
70Cu-30Ni alloy (NP) & Intergranular corrosion after 7 days of immersion in sea water containing SRB; metabolite concentration coincided with bacterial growth, with metal sulfides produced from copper and nickel & [3] \\
\hline
Cu cylinders (\(>\) 99.9\% mass\%; diameter \(=\) 0.5 cm, thickness \(=\) 0.5 cm) (_Desulfovibrio_ sp.) & Extracellular polymeric substance (EPS) of SRB exposed to Cu in 3.5 wt\% NaCl showed Cu corrosion inhibition in the short term; longer immersion time promoted corrosion by degrading the protective Cu\({}_{2}\)O film & [4] \\
\hline
\end{tabular}
**Table S2.** An overview of Microbial Induced Corrosion Inhibition (MICI) mechanisms
\begin{tabular}{l l l l}
\hline \hline
**Bacteria (strain)** & **Characteristics** & **Function** & **References** \\
\hline
_Serratia marcescens_ EF190, _Pseudomonas_ sp. S9 & Aerobic & Aerobic respiration resulting in a low-oxygen barrier inhibiting corrosion of steel & [5] \\
_Staphylococcus_ sp. & Facultative anaerobe & EPS composed of hydrophobic components formed a corrosion-inhibition barrier on a low-carbon steel surface & [6] \\
_B. subtilis_ & Facultative anaerobe & Dense, uniform, and hydrophobic biofilm composed of polysaccharide/TasA amyloid fibers on low-alloy steel & [7] \\
SRB LVform6 & Anaerobic & Dolomite precipitation under low temperatures and anoxic conditions results in corrosion inhibition & [8] \\
Nitrate-reducing bacteria & Facultative anaerobe & In the presence of nitrate, NRB inhibit SRB growth and thus reduce H\({}_{2}\)S production & [9] \\
SRB _Citrobacter freundii_ & Facultative anaerobe & _Citrobacter_ spp. produced a stronger biofilm compared to _Desulfovibrio_ spp.; the lowest mass loss was observed with _C. freundii_ & [10] \\
\hline \hline
\end{tabular}
**Table S3.** Corrosion behavior of _Citrobacter_ Species
\begin{tabular}{l l l l l}
\hline
**\#** & **Strain** & **Metal substrate** & **Major findings** & **References** \\
\hline
1 & _Citrobacter farmeri_ & Q235 carbon steel coupons & OCV values shifted negative and continued to become more negative than artificial seawater until 48 h, implying discontinuous biofilm formation and microbial colonization with acidic metabolite production; after 72 h, a noble shift attributed to protective and compact biofilm formation & \\
\hline
2 & _Citrobacter_ sp. & API X70 microalloyed pipeline steel & Significant reduction in charge-transfer resistance from 400 \(\Omega\,\)cm\({}^{2}\) after day 7 to 55 \(\Omega\,\)cm\({}^{2}\) after day 21; bacterial metabolism and the formation of a heterogeneously dispersed compact biofilm were held responsible for the reduced resistance & \\
\hline
3 & _Citrobacter koseri_ & Tungsten carbide cobalt (WC-30Co) in oil-in-water emulsion & Microbially influenced corrosion of WC-30Co in O/W emulsion and nutrient was attributed to _Citrobacter_ sp.; the _Citrobacter_ sp.-containing emulsion showed microbiologically influenced corrosion inhibition & \\
\hline
\end{tabular}
**Figure S1. Electrochemical Impedance Spectroscopy (EIS) and Tafel analysis** - (a) Nyquist plot for annealed Cu showing the decrease in impedance from day 0 to day 15 when exposed to the individual culture of strain G20, (b) Nyquist plot for annealed Cu showing the increase in impedance from day 0 to day 15 when exposed to the individual culture of strain MIC21, (c) Tafel analysis showing a higher corrosion rate for annealed Cu when exposed to strain G20 in comparison to strain MIC21, (d) Tafel analysis of annealed Cu when exposed to the co-culture of strains G20 and MIC21 for 60 days.
**Table S4.** Fitting results of EIS for co-cultured media on annealed Cu, 29.5% CW and 56.2% CW during the 60-day corrosion test

\begin{tabular}{l l l l l l l l}
\hline
Days & R\({}_{\rm sol}\) (\(\Omega\,\)cm\({}^{2}\)) & R\({}_{\rm ct}\) (\(\Omega\,\)cm\({}^{2}\)) & R\({}_{\rm po}\) (\(\Omega\,\)cm\({}^{2}\)) & C\({}_{\rm dl}\) (F cm\({}^{-2}\)) & C\({}_{\rm po}\) (F cm\({}^{-2}\)) & R\({}_{\rm corr}\) (\(\Omega\,\)cm\({}^{2}\)) & Goodness of fit \\
\hline
\multicolumn{8}{l}{Annealed Cu} \\
0 & 47\(\pm\)0.32 & 2000\(\pm\)84.44 & 100\(\pm\)7.86 & 0.000166\(\pm\)0.000016 & 0.0002392\(\pm\)0.000011 & 2100\(\pm\)92.30 & 0.005978 \\
30 & 51.62\(\pm\)0.429 & 4473\(\pm\)122.3 & 1000\(\pm\)63.94 & 0.00005836\(\pm\)0.0000219 & 0.001141\(\pm\)0.000018 & 5473\(\pm\)186.24 & 0.0009278 \\
60 & 47.13\(\pm\)0.525 & 5390\(\pm\)350.43 & 1850\(\pm\)86.43 & 0.00000983\(\pm\)0.0000012 & 0.001847\(\pm\)0.000014 & 7240\(\pm\)436.86 & 0.0004023 \\
\hline
\multicolumn{8}{l}{29.5\% CW} \\
0 & 46.61\(\pm\)0.13 & 1860\(\pm\)25.74 & 170\(\pm\)16.80 & 0.000128\(\pm\)0.0000149 & 0.0004156\(\pm\)0.000010 & 2030\(\pm\)42.54 & 0.001988 \\
30 & 50.41\(\pm\)0.328 & 3570\(\pm\)85.79 & 850\(\pm\)78.16 & 0.0000902\(\pm\)0.000016 & 0.00111\(\pm\)0.000094 & 4420\(\pm\)163.95 & 0.001452 \\
60 & 47.5\(\pm\)0.26 & 9840\(\pm\)297.56 & 11000\(\pm\)905 & 0.00002765\(\pm\)0.0000071 & 0.001089\(\pm\)0.0000018 & 20840\(\pm\)1202.56 & 0.002786 \\
\hline
\multicolumn{8}{l}{56.2\% CW} \\
0 & 44.67\(\pm\)0.2 & 2020\(\pm\)23.40 & 113\(\pm\)22 & 0.00002\(\pm\)0.0000029 & 0.0004026\(\pm\)0.00002 & 2130\(\pm\)45.18 & 0.003544 \\
30 & 46.49\(\pm\)0.3 & 3274\(\pm\)111.4 & 1220\(\pm\)62.8 & 0.00011\(\pm\)0.0000255 & 0.001262\(\pm\)0.000010 & 4496\(\pm\)174.2 & 0.002556 \\
60 & 42.08\(\pm\)0.624 & 6570\(\pm\)486 & 12800\(\pm\)1155 & 0.00003954\(\pm\)0.0000012 & 0.001404\(\pm\)0.000010 & 19370\(\pm\)1641 & 0.003298 \\
\hline
\end{tabular}
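The tabulated total corrosion resistance is simply R\({}_{\rm corr}\) = R\({}_{\rm ct}\) + R\({}_{\rm po}\), and the quoted uncertainties combine linearly (e.g., annealed Cu at day 0: 84.44 + 7.86 \(\approx\) 92.30 \(\Omega\,\)cm\({}^{2}\)), the conservative worst case for a sum. A short sketch contrasting this with the quadrature alternative for uncorrelated fit errors:

```python
# Total corrosion resistance and its uncertainty from the day-0 annealed-Cu
# fit in Table S4 (all values in Ohm cm^2).
r_ct, dr_ct = 2000.0, 84.44
r_po, dr_po = 100.0, 7.86

r_corr = r_ct + r_po
dr_linear = dr_ct + dr_po                # worst case, as tabulated
dr_quad = (dr_ct**2 + dr_po**2) ** 0.5   # quadrature, if errors are uncorrelated

print(f"R_corr = {r_corr:.0f} +/- {dr_linear:.2f} (linear) "
      f"or +/- {dr_quad:.2f} (quadrature) Ohm cm^2")
```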
**Table S5.** Fitting results of EIS for individual cultures on annealed Cu during the 15-day corrosion test

\begin{tabular}{l l l l l l l}
\hline
**Days** & **R\({}_{\text{ct}}\) (k\(\Omega\,\)cm\({}^{2}\))** & **R\({}_{\text{po}}\) (k\(\Omega\,\)cm\({}^{2}\))** & **C\({}_{\text{dl}}\) (F cm\({}^{-2}\))** & **C\({}_{\text{po}}\) (F cm\({}^{-2}\))** & **Total corrosion resistance (R\({}_{\text{ct}}\)+R\({}_{\text{po}}\)) (k\(\Omega\,\)cm\({}^{2}\))** & **Goodness of fit** \\
\hline
\multicolumn{7}{l}{**MIC21/Annealed Cu**} \\
**0** & 11.0\(\pm\)0.25 & 4.62\(\pm\)0.24 & 1.23E-06\(\pm\)561.8E-9 & 1.36E-04\(\pm\)813.4E-9 & 15.62\(\pm\)0.49 & 3.25E-03 \\
**8** & 20.0\(\pm\)5.2 & 6.0\(\pm\)0.16 & 1.60E-03\(\pm\)2.43E-4 & 8.70E-04\(\pm\)8.43E-6 & 26.0\(\pm\)5.36 & 3.57E-03 \\
**15** & 52.0\(\pm\)10.6 & 7.0\(\pm\)0.44 & 3.22E-04\(\pm\)2.48E-5 & 9.75E-04\(\pm\)7.06E-6 & 59.0\(\pm\)11.04 & 3.88E-03 \\
\hline
\multicolumn{7}{l}{**G20/Annealed Cu**} \\
**0** & 3.5\(\pm\)0.70 & 2.0\(\pm\)0.60 & 5.23E-04\(\pm\)2.85E-5 & 5.00E-04\(\pm\)4.02E-6 & 5.5\(\pm\)1.3 & 6.12E-04 \\
**8** & 2.5\(\pm\)0.10 & 1.0\(\pm\)0.03 & 2.99E-04\(\pm\)1.69E-5 & 1.03E-03\(\pm\)8.35E-6 & 3.5\(\pm\)0.13 & 4.04E-04 \\
**15** & 0.5\(\pm\)0.015 & 0.4\(\pm\)0.007 & 9.80E-04\(\pm\)7.22E-5 & 8.46E-04\(\pm\)8.60E-6 & 0.9\(\pm\)0.022 & 1.58E-03 \\
\hline
\end{tabular}
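The fitted quantities in Tables S4-S5 (R\({}_{\rm sol}\), R\({}_{\rm po}\), C\({}_{\rm po}\), R\({}_{\rm ct}\), C\({}_{\rm dl}\)) correspond to a common two-time-constant equivalent circuit for a film-covered electrode: the solution resistance in series with the pore/film elements, which enclose the charge-transfer interface. The sketch below assumes ideal capacitors (the original fits may have used constant-phase elements) and borrows R\({}_{\rm sol}\) \(\approx\) 47 \(\Omega\,\)cm\({}^{2}\) from Table S4; it is an illustration, not the fitting code used in this study.

```python
import numpy as np

def z_biofilm(freq_hz, r_sol, r_po, c_po, r_ct, c_dl):
    """Impedance of R_sol + (C_po || (R_po + (C_dl || R_ct))), a common
    two-time-constant model for a biofilm/film-covered electrode."""
    w = 2 * np.pi * freq_hz
    z_inner = r_po + 1.0 / (1.0 / r_ct + 1j * w * c_dl)   # R_po + (C_dl || R_ct)
    return r_sol + 1.0 / (1.0 / z_inner + 1j * w * c_po)  # enclosed by C_po

# Day-15 MIC21 values from Table S5 (converted to Ohm cm^2 and F cm^-2):
f = np.logspace(-2, 5, 200)
z = z_biofilm(f, 47.0, 7.0e3, 9.75e-4, 52.0e3, 3.22e-4)
# Nyquist coordinates for plotting would be (z.real, -z.imag).
print(f"|Z| at 0.01 Hz: {abs(z[0]):.0f} Ohm cm^2")
```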
**Figure S2.** Quantification of bacterial biomass from 16S rRNA gene copies, log10(gene copies/\(\upmu\)L).
**Figure S3.** CLSM image of the dry biofilm, 4 months after the 60-day corrosion test - **(a)** annealed Cu, **(b)** 29.5% CW, **(c)** 56.2% CW, showing the lines used to take the roughness profiles of the biofilm (black) and the underlying Cu (white); **(d)** average thickness of the biofilm matrix, ranging from \(\sim\)28 \(\upmu\)m to \(\sim\)45 \(\upmu\)m.
**Figure S4.** CLSM images and corresponding XRD patterns showing chemical composition before the corrosion test - **(a, b)** annealed Cu, **(c, d)** 29.5% CW and **(e, f)** 56.2% CW, showing the absence of biofilm and the presence of polycrystalline Cu with different crystallographic orientations.
**Figure S5.** EDS analysis - elemental composition of pristine Cu before exposure to the co-cultured strains G20 and MIC21, showing the Cu peaks.
**Figure S6.** EDS analysis - elemental composition of **(a)** annealed Cu, **(b)** 29.5% CW and **(c)** 56.2% CW, showing the presence of sulfur (S), oxygen (O) and carbon (C) as major constituents.
**Figure S7.** Growth curves for strain MIC21 under aerobic and anaerobic conditions and for strain G20 under anaerobic conditions.
**Figure S8.** Weight-loss measurements showed a corrosion rate for the co-culture marginally lower than that of the pure culture of strain MIC21 (0.445\(\pm\)0.005 mpy) and approximately 3-fold lower than that of the pure culture of strain G20 (1.371\(\pm\)0.098 mpy).
**Figure S9**. CLSM images showed multiple pits in annealed Cu exposed to the pure culture of _O. alaskensis_ strain G20, as compared to _Citrobacter_ sp. strain MIC21. |
2310.16885 | The Early Ultraviolet Light-Curves of Type II Supernovae and the Radii
of Their Progenitor Stars | We present a sample of 34 normal SNe II detected with the Zwicky Transient
Facility, with multi-band UV light-curves starting at $t \leq 4$ days after
explosion, as well as X-ray detections and upper limits. We characterize the
early UV-optical colors and provide prescriptions for empirical host-extinction
corrections. We show that the $t > 2\,$days UV-optical colors and the blackbody
evolution of the sample are consistent with the predictions of spherical phase
shock-cooling (SC), independently of the presence of "flash ionization"
features. We present a framework for fitting SC models which can reproduce the
parameters of a set of multi-group simulations without a significant bias up to
20% in radius and velocity. Observations of about half of the SNe II in the
sample are well-fit by models with breakout radii $<10^{14}\,$cm. The other
half are typically more luminous, with observations from day 1 onward that are
better fit by a model with a large $>10^{14}\,$cm breakout radius. However,
these fits predict an early rise during the first day that is too slow. We
suggest these large-breakout events are explosions of stars with an inflated
envelope or a confined CSM with a steep density profile, at which breakout
occurs. Using the X-ray data, we derive constraints on the extended
($\sim10^{15}$ cm) CSM density independent of spectral modeling, and find most
SNe II progenitors lose $<10^{-4} M_{\odot}\, \rm yr^{-1}$ a few years before
explosion. This provides independent evidence the CSM around many SNe II
progenitors is confined. We show that the overall observed breakout radius
distribution is skewed to higher radii due to a luminosity bias. We argue that
the $66^{+11}_{-22}\%$ of red supergiants (RSG) explode as SNe II with breakout
radii consistent with the observed distribution of field RSG, with a tail
extending to large radii, likely due to the presence of CSM. | Ido Irani, Jonathan Morag, Avishay Gal-Yam, Eli Waxman, Steve Schulze, Jesper Sollerman, K-Ryan Hinds, Daniel A. Perley, Ping Chen, Nora L. Strotjohann, Ofer Yaron, Erez A. Zimmerman, Rachel Bruch, Eran O. Ofek, Maayane T. Soumagnac, Yi Yang, Steven L. Groom, Frank J. Masci, Reed Riddle, Eric C. Bellm, David Hale | 2023-10-25T18:00:02Z | http://arxiv.org/abs/2310.16885v2 | # The Early Ultraviolet Light-Curves of Type II Supernovae and the Radii of Their Progenitor Stars
###### Abstract
Observations during the first few days of a supernova (SN) explosion are required in order to (1) accurately measure the blackbody evolution, (2) discriminate between shock cooling and circumstellar material (CSM) interaction as the primary mechanism powering the light-curve rise, and (3) constrain the progenitor radius and explosion energy. Here we present a sample of 34 normal SNe II detected with the Zwicky Transient Facility, with multi-band UV light-curves starting at \(t\leq 4\) days after explosion, as well as X-ray detections and upper limits. We characterize the early UV-optical colors and provide prescriptions for empirical host-extinction corrections. We show that the \(t>2\) days UV-optical colors and the blackbody evolution of the sample are consistent with the predictions of spherical phase shock cooling, independently of the presence of "flash ionization" features. We present a framework for fitting shock-cooling models, and validate it by fitting a set of multi-group simulations. Our fitting is capable of reproducing the simulation parameters without a significant bias up to 20% in radius and velocity. Observations of about half of the SNe II in the sample are well-fit by models with breakout radii \(<10^{14}\) cm. The other half are typically more luminous, with observations from day 1 onward that are better fit by a model with a large \(>10^{14}\) cm breakout radius. However, these fits predict an early rise during the first day that is too slow. We suggest these large-breakout events are explosions of stars with an inflated envelope or a confined CSM with a steep density profile, at which breakout occurs. Using the 4 X-ray detections and upper limits of our sample, we derive constraints on the extended (\(\sim 10^{15}\) cm) CSM density independent of spectral modeling, and find most SNe II progenitors lose \(\dot{M}<10^{-4}M_{\odot}\,\mathrm{yr}^{-1}\) a few years before explosion. This provides independent evidence that the CSM around many SNe II progenitors is confined. We show that the overall observed breakout radius distribution is skewed to higher radii due to a luminosity bias. Given this bias, we argue that \(66^{+11}_{-22}\%\) of red supergiants (RSG) explode as SNe II with breakout radii consistent with the observed distribution of field RSG, with a tail extending to large radii, likely due to the presence of CSM.
## 1 Introduction
The progenitor stars of the majority of spectroscopically regular (Gal-Yam, 2017) supernovae (SNe) II are red supergiants (RSG), as confirmed by pre-SN detections (see Smartt, 2009, 2015; Van Dyk, 2017, and references therein). While this is the case, we do not yet know whether all RSG stars explode as SNe, and the details of the latest stages of stellar evolution are not accurately known. As we cannot know ahead of time which star will explode as a SN, the only way of systematically observing the short-lived final stages of stellar evolution is through their terminal explosions as SNe. Using this approach, the properties of a progenitor star immediately prior to explosion can be connected to its observed supernova. Connecting the progenitors to the SN explosions they create has been a long-standing goal of supernova studies (Gal-Yam et al., 2007; Smartt, 2015; Modjaz et al., 2019). In the last decade, large statistical studies of SNe have become commonplace. While these can place some constraints on the progenitor properties, the progenitor radius, ejected mass and explosion energy have degenerate effects on the SN light curves (Goldberg et al., 2019; Dessart & Hillier, 2019). Acquiring independent estimates of these properties from the peak and plateau properties remains a difficult and unsolved problem.
Measuring the progenitor radius is possible by observing the earliest phase of the SN explosion. The first photons emitted from the SN explosion will be the result of shock breakout of the radiation-mediated shock from the stellar surface - the breakout pulse. The photons that were captured in the shock transition region escape on a timescale of \(\max(R_{bo}/c,\;c/(\kappa\rho_{bo}v_{bo}^{2}))\), where \(R_{bo},v_{bo},\rho_{bo}\) are the breakout radius, velocity and density, \(\kappa\) is the opacity and \(c\) is the speed of light. Typically, this allows us to constrain the progenitor radius directly from the duration of the breakout pulse (for a review on the subject, see Waxman & Katz, 2017, and references therein). The shocked material, which has been compressed and heated, is then ejected and quickly reaches a state of homologous expansion (Matzner & McKee, 1999). From the moment of shock breakout, and in the absence of interaction with pre-existing material above the photosphere, the dominant emission mechanism is the cooling of this heated envelope, which evolves according to simple analytic solutions until hydrogen recombination becomes significant.
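To make the scaling above concrete, a back-of-the-envelope evaluation of the breakout-pulse duration is sketched below; all input values are assumed, order-of-magnitude RSG numbers and do not correspond to any SN in the sample.

```python
# Breakout pulse duration: t_bo ~ max(R_bo / c, c / (kappa * rho_bo * v_bo**2)).
c = 3.0e10       # speed of light [cm/s]
kappa = 0.34     # electron-scattering opacity for H-rich material [cm^2/g]
R_bo = 5.0e13    # assumed breakout radius [cm] (~700 R_sun)
v_bo = 1.0e9     # assumed breakout shock velocity [cm/s]
rho_bo = 1.0e-9  # assumed density at the breakout shell [g/cm^3]

t_light = R_bo / c                        # light-travel time across R_bo
t_diff = c / (kappa * rho_bo * v_bo**2)   # photon escape from the breakout shell
print(f"t_bo ~ {max(t_light, t_diff) / 3600:.2f} hr")
```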
This stage, called the shock-cooling phase, typically lasts a few days for normal SNe II, and less than a day for stripped-envelope supernovae and 1987A-like SNe II. During this time, the temperature and luminosity evolution are highly sensitive to the progenitor radius and to the shock velocity, allowing these parameters to be constrained (Chevalier, 1992; Nakar & Sari, 2010; Rabinak & Waxman, 2011). Since the first generation of models, theoretical advancements have extended the applications of shock-cooling models to low-mass envelopes (Piro, 2015; Piro et al., 2021) and later times (Sapir & Waxman, 2017). Recently, Morag et al. (2022, hereafter M22) interpolated between the planar and spherical phases, extending the validity of the model of Sapir & Waxman (2017) to earlier times, and treated the suppression of flux in the UV due to line absorption (Morag et al., 2023, M23).
In the past decade, high-cadence and wide-field surveys have enabled the early-time detection and multi-band follow-up of SNe. The Palomar Transient Factory (PTF; Law et al., 2009; Kulkarni, 2013), the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018), the Zwicky Transient Facility (ZTF; Bellm et al., 2019; Graham et al., 2019), the Distance Less Than 40 Mpc survey (DLT40; Tartaglia et al., 2018), and most recently the Young Supernova Experiment (YSE; Jones et al., 2021) have been conducting 1-3 day cadence wide-field surveys and regularly detect early-phase SNe (e.g. Hachinger et al., 2009; Gal-Yam et al., 2011; Arcavi et al., 2011; Nugent et al., 2011; Gal-Yam et al., 2014; Ben-Ami et al., 2014; Khazov et al., 2016; Yaron et al., 2017; Hosseinzadeh et al., 2018; Ho et al., 2019; Soumagnac et al., 2020; Bruch et al., 2021; Gal-Yam et al., 2022; Perley et al., 2022; Terreran et al., 2022; Jacobson-Galan et al., 2022; Tinyanont et al., 2022; Hosseinzadeh et al., 2022).
Previous attempts to model the early-phase emission of SNe II yield mixed results. Many studies fit the analytical shock-cooling models of Nakar & Sari (2010) or Rabinak & Waxman (2011). These models require multi-band photometry extending to early times and into the UV, as the model parameters are highly sensitive to the temperature \(\sim\)1 day after explosion. Many works find radii that are small compared to the observed RSG distribution from the Small and Large Magellanic Clouds (SMC, LMC). For example, Gonzalez-Gaitan et al. (2015) and Gall et al. (2015) compile large optical light-curve samples, fitting \(ugriz\) and \(r\)-band photometry respectively, and adopt a constant validity domain for the models. While Rubin et al. (2016); Rubin & Gal-Yam (2017) demonstrated that adopting a fixed validity introduces a bias in the parameter inference, a fixed validity remains commonplace (e.g., Hosseinzadeh et al., 2018). Recent attempts by Soumagnac et al. (2020); Ganot et al. (2022) and Hosseinzadeh et al. (2023) find large RSG radii \(\sim 1000\,R_{\odot}\) by fitting early UV-optical light-curves, in tension with previous results, while Vallely et al. (2021) fit single-band high
cadence _Transiting Exoplanet Survey Satellite_ (_TESS_; Ricker et al., 2014) light curves and find unrealistically small RSG progenitor radii, which they calibrate to numerical simulations.
While some large samples by Valenti et al. (2016); Faran et al. (2017) fit the luminosities and temperatures of SNe II using multi-band UV-optical datasets, these did not extend to very early times. However, these studies demonstrate that the blackbody evolution is in agreement with the expectations of the shock-cooling framework of a cooling blackbody with \(T\sim t^{-0.5}\) (Faran et al., 2017).
A different approach to analytic cooling models is the use of numerical hydrodynamical simulations. Motivated by the fact that narrow features from CSM interaction are commonly observed in SNe II (Gal-Yam et al., 2014; Khazov et al., 2016; Yaron et al., 2017; Bruch et al., 2021, 2023), these models include a dense shell of CSM, ejected from the progenitor before explosion. This results in an extended, non-polytropic density profile reaching a few \(10^{14}\) cm from the progenitor star prior to explosion. Morozova et al. (2018) show that the early-time multi-band evolution of a sample of SNe II is better explained by models with dense CSM than by models without CSM. The breakout radii in this case are typically at the edge of the CSM, at large radii (\(\lesssim 3000R_{\odot}\)). Dessart et al. (2017); Dessart and Hillier (2019) fit the early (\(>\) few days) spectroscopic and photometric sequence of SNe with a grid of non-LTE simulations, and find that a small amount of CSM improves the match of the models with the early-time photometry. Forster et al. (2018) fit a sample of 26 optical SNe to a grid of hydrodynamical models and argue that the delayed rise they observe in the majority of SNe II is explained by the presence of CSM extending the rise.
In this paper we present a sample of spectroscopically regular SNe II with well-sampled UV-optical light curves. We present our sample selection strategy in § 2, and the details of our photometric and X-ray follow-up in § 3. In § 4 we analyze the color evolution (§ 4.1), and blackbody evolution (§ 4.2) of the SNe. In § 4.3 we model the light curves during the shock cooling phase. We discuss our results and their implications to the SN progenitors in § 5.
Throughout the paper we use a flat \(\Lambda\)CDM cosmological model with \(\mathrm{H_{0}}=67.4\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.315\), and \(\Omega_{\Lambda}=0.685\)(Planck Collaboration et al., 2018).
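For reference, the adopted cosmology is straightforward to instantiate with astropy; the minimal sketch below returns pure Hubble-flow distances and omits the peculiar-velocity (infall) corrections applied in § 2.2, and the example redshift is taken from Table 1.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Flat LambdaCDM with the parameters adopted in this paper.
cosmo = FlatLambdaCDM(H0=67.4 * u.km / u.s / u.Mpc, Om0=0.315)

z = 0.0401  # e.g., SN 2018cxn (Table 1)
print(cosmo.luminosity_distance(z))  # Hubble-flow luminosity distance
print(cosmo.distmod(z))              # distance modulus [mag]
```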
## 2 Sample
### Observing strategy
In Bruch et al. (2021), we described the selection process of infant SNe from the ZTF alert stream. Using a custom filter, we select transients in extragalactic fields (\(|b|>14\) deg), with a non-detection limit \(<2.5\) days before the first detection, and of a non-stellar origin. These candidates are routinely inspected manually by a team of duty astronomers in Europe and Israel during California night-time in order to reject false positives (such as stellar flares, Galactic transients, and active galactic nuclei). Management of follow-up resources and candidates was performed through the GROWTH marshal (Kasliwal et al., 2019) and Fritz/SkyPortal platforms (van der Walt et al., 2019; Coughlin et al., 2023). Promising candidates rising by at least 0.5 mag from the previous non-detection are followed up with optical spectroscopy, optical photometry (various instruments) and UV photometry using the UV-Optical Telescope (UVOT) onboard the _Neil Gehrels Swift Observatory_ (Gehrels et al., 2004; Roming et al., 2005). We also followed up publicly announced infant SNe II passing our criteria, with ZTF data during the first week. For this paper, we consider all ZTF infant SNe with UV photometry in the first 4 days after the estimated explosion and which are classified as spectroscopically regular SNe II at peak light. We consider SNe detected until Dec 31st, 2021. Classification references are listed in Table 1.
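Schematically, the selection cuts above amount to the following filter; the dictionary keys are hypothetical stand-ins and do not reflect the actual ZTF alert schema or the production filter code.

```python
def passes_infant_cuts(cand):
    """Sketch of the infant-SN selection described in the text."""
    extragalactic = abs(cand["gal_b_deg"]) > 14.0          # |b| > 14 deg
    recent_limit = (cand["jd_first_det"]
                    - cand["jd_last_nondet"]) < 2.5        # days
    non_stellar = not cand["stellar"]                      # reject stellar origin
    rising = (cand["lim_mag_last_nondet"]
              - cand["mag_first_det"]) >= 0.5              # >= 0.5 mag above limit
    return extragalactic and recent_limit and non_stellar and rising
```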
### Distance
We adopt Hubble-flow distances from the NASA/IPAC Extragalactic Database (NED)1, using their online calculator to correct the redshift-based distances for Virgo, Great Attractor, and Shapley supercluster infall (based on the work of Mould et al., 2000). The top panel of Fig. 1 shows the distribution of distances in our sample compared to that of a magnitude-limited and spectroscopically complete sample from the ZTF Bright Transient Survey (Fremling et al., 2020; Perley et al., 2020).2
Footnote 1: [https://ned.ipac.caltech.edu/](https://ned.ipac.caltech.edu/)
Footnote 2: [http://sites.astro.caltech.edu/ztf/rcf/explorer.php](http://sites.astro.caltech.edu/ztf/rcf/explorer.php)
### Extinction
We correct for foreground Galactic reddening using the Schlafly and Finkbeiner (2011) recalibration of the Schlegel et al. (1998) extinction maps, and assuming a Cardelli et al. (1989) Milky Way extinction law with \(R_{V}=3.1\). These corrections are applied to all photometry data appearing in this paper. We do not correct the
photometry for host-galaxy extinction, and treat this effect separately in § 4.3.
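As a sketch of the foreground correction, one can evaluate a Cardelli et al. (1989) law with the `extinction` package; the E(B-V) value, magnitudes, and effective wavelengths below are illustrative assumptions (the paper queries the recalibrated SFD maps at each SN position).

```python
import numpy as np
import extinction  # CCM89 and other dust laws

# Milky Way extinction with a Cardelli et al. (1989) law and R_V = 3.1.
ebv, r_v = 0.03, 3.1          # assumed example E(B-V)
a_v = r_v * ebv

# Approximate effective wavelengths [Angstrom] for UVW2, ZTF g, ZTF r.
waves = np.array([1928.0, 4780.0, 6400.0])
a_lambda = extinction.ccm89(waves, a_v, r_v)  # A(lambda) in magnitudes

observed = np.array([19.5, 18.2, 18.0])       # placeholder magnitudes
print(dict(zip(["UVW2", "g", "r"], np.round(observed - a_lambda, 3))))
```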
### Time of zero flux
We acquire an initial estimate of the time of zero flux \(t_{0}\) using a power-law extrapolation of the forced-photometry flux to 0. Using both \(g\)-band and \(r\)-band data, we fit a function \(f_{\lambda}=f_{0}(t-t_{0})^{n}\) with a slope of \(0<n<5\), and allow values of \(t_{0}\) between the last non-detection and the first detection of the SN. We then estimate the error on \(t_{0}\) as the scatter in \(t_{0,best}\) over all allowed values of \(n\), and choose to use the band with the best constraint on \(t_{0}\). In Fig. 2, we show the distribution of first detection times relative to the estimated time of zero flux in both UV and optical bands. We find that a large fraction of the SNe have \(t_{0}\) close to their first detections. Most of these are SNe where the first detections in the forced-photometry light curve are recovered from a non-detection in the alert photometry - resulting in a sharp rise. As the SN time of zero flux should not correlate with the time of first detection, we expect a uniform distribution in \(t_{0}\). \(t_{first}-t_{0}\) should then be a rising and falling distribution, similar to that observed in \(t_{UV,first}-t_{0}\). The fact that our results deviate from
\begin{table}
\begin{tabular}{r r r r r r r r r r} \hline \hline \multicolumn{1}{c}{ SN} & \multicolumn{1}{c}{ZTF ID} & \multicolumn{1}{c}{\(\alpha\) (J2000)} & \multicolumn{1}{c}{\(\delta\) (J2000)} & \multicolumn{1}{c}{z} & \multicolumn{1}{c}{\(d\)\({}^{n}\) [Mpc]} & \multicolumn{1}{c}{\(t_{ND}\) [JD]} & \multicolumn{1}{c}{\(t_{exp}\) [JD]} & \multicolumn{1}{c}{\(\tau_{flux}^{b}\) [days]} & \multicolumn{1}{c}{Reference\({}^{c}\)} \\ \hline SN 2018cxn & ZTF18abckutn & 237.026897 & 55.714855 & 0.0401 & 186.6 & 2458289.7490 & 2458289.76 \(\pm\) 0.01 & \(<0.0\) & [1] \\ SN 2018dfc & ZTF18abezimd & 252.032360 & 24.304095 & 0.0365 & 170.0 & 2458302.7103 & 2458303.8 \(\pm\) 0.009 & \(6.2\pm 2.8\) & [1] \\ SN 2018ff & ZTF18abokyrk & 2.360629 & 47.354083 & 0.0172 & 76.5 & 2458349.8973 & 2458350.874 \(\pm\) 0.002 & \(1.6\pm 1.0\) & [1.2] \\ SN 2019eoh & ZTF19aatqim & 195.955635 & 38.289155 & 0.0501 & 229.6 & 2458601.7817 & 2458606.683 \(\pm\) 0.03 & [3] \\ SN 2019gnm & ZTF19aawgdxn & 247.763189 & 41.153961 & 0.0307 & 141.3 & 2458633.8250 & 245863.342 \(\pm\) 0.444 & \(<1.4\) & [3] \\ SN 2019wvm & ZTF19abbb & 261.411100 & 59.446730 & 0.0181 & 86.4 & 2458731.7416 & 2458714.69 \(\pm\) 0.007 & \(1.9\pm 1.0\) & [3,4] \\ SN 2019omp & ZTF19abrvij & 260.142987 & 51.632780 & 0.0450 & 206.9 & 2458717.7910 & 245871.713 \(\pm\) 0.0 & \(<0.0\) & [3] \\ SN 2019oxn & ZTF19abuepg & 267.803290 & 51.382550 & 0.0200 & 90.3 & 2458723.7895 & 2458724.342 \(\pm\) 0.129 & \(<0.4\) & [3] \\ SN 2019orf & ZTF19abulfra & 279.817010 & 54.287872 & 0.0480 & 221.2 & 2458723.7900 & 2458724.728 \(\pm\) 0.005 & \(<0.0\) & [3] \\ SN 2019ust & ZTF19acynytj & 13.593936 & 31.670182 & 0.0220 & 96.0 & 2458799.8053 & 2458800.004 \(\pm\) 0.177 & \(5.0\pm 0.5\) & [3] \\ SN 2019wax & ZTF19aczldp & 37.782236 & 4.311291 & 0.0275 & 12.49 & 2458833.7282 & 245883.506 \(\pm\) 0.092 & \(<1.1\) & [11] \\ SN 2020cxd & ZTF20aacphay & 261.621953 & 0.194063 & 0.0039 & 23.7 & 2458869.0296 & 2458889.617 \(\pm\) 0.671 \(\leq 2.4\) & [5,6] \\ SN 2020dyu & ZTF20asfhia & 184.913047 & 33.040393 & 0.0500 & 230.7 & 2458911.9254 & 2458912.814 \(\pm\) 0.021 & \(<0.0\) & [3] \\ SN 2020fay & ZTF20aatzhil & 189.138576 & 11.231654 & 0.0075 & 15.0 & 2458936.9007 & 2458939.43 \(\pm\) 0.16 & \(<0.9\) & [7] \\ SN 2020ojf & ZTF20azwarmt & 185.460355 & 4.481697 & 0.0052 & 1.47 & 24585971.751 & 2458975.21 \(\pm\) 0.424 & \(<0.5\) & [8] \\ SN 2020ojf & ZTF20abcbcep & 246.737033 & 20.245906 & 0.0440 & 202.2 & 2458995.8154 & 2458996.701 \(\pm\) 0.018 & \(4.2\pm 1.5\) & [3] \\ SN 2020Inst & ZTF20abfcdkj & 281.793965 & 0.496802 & 0.0590 & 274.0 & 2459012.8161 & 2459013.689 \(\pm\) 0.067 & \(<0.1\) & [3] \\ SN 202onif & ZTF20abjwwth & 196.057282 & -10.351002 & 0.0104 & 50.5 & 2459021.7334 & 2459023.783 \(\pm\) 0.765 & \(<0.9\) & [11] \\ SN 2020phy & ZTF20aabjonjs & 29.783900 & 86.676205 & 0.0155 & 72.1 & 2459026.9709 & 2459033.849 \(\pm\) 0.014 & \(<0.0\) & [11] \\ SN 2020pin & ZTF20bgygy & 225.958184 & 42.114032 & 0.0169 & 83.2 & 2459045.7542 & 2459046.638 \(\pm\) 0.004 & \(5.1\pm 0.9\) & [9] \\ SN 2020pyr & ZTF20aomok & 202.498180 & 86.462724 & 0.03383 & 16.2 & 2459046.7104 & 2459048.646 \(\pm\) 0.023 & \(5.2\pm 2.5\) & [3] \\ SN 2020qvr & ZTF20aabloaco & 250.983335 & 77.879897 & 0.0500 & 230.7 & 2459065.8438 & 2459066.222 \(\pm\) 0.417 & \(<0.6\) & [11] \\ SN 202afdi & ZTF20abqwks & 224.868111 & 73.898678 & 0.0239 & 110.9 & 2459069.7995 & 2459070.277 \(\pm\) 0.341 & \(1.3\pm 0.5\) & [3] \\ SN 2020ufg & ZTF20acedig & 32.652706 & 24.673752 & 0.0500 & 230.7 & 2459116.8338 & 2459117.752 \(\pm\)
such a distribution indicates a systematic deviation from a power-law rise in flux - a model which is not physically motivated. Hosseinzadeh et al. (2023) fit the early light curve of the recently discovered Type II SN 2023ixf (Itagaki, 2023), and show that the rise is composed of two phases - a slower phase followed by a sharply rising phase. For such a light curve, extrapolating based on the sharply rising phase would result in a time of first light that is too late by several hours, and the first point on the rise would be close to the fitted \(t_{0}\). Our fit provides preliminary evidence that this is the case for the majority of SNe II.
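A sketch of this \(t_{0}\) estimate is given below: for each fixed power-law index \(n\), the flux is fit with \(f=f_{0}(t-t_{0})^{n}\), with \(t_{0}\) bounded between the last non-detection and the first detection; the scatter of the best-fit \(t_{0}\) over the \(n\)-scan serves as its uncertainty. The input file name and the two JD bounds are placeholders, not the actual pipeline inputs.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder light curve: columns of JD, flux, flux_err (hypothetical file).
t, flux, flux_err = np.loadtxt("forced_phot_r.txt", unpack=True)

def fit_t0(n, jd_nondet, jd_first):
    model = lambda tt, f0, t0: f0 * np.clip(tt - t0, 0.0, None) ** n
    p0 = [flux.max(), 0.5 * (jd_nondet + jd_first)]
    popt, _ = curve_fit(model, t, flux, p0=p0, sigma=flux_err,
                        bounds=([0.0, jd_nondet], [np.inf, jd_first]))
    return popt[1]

t0_scan = [fit_t0(n, 2458723.79, 2458724.34) for n in np.linspace(0.2, 5.0, 25)]
t0, t0_err = np.median(t0_scan), np.std(t0_scan)  # scatter over allowed n
```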
### Flash feature timescale
Bruch et al. (2021) define flash features based on the presence of the \(\lambda\)4686 He ii feature before broad H recombination features appear. The flash-feature duration \(\tau_{flash}\) is defined as the midpoint between the last spectrum showing \(\lambda\)4686 He ii emission and the subsequent epoch (Bruch et al., 2023). We adopt these definitions and the measurements of Bruch et al. (2023) throughout our paper. We extend the estimation to the SNe not included in Bruch et al. (2023) using all available spectroscopy, which will be released in a future publication.
In Table 1 we list the 34 SNe in our sample, as well as their median alert coordinates, redshifts, distance estimates, non-detection limits, estimated times of zero flux, and their flash-feature timescales, if applicable.
### RSG radiation-hydrodynamic simulations
When comparing data to semi-analytic models, which are calibrated to numerical simulations, it is unclear how the calibration scatter and theoretical uncertainties will propagate to observed fluxes. These could potentially manifest as correlated residuals when the model is compared to the data, and subsequently create biases in the fit parameters. In order to demonstrate and account for such effects in our analysis, we repeat some of the analysis we perform throughout the paper to a set of 28 multi-group radiation-hydrodynamical simulations of RSG described in detail in M23. These simulations are generated by relaxing the assumption of local thermal equilibrium (LTE) and instead solving the radiation transfer using multiple photon groups and a realistic opacity table with free-free, bound-free and bound-bound opacities at different densities, temperatures, and compositions. The simulations allow us to generate synthetic data sets with arbitrary sampling in time with any set of filters. Unless mentioned otherwise, we use the sampling, filters, and error-bars of the light curves of SN 2020uim, arbitrarily chosen from our sample as a representative SN. We do not add simulated noise, and all points are assumed to be detected regardless of luminosity unless otherwise mentioned.
## 3 Observations
### Optical photometry
ZTF photometry in the _gri_ bands was acquired using the ZTF camera (Dekany et al., 2020) mounted on the 48 inch (1.2 m) Samuel Oschin Telescope at Palomar Observatory (P48). These data were processed using the ZTF Science Data System (ZSDS; Masci et al., 2019). Light curves were obtained using the ZTF forced-photometry service3 on difference images produced using the optimal image subtraction algorithm of Zackay, Ofek and Gal-Yam (ZOGY; Zackay et al., 2016) at the position of the SN, calculated from the median ZTF alert locations listed in Table 1. We removed images that have flagged difference images (with problems in the subtraction process), bad pixels close to the SN position, a large standard deviation in the background region, or a seeing of more than 4\({}^{\prime\prime}\). We performed a baseline correction to ensure the mean of the pre-SN flux is zero. We report detections above a 3\(\sigma\) threshold, and use a 5\(\sigma\) threshold for upper limits.
Footnote 3: See ztf_forced_photometry.pdf under [https://irsa.ipac.caltech.edu/data/ZTF/docs](https://irsa.ipac.caltech.edu/data/ZTF/docs)
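For concreteness, the cuts above amount to a short filtering routine. The following is a minimal sketch, assuming a forced-photometry table with hypothetical column names (jd, flux, fluxerr, flags, seeing) rather than the actual ZSDS output schema:

```python
import numpy as np
import pandas as pd

def clean_forced_photometry(df, t_explosion, max_seeing=4.0):
    """Apply the quality cuts described in the text to a forced-photometry
    table. Column names are illustrative placeholders."""
    # Drop flagged difference images and epochs with poor seeing
    good = (df["flags"] == 0) & (df["seeing"] < max_seeing)
    df = df[good].copy()
    # Baseline correction: force the mean of the pre-SN flux to zero
    pre_sn = df["jd"] < t_explosion
    df["flux"] -= df.loc[pre_sn, "flux"].mean()
    # 3-sigma detections; 5-sigma upper limits for the remaining epochs
    snr = df["flux"] / df["fluxerr"]
    df["detected"] = snr > 3.0
    df["upper_limit"] = np.where(df["detected"], np.nan, 5.0 * df["fluxerr"])
    return df
```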
In addition to the ZTF photometry, we also used the following instruments to collect early multi-band lightcurves:
* The Optical Imager (IO:O) at the 2.0 m robotic Liverpool Telescope (LT; Steele et al., 2004) at the Observatorio del Roque de los Muchachos. We used the Sloan Digital Sky Survey (SDSS; York et al., 2000) \(u\), \(g\), \(r\), \(i\) and \(z\) filters. Reduced images were downloaded from the LT archive and processed with custom image-subtraction and analysis software (K. Hinds and K. Taggart et al., in prep.). Image stacking and alignment is performed using SWarp (Bertin, 2010) where required. Image subtraction is performed using a pre-explosion reference image in the appropriate filter from the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1; Chambers et al., 2016) or SDSS. The photometry is measured using a PSF-fitting methodology relative to Pan-STARRS1 or SDSS standards, based on techniques in Fremling et al. (2016). For SDSS fields without u-band coverage, we returned to these fields on photometric nights after the SN had faded to create deep stacked u-band reference imaging. We then calibrated these fields using IO:O standards taken on the same night at varying airmasses and used these observations to calibrate the photometry (Smith et al., 2002).
* The Rainbow Camera (Blagorodnova et al., 2018) on the Palomar 60 inch (1.5 m) telescope (P60; Cenko et al., 2006). Reductions were performed using the automatic pipeline described by Fremling et al. (2016).
In addition to the above, we use early optical light curves from the literature. These include the multi-band light curves covering the rise of SN 2021yja (Hosseinzadeh et al., 2022), and light curves from _TESS_ for SN 2020fqv (Tinyanont et al., 2022) and SN 2020nvm (Vallely et al., 2021).
### UV photometry
UV photometry was acquired for all SNe using UVOT onboard the _Neil Gehrels Swift Observatory_ (Gehrels et al., 2004; Roming et al., 2005). We reduced the images using the _Swift_ HEAsoft4 toolset. Individual exposures comprising a single epoch were summed using uvotimsum. Source counts were then extracted from the summed images using uvotsource, with a circular aperture with a radius of 5\({}^{\prime\prime}\). The background was estimated from several larger regions surrounding the host galaxy. These counts were then converted to fluxes using the photometric zero points of Breeveld et al. (2011) with the latest calibration files from September 2020, including a small-scale sensitivity correction with the latest map of reduced-sensitivity regions on the sensor from March 2022. A UV template image was acquired for all SNe and for all bands after the SN had faded, with an exposure time twice as long as that of the deepest image of the SN. These images were then summed with any archival images of the site and used to estimate the host flux at the SN site. We remove the local host-galaxy contribution by subtracting the SN-site flux from the fluxes of the individual epochs. In Fig. 3 we show the early \(g\), \(r\) and \(UVW2\) light curves of the SNe in our sample. In Fig. 4 we show a representative example of the multi-band light curves in our sample. We make the multi-band light curve figures of individual SNe available through the journal website and WISeREP. Finally, we show the full ZTF forced-photometry light curves in Fig. A1.
Footnote 4: [https://heasarc.gsfc.nasa.gov/docs/software/heasoft/](https://heasarc.gsfc.nasa.gov/docs/software/heasoft/) v. 6.26.1.
### X-ray observations
While the SNe were monitored with UVOT, _Swift_ also observed the field between 0.3 and 10 keV with its onboard X-ray telescope (XRT) in photon-counting mode (Burrows et al., 2005). We analyzed these data with the online tools provided by the UK _Swift_ team.5 These online tools use the methods of Evans et al. (2007, 2009) and the software package HEASoft v. 6.29 to generate XRT light curves and upper limits, perform PSF fitting, and provide stacked images.
Footnote 5: [https://www.swift.ac.uk/user_objects](https://www.swift.ac.uk/user_objects)
In most cases, the SNe evaded detection at all epochs. We derive upper limits by calculating the median 3\(\sigma\) count-rate limit of each observing block in the 0.3-10 keV band, determined from the local background. We
Figure 1: In the top panel, we show the distribution of distances to the SNe in our sample, compared to the distribution of BTS SNe II. We truncate the plot at 400 Mpc for clarity. In the bottom panel, we show the distribution of peak \(r\)-band magnitude compared to BTS SNe II. In both panels, we show histograms and the cumulative distributions.
stack all the data and convert the count rates to unabsorbed fluxes by assuming a power-law spectrum with a photon index of 2, taking into account the Galactic neutral hydrogen column density at the location of the SN (HI4PI Collaboration et al., 2016).
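As an illustration, converting a count-rate limit into flux and luminosity limits reduces to a multiplication by an energy-conversion factor and a distance term. The sketch below assumes an ECF appropriate for the absorbed photon-index-2 power law (e.g., derived with WebPIMMS); the numbers in the example are hypothetical:

```python
import numpy as np

MPC_TO_CM = 3.0857e24

def xrt_rate_to_luminosity(count_rate, ecf, distance_mpc):
    """Convert an XRT 0.3-10 keV count rate to an unabsorbed flux and
    luminosity. `ecf` is the counts-to-flux conversion factor
    (erg cm^-2 ct^-1) for a photon-index-2 power law absorbed by the
    Galactic column."""
    flux = count_rate * ecf                      # erg s^-1 cm^-2
    d_cm = distance_mpc * MPC_TO_CM
    luminosity = 4.0 * np.pi * d_cm**2 * flux    # erg s^-1
    return flux, luminosity

# Example: a 3-sigma limit of 1.5e-3 ct/s at 100 Mpc with ECF ~ 4e-11
print(xrt_rate_to_luminosity(1.5e-3, 4e-11, 100.0))
```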
In several cases (SN 2020jfo, SN 2020nif and SN 2020qty) we find spurious detections which are likely associated with a nearby constant source, identified by inspecting co-added X-ray images over all epochs and by comparing to archival survey data through the HILIGT server (Saxton et al., 2022). We treat the measured fluxes as upper limits on the SN flux.
For SN 2020acbm and SN 2020uim, we report \(>3\sigma\) X-ray detections from the binned exposures.6 For both SNe, the SN location is within the 90% error region of the source PSF. In the case of SN 2020pqv, we report a detection 11" from the SN, where the 90% localization region of the source is \(8\farcs 5\). We lack constraining limits on the quiescent flux at the location of all three SNe when comparing to archival ROSAT data or to the late-time XRT exposures. For SN 2021yja, we report a source \(2\farcs 6\) from the SN site from observations in the first 10 days, brighter by a factor of \(4.2\pm 1.8\) than the \(3\sigma\) upper limit derived from observations in subsequent epochs - robustly indicating that the emission is related to the SN. We report our measurements in Table 2, and show our results in Fig. 5.
Footnote 6: We note that while the detection significance \(S/\sqrt{B}>3\), where \(S\) is the source flux and \(B\) is the background level, taking into account the source flux in the error calculation results in a \(<3\sigma\) measurement error, since the measurement signal-to-noise ratio is \(S/\sqrt{B+S}\). These approximations for the signal-to-noise ratio hold in the Gaussian limit, which is approximately correct in our case.
## 4 Results
### Color evolution
Before recombination begins, and although the external layers of the SN ejecta are not in LTE, the spectrum of an SN II is expected to be well approximated by a blackbody (Baron et al., 2000; Blinnikov et al., 2000; Nakar and Sari, 2010; Rabinak and Waxman, 2011; Morag et al., 2023). However, several reasons exist to expect deviations of the spectrum from a perfect blackbody:
* Extinction can contribute significantly to deviations from blackbody. While the exact applicable extinction law has a modest effect on the optical colors, it can create major differences in the UV and UV-optical colors. Large \(R_{V}\) values will cause bluer UV-optical colors compared to an \(R_{V}=3.1\) MW extinction law. Many star-forming galaxies lack the characteristic "bump" at 220 nm, which will mostly affect the UVM2-band photometry (Calzetti et al., 2000; Salim and Narayan, 2020). For both SNe Ia and stripped-envelope SNe, sample color-curves have been used to derive a "blue edge" where the amount of extinction is assumed to be zero (Phillips et al., 1999; Stritzinger et al., 2018). This is in turn used to estimate the host-galaxy extinction in the line of sight to the SN, typically performed at phases for which the intrinsic scatter in color is minimal.
* Intrinsic deviations due to lines - particularly line blanketing in the UV, as well as broad deviations from blackbody in the continuum. M23 characterize these deviations using multi-group radiation hydrodynamical simulations, and these are included in their latest analytical model. Line blanketing in the UV is observed in the few early-time UV spectra of SNe II (Valenti et al., 2016; Vasylyev et al., 2022; Vasylyev et al., 2023; Bostroem et al., 2023; Zimmerman et al., 2023).
* CSM interaction is suggested to create bluer UV-optical colors, to be associated with a higher luminosity, and with spectral signatures indicating the presence of CSM (Ofek et al., 2010; Katz et al., 2011; Chevalier and Irwin, 2011; Hillier and Dessart, 2019). CSM interaction is typically accompanied by strong line emission (Yaron et al., 2017), possibly in the UV, which can create deviations from blackbody.
Figure 2: The times of first detection relative to the estimated time of zero flux, in UV and in optical bands. Both a histogram and a cumulative distribution are shown.
Using our well-sampled light curves, we constrain the deviations from a blackbody spectral energy distribution (SED) in our sample, and attempt to isolate their main source (i.e., intrinsic or due to extinction).
First, we consider the effect of extinction. In Fig. 6, we show the \(UVW2-r\) and \(UVW2-UVW1\) color curves for our sample. On both plots, we illustrate the effect of applying galactic extinction with \(E(B-V)\) of 0.2 and 0.4 mag with red and black arrows, respectively. In the background, dashed lines show the expected colors of blackbodies at various temperatures. The scatter in the color curves represents the variance in temperature and in extinction. A significant variance in temperature (and thus in color) is expected if these SNe are powered by shock cooling, as the temperature evolution is sensitive to the shock-breakout radius. Despite this, all SNe in our sample besides the highly extinguished SN 2020fqv (Tinyanont et al., 2022) fall within \(E(B-V)=0.2\) mag of the bluest SN in the sample.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline SN & \(t\) [day] & \(t_{\rm max}\) [day] & \(t_{\rm min}\) [day] & XRT count rate [s\({}^{-1}\)] & Flux [\(10^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\)] & Luminosity [\(10^{40}\) erg s\({}^{-1}\)] \\ \hline SN 2018cxn & 7.2 & 10.0 & 1.6 & \(<0.002\) & \(<7.1\) & \(<28.3\) \\ SN 2018dfc & 1.5 & 5.2 & 1.5 & \(<0.0011\) & \(<4.3\) & \(<14.08\) \\ SN 2018ff & 2.1 & 16.8 & 1.2 & \(<0.0015\) & \(<6.4\) & \(<4.56\) \\ SN 2019eoh & 11.6 & 20.0 & 1.4 & \(<0.0015\) & \(<5.3\) & \(<33.68\) \\ SN 2019gnh & 479.9 & 480.4 & 1.9 & \(<0.0009\) & \(<3.2\) & \(<7.41\) \\ SN 2019nvm & 7.6 & 230.2 & 0.3 & \(<0.0006\) & \(<2.3\) & \(<1.85\) \\ SN 2019omp & 2.6 & 11.2 & 1.8 & \(<0.0019\) & \(<6.9\) & \(<35.07\) \\ SN 2019oxn & 2.3 & 10.6 & 0.7 & \(<0.0018\) & \(<6.8\) & \(<6.6\) \\ SN 2019ozf & 196.6 & 391.3 & 1.9 & \(<0.0005\) & \(<1.9\) & \(<10.9\) \\ SN 2019ust & 20.5 & 325.5 & 2.1 & \(<0.0005\) & \(<2.2\) & \(<2.55\) \\ SN 2019wax & 28.7 & 692.5 & 2.1 & \(<0.0008\) & \(<2.9\) & \(<5.39\) \\ SN 2020axvm & 4.3 & 6.2 & 2.4 & \(<0.0016\) & \(<6.2\) & \(<31.65\) \\ SN 2020abue & 1.7 & 11.3 & 0.4 & \(<0.0018\) & \(<7.0\) & \(<13.45\) \\ SN 2020acbm & 5.7 & 22.8 & 0.3 & \(0.0011\pm 0.0004\) & \(4.0\pm 1.5\) & \(4.56\pm 1.71\) \\ SN 2020adci & 3.2 & 4.0 & 2.3 & \(<0.0027\) & \(<10.1\) & \(<14.05\) \\ SN 2020ccxd & 5.0 & 14.9 & 2.9 & \(<0.0028\) & \(<10.7\) & \(<0.38\) \\ SN 2020dyu & 9.6 & 476.7 & 2.3 & \(<0.0008\) & \(<2.9\) & \(<18.22\) \\ SN 2020fqv & 1.7 & 59.0 & 0.0 & \(<0.0072\) & \(<43.1\) & \(<5.79\) \\ SN 2020jfo & 1.4 & 84.5 & 0.0 & \(<0.0017\) & \(<6.3\) & \(<0.41\) \\ SN 2020fnf & 4.0 & 119.5 & 1.4 & \(<0.0004\) & \(<1.7\) & \(<8.03\) \\ SN 2020smt & 2.4 & 13.5 & 1.4 & \(<0.0013\) & \(<5.2\) & \(<46.13\) \\ SN 2020nif & 3.3 & 16.8 & 0.0 & \(<0.006\) & \(<22.9\) & \(<5.87\) \\ SN 2020nyb & 4.2 & 12.3 & 1.2 & \(<0.0012\) & \(<5.3\) & \(<3.05\) \\ SN 2020pnni & 6.9 & 103.1 & 0.6 & \(<0.0006\) & \(<2.1\) & \(<1.41\) \\ SN 2020pqv & 12.6 & 31.3 & 1.5 & \(0.0005\pm 0.0002\) & \(1.8\pm 0.9\) & \(5.2\pm 2.42\) \\ SN 2020qv & 3.7 & 484.5 & 2.6 & \(<0.0016\) & \(<6.2\) & \(<39.45\) \\ SN 2020ufc & 2.7 & 267.2 & 1.7 & \(<0.0007\) & \(<2.8\) & \(<17.74\) \\ SN 2020aim & 272.2 & 272.5 & 272.0 & \(<0.0036\) & \(<14.7\) & \(<12.14\) \\ SN 2020aim & 10.0 & 271.8 & 1.6 & \(0.0008\pm 0.0004\) & \(3.1\pm 1.5\) & \(2.6\pm 1.26\) \\ SN 2020skhs & 17.5 & 256.7 & 2.6 & \(<0.0014\) & \(<6.3\) & \(<9.06\) \\ SN 2020xva & 2.0 & 18.3 & 1.9 & \(<0.0009\) & \(<3.5\) & \(<4.92\) \\ SN 2021apg & 8.1 & 14.3 & 1.9 & \(<0.0014\) & \(<4.8\) & \(<8.53\) \\ SN 2021ibn & 129.6 & 257.3 & 1.9 & \(<0.0008\) & \(<2.8\) & \(<13.54\) \\ SN 2021ska & 2.8 & 12.1 & 1.4 & \(<0.0015\) & \(<5.4\) & \(<11.68\) \\ SN 2021jja & 4.3 & 8.0 & 2.3 & \(0.0013\pm 0.0003\) & \(4.8\pm 1.2\) & \(0.32\pm 0.08\) \\ SN 2021jja & 46.9 & 83.2 & 15.9 & \(<0.0006\) & \(<2.1\) & \(<0.14\) \\ \hline \end{tabular} \({}^{a}\)All times are reported in rest-frame days
\({}^{b}\)We report \(3\sigma\) upper limits, or measurements with a significance of \(3\sigma\) above the background level.
\({}^{c}\)Fluxes are corrected for galactic neutral hydrogen column density, and converted from count-rates assuming a power-law spectrum with a photon index of 2.
\({}^{d}\)For SN 2020jfo, SN 2020nif and SN 2020qty we report quiescent host-galaxy detections as upper limits on the SN flux.
\end{table}
Table 2: XRT photometry for SNe included in this study
Figure 3: UVW2 (magenta stars), \(g\) and \(r\) (green and red stars) light curves for all of the objects in our sample. The latest upper limits before discovery are marked with a downward-facing triangle. We note that some points which are marked as limits in the alert photometry became detections using forced photometry.
We consider this value an upper limit on the reddening affecting these SNe.7
Footnote 7: Our sample does not include other extinguished SNe since we require a blue color to trigger UVOT. In the case of SN 2020fqv, UVOT was triggered by another group, and thus the SN has early UV coverage and is included in this study.
In Fig. 7, we show the \(M_{n}-M_{r}\) color distributions in our sample at \(t=2\) and \(t=4\) days (panels (a) and (b), respectively), where \(n\in\{UVW2,UVM2,UVW1,U,g,i\}\). For each band, the transparent data points show the interpolated color, the solid diamonds and black dashed lines show the average color, and the error bars and gray shaded regions show the standard deviation of the color. An extinction corresponding to a galactic extinction curve with \(E(B-V)=0.2\) mag applied to the bluest color (transparent points with highest \(M_{n}-M_{r}\)) is indicated by the gray transparent data points. For UV colors, which are most sensitive to extinction, this mild amount of extinction is sufficient to account for the full dispersion of colors in all SNe besides SN 2020fqv. Assuming that SN 2020fqv is well represented by our sample in its intrinsic SED, we use the average colors to calculate its extinction curve. In each curve, we determine \(E(n-r)\) from the color difference at \(t=2\) days, and fit a Cardelli et al. (1989) extinction curve with free \(R_{V}\) and \(A_{V}\). In Fig. 7 we show both the colors of SN 2020fqv (solid plus) and the best fit extinction curve applied to the average SED (red points), which match well at both times. Here and in the rest of the paper, we assign wavelengths to filters using the pivot wavelength for a flat spectrum, \(\lambda_{piv}=\sqrt{\frac{\int T(\lambda)\,\lambda\,d\lambda}{\int T(\lambda)\,d\lambda/\lambda}}\), where \(T\) is the filter transmission curve, downloaded from the Spanish Virtual Observatory (SVO; Rodrigo et al., 2012; Rodrigo and Solano, 2020).
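In code, the pivot wavelength reduces to two quadratures over the transmission curve. A minimal sketch, with an illustrative top-hat filter standing in for a real SVO transmission curve:

```python
import numpy as np

def pivot_wavelength(wl, T):
    """Pivot wavelength of a filter with transmission T(wl):
    lambda_piv = sqrt( int(T * wl dwl) / int(T / wl dwl) )."""
    num = np.trapz(T * wl, wl)
    den = np.trapz(T / wl, wl)
    return np.sqrt(num / den)

# Example with a hypothetical top-hat filter between 4000 and 5500 A
wl = np.linspace(3500.0, 6000.0, 500)
T = ((wl > 4000.0) & (wl < 5500.0)).astype(float)
print(pivot_wavelength(wl, T))  # ~4730 A, close to the geometric mean
```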
In Fig. 8, we show the calculated \(E(n-r)\) for SN 2020fqv along with the best fitting extinction curves. The computed posterior probability distribution in the \(A_{V}-R_{V}\) plane is shown in the inset. Using our results, we can determine extinction to \(E(B-V)=0.1\) mag on average, and with a maximum systematic uncertainty of \(E(B-V)=0.2\) mag. The case of SN 2020fqv demonstrates that for highly extinguished SNe, a tight
Figure 4: A representative example of the multi-band light curves of SN 2019nvm in the first 40 days. The individual light curves of the rest of the sample are available through the journal website.
Figure 5: The right panel shows the XRT binned detections and upper limits for the SNe in our sample. Measurements were binned over the duration of the _Swift_ observations, and the time of detections and upper limits is set to the mean photon arrival time. The left panel shows upper limits on the emission at the SN location for the 4 XRT detections from archival ROSAT survey data. We also show the XRT light curve of the nearby Type II SN 2023ixf (Zimmerman et al., 2023).
constraint can be acquired on \(R_{V}\). As UVM2 measurements for SN 2020fqv were not acquired, we cannot discriminate between extinction curves with and without the 220 nm feature. However, these could likely be distinguished if such measurements were available. For mildly extinguished SNe, one may limit the extinction using these data. In Table 5, we report the colors for \(t=1\) to \(t=5\) days. When using this method to measure the extinction, we caution against using a single epoch, as the extinction can be degenerate with a temperature difference from the SN II population.
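The extinction-curve fit described above can be sketched as follows, assuming the ccm89 implementation of the Cardelli et al. (1989) law from the Python extinction package; the band wavelengths and color excesses below are illustrative placeholders rather than our measured values:

```python
import numpy as np
from scipy.optimize import curve_fit
from extinction import ccm89  # Cardelli, Clayton & Mathis (1989) law

# Illustrative pivot wavelengths (A) of the bluer bands, with r treated separately
WAVES = np.array([2079.0, 2255.0, 2614.0, 3475.0, 4800.0, 7800.0])
WAVE_R = 6250.0

def color_excess_model(waves, a_v, r_v):
    """E(n - r) predicted by a CCM89 law with total-to-selective ratio r_v."""
    a_n = ccm89(waves, a_v, r_v)
    a_r = ccm89(np.array([WAVE_R]), a_v, r_v)[0]
    return a_n - a_r

# e_nr: color differences of the reddened SN from the sample mean
# (hypothetical numbers shown here, not the measured values)
e_nr = np.array([3.2, 3.1, 2.6, 1.7, 0.6, -0.3])
e_nr_err = np.full_like(e_nr, 0.15)
popt, pcov = curve_fit(color_excess_model, WAVES, e_nr,
                       sigma=e_nr_err, p0=[1.5, 3.1])
print("A_V = %.2f, R_V = %.2f" % tuple(popt))
```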
We next consider intrinsic deviations from blackbody. In Fig. 9, we show color-color plots of the SNe in our sample at the first UV epoch. In panel (a) we plot the \(W2-r\) and \(g-r\) colors, and in panel (b) the \(UVW2-UVM2\) and \(g-r\) colors. Data points indicate the colors of the SNe II at their first UVOT visit, where blue and red colors represent SNe with and without flash features in their early spectra, respectively. The solid black line corresponds to a blackbody with 0 extinction between \(10,000\) K and \(100,000\) K. The red dashed lines indicate the effect of extinction between \(E(B-V)=0\) and 0.4 mag with \(R_{V}=3.1\) at the same temperatures. The green dashed lines demonstrate the effect of extinction with the same \(E(B-V)\) range with different values of \(R_{V}\).
The positions that the various SNe occupy in Fig. 9 demonstrate a clear deviation from a non-extinguished blackbody (black curve). SNe with and without flash features occupy the same area in the parameter space, indicating that this deviation from blackbody is not related to the presence of optically thin CSM. Pure reddening can explain some of the deviation, but requires \(R_{V}>3.1\), a high temperature close to \(100,000\) K, and \(E(B-V)\) of up to 0.4 mag for some of the objects - more than the 0.2 mag that we infer based on the scatter in the color curves. A difference in \(R_{V}\) seems a good explanation for at least some of the deviation from blackbody. It is consistent with the colors of the various SNe in both the \(W2-r\) color, where the value of \(R_{V}\) has a large effect, and the \(W2-M2\) color, which is relatively unaffected by the value of \(R_{V}\).
In panels (c) and (d) we show the expected color-color values from the analytic shock cooling models of M23, at \(E(B-V)=0-0.4\) mag, including time-dependent deviations from blackbody. Colored points represent a subset of SNe from our sample and their evolution in their first week. The time-dependent nature of the color curves (evolving from blue to red) conclusively indicates some of the deviation is intrinsic. For many of the objects, the color evolution is similar to the expected color evolution in the shock cooling models, and a combination of mild \(E(B-V)<0.2\) mag, intrinsic
Figure 6: The color evolution of SNe II in our sample in the UVW2 – \(r\) bands (left panel) and UVW2 – UVW1 (right panel) bands. Each curve represents a single SN. The dashed lines are the colors of blackbodies at various temperatures. The arrows show the color difference due to extinction with \(\rm E(B-V)=0.2\) mag (red arrow) and with \(\rm E(B-V)=0.4\) mag (black arrow), assuming a Milky Way extinction curve with \(R_{\rm V}=3.1\). The outlier in the left plot is the highly extinguished SN 2020fqv.
deviations from blackbody, and in some cases \(R_{V}>3.1\), can fully explain all SN colors. The color evolution of SN 2020pni (blue stars) stands out in our sample. Its \(g-r\) color becomes bluer in the first few days of its evolution. Terreran et al. (2022) argue that the early light curve of this SN is powered by a shock breakout in an extended wind, rather than cooling of a shocked envelope. This non-monotonic color evolution was also observed for the nearby SN 2023ixf (Zimmerman et al., 2023; Jacobson-Galan et al., 2023; Hiramatsu et al., 2023), also suspected as a wind breakout.
To conclude, 33 of 34 SNe in our sample show \(UVW2-r\) colors that become redder with time, consistent with a cooling behaviour. Using the mean colors, the extinction of any SNe can be constrained to better than \(E(B-V)=0.2\) mag. The early UV-optical colors of SNe II indicate deviations from blackbody that are consistent with the expected deviations due to extinction and the expected intrinsic deviations from blackbody in a cooling envelope, with no additional CSM interaction required.
### Blackbody evolution
We linearly interpolate the UV-optical light curves of the sample SNe to the times of UV observations and construct an SED. Using the Scipy curve_fit package (Virtanen et al., 2020), we fit this SED to a Planck function and recover the evolution of the blackbody temperature, radius, and luminosity parameters \(T_{\rm eff}\), \(R_{\rm BB}\), and \(L_{\rm BB}\), respectively. We assume a 0.1 mag systematic error in addition to the statistical errors to account for imperfect cross-instrument calibration. In addition to the best-fit blackbody luminosity, we calculate a pseudobolometric luminosity by performing a trapezoidal integration of the interpolated SED and extrapolating it to the UV and infrared (IR) using the blackbody parameters. The fit results are reported in Table 3.
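A per-epoch fit of this kind can be sketched as follows (a minimal sketch using scipy.optimize.curve_fit in cgs units; the function names are ours, and the 0.1 mag systematic term is assumed to have been folded into flux_err beforehand):

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def bb_flux(wl_ang, T, R, d):
    """Observed f_lambda (erg s^-1 cm^-2 A^-1) of a blackbody with
    temperature T [K] and radius R [cm] at distance d [cm]."""
    lam = wl_ang * 1e-8                             # Angstrom -> cm
    B = (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))
    return np.pi * B * (R / d) ** 2 * 1e-8          # per Angstrom

def fit_blackbody(wl_ang, flux, flux_err, d):
    """Fit T_eff and R_BB to a single-epoch SED; L_BB then follows from
    the Stefan-Boltzmann law."""
    model = lambda wl, T, R: bb_flux(wl, T, R, d)
    (T, R), _ = curve_fit(model, wl_ang, flux, sigma=flux_err,
                          p0=[15000.0, 5e14], maxfev=10000)
    L = 4 * np.pi * R**2 * 5.670e-5 * T**4          # erg s^-1
    return T, R, L
```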
In Fig. 10 we show the blackbody evolution for our sample SNe, as well as the mean blackbody evolution of the population. To do so, we interpolate the temperatures, radii and luminosities with 0.5 day intervals, and take the population mean separately for SNe with and without flash ionization features as determined by Bruch et al. (2021) and Bruch et al. (2023). We estimate the error on the population mean through a bootstrap analysis (Efron and Tibshirani, 1993). We draw 34 SNe, allowing for repetitions. We then draw samples from the blackbody parameters of each SN assuming a Gaussian distribution for every fit point. We then interpolate to the same time grid and calculate the population mean at
Figure 7: The mean colors of a sample of SNe II at (a) \(t=2\) d and (b) \(t=4\) d. The solid points and gray shaded regions show the mean color and the scatter of each color. The transparent points are individual SN colors. Both the mean and individual SN colors are color-coded by wavelength. The gray points demonstrate the effect of applying an \(E(B-V)=0.2\) mag, \(R_{V}=3.1\) Cardelli et al. (1989) extinction law to the bluest colors, demonstrating that the extinction in our sample is smaller than this value. The colored plusses are the colors of the highly reddened SN 2020fqv. The red curve shows the effect of reddening the mean colors using the best fit extinction curve, which reproduces the colors of SN 2020fqv to within the error bars for all wavelengths.
every time step. The blue histogram shows the fraction of SNe in our sample with blackbody fits as a function of time.
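The bootstrap described above amounts to resampling SNe with repetition and redrawing each interpolated value within its Gaussian fit error; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_population_mean(values, errors, n_boot=1000):
    """Bootstrap the population mean of a blackbody parameter at one time
    step. `values`/`errors` are arrays holding the interpolated parameter
    and its Gaussian uncertainty for each SN."""
    n = len(values)
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)                  # resample SNe with repetition
        draw = rng.normal(values[idx], errors[idx])  # draw within the fit errors
        means[b] = draw.mean()
    return means.mean(), means.std()
```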
We find that SNe with flash features have a blackbody temperature \(6.3\%\pm 4.1\%\) cooler and a radius (or photospheric velocity) \(28\%\pm 11\%\) larger than SNe without flash features. This difference is highlighted in Fig. 11, where we show the radius and temperature distributions of SNe with and without flash features, interpolated to \(t=2\) days after explosion. At all times where a significant (\(>50\%\)) fraction of the sample has measurements, the mean blackbody properties are well described by the predictions of spherical phase shock cooling. Our results indicate that the population of SNe II is well described by a cooling blackbody following shock breakout at the edge of a shell of material with a steep density profile.
### Shock-cooling fitting
#### 4.3.1 Method and validation
As the population blackbody evolution is well described by shock cooling, we fit individual SN light curves to shock-cooling models. We do this using the model presented in M22 and M23, which interpolates between the planar phase (i.e., when \(r\approx R_{bo}\)) and the spherical phase (i.e., when \(vt\gtrsim R_{bo}\)) of shock cooling, and predicts the deviations of the SED from blackbody as a function of model parameters. The full model is described in Morag et al. (2022, 2023) and is briefly summarized in § A.1.
The model has four independent physical parameters: the progenitor radius \(R=R_{13}\,10^{13}\) cm, the shock velocity parameter \(v_{s*}=v_{s*,8.5}\,10^{8.5}\,\mathrm{cm\,s^{-1}}\), the product of the density numeric scale factor \(f_{\rho}\) and the progenitor mass \(M_{*}=M\,M_{\odot}\) (treated as a single parameter), and the envelope mass \(M_{env}=M_{env,\odot}\,M_{\odot}\). In addition to these parameters, we also fit for the extinction curve, parameterized as a Cardelli et al. (1989) law with free \(R_{V}\) and \(E(B-V)\), and for the breakout time \(t_{0}\).
As demonstrated in Rubin et al. (2016), adopting a fixed validity domain will create a bias against some large radius models. For every model realization, we calculate the validity domain, omitting the points outside this validity range from consideration. In order to properly compare between models with a different number of valid points, we adopt a likelihood function based on the \(\chi^{2}\) probability density function (PDF), as described in detail in Soumagnac et al. (2020).
Shock cooling models are expected to have residuals in temperature of order 5%-10% relative to model predictions (Rabinak and Waxman, 2011; Sapir and Waxman, 2017) when an average opacity is assumed, with additional systematics due to the presence of lines. M23 expect the residuals on the flux to be of order 20%-40%, correlated in time and wavelength. These residuals determine the appropriate covariance matrix to use in the \(\chi^{2}\) statistic. They also provide a criterion through which we can reject fits to a given data set. Indeed, when comparing the light curves of our sample of hydrodynamical simulations to the analytical model
\begin{table}
\begin{tabular}{l r r r r r r} \hline \hline \multicolumn{1}{c}{ SN} & \multicolumn{1}{c}{t [rest-frame days]} & \multicolumn{1}{c}{T\({}_{\rm eff}\) [K]} & \multicolumn{1}{c}{R\({}_{\rm BB}\) [\(10^{14}\) cm]} & \multicolumn{1}{c}{L\({}_{\rm pseudo}\) [\(10^{42}\) erg s\({}^{-1}\)]} & \multicolumn{1}{c}{L\({}_{\rm pseudo,extrap}\) [\(10^{42}\) erg s\({}^{-1}\)]} & \multicolumn{1}{c}{\(\chi^{2}/dof\)} \\ \hline SN2018cxn & 1.6 & \(23500\pm 1500\) & \(2.15\pm 0.16\) & \(3.75\pm 3.75\) & \(10.01\pm 0.7\) & 1.36 \\ SN2018cxn & 2.19 & \(20800\pm 1400\) & \(2.58\pm 0.22\) & \(3.77\pm 3.77\) & \(8.69\pm 0.54\) & 2.22 \\ SN2018cxn & 5.6 & \(12700\pm 600\) & \(5.3\pm 0.42\) & \(3.5\pm 3.5\) & \(5.08\pm 0.13\) & 2.0 \\ SN2018cxn & 9.59 & \(10500\pm 300\) & \(6.82\pm 0.34\) & \(2.83\pm 2.83\) & \(3.88\pm 0.04\) & 0.57 \\ SN2018dfc & 1.58 & \(22000\pm 800\) & \(4.31\pm 0.2\) & \(13.18\pm 13.18\) & \(31.38\pm 1.18\) & 1.0 \\ SN2018dfc & 3.19 & \(17000\pm 500\) & \(5.99\pm 0.25\) & \(12.46\pm 12.46\) & \(21.67\pm 0.37\) & 1.01 \\ SN2018dfc & 4.1 & \(15000\pm 300\) & \(6.92\pm 0.21\) & \(10.69\pm 10.69\) & \(17.05\pm 0.16\) & 0.46 \\ SN2018dfc & 5.03 & \(13300\pm 200\) & \(8.19\pm 0.29\) & \(9.88\pm 9.88\) & \(14.7\pm 0.12\) & 0.55 \\ SN2018fif & 1.22 & \(20600\pm 1500\) & \(1.68\pm 0.14\) & \(1.67\pm 1.67\) & \(3.69\pm 0.27\) & 2.66 \\ SN2018fif & 1.25 & \(20100\pm 1100\) & \(1.73\pm 0.12\) & \(1.65\pm 1.65\) & \(3.54\pm 0.18\) & 2.13 \\ SN2018fif & 2.1 & \(15600\pm 500\) & \(2.7\pm 0.15\) & \(1.85\pm 1.85\) & \(3.08\pm 0.06\) & 1.65 \\ SN2018fif & 2.65 & \(15100\pm 700\) & \(2.87\pm 0.21\) & \(1.81\pm 1.81\) & \(2.96\pm 0.07\) & 2.87 \\ SN2018fif & 4.57 & \(12000\pm 600\) & \(4.22\pm 0.3\) & \(1.98\pm 1.98\) & \(2.65\pm 0.05\) & 3.6 \\ SN2018fif & 6.15 & \(11200\pm 600\) & \(4.86\pm 0.42\) & \(2.04\pm 2.04\) & \(2.68\pm 0.05\) & 4.41 \\ SN2018fif & 6.17 & \(11100\pm 600\) & \(4.87\pm 0.42\) & \(2.04\pm 2.04\) & \(2.68\pm 0.05\) & 4.41 \\ SN2018fif & 7.31 & \(10600\pm 600\) & \(5.33\pm 0.46\) & \(2.04\pm 2.04\) & \(2.66\pm 0.04\) & 4.21 \\ SN2018fif & 8.33 & \(10000\pm 500\) & \(5.86\pm 0.49\) & \(2.0\pm 2.0\) & \(2.6\pm 0.04\) & 3.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Early-time blackbody fits of SNe included in this work (truncated)
predictions, we find that 50% of the data points have residuals of up to 0.17 mag and 95% have residuals of up to 0.45 mag. To incorporate the correlation between residuals into our analysis, we construct a likelihood function using the following steps (a schematic sketch in code follows the list):
* Given a set of light curves, we construct a set of synthetic measurements from the set of hydrodynamical simulations of M23 at the same times and photometric bands, by integrating the simulated SED with the appropriate transmission filters.
* From each simulation, we construct a set of residuals from the analytic model predicted by the physical parameters of each simulation.
* For each light-curve point, we calculate the covariance term as the mean over all simulations, taking into account only simulations which are valid at that time.
* Since the covariance matrix has too many parameters to be accurately estimated in full, we take the singular value decomposition (SVD) of the mean covariance and keep the top 3 eigenvalues.8 We then add this covariance matrix with a diagonal covariance matrix constructed from the observational errors in each data point, and add a 0.1 mag systematic error for cross-instrument calibration. Footnote 8: This choice accounts for \(>80\%\) of the variance, while preventing negative eigenvalues for any sampling used in our work.
* The likelihood of a model given the data is taken to be \(\mathcal{L}=\mathrm{PDF}(\chi^{2},\nu)\), where \(\chi^{2}=(d_{i}-m_{i})\,\mathrm{cov}_{ij}^{-1}\,(d_{j}-m_{j})\) (summed over repeated indices), \(\vec{d},\vec{m}\) are the data and the model respectively, \(\nu\) is the number of points where the model is valid, and PDF is the \(\chi^{2}\) distribution PDF.
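Schematically, the covariance construction and likelihood evaluation could look as follows (a minimal numpy/scipy sketch under the assumptions above; residual_sets stands for the per-simulation residual vectors evaluated at the light-curve points):

```python
import numpy as np
from scipy.stats import chi2

def truncated_covariance(residual_sets, n_modes=3):
    """Mean covariance of the simulation-vs-model residuals, truncated to
    its top SVD modes (which also prevents negative eigenvalues)."""
    covs = [np.outer(r, r) for r in residual_sets]  # one term per simulation
    mean_cov = np.mean(covs, axis=0)
    U, s, Vt = np.linalg.svd(mean_cov)
    s[n_modes:] = 0.0                               # keep the top modes only
    return U @ np.diag(s) @ Vt

def log_likelihood(data, model, model_cov, data_err, sys_mag=0.1):
    """chi^2-PDF likelihood combining the model covariance with the
    measurement errors and a cross-calibration systematic term."""
    cov = model_cov + np.diag(data_err**2 + sys_mag**2)
    r = data - model
    chisq = r @ np.linalg.solve(cov, r)
    nu = len(data)                                  # number of valid points
    return chi2.logpdf(chisq, df=nu)
```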
Using this likelihood, we fit the model to the photometry using the nested-sampling (Skilling, 2006) package dynesty(Higson et al., 2019; Speagle, 2020). We validate our method by testing that even in the presence of such residuals, we can still recover the true model parameters from simulated data sets. We fit all simulated data sets using this method, and compare the fit parameters with the physical parameters used in the simulations. In Fig. 12, we show an example of such a fit for a simulation generated with \(R_{13}=0.3\), \(v_{s*,8.5}=1.33,M_{env}=1\,M_{\odot}\) with \(E(B-V)=0.1\) mag extinction added. We recover \(R_{13}=0.3\pm 0.05\), \(v_{s*,8.5}=0.9\pm 0.13,M_{env}=16\pm 7.8\,M_{\odot}\) and \(E(B-V)=0.04\pm 0.03\) mag.
In Fig. 13 we show the fit and true radii \(R_{13}\) and shock velocity parameter \(v_{s*,8.5}\), compared to the parameters used in the simulations. The 90% confidence intervals for parameter recovery are 30% for \(R_{13}\), 26% for \(v_{s*,8.5}\) and better than 0.05 mag in \(E(B-V)\) over the entire parameters space of our simulations. However, we cannot recover \(M_{env}\) or \(f_{\rho}M_{tot}\) to better than an order of magnitude, and our fit results are highly sensitive to our choice of prior in those parameters, indicating they cannot be effectively constrained from shock-cooling modelling.
Our results demonstrate that even given significant residuals, one may still fit these analytic models and recover the shock velocity, progenitor radius and the amount of dust reddening with no significant biases. Our results also demonstrate that rejecting shock-cooling as the main powering mechanism of the early light curves requires residuals larger than \(\sim 0.5\) mag.
#### 4.3.2 Light-curve fits
We ran our fitting routine on all sample SNe. We used log-uniform priors for \(R_{13}\in[0.1,30]\), \(v_{s*,8.5}\in[0.1,6]\), \(f_{\rho}M\in[0.1,200]\), \(M_{env,\odot}\in[0.3,30]\). We also fit \(t_{exp}\in[t_{ND}-1,t_{first}]\) with a uniform prior, where \(t_{ND}\)
Figure 8: The best fit extinction curve we find for SN 2020fvq by correcting it to the mean colors of SNe II. In each band, the downward (upward) pointing blue (red) triangle shows the limits of the value of \(A_{\lambda}\) from the bluest (reddest) objects in the sample. The black points show the color difference from the sample. The purple points are the best fit extinction curve, applied on a flat spectrum, and integrated over the filter bandpass. The purple curve is the best fit extinction laws, and the gray transparent curves are 50 randomly drawn curves from the posterior distribution. In the inset, we show the posterior distribution of our fit, with colors indicating the 95,68,50 percentile regions.
is the last non-detection and \(t_{first}\) is the first detection (relaxing the prior on \(t_{ND}\) does not significantly impact our fits). Motivated by our analysis in § 4.1, we also fit for host-galaxy extinction by assuming a Cardelli et al. (1989) reddening law with uniform priors on \(E(B-V)\) and \(R_{V}\) in the ranges \(E(B-V)\in[0,0.25]\) mag and \(R_{V}\in[2,5]\). For SN 2020fqv, we fit with a wide prior of \(E(B-V)\in[0.25,1]\) mag, given the high host extinction we inferred from its color evolution.
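The prior setup for the sampler can be sketched as follows (a minimal sketch of a dynesty prior transform mirroring the quoted ranges; the JD window values are hypothetical placeholders, and loglike would wrap the covariance-aware likelihood sketched earlier):

```python
import numpy as np
from dynesty import NestedSampler  # nested-sampling package used in the text

# Log-uniform prior ranges quoted above; the time window is per-SN
# (the JD values here are hypothetical placeholders).
LOG_BOUNDS = [(0.1, 30.0),   # R_13
              (0.1, 6.0),    # v_s*,8.5
              (0.1, 200.0),  # f_rho * M
              (0.3, 30.0)]   # M_env [M_sun]
T_ND, T_FIRST = 2459045.75, 2459046.64

def prior_transform(u):
    """Map the unit cube u in [0,1]^7 to the model parameters."""
    x = np.empty(7)
    for i, (lo, hi) in enumerate(LOG_BOUNDS):
        x[i] = lo * (hi / lo) ** u[i]                      # log-uniform
    x[4] = 0.25 * u[4]                                     # E(B-V) [mag]
    x[5] = 2.0 + 3.0 * u[5]                                # R_V
    x[6] = (T_ND - 1.0) + (T_FIRST - (T_ND - 1.0)) * u[6]  # t_0 in [t_ND-1, t_first]
    return x

# loglike(theta) would wrap the covariance-aware likelihood above, returning
# -inf for models with no photometry inside their validity domain:
# sampler = NestedSampler(loglike, prior_transform, ndim=7)
# sampler.run_nested()
# results = sampler.results
```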
In addition to the flat priors on the parameters, we include non-rectangular priors through the model validity domain. This is done to prevent fits that exclude most data points from the validity range for parameter combinations with high \(v_{s*,8.5}\) and low \(M_{env}\). We assign 0 probability to models that have no photometry data within their validity domain. While this does not impact our results in this work, fitting models without good non-detection limits shortly before explosion, or models that are expected to have short validity times (e.g., due to small radii, or high velocity-to-envelope-mass ratios), might be affected by this demand. In Soumagnac et al. (2020), we assigned priors on the recombination time at \(0.7~\mathrm{eV}=8120~\mathrm{K}\) (\(t_{0.7\,\mathrm{eV}}\sim R_{13}^{0.56}v_{s*,8.5}^{0.16}\)) of the SN through its spectral sequence. However, the spectral sequence is not always a reliable tracer of recombination: in some
Figure 9: (a) Color-color diagram of the UVW2-\(r\) and the optical \(g-r\) color. The data points represent different SNe at their 1st UV epoch with (blue circles) and without (red squares) flash features. The solid black curve represent the colors of a blackbody with temperatures between \(100\,\mathrm{kK}\) and \(5\,\mathrm{kK}\). The green dashed line and red dot-dashed lines illustrate the effects of extinction on this curve: the red dot-dashed lines show increasing \(E(B-V)\) from 0 to 0.4 mag, applied to the cooling blackbody curve. The green dashed lines show the 100 kK colors extinguished with increasing \(E(B-V)\) from 0 to 0.4 mag, using extinction laws with different \(R_{V}\). The green arrow shows the direction of increasing \(E(B-V)\). (b) is similar to (a), but for \(UVW2-UVM2\). (c) and (d) are similar to (a) and (b), but showing the color evolution of 4 SNe before \(t<7\) days. We also show model Shock-cooling curves using the models of M23 with increasing \(E(B-V)\) using a MW extinction law with \(R_{V}=3.1\). The distance from the black line corresponds to the deviation from blackbody, which is present in all SNe studied in this work.
of the simulations of M23, we start seeing signs of hydrogen emission already at \(20,000\) K. Instead, we use priors derived from the blackbody sequence of the SN. Since there are residuals in color between the simulations and the models, and since the effect of host-galaxy extinction is only known to better than 0.2 mag, the fit temperature assuming \(E(B-V)=0\) mag might not always accurately determine the true photospheric temperature. We quantify the maximal effect of these systematics on the photospheric temperature near \(0.7~\mathrm{eV}=8120\) K. We fit all synthetic datasets (with an extinction of up to \(E(B-V)=0.2\) mag) with blackbody SEDs assuming no host extinction, and find that demanding \(T>10,700\) K is enough to determine that \(t<t_{0.7\,\mathrm{eV}}\), and \(T<5500\) K is enough to determine that \(t>t_{0.7\,\mathrm{eV}}\), for any combination of parameters, as long as \(E(B-V)\leq 0.2\) mag. These physically motivated priors on the recombination time have a significant effect on our fitting process.
Due to the peculiar temperature and luminosity evolution of SN 2020pni, which does not fit the general predictions of spherical phase shock cooling, we omit this SN from the fitting process. We will treat the modelling of this SN in detail in Zimmerman et al. (in prep.).
In Table 4 we report the parameters of our posterior sampling at the 16th, 50th and 84th percentiles. In all
Figure 11: The distribution of blackbody temperature and radius, interpolated to \(t=2\) days. Black and red points are SNe II with and without flash features, respectively. The dashed lines and shaded regions show the population mean and standard error. The red and blue colored regions show the areas occupied by simulated progenitors with radii \(<10^{14}\) cm and \(>10^{14}\) cm, respectively. These are generated by fitting a blackbody to synthetic datasets constructed from the MG simulations of M23. Of the 23 SNe with a measurement at this time, 7 are only consistent with simulations that have a breakout radius \(>10^{14}\) cm, or with a shock velocity parameter \(v_{s*}\gtrsim 6000~\mathrm{km\,s^{-1}}\).
Figure 10: The blackbody evolution of a sample of SNe II during the first 10 days. The transparent points represent individual SNe, color-coded according to the presence of flash features (black) or lack thereof (red). The blue curve indicates the fraction of the sample with blackbody fits at each time step. The solid points show the population mean, and the dashed curves show the predicted evolution according to spherical phase shock cooling. Panels (a)-(c) show the blackbody temperature, radius and luminosity, respectively. The match between the predictions of spherical phase shock-cooling models and the population blackbody evolution motivates the use of these models to fit individual SN light curves.
cases, we find good fits for the light curves at \(t>1\) day after explosion. Our fits divide into two cases: (1) For 15 SNe, we find good fits to the UV-optical SN light curves
Figure 12: Example of shock-cooling fits to a multi-band synthetic dataset, compared to the models generated from the physical simulation parameters. The solid lines are the average fits from the posterior, and the dot-dashed lines are models generated from the physical simulation parameters. The model light curves typically deviate by up to 20% (calibration uncertainty) from the simulations, and are expected to deviate by up to 40% in band-specific flux due to theoretical uncertainty. We show the model up to its upper validity time. The best fit model accurately reproduces the breakout radius and velocity and finds a similar \(E(B-V)\), but cannot reproduce the envelope mass or other model parameters.
Figure 13: Parameter recovery when fitting a sample of synthetic light curves with analytic shock-cooling models. In panels (a) and (b), we show the fit and true parameters for \(R_{13}\) and \(v_{s*,8.5}\), respectively. In panel (c), we show the recovery accuracy of \(E(B-V)\). The dashed line represents a perfect recovery, and the shaded regions represent the 68% interval over the full parameter space.
throughout the evolution. These models are characterized by a radius under \(10^{14}\) cm, and residuals better than 0.42 mag (95%) throughout the first week. (2) For the remaining 18 SNe, the early optical light curve points do not match the rise of the models - either pushing it out of the model validity domain or missing it completely by more than 1 mag. These models are exclusively characterized by a large radius (\(>10^{14}\) cm) required to account for a high luminosity, but do not show the shallow rise or double-peaked feature expected for planar phase shock cooling of such a star.9 After the first day from the estimated explosion, these fits have residuals comparable to group (1). If forced to fit a radius of \(<10^{14}\) cm, a
Figure 14: (a) An example of a fit to a SN dataset from our sample. The dot-dashed curves are the best fits in each band. The transparent curves are 50 random samples from the posterior distribution. The vertical dashed lines indicate the best fit lower validity domain (gray) and the transition from planar to spherical phase (orange). (b) An example of a fit which misses the rise (the first \(g\)-band point) for the best fit model (\(R_{13}=22.2,v_{s*,8.5}=1.7\), dot-dashed lines), but to which a reasonable lower radius fit exists (\(R_{13}=4.0,v_{s*,8.5}=3.3\), solid lines).
reasonable fit is achieved in about half of the cases. For the rest of the objects in this group, forcing a small radius results in a bad overall fit.
Since the spherical phase luminosity \(L_{\rm RW}\sim R_{13}v_{s*,8.5}^{1.91}\), these fits are characterized by a higher \(v_{s*,8.5}\), and by more host-galaxy extinction to decrease the temperature, as \(T_{\rm ph,t=1\,d}\sim R_{13}^{1/4}v_{s*,8.5}^{0.07}\). We show examples of fits of both cases in Fig. 14, and make the figures of all light curve fits available as online figures through the journal website upon publication. In Fig. 15, we show the illuminating example of SN 2020nvm, which was observed by _TESS_ throughout its rise. We show that a model accounting only for the spherical phase will artificially create a much sharper rise compared to a model which fits the peak. In this case, our best small-radius fit did not match the observed light curve well, and the large radius model (one of the largest values in our sample) misses the rise. The clear first peak expected in planar phase cooling is not observed even at early times.10 The Sapir & Waxman (2017) model fits the rise much better, although it is not physical at early times.
Footnote 10: We note some features are present in the very early light curve. These are also present in some of the simulations of M23, and could be the result of lines. This is likely not the shock breakout signal, which is expected to be very faint in this band (Sapir et al., 2013; Katz et al., 2013; Sapir & Halbertal, 2014)
In Fig. 16, we present the posterior probability for the radius of best-fit models that miss the rise, and those that match the rise. We find no statistically significant difference between SNe with and without flash features (which could perhaps be detected given a larger sample).
We summarize the different categories our objects fall into in Fig. 17. Most SNe II are cooling at early times, showing constant or reddening UV-optical colors. We refer to these as "II-C". SNe II which are heating and showing a bluer UV-optical color with time are referred to as "II-H". We further subdivide the II-C group into SNe with small fit radius ("II-C+"), which are well fit at early times, and those with large fit radius ("II-C-"), which are not well fit by shock cooling models at early times.
## 5 Discussion
### RSG radius distribution
#### 5.1.1 What can the early-time fits teach us?
In § 4.3.1, we demonstrated that with a typical set of UV-optical light curves, we can recover the breakout radius and shock velocity parameter from the simulations of M23 over a wide range of parameters. When applying our method to the SNe of our sample, we found good fits to roughly half of the SNe, with radii consistent with the observed RSG radius distribution (II-C+). The remaining SNe systematically miss the rise and are characterized either by a high \(R_{13}\) or a high \(v_{s*,8.5}\), due to the higher luminosity of this group compared to other SNe (II-C-). Since there are acceptable fits for roughly half of such SNe, and as the blackbody radius and temperatures of the majority of the sample evolve according to the predictions of spherical phase shock cooling, we cannot rule out that it is the primary powering mechanism of these SNe. Our lack of early-time UV-optical colors and of high-quality sampling in the first hours of the SN explosions prevents us from testing whether the blackbody evolution at the very early times follows the predictions of planar phase shock cooling. However, we note that when optical colors are available during these first phases, the colors are consistent with those of a hot \(>15,000\) K blackbody. With this in mind, there are several possibilities to explain the large radius fits:
1. These SNe are powered by shock cooling only, and have a small radius. The failure to fit the rise is due to correlated residuals that are not present in the simulations, and thus not modeled in the covariance matrix we used - creating a bias toward larger
Figure 15: Best fit shock-cooling models to the early time _TESS_ light curve of SN 2020nvm. The blue curve shows the best fit M23 model to the multi-band light curve, which misses the rise during the planar phase. The green curve shows the best fit for a narrow radius prior, and the red curve shows the same model as the blue curve but accounting only for the spherical phase with the model of Sapir & Waxman (2017). While the spherical phase alone can reproduce the full light curve, taking the planar phase into account results in a different early-time light curve. When including the planar phase, no good fit is found which can describe the entire light curve.
radii in some cases - or the simulations did not cover this particular combination of shock velocity and radius. This possibility is likely what happens in half of the cases, where a good fit is acquired if the fit is forced to a small radius. In other cases, the small radius fit still misses the rise or an unrealistically high \(v_{s*}\) is required.
2. These SNe have a large progenitor radius, and their early time evolution does not fit the predictions of planar phase shock cooling from a spherical RSG envelope. Recent work by Goldberg et al. (2022a,b) shows that the turbulent 3D structure of the outer regions of the envelope, or a non-spherical breakout surface could possibly extend the duration of shock breakout and affect the early stages of shock cooling up to a timescale of \(R/v\lesssim 1\) day. If this is the case for the majority of similar fits, the large radius of the progenitor star would be consistent with a shell of dense CSM or an inflated envelope at \(<3\times 10^{14}\) cm, with the breakout occurring at the edge of the shell. This interpretation is also supported by spectropolarimetric observations of SN 2021yja (Vasylyev et al. 2023b), showing a high degree of continuum polarization during the early photospheric phase
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ SN} & \(R_{13}\) & \(v_{s*,8.5}\) & \(f_{\rho}M\) & \(E(B-V)\) [mag] & \(R_{V}\) & \(t_{0}\) [JD] & \(t_{0.7}\) [days] & \(t_{tr}\) [days] \\ \hline SN2019eoh & \(7.7^{+0.9}_{-0.9}\) & \(2.0^{+0.3}_{-0.3}\) & \(0.5^{+0.8}_{-0.4}\) & \(0.0^{+0.0}_{-0.0}\) & \(4.0^{+0.9}_{-1.3}\) & \(-0.02^{+0.05}_{-0.06}\) & 26.7 & 16.3 \\ SN2020aavm & \(13.4^{+1.4}_{-4.3}\) & \(0.9^{+0.3}_{-0.3}\) & \(83.7^{+85.7}_{-0.7}\) & \(0.1^{+0.1}_{-0.1}\) & \(3.9^{+1.0}_{-1.2}\) & \(0.19^{+0.45}_{-0.50}\) & 22.2 & 47.1 \\ SN2020fqv & \(3.4^{+1.6}_{-1.7}\) & \(1.0^{+0.4}_{-0.3}\) & \(99.4^{+76.4}_{-75.7}\) & \(0.8^{+0.0}_{-0.0}\) & \(2.8^{+0.5}_{-0.5}\) & \(0.40^{+0.26}_{-0.28}\) & 13.8 & 47.4 \\ SN2019oxn & \(5.4^{+0.8}_{-0.8}\) & \(0.7^{+0.1}_{-0.1}\) & \(1.4^{+1.7}_{-1.2}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.9^{+0.9}_{-1.2}\) & \(0.01^{+0.18}_{-0.21}\) & 20.2 & 46.8 \\ SN2020ufx & \(15.2^{+3.8}_{-0.8}\) & \(2.2^{+0.5}_{-0.2}\) & \(83.0^{+81.7}_{-0.7}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.6^{+1.1}_{-1.2}\) & \(0.33^{+0.27}_{-0.27}\) & 27.7 & 4.6 \\ SN2019ozf & \(16.5^{+3.4}_{-3.4}\) & \(0.7^{+0.2}_{-0.2}\) & \(94.5^{+77.6}_{-1.0}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.7^{+1.1}_{-1.2}\) & \(0.44^{+0.33}_{-0.33}\) & 20.6 & 30.9 \\ SN2018cxn & \(13.7^{+3.2}_{-3.1}\) & \(0.6^{+0.1}_{-0.1}\) & \(109.7^{+6.1}_{-0.9}\) & \(0.1^{+0.1}_{-0.1}\) & \(3.8^{+1.0}_{-1.2}\) & \(-0.00^{+0.01}_{-0.01}\) & 16.7 & 38.9 \\ SN2020ckd & \(3.5^{+1.3}_{-1.3}\) & \(0.3^{+0.2}_{-0.0}\) & \(53.0^{+74.3}_{-0.7}\) & \(0.1^{+0.1}_{-0.1}\) & \(3.7^{+1.0}_{-1.2}\) & \(0.48^{+0.53}_{-0.60}\) & 12.4 & 32.9 \\ SN2019nvm & \(17.2^{+1.7}_{-1.2}\) & \(0.7^{+0.1}_{-1.3}\) & \(111.2^{+68.4}_{-0.0}\) & \(0.1^{+0.0}_{-0.0}\) & \(3.6^{+1.1}_{-1.1}\) & \(-0.51^{+0.38}_{-0.8}\) & 25.6 & 38.2 \\ SN2020fln & \(17.5^{+2.1}_{-2.1}\) & \(1.5^{+0.3}_{-0.9}\) & \(7.0^{+0.0}_{-0.0}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.8^{+1.0}_{-1.2}\) & \(-0.2^{+0.34}_{-0.24}\) & 27.0 & 8.8 \\ SN2020jfo & \(6.3^{+1.3}_{-1.3}\) & \(0.6^{+0.1}_{-0.1}\) & \(72.3^{+76.8}_{-56.8}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.6^{+1.1}_{-1.2}\) & \(0.11^{+0.23}_{-0.27}\) & 14.3 & 28.5 \\ SN2020nyb & \(14.8^{+3.1}_{-3.1}\) & \(0.4^{+0.1}_{-0.1}\) & \(60.0^{+81.4}_{-53.1}\) & \(0.1^{+0.1}_{-0.1}\) & \(3.9^{+0.9}_{-1.1}\) & \(0.55^{+0.47}_{-0.60}\) & 21.9 & 60.6 \\ SN2019wxz & \(14.0^{+4.9}_{-4.6}\) & \(1.0^{+0.4}_{-0.3}\) & \(92.6^{+79.7}_{-15.0}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.6^{+1.1}_{-1.2}\) & \(0.26^{+0.51}_{-0.50}\) & 20.2 & 31.2 \\ SN2019gmh & \(14.9^{+6.6}_{-1.7}\) & \(0.8^{+0.2}_{-0.1}\) & \(132.4^{+68.8}_{-0.8}\) & \(0.1^{+0.0}_{-0.0}\) & \(4.1^{+0.0}_{-0.0}\) & \(-0.30^{+0.16}_{-0.16}\) & 24.5 & 10.1 \\ SN2020afdi & \(8.0^{+1.7}_{-1.6}\) & \(0.4^{+0.1}_{-0.1}\) & \(49.1^{+75.2}_{-45.2}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.8^{+1.0}_{-1.2}\) & \(-0.07^{+0.27}_{-0.28}\) & 21.8 & 48.5 \\ SN2020qqv & \(17.3^{+2.2}_{-2.2}\) & \(0.8^{+0.2}_{-0.2}\) & \(80.0^{+83.6}_{-0.6}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.7^{+1.0}_{-1.2}\) & \(-0.61^{+0.46}_{-0.50}\) & 29.5 & 13.5 \\ SN2020mast & \(14.7^{+2.2}_{-2.2}\) & \(0.9^{+0.2}_{-0.2}\) & \(90.1^{+80.7}_{-0.7}\) & \(0.1^{+0.1}_{-0.1}\) & \(3.6^{+1.1}_{-1.2}\) & \(-0.53^{+0.31}_{-0.33}\) & 24.6 & 23.6 \\ SN2020dyn & \(16.0^{+3.3}_{-1.3}\) & \(1.2^{+0.3}_{-0.0}\) & \(105.0^{+72.6}_{-1.6}\) & \(0.0^{+0.0}_{-0.0}\) & \(3.6^{+1.1}_{-1.1}\) & \(-0.54^{+0.24}_{-0.28}\) & 26.0 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Shock-cooling fit parameters of the sample SNe, reported at the 16th, 50th and 84th percentiles of the posterior sampling (truncated).
(\(t>25\) days). SN 2021yja is well fit by a large radius model during its full evolution, but misses the rise by several magnitudes. The large radius fit is also noted by Hosseinzadeh et al. (2022), who fit the spherical phase model of Sapir and Waxman (2017) and acquire very similar parameters; their fit matches the rise at early times because it lacks an accurate description of the planar phase. A similar case is demonstrated in Fig. 15.
3. These SNe are the result of a breakout from the edge of a shell of dense CSM on a several-hours timescale, and the early (few days) light curve is characterized by the subsequent cooling. The intrinsic timescale (i.e., ignoring light travel time) for shock breakout from any spherical density profile is \(\frac{\Delta R}{v_{bo}}=\frac{c}{\kappa\,\rho_{bo}\,v_{bo}^{2}}\), where \(\Delta R\) is the width of the breakout shell, and \(v_{bo},\rho_{bo}\) are the velocity and density at breakout (Waxman and Katz, 2017, and references therein); this timescale is evaluated numerically below. A shock breakout in a slowly declining and extended density profile will be characterized by a density of \(\lesssim 10^{-12}\,{\rm g\,cm^{-3}}\) and occur on a few-days timescale. This is likely what occurred during the explosions of SN 2020pni and more recently SN 2023ixf (Zimmerman et al., 2023), where a rise in temperature was observed during the first few days. In both cases, breakout occurred from a shell of dense CSM confined to \(<2\times 10^{14}\) cm. If the mass of this shell is higher, breakout will occur at the edge of the shell at densities of \(\rho_{bo}\sim 10^{-11}\,{\rm g\,cm^{-3}}\), resulting in an hours-long breakout which will power the optical rise. Since we do not include breakout in our modelling (it is assumed to occur before observations began), the early-time light curve will be missed by the fit. After breakout, the cooling should still evolve according to the predictions of spherical or planar phase shock cooling, which are insensitive to the exact shape of the density profile (Sapir et al., 2011; Rabinak and Waxman, 2011; Sapir and Waxman, 2017). The parameter inference will likely be wrong in this case, since cooling is measured relative to the peak of breakout. A delay of \(\delta t_{\rm d}=0.12\,\frac{\Delta R}{10^{13}\,{\rm cm}}\left(\frac{v}{10^{9}\,{\rm cm\,s^{-1}}}\right)^{-1}\,{\rm day}\) will result in an increase of \((1+\delta t_{\rm d})^{1.8}\) in the fit progenitor radius, but will not change the general conclusion that the radius is large enough to reach such a low \(\rho_{bo}\). This scenario is seemingly challenged by the lack of a strong association between the presence of flash ionization features and a large fit radius. However, flash features trace the CSM density profile at \(\sim 10^{15}\,{\rm cm}\) (Yaron et al., 2017) rather than the \(R\sim 10^{14}\) cm required for this effect to become significant. This scenario is consistent with the conclusions of Morozova et al. (2018), who fit a grid of hydrodynamical models of progenitors surrounded by dense CSM at \(<10^{14}\) cm, and found that they are consistent with the light curves of observed Type II SNe, with breakout occurring at the edge of the dense CSM.
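To make the two density regimes concrete, the breakout timescale above can be evaluated directly (a minimal sketch; the opacity and shock velocity are representative choices, not fit results):

```python
# Intrinsic shock-breakout timescale t_bo = dR / v = c / (kappa * rho * v^2),
# evaluated for the two density regimes discussed above.
C = 2.998e10      # speed of light [cm/s]
KAPPA = 0.34      # electron-scattering opacity [cm^2/g], representative value
V_BO = 1.0e9      # breakout shock velocity [cm/s], representative value

for rho in (1e-9, 1e-11):  # breakout density [g/cm^3]
    t_bo = C / (KAPPA * rho * V_BO**2)
    print(f"rho = {rho:.0e} g/cm^3 -> t_bo = {t_bo:.0f} s = {t_bo / 3600:.2f} hr")
```

The high-density case yields a breakout lasting roughly 90 s (shorter than the light travel time \(R/c\)), while the low-density case yields roughly 2.5 hr, consistent with the hours-long breakout invoked above.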
Similarly to the heating that defines the extended breakout of the II-H category, an optical rise while the temperature is increasing is an unambiguous marker of an increase in the bolometric luminosity, expected only during breakout itself. Observing or ruling out such heating during the first day of the explosion through high-cadence UV-optical observations thus has the potential to resolve any remaining ambiguity regarding SNe in the
Figure 16: (a) Posterior probability distribution of the breakout radius, for SNe whose fit misses the rise and SNe whose best fit does not miss the rise. (b) A scatter plot showing the correlation between the best fit radius and the \(r\)-band magnitude at \(t=2\) days. A large fit radius is strongly associated with missing the rise during the first day, and is associated with a brighter \(r\)-band light curve.
II-C- group, since all three options presented above have different predictions for the breakout pulse itself:
1. The breakout pulse occurs at densities of \(\sim 10^{-9}\,\mathrm{g\,cm^{-3}}\). The breakout duration is likely dominated by the light travel time, lasting minutes to an hour. Breakout will likely peak at tens of eV.
2. The breakout pulse occurs at densities of \(\sim 10^{-9}\,\mathrm{g\,cm^{-3}}\). The asymmetric nature of the breakout shell caused a smearing of the breakout to a timescale of a few hours. Locally, the width of the shock transition is still similar, so that breakout would still likely peak at tens of eV.
3. The breakout pulse occurs at densities of \(\lesssim 10^{-11}\,\mathrm{g\,cm^{-3}}\). The low density causes the intrinsic breakout timescale to last a few hours, dominating over the light travel time. Locally, the width of the shock transition is large, so that breakout might be peaking at \(\sim 10\) eV, and could contribute significantly to the optical during the early rise. No additional short duration pulse can be observed.
#### 5.1.2 The intrinsic progenitor radius distribution
To connect the observed parameter distribution to the intrinsic progenitor radius distribution, we account for the selection effects and biases introduced by our observation strategy and the dependence of the luminosity on the breakout radius. We calculate model light curves for the RSG sample of Davies et al. (2018). We calculate the radii from the observed effective temperatures and luminosities, and generate a set of light curves with a velocity parameter \(v_{s*,8.5}\) in the range \(0.5-1.5\), with the rest of the model parameters set to unity and assuming no host or galactic extinction along the line of sight. We test what fraction of the models is recovered by our observation strategy as a function of distance, demanding a blue color (\(g-r<0\,\mathrm{mag}\)) at \(t=1\) day, and that the object be brighter than \(19.5\,\mathrm{mag}\) at the same time, which is the typical brightness limiting our ability to classify the object as an SN II, a criterion for followup in our program. We repeat this analysis for an _ULTRASAT_ strategy - demanding an optical peak brighter than 19.5 mag for spectroscopic classification, and that
Figure 17: Schematic classification of the early light curves of SNe II. They roughly divide into 2 groups: (1) SNe with increasing temperatures at early times, which we call “II-H”, and (2) SNe with decreasing temperatures, or “II-C”. We further divide the latter into 2 groups: (a) SNe which are well fit by shock cooling models at early times, have a good early fit, and a small fit radius. We call these “II-C+”. (b) SNe which are not well fit at early times, are more luminous as a population, and have larger fit radii. We call these “II-C-”. Next to each group we denote the number of SNe in the sample which belong to it, as well as example SNe.
the light curve is brighter than the limiting magnitude of 22.5 mag at 1 d (Shvartzvald et al., 2023).
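A minimal sketch of the recovery-fraction test just described is shown below. The model grid here is a random placeholder (assumed absolute magnitudes and \(g-r\) colors at \(t=1\) day), since the actual shock-cooling model light curves are not reproduced in this snippet.

```python
import numpy as np

def dist_mod(d_mpc):
    """Distance modulus for a distance in Mpc."""
    return 5.0 * np.log10(d_mpc * 1e6 / 10.0)

# Placeholder model grid: absolute r-band magnitude and g-r color at t = 1 d
# for each model. Real values would come from the shock-cooling light curves;
# these ranges are assumed purely for illustration.
rng = np.random.default_rng(0)
M_r_1d = rng.uniform(-18.5, -16.0, size=1000)
g_minus_r_1d = rng.uniform(-0.4, 0.2, size=1000)

def recovered_fraction(d_mpc, m_lim=19.5):
    """Fraction of models passing the follow-up cuts at distance d:
    blue color (g - r < 0 mag) and m_r brighter than m_lim at t = 1 d."""
    m_r = M_r_1d + dist_mod(d_mpc)
    return float(np.mean((g_minus_r_1d < 0.0) & (m_r < m_lim)))

for d in (20, 50, 70, 100, 150):
    print(f"d = {d:4d} Mpc: recovered fraction = {recovered_fraction(d):.2f}")
```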
We find that as the distance increases above 70 Mpc, we are increasingly biased towards higher progenitor radii. In panel (a) of Fig. 18, we show the fraction of RSG explosions recovered as a function of distance with each strategy, and a histogram of the distances of our sample. In panel (b), we show the mean radius of the recovered sample as a function of distance. In panel (c), we show the posterior distribution of the SN radii above and below a distance of 70 Mpc. The radius posterior distribution of closer SNe is highly skewed towards radii below \(1000R_{\odot}\), while the distribution of SNe at larger distances is skewed to values above \(1000R_{\odot}\).
We correct the Malmquist bias following the treatment of Rubin et al. (2016). For each point in the posterior sample, we calculate a weight factor \(w_{i}=\frac{D_{i}^{-3}}{\sum_{j}D_{j}^{-3}}\)
Figure 18: (a) A histogram of the distances of SNe in this work, and the fraction of simulated SNe light curves which would be followed up in our observational study and in the _Ultraviolet Transient Astronomy Satellite_ (_ULTRASAT_) survey. We assume the radius distribution of Davies et al. (2018) for the models. (b) The mean radius of the detected SNe, demonstrating a luminosity bias at \(d>70\) Mpc. (c) The unweighted posterior probability distribution of the breakout radius, for \(d\) above or below 70 Mpc. (d) The posterior distribution of the full sample, corrected and uncorrected for the luminosity bias. The gray histogram is a distribution of RSG radii from Davies et al. (2018). We also show the cumulative distribution of the observed and corrected posterior distribution, with 68% confidence intervals. While the observed fraction of SNe with large \(>1000\,R_{\odot}\) radius is \(71^{+7}_{-4}\)%, they only account for \(34^{+23}_{-11}\)% of exploding RSGs.
where \(M_{i}+17=5\log\left(\frac{D_{i}}{10\,{\rm pc}}\right)\). We show the resulting corrected posterior distribution in Fig. 18 panel (d), along with the unweighted distribution and the distribution of RSG radii of Davies et al. (2018). The error bars are calculated by bootstrapping the posterior distribution: for every realization, we recalculate the posterior for 33 SNe randomly sampled from the list of SNe with viable fits, while allowing for repetition. We repeat this process 500 times and plot the mean and standard deviation on each bin of the histogram.
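A minimal sketch of the volume weighting and bootstrap just described, assuming synthetic stand-ins for the per-SN posterior radius samples and their maximum detection distances \(D_{i}\) (the real quantities come from the light-curve fits):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the per-SN posterior radius samples and the maximum
# distance D_i at which each posterior point would be detected (illustrative
# only; the real quantities come from the shock-cooling light-curve fits).
n_sne, n_post = 33, 200
radii = rng.lognormal(mean=np.log(900.0), sigma=0.6, size=(n_sne, n_post))
D_max = 70.0 * (radii / 900.0) ** 0.5  # assumed luminosity-radius scaling

def weighted_hist(r, d, bins):
    """Volume-corrected radius histogram with weights w_i proportional to D_i^-3."""
    w = d.ravel() ** -3
    w /= w.sum()
    h, _ = np.histogram(r.ravel(), bins=bins, weights=w)
    return h

bins = np.linspace(0.0, 3000.0, 16)
boot = []
for _ in range(500):
    idx = rng.integers(0, n_sne, n_sne)  # resample SNe, allowing repetition
    boot.append(weighted_hist(radii[idx], D_max[idx], bins))
boot = np.asarray(boot)
print("per-bin mean:", boot.mean(axis=0).round(3))
print("per-bin std: ", boot.std(axis=0).round(3))
```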
Our analysis shows that even if most (\(67^{+9}_{-5}\%\)) of the observed SNe have large (\(R>1200\,R_{\odot}\)) breakout radii, the breakout radius distribution would be consistent with the observed RSG radius distribution (\(R<1200\,R_{\odot}\)) in \(69^{+13}_{-26}\%\) of SNe II explosions. Hinds et al. (in prep.) will analyze the optical light curves of SNe II in the magnitude-limited BTS survey, and reach a similar conclusion. We further note that for SNe with a CSM breakout such as SN 2020pni or SN 2023ixf, a breakout radius of \(\sim 1500-3000\,R_{\odot}\) is needed to explain the breakout timescale and would be consistent with the distribution we report here (Zimmerman et al., 2023). In the case of SN 2023ixf, constraints on the SN progenitor from pre-explosion data confirm a dusty shell at a similar radius (e.g., Qin et al., 2023). This supports the idea that SNe II-C- have large radii due to a shell of CSM from which shock breakout occurs.
### X-ray emission and constraints on extended CSM density
Following SN shock breakout, the accelerated ejecta will expand into the surrounding optically thin CSM, acting as a piston and creating a shock in the CSM. For typical CSM densities, this shock is expected to be collisionless, heat the gas to \(\sim 100\) keV temperatures and produce X-ray emission (Fransson et al., 1996; Katz et al., 2011; Chevalier and Irwin, 2012; Svirski et al., 2012; Ofek et al., 2014). In § 3.3, we reported the XRT detections and upper limits at the SN location, binned over the duration of the _Swift_ observations (typically \(\sim 10,000\) ks). The limits we acquire are several orders of magnitude deeper than the optical emission, reaching as deep as the few SNe II previously detected by XRT: SN 2005cs (Brown et al., 2007), SN 2006bp (Brown et al., 2007), SN 2012aw (Immler and Brown, 2012), SN 2013ej (Margutti et al., 2013), and recently SN 2023ixf (Grefenstette et al., 2023).
In Fig. 19, we show a histogram of the limits on the ratio of X-ray to UV-optical emission at the same times (transparent bars), and the 4 detections we report (vertical red lines with shaded error bars). Our limits range between \(10^{-1}-10^{-4}\) of the optical emission, and the highest detection is \(\sim 10^{-2}\). In § 4.3.2, we derived constraints on the velocity profiles of the SN ejecta through UV-optical light curve fitting. The photon-arrival-weighted time of our detections (as well as those in the literature) typically corresponds to a few days after explosion - probing the forward shock emission in the extended CSM around the progenitor star at \((0.5-2)\times 10^{15}\,{\rm cm}\). We can use these to constrain the CSM density at \(\sim 10^{15}\) cm and subsequently constrain the mass-loss of the progenitor star a few years prior to explosion.
At a time \(t\), a constant velocity shock moving through an optically thin CSM with \(v_{s,csm}\) will sweep up a mass:
\[\frac{M_{CSM}}{M_{\odot}}=2.7\times 10^{-4}\,v_{s,csm,9}t_{5d}\rho_{o,-16} \tag{1}\]
where \(v_{s,csm,9}=\frac{v_{s,csm}}{10^{9}\,{\rm cm\,s^{-1}}}\), \(t_{5d}=\frac{t}{5\,{\rm d}}\) and \(\rho_{o,-16}=\frac{\rho_{o}(r=10^{15}\,{\rm cm})}{10^{-16}\,{\rm g\,cm^{-3}}}\). To find the velocity \(v_{s,csm}\) we assume it is well approximated by the velocity of the piston (the ejected envelope) at a mass coordinate equal to the swept-up CSM mass. This is given through the profiles of Rabinak and Waxman (2011). Following their notation (their equations 3 and 4) we find:
\[\delta_{m,piston}=\frac{M_{csm}}{M_{tot}}=2.7\times 10^{-4}\frac{v_{s,csm,9}t_{5d}\rho_{o,-16}}{f_{\rho}M_{\odot}} \tag{2}\]
\[v_{s,csm}=v_{f}\left(\delta_{m}=\frac{M_{csm}}{M_{tot}}\right) \tag{3}\]
As long as the fraction \(\frac{M_{CSM}}{M_{tot}}\) is larger than the mass fraction in the breakout shell \(\delta_{m,bo}\):
\[\frac{v_{s,csm}^{(1)}}{{\rm cm}\,{\rm s}^{-1}}=1.5\times 10^{9}\left(\frac{f_{v }}{2}v_{s*,8.5}\right)^{0.9}\left(\frac{t_{5d}\rho_{o,-16}}{f_{\rho}M_{\odot}} \right)^{-0.1} \tag{4}\]
Here we took \(f_{v}=\frac{v_{f}}{v_{s}}=2\), which is typically the case for small \(\delta_{m}<0.01\) (Matzner and McKee, 1999). This is in agreement with the velocity evolution of Chevalier and Fransson (1994) for a steep post-shock ejecta density profile, as expected here (see e.g., Waxman and Katz, 2017, and references therein). If \(\frac{M_{csm}}{M_{tot}}<\delta_{m,bo}\) we can assume \(v_{f}=f_{v}v_{s,bo}\), which is the maximum velocity at which breakout occurs. In this case:
\[\frac{v_{s,csm}^{(2)}}{{\rm cm}\,{\rm s}^{-1}}=2\times 10^{9}\left(\frac{f_{v}}{2 }\right)\left(\kappa_{0.34}f_{\rho}M\right)^{0.13}\left(v_{s*,8.5}\right)^{1. 13}R_{13}^{-0.26} \tag{5}\]
so that \(v_{s,csm}=\min\left(v_{s,csm}^{(1)},v_{s,csm}^{(2)}\right)\).
The total luminosity generated by the collisionless shock is given by \(L\left(t\right)=2\pi\rho_{csm}r^{2}v_{s,csm}^{3}\).
Using the derived \(v_{s,csm}\) we find:
\[L_{X}=10^{42}\,{\rm erg\,s^{-1}}\times\] \[\begin{cases}2.1\,\rho_{o,-16}^{0.7}v_{s\ast,8.5}^{2.7}t_{5d}^{-0.3} \left(f_{\rho}M_{\odot}\right)^{0.3}&v_{s,csm}=v_{s,csm}^{(1)}\\ 0.6\,\rho_{o,-16}\left(\kappa_{0.34}f_{\rho}M_{\odot}\right)^{0.4}&\\ \times\left(v_{s\ast,8.5}\right)^{3.4}R_{13}^{-0.8}&v_{s,csm}=v_{s,csm}^{(2)} \end{cases} \tag{6}\]
Using Eq. 6, we convert our constraints on the XRT luminosity to constraints on the CSM density and mass loss. We assume a Bremsstrahlung spectrum with a temperature \(T=200\mu(\frac{v_{s,csm}}{10^{9}\,{\rm cm\,s^{-1}}})^{2}\) keV (Fransson et al., 1996; Katz et al., 2011), where \(\mu\) is the mean particle weight, assumed to be \(\mu=0.61\) for an ionized medium with a solar composition. We then correct the observed XRT luminosity to a bolometric X-ray luminosity, with correction factors ranging from 2-6 over our sample. We assume no intrinsic X-ray absorption at the SN site. To estimate the error on the values, the calculation is repeated for 100 points randomly drawn from the posterior sample of the shock-cooling light curve fits, and by randomly drawing points from a Gaussian distribution with a mean and standard deviation representing the X-ray measurements. We calculate \(\frac{\dot{M}}{M_{\odot}\,{\rm yr}^{-1}}=10^{-4}\rho_{o,-16}v_{w,50}\), where \(v_{w,50}\) is the CSM velocity in units of \(50\,{\rm km\,s^{-1}}\), assumed to be 1.
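As an illustration of how Eq. 6 can be inverted to turn an X-ray luminosity into a density and mass-loss constraint, the sketch below uses the first branch of Eq. 6 (i.e., assuming \(v_{s,csm}=v_{s,csm}^{(1)}\)) with all other parameters set to unity; the luminosities used are placeholders, not measurements from this paper.

```python
import numpy as np

def rho_from_Lx_branch1(L_x, v_s85=1.0, t_5d=1.0, frho_M=1.0):
    """Invert the first branch of Eq. 6 for the CSM density scale rho_{o,-16},
    given an X-ray luminosity L_x [erg/s]; assumes v_{s,csm} = v_{s,csm}^(1)."""
    L42 = L_x / 1e42
    return (L42 / (2.1 * v_s85**2.7 * t_5d**-0.3 * frho_M**0.3)) ** (1.0 / 0.7)

def mass_loss_rate(rho_o_16, v_w50=1.0):
    """Mdot [Msun/yr] = 1e-4 * rho_{o,-16} * v_{w,50}, as in the text."""
    return 1e-4 * rho_o_16 * v_w50

# Placeholder luminosity limits (assumed, not measurements from this paper):
for L in (1e39, 1e40, 1e41):
    rho = rho_from_Lx_branch1(L)
    print(f"L_X = {L:.0e} erg/s -> rho_o,-16 = {rho:.2e}, "
          f"Mdot = {mass_loss_rate(rho):.1e} Msun/yr")
```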
We show our constraints in Fig. 20. Here the colored points represent individual detections, the downward pointing triangles represent upper limits, and the blue plus stands for the estimate of Grefenstette et al. (2023) for the mass-loss of SN 2023ixf with a shock velocity arbitrarily chosen to be \(10^{9}\,{\rm cm\,s^{-1}}\), deduced from the absorbing hydrogen column density between subsequent observations.
There are 2 main systematics involved in our approach. (1) The emission spectrum of a shock traversing the CSM is highly uncertain, and assuming it will emit with a temperature equal to the plasma temperature is probably inaccurate. For example, Grefenstette et al. (2023) found for SN 2023ixf a temperature of \(35^{+22}_{-12}\) keV, which results in a velocity \(v=(0.54^{+0.15}_{-0.1})\times 10^{9}\,{\rm cm\,s^{-1}}\), lower by at least a factor of 2 than the observed photospheric velocity of SN 2023ixf (Zimmerman et al., 2023; Jacobson-Galan et al., 2023). Decreasing the temperature of the X-ray spectrum from \(>120\) keV to 35 keV would reduce the bolometric X-ray luminosity by a factor \(>2\) and subsequently reduce the mass-loss and density. (2) The intrinsic absorption of the CSM could affect the emission. In the case of SN 2023ixf, Grefenstette et al. (2023) report an absorption column density of \(2.6\times 10^{23}\) atoms cm\({}^{-2}\) at \(t=4\) days, and \(5\times 10^{22}\) atoms cm\({}^{-2}\) at \(t=11\) days. Using the NASA Portable, Interactive Multi-Mission Simulator11, we estimate our results would change by a factor of 2 if \(N_{H}=1\times 10^{23}\) cm\({}^{-2}\) in the XRT band. Such a value at the typical photon-weighted XRT observation time would imply a mass loss rate of \(\gtrsim 10^{-4}\,M_{\odot}\,{\rm yr}^{-1}\), indicating this will affect only a few of the SNe in our sample. Our limits are consistent with the observed mass-loss of field RSGs (de Jager et al., 1988; Marshall et al., 2004; van Loon et al., 2005), but lower than inferred through modelling of narrow "flash-ionization" spectral features, which imply mass-loss rates as high as \(10^{-2}M_{\odot}\,{\rm yr}^{-1}\) (Dessart et al., 2017; Boian and Groh, 2019), likely since these methods probe different regions of the CSM density profile. This is also the case for SN 2023ixf: comparisons of the early time spectra performed by Jacobson-Galan et al. (2023) and Bostroem et al. (2023) to the models of Dessart et al. (2017) indicate a mass-loss rate of \(10^{-3}-10^{-2}\,M_{\odot}\,{\rm yr}^{-1}\), much higher than those inferred by Grefenstette et al. (2023), probing the extended CSM. The models of Dessart et al. (2017) introduce a mass-loss rate declining continuously to \(10^{-6}\,M_{\odot}\,{\rm yr}^{-1}\) by \(r=10^{15}\) cm, reflecting a dense mass-loss region swept up by the shock in the CSM at early times. Thus they are capable of discriminating between different CSM densities at a few \(10^{14}\,{\rm cm}\).
Footnote 11: [https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl](https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl)
Since some amount of confined CSM is present in the majority of SNe II (Bruch et al., 2021), we consider the effect of such dense CSM on our analysis. We repeat the analysis, but assume that the CSM swept up by the shock at \(t<t_{X}\) has a density profile of \(10^{-14}\,{\rm g\,cm^{-3}}(\frac{r}{10^{14.5}\,{\rm cm}})^{-2}\) (\(\dot{M}=10^{-3}M_{\odot}\,{\rm yr}^{-1}\)). This weakly decreases \(v_{csm}\), and subsequently decreases \(L\). For the majority of the sample, our limits do not change by more than 50%, and change at most by a factor of 3.
Our results independently support the conclusion that by \(\sim 10^{15}\) cm, the density of the CSM has already declined to typical values observed for RSG stars, and that regions of dense mass loss are confined to the nearby environment of the progenitor star, probing the final year of its evolution.
### Observing shock-breakout and shock-cooling with ULTRASAT
_ULTRASAT_ will conduct a high cadence (5 min) UV survey with a 200 deg\({}^{2}\) field of view (FOV). It will detect tens of shock breakout signatures and hundreds of shock cooling light curves in its first 3 years (Shvartzvald et al., 2023). The high cadence light curves of _ULTRASAT_ will resolve all phases of the early SN evolution - shock
breakout, planar phase and spherical phase shock cooling. While spherical shock cooling alone provides constraints on the progenitor parameters, the planar phase, typically lasting hours, can discriminate between models more finely. Directly observing the breakout pulse can provide independent constraints on the breakout radius, and the velocity of the outermost layers of the ejecta. This can resolve the remaining ambiguity as to the reason for the systematic deviation from the expected planar phase in large radii fits. Observing the early UV-optical color of SNe will discriminate between a light curve rise driven by cooling, following a stellar edge breakout, or by heating of the ejecta, during an extended shock breakout in a shallow density profile (examples of the latter include SN 2020pni and SN 2023ixf). For SNe with light curves well matched by a stellar breakout, the velocity and mass of the breakout shell will be constrained by the breakout pulse itself (Sapir et al., 2011; Sapir et al., 2013).
In combination with X-ray followup and spectral modeling, these can be used to accurately map the CSM density profile, with each tracer probing a different segment of the density profile. While there have been some candidate shock-breakout flares in the optical (Garnavich
Figure 19: The ratio of X-ray to UV-optical emission, measured at the same times and averaged over the duration of the _Swift_ observations. Upper limits are shown as a transparent histogram, and the 4 detections we report are shown using vertical red lines with shaded error bars.
Figure 20: X-ray limits on the extended (\(\sim 10^{15}\) cm) CSM density, mass loss and CSM shock velocity. Black triangles represent upper limits, colored points are detections from this work, and the blue plus represents the X-ray constraints of SN 2023ixf from Grefenstette et al. (2023). The extended \(10^{15}\) cm mass-loss is consistent with field RSG levels.
Figure 21: Prediction for the breakout flare signal from a sample of SNe II in the optical and UV. Panel (a) shows a kernel density estimate (KDE) plot of the _ULTRASAT_ breakout duration and peak magnitude. Grey points correspond to the prediction of the best fit cooling light curve. The dashed line shows the limiting magnitude of the survey binned to varying degree. Panel (b) shows the same for the _TESS_ bandpass, although the predictions for the breakout pulse spectrum are less certain in the optical, and should be treated as lower limits. Our results show it is very difficult to rule out the existence of a breakout pulse in optical wavelengths alone.
et al., 2016; Bersten et al., 2018), some claims have been disputed (Rubin and Gal-Yam, 2017), and the sample of _TESS_ CCSNe of Vallely et al. (2021), binned to 30-min cadence, shows no detection of breakout flares. Breakout flares are expected to peak in the UV or X-ray, but the non-LTE spectral shape makes predictions in the optical highly uncertain (Sapir et al., 2013; Sapir and Halbertal, 2014). While initially the number of photons produced is not enough to reach thermal equilibrium, the planar phase temperatures are already close to the equilibrium temperature, and the exact details of this transition can change the optical light curve by orders of magnitude. The UV peak, closer to the peak frequency of the emission, is much better understood.
In order to produce a clear prediction for the _ULTRASAT_ survey based on the observed sample of SNe II, we calculate the breakout signal in the _TESS_ and in the UVOT \(UVM2\) bandpass (\(UVM2\) is chosen since it is closest to the _ULTRASAT_ bandpass). For every SN we fit in § 4.3, we use the breakout properties \(\rho_{\rm bo}\) and \(v_{s,bo}\) to calculate the luminosity and spectrum at breakout according to the models of Sapir et al. (2011); Katz et al. (2012); Sapir et al. (2013). We integrate the spectrum and compute the typical _TESS_ and _ULTRASAT_ brightness during breakout, and the duration of the expected breakout. We show the distribution of parameters in Fig. 21. Panel (a) shows a kernel density estimate (KDE) plot of the _ULTRASAT_ breakout landscape, and panel (b) shows the expected _TESS_ brightness. We highlight the predictions for SN 2020fqv and SN 2020nvm, observed by _TESS_. We stress the optical wavelength predictions are highly uncertain, and should be treated as lower limits. Our results are consistent with the entirety of the breakout flares predicted by our modelling being measured by _ULTRASAT_, and none of the flares being observed in the optical wavelengths.
Figure 22: (a) Schematic illustration of the proposed alternatives for early observations of SNe II. The curves represent possible pre-explosion density profiles of the envelope and CSM, corresponding to the mass-loss history of the progenitor in the months before explosion. Depending on the exact parameters of this profile, the breakout shell can be located in 3 possible locations. The red star represents a breakout radius at the edge of the stellar envelope. Since the density is steeply declining, the shock transition region is narrow, and the duration of breakout will be typically dominated by the light travel time. The blue star corresponds to a breakout at the edge of a dense shell of CSM. The density profile is declining steeply, and the breakout pulse duration can be set either by the light travel time or the shock crossing time, both lasting hours. The green point corresponds to the third option, occurring for a minority of cases. Here the density profile is shallow, increasing the duration of the breakout pulse to the shock crossing timescale of a few days. The \(r\lesssim 3\times 10^{14}\) cm density will determine the early light curve and spectra, and the \(r\gtrsim 3\times 10^{14}\) cm density determines the X-ray emission emerging after the first few days. (b) The breakout radius for a star with a \(500R_{\odot}\), \(1M_{\odot}\) stellar envelope surrounded by varying amounts of CSM confined to \(1.5\times 10^{14}\) cm. The conversion to mass-loss assumed \(v_{w}=50\,{\rm km\,s^{-1}}\). Increasing the mass of the shell of dense CSM moves the breakout location from the stellar envelope, to the shallow region of the dense CSM, and onward to the edge of the dense CSM, if a significant portion of the envelope was ejected.
## 6 Conclusions and Summary
* In this paper we have presented the UV-optical photometry of 34 spectroscopically regular SNe II detected in the ZTF survey and followed up by the _Swift_ telescope within 4 days of explosion. In addition to the UV-optical data, we report four XRT detections and 3 \(\sigma\) upper limits for the rest of the sample.
* In § 4.1 we analyze the color evolution of the sample. We show that besides SN 2020pni, the rest of our sample had UV-optical colors which become redder with time across the entire SED, indicating they are cooling.
* We show that the combination of UV, UV-optical and optical colors can be used as a discriminator between various degrees of intrinsic time-dependent deviations from blackbody and host-galaxy extinction with non-MW extinction laws. We show there is no preference in UV-optical color for SNe with flash features, and argue the deviations are consistent with the predictions of shock cooling models.
* Using the scatter in early time color, we argue our sample has a host extinction smaller than \(E(B-V)=0.2\) mag. Subsequently, we show we can measure the extinction of highly extinguished SNe to better than 0.2 mag. The average early time colors of the SNe in our sample are provided in Table 5.
* In § 4.2 we fit the SEDs of the SNe in our sample at the times of UVOT observations to a blackbody, and recover the evolution of their blackbody radius and temperature. We show that the evolution of these parameters is in excellent agreement with the predictions of spherical phase shock cooling, with a statistically significant difference in the average temperature and radius between objects with and without flash features. We also show at least 30% of the objects in our sample are more luminous than expected from an envelope breakout with \(R<10^{14}\) cm, indicating a larger progenitor radius or a higher shock velocity parameter relative to generic expectations.
* Motivated by the good agreement with the predictions of spherical phase shock-cooling, we present a method to fit the light curves to the latest shock-cooling models in § 4.3.1, accounting for deviations from blackbody over a large range of parameters, and interpolating between the planar and spherical phases of shock cooling. We demonstrate this method is unbiased when fitting the MG simulations of M23, although these have correlated residuals. We demonstrate that we can recover the breakout radius \(R_{*}\), the shock velocity parameter \(v_{*,8.5}\) describing the velocity profile in the outer regions of the ejecta, and the extinction. We show that we cannot recover the envelope mass \(M_{env}\), total mass \(M\), or numerical density scaling parameter \(f_{\rho}\) using our method.
* Overall we find that the early UV-optical light curves of our sample divide into 3 groups. A majority (33/34) of SNe cool at early times; we denote these as "II-C". This group is comprised of (1) SNe that are well fit throughout their evolution, with radii characteristic of the observed RSG radius distribution, and (2) SNe which are fit by larger-radius, more luminous models and which systematically miss the early (\(<\)1d) rise. We denote these as "II-C+" and "II-C-" respectively. (3) The third group is represented by a single object in our sample (SN 2020pni), which is heating in the first few days. A similar evolution has been observed for the nearby SN 2023ixf. We denote these as "II-H".
* As we have demonstrated that there is no bias in our fitting method, we argue this reflects a physical difference from an idealized breakout from a polytropic envelope. We speculate this difference could be related to the presence of CSM or an asymmetric shock breakout. We assume the inference of large radii is real, and show that while most of the sample is characterized by a large radius, this is due to a luminosity bias affecting our sample at distances \(>70\) Mpc. We show the volume-corrected probability peaks at radii similar to those of field RSGs. We conclude that while \(71^{+7}_{-4}\)% of observed SNe II are overluminous, with a large radius, the majority (\(66^{+11}_{-22}\)%) of exploding RSGs have a typical radius at explosion. Since some objects in our sample are also consistent with a smaller radius, this should be treated as a lower limit.
* Using the X-ray limits and the constraints on the velocity profile of the ejecta from the light curve fitting, we derive limits on the CSM density at \(0.5-2\times 10^{15}\) cm from the progenitor star, which constrains the mass loss of the progenitors \(\sim 3-15\) yrs before the explosion assuming a \(50\,{\rm km\,s^{-1}}\) wind. We show the limits and detections are systematically lower than the mass-loss required to explain flash ionization features, supporting the conclusion that these stars undergo
increased mass-loss in the final months before explosion. Uncertainties in the spectral shape of the X-ray emission, the amount of CSM below \(10^{14}\) cm, and absorption in the CSM will change this result by less than an order of magnitude.
* In § 5.3 we study the predictions of the fit parameter distribution for the landscape of shock breakout flares for the _ULTRASAT_ mission and high-cadence optical missions such as _TESS_. We argue the non-detections of breakout flares in the optical surveys are to be expected, and that observations with _ULTRASAT_ should indeed easily discover the breakout flares from a sample analogous to ours.
* By combining our constraints on the breakout radius and the extended CSM density, we propose a scenario that explains all three groups in our sample in a single framework. By varying the amount of CSM lost in the last year, the breakout radius, duration and temperature change. If a small amount of mass (\(\lesssim 10^{-3}\,M_{\odot}\)) is lost, breakout will occur at the stellar envelope. Its characteristic duration will be minutes to an hour and it will peak in the extreme UV. This scenario can explain most SNe II-C+. If the star loses most of its envelope (\(\gtrsim 0.1\,M_{\odot}\)), breakout will occur at the edge of the dense CSM. The characteristic breakout duration will be hours long, and can contaminate the early light curves as it will peak in the far UV. This scenario will explain most II-C-. If the SN loses \(\sim 0.01\,M_{\odot}\) during the last year, breakout will occur in the dense CSM. Such a breakout will occur over a timescale of a few days, during which a heating of the breakout region and an increase in luminosity will be observed as the breakout pulse is released, with an SED peaking in the near UV. This scenario will account for SNe II-H. This framework is schematically summarized in Fig. 22.
## 7 Data Availability
All data used in this paper will be made available via WISeREP12(Yaron and Gal-Yam, 2012). We make all figures of all light curve fits and light curve plots available as online figures through the journal website upon publication.
Footnote 12: [https://www.wiserep.org](https://www.wiserep.org)
## 8 Acknowledgements
_Software_: Astropy (Astropy Collaboration et al., 2013, 2018), IPython (Perez and Granger, 2007), Matplotlib (Hunter, 2007), Numpy (Oliphant, 2006), Scipy (Virtanen et al., 2020), extinction (Barbary, 2016), dynesty (Skilling, 2004, 2006; Feroz et al., 2009; Higson et al., 2019; Speagle, 2020), GROWTH marshal (Kasliwal et al., 2019), Fritz/SkyPortal (van der Walt et al., 2019; Coughlin et al., 2023), SWarp (Bertin, 2010)
_Facilities_: _Neil Gehrels Swift Observatory_, P48, _Swift_(UVOT, XRT), P60 (RC), Liverpool telescope (IO:O)
We thank Doron Kushnir, Barak Zackay and Boaz Katz for their insights on the analysis. We are grateful to the staff at the various observatories where data were obtained. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
Based on observations obtained with the Samuel Oschin Telescope 48-inch and the 60-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI M. J. Graham). The SED Machine at Palomar Observatory is based upon work supported by the NSF under grant 1106171. The Gordon and Betty Moore Foundation, through both the Data-Driven Investigator Program and a dedicated grant, provided critical funding for SkyPortal.
The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. Partly based on observations made with the Nordic Optical Telescope, operated at the Observatorio del Roque de los Muchachos.
This research has made use of the Spanish Virtual Observatory ([https://svo.cab.inta-csic.es](https://svo.cab.inta-csic.es)) project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00. A.G.-Y.'s research is supported by the EU via ERC grant 725161, the ISF GW excellence center, an IMOS space infrastructure grant and BSF/Transformative and GIF grants, as well as the Andre Deloro Institute for Advanced Research in Space and Optics, The Helen Kimmel Center for Planetary Science, the Schwartz/Reisman Collaborative Science Program and the Norman E Alexander Family Foundation ULTRASAT Data Center Fund, Minerva and Yeda-Sela; A.G.-Y. is the incumbent of the Arlyn Imberman Professorial Chair. E. Waxman's research is partially supported by grants from the ISF, Norman E Alexander Family M Foundation ULTRASAT Data Center, Nella and Leon Benoziyo Center for Astrophysics, Schwartz Reisman Institute for Theoretical Physics, and by the Max Planck Professorial Chair of Quantum Physics. E.O.O. is grateful for the support of grants from the Benozio center, Willner Family Leadership Institute, Ilan Gluzman (Secaucus NJ), Madame Olga Klein - Astrachan, Minerva foundation, Israel Science Foundation, BSF-NSF, Israel Ministry of Science, Yeda-Sela, and Weizmann-MIT, and the Rosa and Emilio Segre Research Award.
|
2301.11673 | Bayesian Self-Supervised Contrastive Learning | Recent years have witnessed many successful applications of contrastive
learning in diverse domains, yet its self-supervised version still presents many
exciting challenges. As the negative samples are drawn from unlabeled datasets,
a randomly selected sample may actually be a false negative to an anchor,
leading to incorrect encoder training. This paper proposes a new
self-supervised contrastive loss called the BCL loss that still uses random
samples from the unlabeled data while correcting the resulting bias with
importance weights. The key idea is to design the desired sampling distribution
for sampling hard true negative samples under the Bayesian framework. The
prominent advantage lies in that the desired sampling distribution is a
parametric structure, with a location parameter for debiasing false negative
and concentration parameter for mining hard negative, respectively. Experiments
validate the effectiveness and superiority of the BCL loss. | Bin Liu, Bang Wang, Tianrui Li | 2023-01-27T12:13:06Z | http://arxiv.org/abs/2301.11673v4 | # Bayesian Self-Supervised Contrastive Learning
###### Abstract
Recent years have witnessed many successful applications of contrastive learning in diverse domains, yet its self-supervised version still remains many exciting challenges. As the negative samples are drawn from unlabeled datasets, a randomly selected sample may be actually a false negative to an anchor, leading to incorrect encoder training. This paper proposes a new self-supervised contrastive loss called the Bcl loss that still uses random samples from the unlabeled data while correcting the resulting bias with importance weights. The key idea is to design the desired sampling distribution for sampling hard true negative samples under the Bayesian framework. The prominent advantage lies in that the desired sampling distribution is a parametric structure, with a location parameter for debiasing false negative and concentration parameter for mining hard negative, respectively. Experiments validate the effectiveness and superiority of the Bcl loss 1.
Footnote 1: [email protected]; [email protected]; School of Electronic Information and Communications, Huazhong University of Science and Technology (HUST), Wuhan, China.
## 1 Introduction and Contribution
Unsupervised learning has been extensively researched for its advantages of learning representations without human labelers manually labeling data. How to learn good representations without supervision, however, has been a long-standing problem in machine learning. Recently, _contrastive learning_ that leverages a _contrastive loss_ (Chopra et al., 2005; Hadsell et al., 2006) to train a representation encoder has been promoted as a promising solution to this problem (Oord et al., 2018; Tian et al., 2020; Liu et al., 2021; Chen et al., 2020). Remarkable successes of contrastive learning have been observed for many applications in different domains (Alce et al., 2018, 2019; Misra and Maaten, 2020; He et al., 2020). Nonetheless, its potential can be further released by designing a better contrastive loss.
We study the following _self-supervised contrastive learning_ problem (Oord et al., 2018; Chuang et al., 2020; Robinson et al., 2021; Chen et al., 2020; Liu et al., 2021): Consider an unlabeled dataset \(\mathcal{X}\) and a class label set \(\mathcal{C}\), and let \(h:\mathcal{X}\rightarrow\mathcal{C}\) be the classification function assigning a _data point_ \(x\in\mathcal{X}\) a _class label_ \(c\in\mathcal{C}\). Assume that the probability of observing a class label is uniform, \(\rho(c)=\tau^{+}\), and that \(\tau^{-}=1-\tau^{+}\) is the probability of observing any different class. For a given data point \(x\), let \(p^{+}(x^{+})=p(x^{+}|h(x)=h(x^{+}))\) denote the probability of another point \(x^{+}\) with the same label as \(x\); in such a case, \(x^{+}\) is called a _positive sample_ specific to \(x\). Likewise, let \(p^{-}(x^{-})=p(x^{-}|h(x)\neq h(x^{-}))\) denote the probability of another point \(x^{-}\) with a different label to \(x\); in such a case, \(x^{-}\) is called a _negative sample_ specific to \(x\). Let \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) denote the representation learning function (i.e., encoder) that maps a point \(x\) to an _embedding_ \(f(x)\) on a \(d\)-dimensional hypersphere.
Self-supervised contrastive learning contrasts similar pairs \((x,x^{+})\) and dissimilar pairs \((x,x^{-})\) to learn the encoder \(f\) (Wang and Isola, 2020; Wang and Liu, 2021; Chuang et al., 2020), with the objective of encouraging the representations of \((x,x^{+})\) to be closer than those of \((x,x^{-})\). In training the encoder, we randomly draw a point from the underlying data distribution \(p_{d}\) on \(\mathcal{X}\), i.e., \(x\sim p_{d}\), and its positive sample \(x^{+}\) can be easily obtained from some semantic-invariant operation on \(x\) like image masking, written as \(x^{+}\sim p^{+}\). In practice, a negative sample \(x^{-}\) is drawn from the unlabeled dataset, \(x^{-}\sim p_{d}\). However, the sample \(x^{-}\) could potentially have the same label as \(x\), i.e., it could be a _false negative_ to \(x\). In such a case, the construction of a dissimilar pair \((x,x^{-})\) would degrade the learned representations (Wang and Liu, 2021; Chuang et al., 2020). As we have no prior knowledge about the label of \(x^{-}\), we propose to include an _importance weight_ \(\omega\) to measure the credibility of a constructed pair \((x,x^{-})\) for contrastive learning.
**Contribution**: In this paper, we propose the following Bayesian self-supervised Contrastive Learning objective function, viz., the BCL loss:
\[\mathcal{L}_{\text{BCL}}=\mathbb{E}_{\begin{subarray}{c}x\sim p_{d}\\ x^{+}\sim p^{+}\\ x^{-}\sim p_{d}\end{subarray}}-\log[\frac{e^{f(x)^{T}f(x^{+})}}{e^{f(x)^{T}f(x^{+})}+\sum_{i=1}^{N}\omega_{i}\cdot e^{f(x)^{T}f(x^{-}_{i})}}],\]
Our main contributions include (i) the derivation of a parametric, structured sampling distribution conditioned on the _hard principle_ and the _true principle_, (ii) the posterior estimation of an unlabeled sample being a true negative sample, and (iii) a stochastic process depiction to simulate the predictions of neural networks.
Compared with the InfoNCE loss (Oord et al., 2018), we include the importance weight \(\omega\) in the contrastive loss, which is designed to down-weight a constructed pair \((x,x^{-})\) when \(x^{-}\) is a false negative sample or to up-weight \((x,x^{-})\) when \(x^{-}\) is a true negative sample. The key idea is to design a desired sampling distribution for hard true negative samples under the Bayesian framework. The prominent advantage lies in that we derive the parametric structure of the desired sampling distribution, with a location parameter for debiasing false negatives and a concentration parameter for mining hard negatives, respectively. A more detailed analysis of \(\mathcal{L}_{\text{BCL}}\) and the derivation of the \(\omega\) computation are given in the subsequent sections.
We summarize the computation of \(\omega\) in each training epoch as follows. In a training epoch, we randomly draw \(N\) samples \(\{x_{i}^{-}\}_{i=1}^{N}\) from the training set, which are assumed to be negative samples specific to \(x\). Let \(\hat{x}_{i}^{-}=\exp(f(x)^{T}f(x_{i}^{-}))\) denote the power exponent of similarity between \(x\) and \(x_{i}^{-}\). We compute the importance weight \(\omega_{i}\) for \(x_{i}^{-}\) by the following three steps:
Step-1: Compute \(\Phi_{N}(\hat{x}_{i}^{-})\), called the empirical distribution function of \(\hat{x}_{i}^{-}\),
\[\Phi_{N}(\hat{x}_{i}^{-})=\frac{1}{N}\sum_{j=1}^{N}\mathbb{I}(\hat{x}_{j}^{-} \leq\hat{x}_{i}^{-}), \tag{1}\]
where \(\mathbb{I}(\cdot)\) is the indicator function.
Step-2: Compute \(p(\text{Tn}|\hat{x}_{i}^{-})\), the posterior probability of \(x_{i}^{-}\) being a _true negative_ (TN) to \(x\),
\[p(\text{Tn}|\hat{x}_{i}^{-})=\frac{\alpha\tau^{-}+(1-2\alpha)\Phi_{N}(\hat{x} _{i}^{-})\tau^{-}}{\alpha\tau^{-}+(1-\alpha)\tau^{+}+(1-2\alpha)\Phi_{N}(\hat{ x}_{i}^{-})(\tau^{-}-\tau^{+})}, \tag{2}\]
where \(\alpha\) is a hyperparameter to be explained later.
Step-3: Compute \(\omega_{i}(\hat{x}_{i}^{-})\), the importance weight of \(x_{i}^{-}\) for correcting the bias between the actual sampling distribution and the desired sampling distribution:
\[\omega_{i}(\hat{x}_{i}^{-})=\frac{p(\text{Tn}|\hat{x}_{i}^{-})\cdot(\hat{x}_{i}^{-})^{\beta}}{\frac{1}{N}\sum_{j=1}^{N}p(\text{Tn}|\hat{x}_{j}^{-})\cdot(\hat{x}_{j}^{-})^{\beta}}, \tag{3}\]
where \(\beta\) is a hyperparameter to be explained later.
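The three steps above can be sketched in a few lines of NumPy; this is a minimal illustration with assumed values for \(\tau^{+}\), \(\alpha\) and \(\beta\), not the authors' released implementation.

```python
import numpy as np

def bcl_weights(x_hat_neg, tau_plus=0.1, alpha=0.9, beta=1.0):
    """Importance weights for the BCL loss, following Steps 1-3.
    x_hat_neg: array of exp(f(x)^T f(x_i^-)) for the N drawn negatives."""
    tau_minus = 1.0 - tau_plus
    # Step 1: empirical distribution function Phi_N (Eq. 1)
    phi_N = np.array([(x_hat_neg <= x).mean() for x in x_hat_neg])
    # Step 2: posterior probability of being a true negative (Eq. 2)
    num = alpha * tau_minus + (1.0 - 2.0 * alpha) * phi_N * tau_minus
    den = (alpha * tau_minus + (1.0 - alpha) * tau_plus
           + (1.0 - 2.0 * alpha) * phi_N * (tau_minus - tau_plus))
    p_tn = num / den
    # Step 3: importance weights, normalized by the empirical mean (Eq. 3)
    unnorm = p_tn * x_hat_neg**beta
    return unnorm / unnorm.mean()

# Toy usage with random embeddings on the unit hypersphere:
rng = np.random.default_rng(0)
emb = rng.normal(size=(9, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
anchor, negatives = emb[0], emb[1:]
x_hat = np.exp(negatives @ anchor)
print(bcl_weights(x_hat))
```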
## 2 Contrastive Loss and Analysis
### Contrastive Loss
In the context of _supervised contrastive learning_, dissimilar pairs \((x,x^{-})\) can be easily constructed by randomly drawing a true negative sample \(x^{-}\) specific to \(x\), i.e., \(x^{-}\sim p^{-}\), based on the sample label. The _contrastive predictive coding_ (CPC) (Oord et al., 2018) introduces the following InfoNCE loss (Gutmann and Hyvarinen, 2010, 2012):
\[\mathcal{L}_{\text{Sup}}=\mathbb{E}_{\begin{subarray}{c}x\sim p_{d}\\ x^{+}\sim p^{+}\\ x^{-}\sim p^{-}\end{subarray}}[-\log\frac{e^{f(x)^{T}f(x^{+})}}{e^{f(x)^{T}f( x^{+})}+\sum_{i=1}^{N}e^{f(x)^{T}f(x_{i}^{-})}}] \tag{4}\]
to learn an encoder \(f:\mathcal{X}\rightarrow\mathbb{R}^{d}\) that maps a data point \(x\) to a hypersphere in \(\mathbb{R}^{d}\) of radius \(1/t\), where \(t\) is the temperature scaling. As in the CPC, we also set \(t=1\) in our theoretical analysis.
In the context of _self-supervised contrastive learning_, however, as samples' labels are not available, i.e., \(p^{-}(x^{\prime})=p(x^{\prime}|h(x)\neq h(x^{\prime}))\) is not accessible, the standard approach is to draw \(N\) samples from the data distribution \(p_{d}\), which are supposed to be negative samples to \(x\), to optimize the following InfoNCE _self-supervised contrastive loss_:
\[\mathcal{L}_{\text{Biased}}=\mathbb{E}_{\begin{subarray}{c}x\sim p_{d}\\ x^{+}\sim p^{+}\\ x^{-}\sim p_{d}\end{subarray}}[-\log\frac{e^{f(x)^{T}f(x^{+})}}{e^{f(x)^{T}f(x^{+})}+\sum_{i=1}^{N}e^{f(x)^{T}f(x_{i}^{-})}}]. \tag{5}\]
Following the DCL (Chuang et al., 2020), it is also called the _biased contrastive loss_, since those supposedly negative samples \(x^{-}\) drawn from \(p_{d}\) might come from the same class as the data point \(x\) with probability \(\tau^{+}\).
### Sampling Bias Analysis
Let \(x^{-}\in\text{Tn}\) denote \(x^{-}\) being a _true negative_ (TN) sample specific to \(x\). Let \(x^{-}\in\text{Fn}\) denote \(x^{-}\) being a _false negative_ (FN) sample specific to \(x\), i.e. \(x^{-}\) and \(x\) are with the same ground truth class label. Note that whether \(x^{-}\) is a TN or FN is specific to a particular _anchor_ point \(x\), and in what follows, we omit the _specific to \(x\)_ for brevity. It has been proven that for \(\{x_{i}^{-}\in\text{Tn}\}_{i=1}^{N}\), optimizing the InfoNCE loss \(\mathcal{L}_{\text{Sup}}\) will result in the learning model estimating and optimizing the _density ratio_\(\frac{p^{+}}{p^{-}}\)(Oord et al., 2018; Poole et al., 2019). Denote \(\hat{x}^{+}=e^{f(x)^{T}f(x^{+})}\). The CPC (Oord et al., 2018) proves that minimizing \(\mathcal{L}_{\text{Sup}}\) leads to
\[\hat{x}^{+}\propto p^{+}/p^{-}. \tag{6}\]
As discussed by (Oord et al., 2018), \(p^{+}/p^{-}\) preserves the mutual information (MI) of future information and present signals, where MI maximization is a fundamental problem in science and engineering (Poole et al., 2019; Belghazi et al., 2018).
Now consider the InfoNCE loss \(\mathcal{L}_{\text{Biased}}\), which can be regarded as the categorical cross-entropy of classifying one positive sample \(x^{+}\) from unlabeled samples. For analysis purposes, we rewrite \(x^{+}\) as \(x_{0}\). Given \(N+1\) unlabeled data points, the posterior probability of one data point \(x_{0}\) being
a positive sample can be derived by
\[P(x_{0}\in\text{pos}|\{x_{i}\}_{i=0}^{N}) \tag{7}\] \[= \frac{p^{+}(x_{0})\prod_{i=1}^{N}p_{d}(x_{i})}{\sum_{j=0}^{N}p^{+}(x _{j})\prod_{i\neq j}p_{d}(x_{i})}\] \[= \frac{p^{+}(x_{0})/p_{d}(x_{0})}{p^{+}(x_{0})/p_{d}(x_{0})+\sum_{j =1}^{N}p^{+}(x_{j})/p_{d}(x_{j})}\]
To minimize \(\mathcal{L}_{\text{Biased}}\), the optimal value for this posterior probability is 1, which is achieved in the limit of \(p^{+}(x_{0})/p_{d}(x_{0})\rightarrow+\infty\) or \(p^{+}(x_{j})/p_{d}(x_{j})\to 0\). Minimizing \(\mathcal{L}_{\text{Biased}}\) leads to
\[\hat{x}^{+}\propto p^{+}/p_{d}. \tag{8}\]
Note that this is different from Eq. (6), since \(x_{i}^{-}\) may not be Tn for lack of ground truth label.
Denote \(\hat{x}^{+}=m\cdot p^{+}/p_{d},\ m\geq 0\). We investigate the gap between optimizing \(\hat{x}^{+}\) and the optimization objective \(p^{+}/p^{-}\). Inserting \(p_{d}=p^{-}\tau^{-}+p^{+}\tau^{+}\) back to Eq. (8), we obtain
\[\hat{x}^{+}=m\cdot\frac{p^{+}}{p^{-}\tau^{-}+p^{+}\tau^{+}}. \tag{9}\]
Rearranging the above equation yields
\[p^{+}/p^{-}=\frac{\hat{x}^{+}\cdot\tau^{-}}{m-\hat{x}^{+}\cdot\tau^{+}}. \tag{10}\]
Fig. 1 illustrates the approximate shape of Eq. (10) as a fractional function, which reveals the inconsistency between InfoNCE \(\mathcal{L}_{\text{Biased}}\) loss optimization and MI optimization. That is, when optimizing InfoNCE loss, the increase of \(\hat{x}^{+}\) does not lead to the monotonic increase of \(p^{+}/p^{-}\). Indeed, the existence of _jump discontinuity_ indicates that the optimization of \(\mathcal{L}_{\text{Biased}}\) does not necessarily lead to the tractable MI optimization. The reason for the intractable MI optimization is from the fact that not all \(\{x_{i}^{-}\}_{i=1}^{N}\) are Tn samples, as they are randomly drawn from the data distribution \(p_{d}\). This leads to the inclusion of \(p^{+}\) in the denominator of Eq. (9) when decomposing the data distribution \(p_{d}\). Fig. 6 in Appendix provides an intuitive explanation. The four sampled data points actually contain one Fn sample. Such a Fn sample should be pulled closer to the anchor \(x\). However, as it is mistakenly treated as a negative sample, during model training it will be pushed further apart from the anchor, which breaks the semantic structure of embeddings (Wang and Liu, 2021).
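The non-monotonic mapping of Eq. (10) is easy to see numerically; the sketch below evaluates \(p^{+}/p^{-}\) on a grid of \(\hat{x}^{+}\) values, with \(m\) and \(\tau^{+}\) chosen purely for illustration.

```python
import numpy as np

def density_ratio(x_hat_plus, m=1.0, tau_plus=0.2):
    """p+/p- as a function of x_hat^+ from Eq. (10)."""
    tau_minus = 1.0 - tau_plus
    return x_hat_plus * tau_minus / (m - x_hat_plus * tau_plus)

print(f"jump discontinuity at x_hat^+ = m / tau^+ = {1.0 / 0.2}")
for x in np.linspace(0.5, 9.5, 10):
    print(f"x_hat^+ = {x:4.1f}  ->  p+/p- = {density_ratio(x):9.3f}")
# p+/p- grows toward +inf as x_hat^+ approaches 5 from below, then flips
# sign: increasing x_hat^+ no longer corresponds to increasing p+/p-.
```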
## 3 The Proposed Method
In this paper, we consider randomly drawing negative samples \(\{x_{i}^{-}\}_{i=1}^{N}\) from the unlabeled dataset, i.e., \(x_{i}^{-}\sim p_{d}\). As the class label is not accessible, \(x_{i}^{-}\) could be either a Tn sample or a Fn sample. We propose to include and compute an importance weight \(\omega_{i}\) in the InfoNCE contrastive loss for correcting the resulting bias of drawing negative samples from \(p_{d}\). The ideal situation is that we can set \(\omega=0\) for each Fn sample, so that only the _hard true negative_ samples contribute to the calculation of the contrastive loss, which relies on the design of the _desired sampling distribution_.
We consider the following two design principles of the _sampling distribution_ for drawing \(\{x_{i}^{-}\}_{i=1}^{N}\). The _true principle_(Wang and Liu, 2021; Robinson et al., 2021) states that the Fn samples should not be pushed apart from the anchor \(x\) in the embedding space. The _hard principle_(Yannis et al., 2020; Robinson et al., 2021; Florian et al., 2015; Hyun et al., 2016) states that the _hard_ Tn samples should be pushed further apart in the embedding space.
### False Negative Debiasing
We first consider the true principle for the design of the sampling distribution. We denote the power exponent of similarity between an anchor \(x\) and another unlabeled sample \(x^{\prime}\) as \(\hat{x}=e^{f(x)^{\mathsf{T}}f(x^{\prime})}\). Assume that \(\hat{x}\) is independently and identically distributed with a _probability density function_ \(\phi\) and _cumulative distribution function_ \(\Phi(\hat{x})=\int_{-\infty}^{\hat{x}}\phi(t)dt\). As \(x^{\prime}\) can be either a Tn sample or a Fn sample, \(\phi\) contains two populations, denoted as \(\phi_{\text{Tn}}\) and \(\phi_{\text{Fn}}\). The problem of computing the \(\mathcal{L}_{\text{BCL}}\) loss is reduced to estimating the sum over \(\hat{x}\sim\phi_{\text{Tn}}\), i.e., \(\sum_{i=1}^{N}e^{f(x)^{\mathsf{T}}f(x_{i}^{-})}\), while using samples \(\hat{x}\sim\phi\).
Existing approaches for solving the above problem rely on density estimation to fit \(\phi\) (Xia et al., 2022), where \(\phi\) is parameterized as a two-component mixture of \(\phi_{\text{Tn}}\) and \(\phi_{\text{Fn}}\), such as the Gaussian Mixture Model (Lindsay, 1995) or the Beta Mixture Model (Xia et al., 2022). To make the analysis tractable, \(\phi_{\text{Tn}}\) and \(\phi_{\text{Fn}}\) are postulated to follow a simple density function with fixed parameters, which is a too strong assumption. In addition, the learning algorithm for estimating \(\phi\) is expen
Figure 1: Illustration of \(\mathcal{L}_{\text{Biased}}\) and mutual information optimization by Eq. (10).
sive, since the mixture coefficients that indicate the probability of \(\hat{x}\in\text{Tn}\) or \(\hat{x}\in\text{Fn}\) are hidden variables. The parameters of \(\phi_{\text{Tn}}\) and \(\phi_{\text{Fn}}\) can only be obtained through the iterative numerical computation of the EM algorithm (Dempster et al., 1977), which is sensitive to initial values.
In this paper, we propose an analytic method that avoids explicitly estimating \(\phi\), also called a nonparametric method in statistical theory. Consider \(n\) random variables from \(\phi\) arranged in ascending order according to their realizations. We write them as \(X_{(1)}\leq X_{(2)}\leq\cdots\leq X_{(n)}\), and \(X_{(k)}\) is called the \(k\)-th \((k=1,\cdots,n)\) order statistic (David & Nagaraja, 2004). The _probability density function_ (PDF) of \(X_{(k)}\) is given by:
\[\phi_{(k)}(\hat{x})=\frac{n!}{(k-1)!(n-k)!}\Phi^{k-1}(\hat{x})\phi(\hat{x})[1- \Phi(\hat{x})]^{n-k}\]
By conditioning on \(n=2\) we obtain:
\[\phi_{(1)} = 2\phi(\hat{x})[1-\Phi(\hat{x})] \tag{11}\] \[\phi_{(2)} = 2\phi(\hat{x})\Phi(\hat{x}) \tag{12}\]
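These two densities can be verified by simulation; the sketch below draws pairs from a standard normal \(\phi\) (the same illustrative choice as in Fig. 3) and compares the empirical CDFs of the pair minimum and maximum against the CDFs implied by \(\phi_{(1)}\) and \(\phi_{(2)}\).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pairs = rng.normal(size=(100_000, 2))   # phi taken as N(0, 1), as in Fig. 3
x_min, x_max = pairs.min(axis=1), pairs.max(axis=1)

for x in (-1.0, 0.0, 1.0):
    Phi = stats.norm.cdf(x)
    # CDFs implied by phi_(1) = 2*phi*(1 - Phi) and phi_(2) = 2*phi*Phi:
    cdf_min, cdf_max = 1.0 - (1.0 - Phi) ** 2, Phi**2
    print(f"x = {x:+.1f}: "
          f"min {np.mean(x_min <= x):.3f} vs {cdf_min:.3f} | "
          f"max {np.mean(x_max <= x):.3f} vs {cdf_max:.3f}")
```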
Next, we investigate the position of positive and negative samples on the hypersphere, so as to get a deep insight into \(\phi_{\text{Tn}}\). Consider a \((x,x^{+},x^{-})\) triple, there exists a closed ball \(\mathfrak{B}[f(x),d^{+}]=\{f(\cdot)|d(f(x),f(\cdot))\leq d^{+}\}\) with center \(f(x)\) and radius \(d^{+}\), where \(d^{+}=\|f(x)-f(x^{+})\|\) is the distance of anchor embedding \(f(x)\) and positive sample embedding \(f(x^{+})\). Two possible cases arise: \(f(x^{-})\in\mathfrak{B}[f(x),d^{+}]\) or \(f(x^{-})\notin\mathfrak{B}[f(x),d^{+}]\), as illustrated by Fig. 2. We can describe the two cases with the Euclidean distance: Fig 2(a) corresponds to \(d^{+}<d^{-}\), and Fig 2(b) corresponds to \(d^{-}\leq d^{+}\), where \(d^{-}=\|f(x)-f(x^{-})\|\). Note that the Euclidean distance \(d^{\pm}=\sqrt{2/t^{2}-2f(x)^{\mathsf{T}}f(x^{\pm})}\) since all embeddings \(f(\cdot)\) are on the surface of a hypersphere of radius \(1/t\), so we have \(\hat{x}^{-}<\hat{x}^{+}\) for case (a), and \(\hat{x}^{+}\leq\hat{x}^{-}\) for case (b). Expressed in the notation of order statistics, \(\hat{x}^{-}\) (marked in blue in Fig 2) is a realization of \(X_{(1)}\) for case (a), or \(\hat{x}^{-}\) is a realization of \(X_{(2)}\) for case (b), respectively.
The generation process of an observation \(\hat{x}\) from \(\phi_{\text{Tn}}\) can be described as follows: Select case (a) with probability \(\alpha\), and then generate an observation \(\hat{x}\) from \(\phi_{(1)}\); or select case (b) with probability \(1-\alpha\), and then generate an observation \(\hat{x}\) from \(\phi_{(2)}\). That is, \(\phi_{\text{Tn}}\) is a mixture of \(\phi_{(1)}\) and \(\phi_{(2)}\) with a _mixture coefficient_ \(\alpha\):
\[\phi_{\text{Tn}}(\hat{x})=\alpha\phi_{(1)}(\hat{x})+(1-\alpha)\phi_{(2)}(\hat {x}) \tag{13}\]
Similarly, \(\phi_{\text{Fn}}\) is a mixture of \(\phi_{(2)}\) and \(\phi_{(1)}\) with mixture coefficient \(\alpha\):
\[\phi_{\text{FN}}(\hat{x})=\alpha\phi_{(2)}(\hat{x})+(1-\alpha)\phi_{(1)}(\hat {x}) \tag{14}\]
Note that taking \(\hat{x}^{-}\) as a realization of \(X_{(1)}\) for case (a) omits the situation \(\hat{x}^{-}=\hat{x}^{+}\). The probability measure of \(\hat{x}^{-}\) in such a case is 0, as \(\phi\) is a continuous density function dominated by the Lebesgue measure.
**Proposition 3.1** (Class Conditional Density).: _If \(\phi(\hat{x})\) is continuous density function that satisfy \(\phi(\hat{x})\geq 0\) and \(\int_{-\infty}^{+\infty}\phi(\hat{x})d\hat{x}=1\), then \(\phi_{\text{Tn}}(\hat{x})\) and \(\phi_{\text{Fn}}(\hat{x})\) are probability density functions that satisfy \(\phi_{\text{Tn}}(\hat{x})\geq 0\), \(\phi_{\text{Fn}}(\hat{x})\geq 0\), and \(\int_{-\infty}^{+\infty}\phi_{\text{Tn}}(\hat{x})d\hat{x}=1\), \(\int_{-\infty}^{+\infty}\phi_{\text{Fn}}(\hat{x})d\hat{x}=1\)._
Proof.: See Appendix C.1.
The mixture coefficient \(\alpha\) deserves further understanding and clarification, which can be obtained by reviewing Fig. 2. Intuitively, \(\alpha\) is the probability that \(f(x^{-})\) falls out of \(\mathfrak{B}[f(x),d^{+}]\). For a worst encoder \(f\) that randomly guesses, \(\alpha=0.5\); while for a perfect encoder, \(\alpha=1\). Therefore, the reasonable range is \(\alpha\in[0.5,1]\). In fact, \(\alpha\) reflects the encoder's capability of scoring a positive sample higher than a negative sample, which admits the empirical macro-AUC metric over all anchors \(x\) in the training data set \(\mathcal{D}\):
\[\alpha = \int\limits_{x\in\mathcal{X}}\int\limits_{0}^{+\infty}\int\limits_{ 0}^{+\infty}\mathbb{I}(\hat{x}^{+}\geq\hat{x}^{-})p(\hat{x}^{+},\hat{x}^{-})p(x)d \hat{x}^{+}d\hat{x}^{-}dx \tag{15}\] \[\simeq \frac{1}{|\mathcal{D}|}\frac{1}{|\mathcal{D}^{+}||\mathcal{D}^{-}| }\sum_{\mathcal{D}^{+}}\sum_{\mathcal{D}^{-}}\mathbb{I}(\hat{x}^{+}\geq\hat{x}^ {-})\] \[= \frac{1}{|\mathcal{D}|}AUC\]
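The empirical macro-AUC estimate of \(\alpha\) in Eq. (15) can be computed as below; the positive/negative scores here are toy values standing in for \(\hat{x}^{+}\) and \(\hat{x}^{-}\) from a trained encoder.

```python
import numpy as np

def empirical_alpha(pos_scores, neg_scores):
    """Macro-AUC estimate of alpha (Eq. 15): for each anchor, the fraction of
    (positive, negative) score pairs with x_hat^+ >= x_hat^-, then averaged."""
    aucs = [(p[:, None] >= n[None, :]).mean()
            for p, n in zip(pos_scores, neg_scores)]
    return float(np.mean(aucs))

# Toy similarity scores for 3 anchors (illustrative, not a trained encoder):
rng = np.random.default_rng(0)
pos = [np.exp(rng.normal(0.5, 0.3, size=4)) for _ in range(3)]
neg = [np.exp(rng.normal(0.0, 0.3, size=32)) for _ in range(3)]
print(f"alpha ~= {empirical_alpha(pos, neg):.3f}")
```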
Figure 2: Two possible cases for the relative positions of anchor, positive, and negative triples.
By setting \(\phi(\hat{x})\) as \(\mathcal{N}(0,1)\) in Eq. (11) and Eq. (12), we can get a quick snapshot of how \(\alpha\) affects \(\phi_{\text{Tn}}(\hat{x})\) and \(\phi_{\text{Fn}}(\hat{x})\), as illustrated in Fig. 3. A larger value of \(\alpha\) results in a higher discrimination of \(\phi_{\text{Tn}}(\hat{x})\) from \(\phi_{\text{Fn}}(\hat{x})\). This is also in accordance with our intuition that a better encoder encodes dissimilar data points with different class labels more orthogonally.
Based on the Bayes formula, the posterior probability of observing \(\hat{x}\in\text{Tn}\) is computed by
\[p(\text{Tn}|\hat{x}) \tag{16}\] \[= \frac{\phi_{\text{Tn}}(\hat{x})\tau^{-}}{\phi_{\text{Tn}}(\hat{x} )\tau^{-}+\phi_{\text{FN}}(\hat{x})\tau^{+}}\] \[= \frac{\alpha\tau^{-}+(1-2\alpha)\Phi(\hat{x})\tau^{-}}{\alpha\tau ^{-}+(1-\alpha)\tau^{+}+(1-2\alpha)\Phi(\hat{x})(\tau^{-}-\tau^{+})}. \tag{17}\]
\(\phi(\hat{x})\) is eliminated due to the fractional form of the Bayes formula. The posterior probability still belongs to a nonparametric distribution family, since we do not know the specific expression of \(\Phi(\hat{x})=\int_{-\infty}^{\hat{x}}\phi(t)dt\). However, it converges to the following empirical distribution function:
\[\Phi_{n}(\hat{x})=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}_{|X_{i}\leq\hat{x}|}. \tag{18}\]
The Glivenko theorem (Glivenko, 1933) strengthens this result by proving the uniform convergence of \(\Phi_{n}(\hat{x})\) to \(\Phi(\hat{x})\). Note that \(\Phi_{n}(\hat{x})\in[0,1]\) is the _sample information_ from the Bayesian viewpoint, which admits a good probabilistic interpretation of the unlabeled data \(x^{\prime}\in\text{Fn}\) given its observation \(\hat{x}\). For a larger \(\hat{x}\) (with a closer embedding distance to the anchor), \(\Phi_{n}(\hat{x})\) assigns a higher probabilistic prediction of the unlabeled sample \(x^{\prime}\) sharing the identical latent class label with the anchor. In other words, \(\Phi_{n}(\hat{x})\) can be regarded as the likelihood that reflects the explanation capability for the observation \(\hat{x}\) conditioning on the event of an unlabeled sample being a Fn sample.
We note that \(\Phi_{n}(\hat{x})\) allows us to obtain an analytical solution of the probability estimate \(p(\text{Tn}|\hat{x})\) from the nonparametric structural \(\phi\) without any postulation on \(\phi\). Also note that the calculation of \(p(\text{Tn}|\hat{x})\) does not depend on any specific expression of \(\phi\). In Fig. 3, we set \(\phi\) as \(\mathcal{N}(0,1)\) only to illustrate how \(\alpha\) impacts \(\phi_{\text{Tn}}(\hat{x})\) and \(\phi_{\text{Fn}}(\hat{x})\).
### Hard Negative Mining
We next consider the hard principle for the design of the sampling distribution. We note that the posterior estimator of Eq. (17) mainly targets the problem of false negative debiasing. It is worth emphasizing that the posterior estimation \(p(\text{Tn}|\hat{x})\) is determined by \(\tau\) as the prior class probability and the relative position of the observation \(\Phi_{n}(\hat{x})\) as the likelihood, with the mixture coefficient \(\alpha\) acting as a correction indicator for the performance of an encoder \(f\). Yet \(p(\text{Tn}|\hat{x})\) is unaffected by specific expressions of \(\phi\). With such properties, it is much easier to combine the ingredient of hardness into the desired sampling distribution.
We adopt the von Mises-Fisher distribution (Mardia et al., 2000; Robinson et al., 2021) to describe the unlabeled negative samples \(x^{-}\) that are embedded around the anchor embedding \(f(x)\) with the unnormalized density \(p(x^{-})\propto e^{\beta f(x)^{T}f(x^{-})}\). As such, the density of \(\hat{x}\) conditioned on hardness is up-weighted by:
\[p(\hat{x}|\text{Hard})\propto\phi(\hat{x})\hat{x}^{\beta}, \tag{19}\]
where \(\beta\) is the _concentration parameter_ that controls the concentration degree of unlabeled samples around an anchor. So the desired sampling distribution for drawing \(\{x^{-}_{i}\}_{i=1}^{N}\) conditioning on both true principle and hard principle can be derived as:
\[\psi(\hat{x};\alpha,\beta) \triangleq p(\hat{x}|\text{Tn},\text{Hard}) \tag{20}\] \[\propto p(\hat{x},\text{Tn}|\text{Hard})\] \[= p(\text{Tn}|\hat{x},\text{Hard})p(\hat{x}|\text{Hard})\] (21) \[= p(\text{Tn}|\hat{x})p(\hat{x}|\text{Hard})\] \[= p(\text{Tn}|\hat{x})\cdot\phi(\hat{x})\hat{x}^{\beta} \tag{22}\]
The symbol \(\propto\) in Eq. (20) is obtained by omitting \(p(\text{Tn}|\text{Hard})\) as the normalization constant, and the equality in Eq. (21) is obtained using the property of the posterior estimation: the assumption of von Mises-Fisher conditioned on different hard levels Hard in Eq. (19) essentially takes a specific expression of \(\phi\), which is independent of the posterior estimator \(p(\text{Tn}|\hat{x})\). Intuitively, different hard levels control different concentration degrees of the observations, but do not change the relative position of the observations \(\Phi_{n}(\hat{x})\) in Eq. (17).
### Monte Carlo Importance Sampling
With the desired sampling distribution \(\psi\), we can approximate the expectation over hard and true samples using classic Monte-Carlo importance sampling (Hesterberg, 1988; Bengio & Senecal, 2008):
\[\mathbb{E}_{\hat{x}\sim\psi}\hat{x} = \int_{0}^{+\infty}\hat{x}\frac{\psi(\hat{x})}{\phi(\hat{x})}\phi( \hat{x})d\hat{x} \tag{23}\] \[= \mathbb{E}_{\hat{x}\sim\phi}\hat{x}\frac{\psi(\hat{x})}{\phi(\hat{ x})}\] \[\simeq \frac{1}{N}\sum_{i=1}^{N}\omega_{i}\hat{x}_{i}\]
where \(\omega_{i}\) is the density ratio between \(\psi\) and \(\phi\), which can be calculated by:
\[\omega_{i}(\hat{x}_{i};\alpha,\beta) = \frac{\psi(\hat{x}_{i})/Z_{\psi}}{\phi(\hat{x}_{i})} \tag{24}\] \[= \frac{p(\text{Tn}|\hat{x}_{i})\cdot\phi(\hat{x}_{i})\hat{x}_{i}^{\beta}/Z_{\psi}}{\phi(\hat{x}_{i})}\] \[= \frac{p(\text{Tn}|\hat{x}_{i})\cdot\hat{x}_{i}^{\beta}}{Z_{\psi}}.\]
Since \(\psi\) is unnormalized, \(Z_{\psi}\) denotes its partition function, which admits the following empirical estimate:
\[Z_{\psi} = \int_{0}^{\infty}\psi(\hat{x};\alpha,\beta)d\hat{x} \tag{25}\] \[= \int_{0}^{\infty}p(\text{Tn}|\hat{x})\hat{x}^{\beta}\cdot\phi(\hat{x})d\hat{x}\] \[= \mathbb{E}_{\hat{x}\sim\phi}\,p(\text{Tn}|\hat{x})\hat{x}^{\beta}\] \[\simeq \frac{1}{N}\sum_{i=1}^{N}p(\text{Tn}|\hat{x}_{i})\hat{x}_{i}^{\beta}\]
Inserting Eq. (25) into Eq. (24), we obtain:
\[\omega_{i}(\hat{x}_{i};\alpha,\beta)=\frac{p(\text{Tn}|\hat{x}_{i})\cdot\hat{x}_{i}^{\beta}}{\frac{1}{N}\sum_{j=1}^{N}p(\text{Tn}|\hat{x}_{j})\cdot\hat{x}_{j}^{\beta}}. \tag{26}\]
The quantity \(\omega_{i}(\hat{x}_{i};\alpha,\beta)\) is a function of \(\hat{x}_{i}\), called an _importance weight_ in this paper. The importance weights are used to correct the bias due to sampling \(x^{-}\sim p_{d}\), with the location parameter \(\alpha\) (in a non-strict sense) conditioning on the true negative principle and the concentration parameter \(\beta\) conditioning on the hard negative principle. Intuitively, \(\omega_{i}(\hat{x}_{i};\alpha,\beta)\) down-weights a false negative sample through the \(p(\text{Tn}|\hat{x}_{i})\) term, while it up-weights a hard negative sample through the \(\hat{x}_{i}^{\beta}\) term.
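Putting Eqs. (17), (18) and (26) together, the weights admit a short vectorised implementation. The following is our own sketch (not the reference implementation); `x_hat` holds the \(N\) observations \(\hat{x}_i\) for one anchor, `tau_plus` is the class prior \(\tau^{+}\), and `alpha` would come from the macro-AUC estimate above:

```python
import numpy as np

def bcl_weights(x_hat, tau_plus, alpha, beta):
    """Importance weights omega_i of Eq. (26) for N negative observations."""
    x_hat = np.asarray(x_hat, dtype=float)
    tau_minus = 1.0 - tau_plus
    n = len(x_hat)
    # empirical distribution function Phi_n (Eq. 18)
    Phi = np.searchsorted(np.sort(x_hat), x_hat, side="right") / n
    # posterior p(Tn | x_hat) (Eq. 17)
    num = alpha * tau_minus + (1.0 - 2.0 * alpha) * Phi * tau_minus
    den = (alpha * tau_minus + (1.0 - alpha) * tau_plus
           + (1.0 - 2.0 * alpha) * Phi * (tau_minus - tau_plus))
    p_tn = num / den
    # self-normalised importance weights (Eqs. 24-26)
    unnorm = p_tn * x_hat ** beta
    return unnorm / unnorm.mean()
```

Note that for \(\alpha=1/2\) the posterior term becomes constant, so the weights reduce to pure hardness re-weighting, consistent with the discussion below.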
So far we have finished the derivation of the \(\omega\) computation for our BCL contrastive loss as defined in Eq. (1). It is important to state that we introduce the calculable empirical distribution function \(\Phi_{n}(\hat{x})\) as the likelihood, as well as \(\alpha\) as the encoder correction factor, to compute the posterior estimator \(p(\text{Tn}|\hat{x})\). In particular, for a very poor encoder with \(\alpha=1/2\), the posterior estimator reduces to \(p(\text{Tn}|\hat{x})=\tau^{-}\), indicating that the encoder does not provide any useful information for re-weighting \(\hat{x}_{i}^{-}\). Next we relate the proposed loss \(\mathcal{L}_{\text{BCL}}\) to \(\mathcal{L}_{\text{SUP}}\), and show that \(\mathcal{L}_{\text{BCL}}\) leads to a consistent estimation.
**Lemma 3.2** (Asymptotic Unbiased Estimation).: _For any encoder \(f\) and as \(N\to\infty\) we observe \(\mathcal{L}_{\text{ SUP}}\to\mathcal{L}_{\text{ BCL}}\)._
Proof.: See Appendix C.2
**Complexity**: The steps for the \(\mathcal{L}_{\text{BCL}}\) computation in one training epoch have been summarized in Section 1. Compared with the InfoNCE loss \(\mathcal{L}_{\text{biased}}\), the additional computational complexity comes from first calculating the empirical distribution function \(\Phi_{n}(\hat{x})\), which is on the order of \(\mathcal{O}(N)\) and can be neglected as it is far smaller than the cost of encoding a sample.
## 4 Numerical Experiments
### Experiment Objective
A contrastive loss is used in combination with an encoder, like a neural network, to learn representations for a downstream task, like image classification. However, in practice there are often too many choices of tasks and encoders. As such, we argue that task performance alone may not be enough for evaluating a contrastive loss. Instead, we propose and design the following numerical experiments to compare contrastive losses. The objective is to compare the difference between an unsupervised and a supervised contrastive loss.
Recall that the core operation of a self-supervised contrastive loss is to estimate the expectation of \(\hat{x}_{i}\) (Chuang et al., 2020), where \(x_{i}\in\text{Tn}\) and \(\hat{x}_{i}\triangleq e^{f(x)^{T}f(x_{i})}\), to approximate the supervised loss \(\mathcal{L}_{\text{SUP}}\) using \(N\) randomly selected unlabeled samples \(x_{i}\). For the supervised loss \(\mathcal{L}_{\text{SUP}}\), we define the mean of the true negative samples' observations by
\[\theta_{\text{ SUP}}=\frac{\sum_{i=1}^{N}\mathbb{I}(x_{i})\cdot\hat{x}_{i}}{ \sum_{i=1}^{N}\mathbb{I}(x_{i})}, \tag{27}\]
where \(\mathbb{I}(x_{i})\) is the indicator function: \(\mathbb{I}(x_{i})=1\) if \(x_{i}\in\text{Tn}\) and \(\mathbb{I}(x_{i})=0\) otherwise. The expectation is replaced by empirical estimates in practice (Oord et al., 2018; Chuang et al., 2020; Chen et al., 2020). For the proposed self-supervised loss \(\mathcal{L}_{\text{BCL}}\), we define the BCL estimator by
\[\hat{\theta}_{\text{ BCL}}=\frac{1}{N}\sum_{i=1}^{N}\omega_{i}\cdot\hat{x}_{i}. \tag{28}\]
The empirical counterpart of the unsupervised loss \(\mathcal{L}_{\text{BCL}}\) equals the supervised loss \(\mathcal{L}_{\text{SUP}}\) if \(\hat{\theta}_{\text{BCL}}=\theta_{\text{SUP}}\). We compare with the following two estimators: \(\hat{\theta}_{\text{biased}}\) by (Oord et al., 2018) and \(\hat{\theta}_{\text{DCL}}\) by (Chuang et al., 2020).
\[\hat{\theta}_{\text{biased}}=\frac{1}{N}\sum_{i=1}^{N}\hat{x}_{i} \tag{29}\] \[\hat{\theta}_{\text{DCL}}=\frac{1}{N\tau^{-}}(\sum_{i=1}^{N}\hat{ x}_{i}-N\tau^{+}\cdot\frac{\sum_{j=1}^{K}\hat{x}_{j}^{+}}{K}) \tag{30}\]
Eq. (30) can be understood as the summation of Tn samples' observations divided by the number of Tn samples \(N\tau^{-}\). Specifically, \(N\tau^{+}\) is the number of Fn samples, and \(\sum_{j=1}^{K}\hat{x}_{j}^{+}/K\) is the mean value of \(K\) Fn samples. The second term inside the parenthesis thus corresponds to the summation of Fn samples' observations among the \(N\) samples, while subtracting it from \(\sum_{i=1}^{N}\hat{x}_{i}\) yields the summation of Tn samples' observations among the \(N\) randomly selected samples.
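For reference, the three estimators of Eqs. (28)-(30) are each a few lines of NumPy. This is our own sketch; `x_hat` denotes the \(N\) negative observations, `x_hat_pos` the \(K\) positive observations \(\hat{x}_j^{+}\), and `omega` the weights of Eq. (26):

```python
import numpy as np

def theta_biased(x_hat):
    """Eq. (29): plain mean over the N negative observations."""
    return np.mean(x_hat)

def theta_dcl(x_hat, x_hat_pos, tau_plus):
    """Eq. (30): subtract the estimated Fn contribution, rescale by N*tau^-."""
    x_hat = np.asarray(x_hat)
    N, tau_minus = len(x_hat), 1.0 - tau_plus
    return (x_hat.sum() - N * tau_plus * np.mean(x_hat_pos)) / (N * tau_minus)

def theta_bcl(x_hat, omega):
    """Eq. (28): importance-weighted mean with the omega_i of Eq. (26)."""
    return np.mean(np.asarray(omega) * np.asarray(x_hat))
```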
### Experiment Design
We design a stochastic process framework to simulate \(\hat{x}\), which depicts the numerical generative process of observation \(\hat{x}\). In short, an observation \(\hat{x}\) is realized in the following sample function space.
**Definition 4.1** (Sample Function Space).: Consider a function of observation \(X(x,e)\) of two variables defined on \(\mathcal{X}\times\Omega\). For an anchor \(x\in\mathcal{X}\), \(X(x,e)\) is a random variable on a probability space \((\Omega,\mathcal{F},P)\) related to the randomly selected unlabeled samples \(x_{i}\); For a fixed \(e\in\Omega\), \(X(x,e)\) is a sample function related to different anchors. We call \(\{X(x,e):x\in\mathcal{X},e\in\Omega\}\) a sample function space.
In the sample function space, an anchor \(x\) determines the parameter of \((\Omega,\mathcal{F},P)\), where \(P\) is the anchor-specific proposal distribution \(\phi\). As \(\phi\) is not required to be identical for different anchors, it simulates the situation that different anchors may result in different distributions of observations. Note that \(M\) anchors correspond to \(M\) sequences of random variables, each sequence with different parameters. For an anchor \(x\), a brief description of generating an observation \(\hat{x}\) is as follows:
**(i)** Select a class label according to the class probability \(\tau^{+}\), indicating that the observation comes from Fn with probability \(\tau^{+}\), or comes from Tn with probability \(\tau^{-}\).
**(ii)** Generate an observation \(\hat{x}\) from the class conditional density \(\phi_{\text{Fn}}\) (or \(\phi_{\text{Tn}}\)), dependent on the anchor-specific \(\phi\) and the location parameter \(\alpha\).
**(iii)** Map \(\hat{x}\) to \(\exp(\hat{x}/t)\) as the final observation.
An illustrative example is presented in Fig. 7 of the Appendix. Repeating the process to generate \(N\) observations for one anchor, and repeating it for \(M\) anchors, we obtain \(\{\exp(\hat{x}_{mi}/t):m=1,\ldots,M,\;i=1,\ldots,N\}\), corresponding to an empirical observation from the sample function space \(\{X(x,e):x\in\mathcal{X},e\in\Omega\}\). The complete stochastic process depiction of observations is presented in Algorithm 1 of Appendix B.1.
Note that in **(ii)**, even if we set \(\phi\) as a simple distribution, the corresponding class conditional density \(\phi_{\text{Fn}}\) (or \(\phi_{\text{Tn}}\)) is no longer a simple distribution, and it is difficult to directly draw observations from it. In this paper, we generate the observations from \(\phi_{\text{Fn}}\) (or \(\phi_{\text{Tn}}\)) using the accept-reject sampling technique (Casella et al., 2004) (see Algorithms 2 and 3 of Appendix B.1 for implementation details).
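As an illustration of this step, the derivation of Eq. (17) implies the class-conditional form \(\phi_{\text{Tn}}(\hat{x})=2[\alpha+(1-2\alpha)\Phi(\hat{x})]\phi(\hat{x})\) (our reading of Eqs. (11) and (16)-(17)). Using \(\phi\) itself as the proposal, the envelope constant \(M=2\alpha\) suffices for \(\alpha\in[0.5,1]\). The sketch below is our own, with \(\phi=\mathcal{N}(0,1)\) for concreteness; \(\phi_{\text{Fn}}\) is handled analogously with acceptance probability \([(1-\alpha)-(1-2\alpha)\Phi(x)]/\alpha\):

```python
import numpy as np
from scipy import stats

def sample_tn(n, alpha, rng=np.random.default_rng(0)):
    """Accept-reject draws from phi_Tn(x) = 2*[alpha + (1-2a)*Phi(x)]*phi(x),
    with phi = N(0,1) as the proposal and envelope constant M = 2*alpha."""
    phi = stats.norm(0.0, 1.0)
    out = np.empty(0)
    while out.size < n:
        x = phi.rvs(size=n, random_state=rng)
        # accept with probability f(x) / (M * g(x))
        p_acc = (alpha + (1.0 - 2.0 * alpha) * phi.cdf(x)) / alpha
        out = np.concatenate([out, x[rng.random(n) < p_acc]])
    return out[:n]
```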
Note that in **(iii)**, although the corresponding density expression of \(\exp(\hat{x}/t)\) is extremely complicated, \(\hat{x}\rightarrow\exp(\hat{x}/t)\) is a strictly monotonic transformation, so the empirical distribution function remains the same: \(\Phi_{n}(\hat{x})=\Phi_{n}(\exp(\hat{x}/t))\).
Fig. 4 plots the empirical distribution of \(\exp(\hat{x}/t)\) according to the above generative process, which corresponds to the distribution depicted in Fig. 3, where we set \(t=2\), \(M=1\) and \(N=20000\). We note that the empirical distributions in Fig. 4 exhibit similar structures to those obtained on real-world datasets as reported in (Robinson et al., 2021; Xia et al., 2022), indicating the effectiveness of the proposed stochastic process framework for simulating \(\hat{x}\).
### Experiment Results
We evaluate the quality of the estimators in terms of the _mean squared error_ (MSE). For \(M\) anchors, we calculate \(\mathrm{MSE}(\hat{\theta}_{\text{BCL}})=\frac{1}{M}\sum_{m=1}^{M}(\hat{\theta}_{\text{BCL},m}-\theta_{\text{SUP},m})^{2}\). Fig. 5 compares the MSE of the different estimators against different parameters. It can be observed that the estimator \(\hat{\theta}_{\text{BCL}}\) is superior to the other two estimators in terms of lower MSE across different parameter settings. We refer to Appendix B.1 for the detailed settings of our numerical experiments. Note in particular that we set \(\beta=0\) in the \(\omega\) computation, as \(\beta\) is designed in consideration of the downstream-task requirement of pushing Tn samples further apart, rather than the statistical quality of \(\hat{\theta}_{\text{BCL}}\).
## 5 Real Dataset Experiments
We thank SimCLR (Chen et al., 2020), DCL (Chuang et al., 2020) and HCL (Robinson et al., 2021) for providing the experiment framework and source code. We conduct the real dataset experiments with the same settings as theirs for two vision tasks using the CIFAR10 (Krizhevsky and Hinton, 2009) and STL10 (Coates et al., 2011) datasets (detailed settings are also provided in Appendix B.2).
* SimCLR (Chen et al., 2020): It provides the experiment framework for all competitors, which comprises a stochastic data augmentation module, a ResNet-50 encoder, and a neural network projection head. It uses the contrastive loss \(\mathcal{L}_{\text{biased}}\).
* DCL (Chuang et al., 2020): It mainly considers the false negative debiasing task and uses the estimator of Eq. (30) to replace the second term in the denominator of \(\mathcal{L}_{\text{biased}}\).

Figure 4: Empirical distribution of \(\exp(\hat{x}/t)\) with different \(\alpha\) settings. The density is fitted using a Gaussian kernel.
* HCL (Robinson et al., 2021): Following the DCL debiasing framework, it additionally takes hard negative mining into consideration by up-weighting each randomly selected sample as follows: \[\omega_{i}^{\text{HCL}}=\frac{\hat{x}_{i}^{\beta}}{\frac{1}{N}\sum_{j=1}^{N}\hat{x}_{j}^{\beta}}.\] (31)
Table 1 validates the effectiveness of BCL, with some performance improvements over the competitors. We next discuss the advantage of our BCL loss. The debiasing mechanism of DCL and HCL includes a correction term in the denominator, which modifies the gradient magnitude equally for all random samples. In contrast, BCL modifies the gradient magnitude for each random sample individually, which makes it capable of independently performing the false negative debiasing task and the hard negative mining task. In addition, BCL utilizes sample information to down-weight the negative samples that are relatively close in the embedding space (those having larger \(\Phi_{n}(\hat{x})\), smaller \(p(\text{Tn}|\hat{x})\) and higher confidence of being Fn). At the same hard level as HCL, BCL thus increases the model's tolerance to Fn samples.
## 6 Conclusion
This paper has proposed the BCL loss for self-supervised contrastive learning, which uses importance sampling to correct the bias of using random negative samples drawn from unlabeled data. The key idea is to design the desired sampling distribution under a Bayesian framework. The prominent advantage is that the desired sampling distribution has a parametric structure, with a location parameter corresponding to the encoder's macro-AUC metric for debiasing false negatives, and a concentration parameter corresponding to the embeddings' concentration degree for mining hard negatives. In future work, we shall investigate using \(\psi\) to guide explicit negative sampling (Xu et al., 2022), and exploit the optimal tradeoff between false negative debiasing and hard negative mining.
| **Dataset** | **Method** | N=30 | N=62 | N=126 | N=254 | N=510 |
|---|---|---|---|---|---|---|
| CIFAR10 | SimCLR | 80.21 | 84.82 | 87.58 | 89.87 | 91.12 |
| CIFAR10 | DCL | 82.41 | 87.60 | 90.38 | 91.36 | 91.78 |
| CIFAR10 | HCL | 83.42 | 88.45 | 90.53 | 91.57 | 91.62 |
| CIFAR10 | BCL | 83.61 | 88.56 | 90.83 | 91.87 | 92.18 |
| STL10 | SimCLR | 61.20 | 71.69 | 74.36 | 77.33 | 80.20 |
| STL10 | DCL | 63.91 | 71.48 | 76.69 | 81.48 | 84.23 |
| STL10 | HCL | 67.24 | 73.38 | 79.44 | 83.82 | 86.32 |
| STL10 | BCL | 67.45 | 73.36 | 80.23 | 84.68 | 86.51 |

Table 1: Classification accuracy on CIFAR10 and STL10 for different negative sample sizes \(N\).
Figure 5: MSE of different estimators with various parameter settings.
2305.12855 | Microcontroller Based AVR Hazardous Gas Detection System using IoT | MQ-6 Semiconductor Sensor for Combustible Gas detection is a Sensitive Gas
sensor. The sensitive material of this MQ-6 gas sensor is SnO2, which works
with lower conductivity in clean air. When the target combustible gas exist,
the sensors conductivity is higher along with the gas concentration rising. As
the conductivity increases the current in the circuit of the sensor increases
which results in lower sensor resistance. This change is used to correspond,
the output signal of gas concentration. MQ-6 gas sensor has high sensitivity to
Methane, Propane and Butane and could be used to detect both Methane and
Propane. The sensor could be used to detect different combustible gas
especially Methane, it is with low cost and suitable for different application. | Ram Prasad | 2023-05-22T09:29:55Z | http://arxiv.org/abs/2305.12855v1 | # Microcontroller based AVR Hazardous Gas Detection System using IoT
###### Abstract
MQ-6 Semiconductor Sensor for Combustible Gas detection is a Sensitive Gas sensor. The sensitive material of this MQ-6 gas sensor is SnO2, which works with lower conductivity in clean air. When the target combustible gas exist, the sensors conductivity is higher along with the gas concentration rising. As the conductivity increases the current in the circuit of the sensor increases which results in lower sensor resistance. This change is used to correspond, the output signal of gas concentration. MQ-6 gas sensor has high sensitivity to Methane, Propane and Butane and could be used to detect both Methane and Propane. The sensor could be used to detect different combustible gas especially Methane, it is with low cost and suitable for different application.
Wireless network, MQ-6 gas sensor, SnO2, Internet of Things
## 1 Introduction
A number of reviews on the subject of gas leakage detection techniques have been done in the past, either as parts of research papers/technical reports on a certain leak detection method or on other gas-related subjects. In 2008, Liu Zhen-ya, Wang Zhen-dong and Chen Rong [1] published "Intelligent Residential Security Alarm and Remote Control System Based on Single Chip Computer". The paper focuses on an intelligent residential burglar alarm, emergency alarm, fire alarm and toxic gas leakage remote automatic sound alarm and remote control system, which is based on an 89c51 single-chip computer (8051 microcontroller). The system they designed sends a message to a provided emergency number and also calls the police hotline number for emergency help.
In 2006, Ioan Lita, Ion Bogdan Cioc and Daniel Alexandru Visan [2] published "A New Approach of Automatic Localization System Using GPS and GSM/GPRS Transmission". This paper focuses on a low-cost automotive localization system using GPS and GSM-SMS services, which provides the position of the vehicle on the driver's or owner's mobile phone as a short message (SMS) on request. The system can be integrated with the car alarm system, which alerts the owner on his mobile phone about events that occur with his car while it is parked, or sends an SMS to relatives to summon fast emergency help if an accident happens.
In 2000, K. Galatsis, W. Woldarsla, Y.X. Li and K. Kalantar-zadeh [3] published "A Vehicle air quality monitor using gas sensors for improved safety". This paper focuses on a vehicle cabin air quality monitor using carbon monoxide (CO) and oxygen (O2) gas sensors, which has been designed, developed and on-road tested. Today, air conditioners (A/C) are used in cars more often than ever; this is not only harmful to the outer environment, contributing to problems like global warming, but it also affects the inner environment of the car. It causes problems like a decrease in the oxygen level of around 15% and an increase in the level of harmful gases like carbon monoxide. Continuous monitoring of these gases increases vehicle safety, and an alert can be raised to let the passengers know that the concentration of the gases has reached its threshold value and that further use of the exhaust or A/C in the car would be dangerous. Later, in 2002, they published another paper, [4] "Investigation of gas sensors for vehicle cabin air quality monitoring", in which they proposed the use of MOS (metal oxide semiconductor) gas sensors. Commercially available gas sensors were compared with fabricated MoO3-based sensors, which possessed comparable gas-sensing properties.
In 2018, Zhao, W., Kamezaki, M., Yoshida, K., Konno, M., Onuki, A., & Sugano, S. [5] worked on "An automatic tracked robot chain system for gas pipeline inspection and maintenance based on wireless relay communication". In 2017, Mamatha, Sasritha and Reddy developed an expert system and heuristic algorithm for cloud resource scheduling and introduced an Android-based automatic gas detection and indication robot [6; 7]. Their proposed prototype is a mini mobile robot capable of detecting gas leakage in hazardous places [8-12]. Whenever a gas leakage event occurs in a particular place, the robot immediately reads and sends the data to an Android mobile through wireless communication such as Bluetooth. For this, they developed an Android application for smartphones which can receive data from the robot directly through Bluetooth. The application warns with an indication whenever a gas leakage occurs, and the robot's movements can also be controlled via Bluetooth using text commands as well as voice commands [13; 14].
In the paper [16], the authors introduced a robot and a mobile application, which demonstrated that an autonomous, mobile gas detection and leak localization robot is possible today and can significantly enhance safety. Shyamaladevi [17] and her team, in their research article, described their project "ARM7 based automated high performance system for LPG refill booking and leakage detection" and the methodology behind it. In this system, in case of leakage, the resistance of the sensor decreases, which increases its conductivity. The related output pulse is fed to the microcontroller, which switches on the buzzer and exhaust fan to provide a quick alert to the house members [18; 19]. The microcontroller will send a message like "EMERGENCY ALERT: LPG gas leakage found in your home" to the required cell numbers via a GSM module, and the same will be displayed on the LCD. The gas leakage detection system was proposed, designed and successfully implemented in this paper for home safety and industrial applications [20; 21]. Along with gas leakage detection, this system provides a fully automated approach towards gas booking.
Figure 1: Bluetooth technology to control and monitor parameters driven by a robot
This project was implemented using the ARM7 processor and simulated using the Keil software. The cost involved in developing the system is significantly lower than the cost of gas detectors commercially available in the market. In 2016, the paper [22] "Dangerous gas detection using an integrated circuit and MQ-9" was presented by Falohun A.S., Oke A.O. and Abolaji B.M. This paper focused on combustible gas detection. They built an embedded design whose typical input and output devices include switches, relays, solenoids, LEDs, small or custom LCD displays, radio frequency devices, and sensors for data such as temperature, humidity and light level [23-25]. The principle of operation is that the gas detector alarm system is designed with the intention of ensuring that a gas event is intelligently detected, promptly notified and interactively managed. It is built around a timer that accepts input from the gas sensor, the MQ-9, and activates a buzzer and a set of LEDs that alert in the presence of gas. The MQ-9 specializes in gas detection for carbon monoxide (CO), methane (CH4) and the LPG family, for any relevant industrial or automotive application.
If the value received is above the threshold, the microcontroller will turn on the LED and buzzer, and a message starts showing on the 16x2 LCD display [26-28]. After a delay of a few milliseconds, it also sends the data over the internet so that the gas can be vented out, and continues to send messages such as "Gas Leakage Detected" to the concerned mobile number. This information is sent to a server on the internet, and a smartphone application can be used for notification [29]. The data on the server is displayed on a webpage for the user.
The advantages of the MQ-9 gas sensor are its good sensitivity to CO/combustible gas, high sensitivity to methane, propane and CO, long life, low cost and simple drive circuit. The enveloped MQ-9 has 6 pins, 4 of which are used to fetch signals and the other 2 to provide the heating current. Once powered, the output of the sensor is normally HIGH but goes LOW when gas is sensed.
## 2 Architectural Model
Embedded systems are defined in many ways, for example: "An embedded system is a microprocessor based system that is built to control a function or a range of functions". An embedded system is some combination of computer hardware and software, either fixed in capability or programmable, that is designed for a specific function or for specific functions within a larger system. In this project, both functions, transmitting data over the internet and sending a text message to the user's mobile number, are performed wirelessly using the GSM module. The MQ-6 gas sensor is used to detect hazardous gases which are combustible in nature. The power supply of the project is regulated at 5V, supplied by a DC battery. The programming languages used for developing the software for the AVR microcontroller are Embedded C and assembly language. PROTEUS is used to simulate the project in software.
Figure 2: User Interactive Gas Leakage and Fire Alarm System
## 3 System Aspects and Achievements
In this proposed model, the authors want to achieve five aspects:
* **Design of Embedded System:** The authors use the AVR microcontroller, which controls all the modules and other components and peripheral devices connected to it. Some of them work as input units and others as output units.
* **GSM Module**: The GSM module is used to send the gas leakage message to the emergency number provided in the program. This module is also used to set up an internet connection for IoT applications. The data fetched from the gas sensor is uploaded to a virtual server and can be displayed on a webpage.
* **Alerting System**: This part includes a buzzer and an LCD display. The buzzer works as an alarm, and the LCD is used to display the alert message and other required details.
* **Sensor Module**: This module is used to sense the gas leakage. The proposed module uses the MQ-6 gas sensor to perform the leakage detection; this is the main component of the whole system. It is a semiconductor-type gas sensor in which the conductivity increases as the gas concentration increases. An SnO2 and Au layer is used to sense the concentration of gases. The increased conductivity results in a decrease in the sensor load resistance, and this change is measured by the controller. If the concentration goes beyond the threshold, the sensor sends a signal to the controller, which starts the tasks assigned by the programmer.
* **Software Used**: Two types of software are used in developing this system: Atmel Studio and Proteus. Atmel Studio is used for programming; Embedded C is generally used to program microcontrollers. The studio generates a HEX code file, which is burned into the microcontroller. Proteus, on the other hand, is simulation software; it is general practice to run the program in Proteus before physically connecting the system, in order to avoid failures.
## 4 System Implementation
The hazardous gases like LPG and combustible gases are sensed by the MQ-6 gas sensor, monitored by the AVR microcontroller and displayed on the LCD. In a critical situation, that is, when LPG or propane exceeds the normal level of 1000 ppm, an alarm is generated and an SMS is sent to the authorized user as an alert, which helps in faster defusing of the critical situation. The prototype of the proposed system is shown in the figure below.
Figure 3: The Proposed Model Block Diagram
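The control flow of the implementation can be summarised by the following sketch. The actual firmware is written in Embedded C and assembly for the AVR; this Python pseudocode is only illustrative, and all hardware hooks are placeholder stubs rather than a real driver API:

```python
import random, time

THRESHOLD_PPM = 1000  # alert level for LPG/propane (Section 4)

# Placeholder hardware hooks -- on the real system these would wrap the
# AVR's ADC, the LCD, the buzzer GPIO and the GSM module.
def read_mq6_ppm():    return random.randint(200, 1500)
def lcd_display(msg):  print("LCD:", msg)
def buzzer(on):        print("BUZZER", "ON" if on else "OFF")
def gsm_send_sms(msg): print("SMS:", msg)

def monitor_loop(cycles=5):
    for _ in range(cycles):
        ppm = read_mq6_ppm()           # sensor resistance -> concentration
        lcd_display(f"Gas: {ppm} ppm")
        if ppm > THRESHOLD_PPM:
            buzzer(True)
            gsm_send_sms("EMERGENCY ALERT: LPG gas leakage found in your home")
        else:
            buzzer(False)
        time.sleep(0.1)

monitor_loop()
```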
## 5 Future Scope
Nowadays, automation plays a vital role in human life, and most day-to-day tasks are monitored and controlled by embedded controllers. Some of the applications and extensions that can be explored are:
**Mobile Application**: In this digital era, when everyone has a smartphone, a mobile application can easily be connected to the website to alert the user more efficiently on his phone with the proper details. This application would be directly connected to the website for more instant results.
**Different Gases**: Environmental air quality is becoming increasingly important, and there will be many future requirements for low-cost air quality monitoring sensors, such as MEMS-based metal oxide semiconductor sensors, which are capable of monitoring pollutants such as ozone, carbon monoxide, nitrogen dioxide and ammonia.
**Infrared sensors**: The demands of fail-safe operation and higher reliability have caused many users to switch to infrared gas sensing. These types of sensor generally incorporate a pulsing source of infrared radiation that is absorbed by certain gases in proportion to the gas concentration. The infrared wavelength is chosen to suit a particular gas such as methane or carbon dioxide.
**Lower power and size**: The power consumed by pellistor and infrared types of gas sensors has limited their use in portable instrumentation to some extent, due to battery capacities. Over the last few years, rechargeable technology has developed greatly, with battery chemistry migrating from Nickel-Cadmium (Ni-Cd) to Nickel-Metal Hydride (NiMH) and now various forms of Lithium-Ion (Li-Ion), which offer the best power-to-weight ratio. The mobile phone and consumer electronics markets will continue to drive the development of new battery technologies, with spin-off benefits for the portable gas sensor industry.
Figure 4: Gas detection unit result LCD display
## 6 Conclusions
The importance of gas sensing is set to grow with increasing requirements for safety and environmental protection across many industries. The current range of gas sensing technologies has served us well, but the future holds many new possibilities. Power and size reductions and an improvement in ruggedness will allow a new generation of body-worn devices. A wide variety of leak detecting devices are available for gas pipelines. Some techniques have been improved since their first proposal, and new ones have been designed as a result of advances in sensor manufacturing and computing power.
|
2310.07750 | Cosmological and idealized simulations of dark matter haloes with
velocity-dependent, rare and frequent self-interactions | Dark matter self-interactions may have the capability to solve or at least
mitigate small-scale problems of the cosmological standard model, Lambda Cold
Dark Matter. There are a variety of self-interacting dark matter models that
lead to distinguishable astrophysical predictions and hence varying success in
explaining observations. Studies of dark matter (DM) density cores on various
mass scales suggest a velocity-dependent scattering cross-section. In this
work, we investigate how a velocity dependence alters the evolution of the DM
distribution for frequent DM scatterings and compare to the
velocity-independent case. We demonstrate that these cases are qualitatively
different using a test problem. Moreover, we study the evolution of the density
profile of idealized DM haloes and find that a velocity dependence can lead to
larger core sizes and different time-scales of core formation and core
collapse. In cosmological simulations, we investigate the effect of
velocity-dependent self-interaction on haloes and satellites in the mass range
of $\approx 10^{11} - 10^{14}$ M$_\odot$. We study the abundance of satellites,
density, and shape profiles and try to infer qualitative differences between
velocity-dependent and velocity-independent scatterings as well as between
frequent and rare self-interactions. We find that a strongly velocity-dependent
cross-section can significantly amplify the diversity of rotation curves,
independent of the angular dependence of the differential cross-section. We
further find that the abundance of satellites in general depends on both the
velocity dependence and the scattering angle, although the latter is less
important for strongly velocity-dependent cross-sections. | Moritz S. Fischer, Lenard Kasselmann, Marcus Brüggen, Klaus Dolag, Felix Kahlhoefer, Antonio Ragagnin, Andrew Robertson, Kai Schmidt-Hoberg | 2023-10-11T18:00:00Z | http://arxiv.org/abs/2310.07750v3 | Cosmological and idealised simulations of dark matter haloes with velocity-dependent, rare and frequent self-interactions
###### Abstract
Dark matter self-interactions may have the capability to solve or at least mitigate small-scale problems of the cosmological standard model, \(\Lambda\)CDM. There are a variety of self-interacting dark matter (SIDM) models that lead to distinguishable astrophysical predictions and hence varying success in explaining observations. Studies of dark matter (DM) density cores on various mass scales suggest a velocity-dependent scattering cross-section. In this work we investigate how a velocity dependence alters the evolution of the DM distribution for _frequent_ DM scatterings and compare to the velocity-independent case. We demonstrate that these cases are qualitatively different using a test problem. Moreover, we study the evolution of the density profile of idealised DM haloes and find that a velocity dependence can lead to larger core sizes and different time scales of core formation and core collapse. In cosmological simulations, we investigate the effect of velocity-dependent self-interaction on haloes and satellites in the mass range of \(\approx 10^{11}\)-\(10^{14}\,\mathrm{M}_{\odot}\). We study the abundance of satellites, density and shape profiles and try to infer qualitative differences between velocity-dependent and velocity-independent scatterings as well as between frequent and rare self-interactions. We find that a strongly velocity-dependent cross-section can significantly amplify the diversity of rotation curves, independent of the angular dependence of the differential cross-section. We further find that the abundance of satellites in general depends on both the velocity dependence and the scattering angle, although the latter is less important for strongly velocity-dependent cross-sections.
keywords: astroparticle physics - methods: numerical - galaxies: haloes - dark matter
## 1 Introduction
Historically, dark matter (DM) self-interactions have been motivated to solve problems on small, i.e. galactic scales. It was found that cosmological DM-only simulations can explain the large-scale structure of the universe quite well. But on smaller scales, deviations between the observations and simulations were encountered (e.g. Moore et al., 1998). Spergel and Steinhardt (2000) proposed self-interacting dark matter (SIDM) as a solution to two problems on small scales. Namely, SIDM can reduce the abundance of satellites and the central density of haloes. As the self-interactions lead to heat flow into the central region of a Navarro-Frenk-White (NFW, Navarro et al., 1996) halo, they reduce the central density and can form density cores. The first \(N\)-body simulation using a Monte Carlo scheme of this core formation has been performed by Burkert (2000). Since then SIDM has been found to be capable of solving or at least mitigating further small-scale problems of Cold Dark Matter (CDM) (for a review see Tulin and Yu, 2018; Adhikari et al., 2022). This does not only include the core-cusp problem (e.g. Dave et al., 2001), but also diverse rotation curves (e.g. Creasey et al., 2017; Kamada et al., 2017; Robertson et al., 2018; Correa et al., 2022) and the too-big-to-fail problem (e.g. Zavala et al., 2013; Elbert et al., 2015; Kaplinghat et al., 2019). For a review of small-scale problems in Lambda cold dark matter (\(\Lambda\)CDM), we refer the reader to Bullock and Boylan-Kolchin (2017).
Meanwhile, it has also emerged that there are other avenues to solve these small-scale problems. On the one hand, it was found that
including the baryonic physics, in particular, feedback mechanisms from supernovae (e.g. Read & Gilmore, 2005; Governato et al., 2012; Pontzen & Governato, 2012) and black holes can form density cores (e.g. Martizzi et al., 2013; Silk, 2017; Peirani et al., 2017). On the other hand, researchers have become more cautious about inferring density profiles from rotation curves (e.g. Pineda et al., 2016; Read et al., 2016b; Genina et al., 2018; Oman et al., 2019; Roper et al., 2023; Downing & Oman, 2023). Beyond SIDM, other DM models have been investigated, including warm DM (Dodelson & Widrow, 1994) and fuzzy DM (Hu et al., 2000).
Although SIDM has initially been mainly motivated by small-scale issues, it provides DM candidates worth investigating, independent of the state of the small-scale crisis. The nature of DM is still unknown and could have properties which we can only infer indirectly via astronomical observations. This is true for models of SIDM, and studying them is essentially constraining particle physics properties of DM. Particle candidates that fall into the class of SIDM can have various characteristics. The scattering may be elastic or inelastic, it may involve multiple states and can feature different angular dependencies. Another aspect is how the cross-section depends on the relative velocity of the scattering particles.
Velocity-dependent self-interactions have been recently studied by various authors (e.g. Colin et al., 2002; Nadler et al., 2020; Yang & Yu, 2022; Outmezguine et al., 2023; Yang et al., 2023b). Such studies were performed not only with DM-only (DMO) simulations but also within hydrodynamical cosmological simulations (e.g. Vogelsberger et al., 2014; Robertson et al., 2019, 2020; Rose et al., 2022; Mastromarino et al., 2023; Rahimi et al., 2023). They are well-motivated for different angular dependencies, including forward-enhanced cross-sections from light mediator models (e.g. Buckley & Fox, 2010; Loeb & Weiner, 2011; Bringmann et al., 2017). Models of resonant scattering (e.g. Chu et al., 2019; Tsai et al., 2022) can also explain a velocity dependence while featuring an isotropic cross-section.
From an astronomical perspective, velocity-dependent self-interactions are well motivated (e.g. Kaplinghat et al., 2016; Correa, 2021; Gilman et al., 2021; Sagunski et al., 2021; Silverman et al., 2022; Lovell & Zavala, 2023). They would allow fulfilling stringent constraints from galaxy clusters while having a fairly large effect on low-mass haloes. When the self-interaction cross-section decreases with velocity, it has a weaker effect in galaxy clusters because their typical relative DM velocities are larger than in galaxies. Furthermore, velocity-dependent self-interactions can lead to a qualitatively different evolution of systems that involve multiple velocity scales. For instance, this is true for the evolution of the satellite distribution (e.g. Zeng et al., 2022) and could lead to an increase in the diversity of density profiles and rotation curves (e.g. Nadler et al., 2023; Yang et al., 2023c).
The aim of this study is to explore qualitative differences arising from the velocity dependence of the self-interactions and to understand their implications on constraining the angular dependence of the cross-section. In this paper, we consider two different angular dependencies: firstly, isotropic scattering, to which we refer as rare self-interactions (rSIDM); secondly, a cross-section with typical scattering angles that are very small. In consequence, frequent interactions are needed to significantly alter the DM distribution. Hence, we call it frequent self-interactions (fSIDM). We explore rSIDM and fSIDM models with several velocity dependencies to study qualitative differences arising from the velocity and angular dependence. The scattering of all SIDM models we consider is elastic. For our study, we employ idealised \(N\)-body simulations of a test problem and DM haloes as well as cosmological simulations. Unlike velocity-independent models (Fischer et al., 2022), fSIDM with a velocity-dependent interaction has not been studied in a cosmological context. Finally, all our simulations are DM-only, i.e. we ignore the effects of baryons. In a companion paper (Sabarish et al., in prep.) velocity-dependent fSIDM is studied in the context of merging galaxy clusters.
This paper is structured as follows: In Section 2 we describe the numerical setup of our simulations including a novel time-stepping criterion. A presentation of the simulations and our results follows for the idealised setups in Section 3 and the cosmological simulations in Section 4. Shortcomings and directions for further research are discussed in Section 5. Finally, in Section 6 we conclude. Additional information can be found in the appendices.
## 2 Numerical setup
In this section, we describe our numerical setup. First, we begin by describing the simulation code and the SIDM implementation. We continue with the parametrisation for the velocity-dependent cross-section. Next, we introduce a novel time-step criterion for the velocity-dependent self-interaction. Lastly, the simulations with their initial conditions and the identification of the substructure are described. In addition, a description of our improved parallelisation scheme for SIDM can be found in Appendix A.
### SIDM implementation and simulations
For our simulations, we use the cosmological hydrodynamical \(N\)-body code gadget-3. The predecessor gadget-2 has been described in Springel (2005). Various additional modules have been developed for the OpenGadget3 version that we are using. The implementation of DM self-interactions has been described by Fischer et al. (2021, 2022). To date, this is the only implementation for simulating frequently self-interacting DM. The code is also able to simulate isotropic cross-sections. Here, the employed scheme is very similar to the one introduced by Rocha et al. (2013), except that we use an adaptive kernel size set by the 64 next neighbours and a different time step criterion.
We have run several simulations of CDM, rSIDM and fSIDM for idealised setups with individual haloes as well as cosmological simulations. For all simulations we used the cosmological \(N\)-body code gadget-3. The details of the simulations can be found in the corresponding Sections 3 and 4. In addition, we ran simulations to test the code; they can be found in Appendices C and D.
### Velocity-dependent cross-section
There are numerous studies in the literature considering a cross-section, \(\sigma\), that depends on the scattering velocity, \(v\). A typical choice - that we employ as well - is a cross-section that scales as \(\sigma\propto v^{-4}\) in the limit of high \(v\). This dependence may be motivated by particle physics (e.g. Ibe & Yu, 2010; Tulin et al., 2013) and has been employed in numerous studies (e.g. Kaplinghat et al., 2016; Robertson et al., 2017).
Following Robertson et al. (2017); Kahlhoefer et al. (2017), we consider the momentum transfer cross-section
\[\sigma_{\rm T}=2\pi\int_{-1}^{1}\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega_{\rm cms}}\left(1-\left|\cos\theta_{\rm cms}\right|\right)\mathrm{d}\cos\theta_{\rm cms}\,. \tag{1}\]
We parameterise the velocity dependence of the momentum transfer cross section as
\[\frac{\sigma_{\rm T}}{m}=\frac{\sigma_{0}}{m}\left(1+\left(\frac{v}{w}\right) ^{\beta}\right)^{\alpha/\beta}\,. \tag{2}\]
Here, \(\sigma_{0}\) corresponds to the cross-section in the velocity-independent regime, \(w\) denotes the velocity cutoff, \(\alpha\) sets the decline at high velocities and \(\beta\) describes the transition from the constant cross-section at low velocities to the decreasing cross-section at high velocities. In this study, we always set \(\alpha=-4\) and \(\beta=2\). This choice is motivated by the fact that in the limit of the Born-approximation, the velocity dependence of the total and the transfer cross-section are very similar (Ibe and Yu, 2010). More details on the transfer cross-section and the possible connections to the underlying particle physics can be found in the companion paper (Sabarish et al., in prep.).
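For concreteness, Eq. (2) with our choice \(\alpha=-4\), \(\beta=2\) reads as follows in code (our own illustration, not an excerpt from OpenGadget3):

```python
def sigma_T_over_m(v, sigma0_over_m, w, alpha=-4.0, beta=2.0):
    """Momentum-transfer cross-section per mass, Eq. (2).
    v and w in km/s; sigma0_over_m in cm^2/g."""
    return sigma0_over_m * (1.0 + (v / w) ** beta) ** (alpha / beta)

# Example: the f1e5w100 model of Tab. 2 evaluated at v = 100 km/s
# gives 1e5 * (1 + 1)**(-2) = 2.5e4 cm^2/g.
print(sigma_T_over_m(100.0, 1e5, 100.0))
```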
In most physically motivated cases, a velocity dependence also implies an angular dependence of the differential scattering cross-section. \(N\)-body simulations had been limited in simulating frequent scatterings about small angles until the work by Fischer et al. (2021). Here, we go beyond the common large-angle scattering and investigate small-angle as well as isotropic scattering combined with a velocity dependence.
In order to probe different velocity regimes, we use several combinations of \(\sigma_{0}\) and \(w\). These are described together with the details of the simulations in Section 3 and Section 4. Each parameter set is simulated with fSIDM and rSIDM, the latter corresponding to isotropic scattering. Note that we use the momentum transfer cross-section (Eq. 1) to match fSIDM and rSIDM. In the case of isotropic scattering, the total cross-section is twice as large as the momentum transfer cross-section.
### Time step criterion
For velocity-dependent self-interactions, a separate time-step criterion can become more important than for velocity-independent scatterings because cross-sections can become large at low velocities. Depending on the cross-section this can give more stringent limitations on the time step than imposed by the gravity scheme. We found that the time-step criterion introduced by Fischer et al. (2021) for velocity-independent self-interactions is not always well-suited for a velocity-dependent cross-section (this has been previously described by Kasselmann, 2021). Here we introduce a new time step criterion for velocity-dependent scattering that has a velocity-dependence as described by Eq. 2. In more general terms, our time step criterion requires that there is a finite velocity for which the fractional velocity change due to the drag force becomes maximal and finite. We remind the reader that the effective drag force for fSIDM was introduced by Kahlhoefer et al. (2014) and employed to develop a numerical scheme by Fischer et al. (2021). It is given as
\[F_{\rm drag}=\frac{1}{2}\,\frac{\sigma_{\rm T}(v)}{m}\,v^{2}\,m_{\rm n}^{2}\, \Lambda\,. \tag{3}\]
The relative particle velocity is denoted by \(v\), \(m_{\rm n}\) is the numerical particle mass and \(\Lambda\) is the kernel overlap, a geometrical factor (for details see Fischer et al., 2021).
Assuming the parametrisation according to Eq. 2 the fractional velocity change (\(\Delta v/v\)) due to the drag force becomes maximal for the velocity
\[v_{e}=\frac{w}{(-1-\alpha)^{1/\beta}}\,. \tag{4}\]
Note that this is only applicable if \(\alpha<-1\) and \(\beta>0\). For our choice of \(\alpha=-4\) and \(\beta=2\) this implies \(v_{e}=w/\sqrt{3}\).
Using the maximum allowed fractional velocity change \(\tau\), we can express the time-step criterion for particle \(i\) as
\[\Delta t_{i}<\tau\,\frac{2}{v_{e}}\frac{1}{m_{\rm n}\,\Lambda_{ii}}\left(\frac {\sigma_{\rm T}(v_{e})}{m}\right)^{-1}\,. \tag{5}\]
Here, \(\Lambda_{ii}\) gives the maximal possible kernel overlap by calculating it with the particle itself.
It is worth pointing out that this time step depends on the chosen number of neighbours, \(N_{\rm ngb}\). With a larger number of neighbours \(\Lambda_{ii}\) becomes smaller and thus the time step is larger and vice versa. Finally, we note that this time step criterion also applies to rSIDM when using the total cross-section, \(\sigma\), instead of \(\sigma_{\rm T}\). For rSIDM, the scattering probability reaches a maximum at \(v_{e}\) (see Eq. 4) too.
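A sketch of the resulting bound is given below (again our own illustration; in the code, \(\Lambda_{ii}\) follows from the adaptive kernel set by the 64 next neighbours, and all quantities must be in consistent code units):

```python
def sidm_timestep(tau, w, m_num, lambda_ii, sigma0_over_m,
                  alpha=-4.0, beta=2.0):
    """Time-step bound of Eq. (5) for particle i.
    Requires alpha < -1 and beta > 0 so that v_e in Eq. (4) is finite."""
    v_e = w / (-1.0 - alpha) ** (1.0 / beta)                      # Eq. (4)
    sigma_e = sigma0_over_m * (1.0 + (v_e / w) ** beta) ** (alpha / beta)
    return tau * 2.0 / (v_e * m_num * lambda_ii * sigma_e)
```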
In Appendix B, we provide further discussion on issues related to the formulation of a time step criterion.
## 3 Idealised simulations
In this section, we present and analyse our idealised simulations and show the results we obtain. First, we start with a simple test problem in Sec. 3.1. Secondly, the evolution of the core size for isolated haloes is shown (Sec. 3.2) for both initial Hernquist and NFW profiles.
### Thermalisation problem
To learn about the differences between a constant and a velocity-dependent cross-section, we first consider the thermalisation problem previously studied by Fischer et al. (2021). This has the advantage that we study the pure effect of DM self-interactions without the influence of gravity.
The numerical setup consists of a periodic box with a constant density of \(10^{7}\,\mathrm{M}_{\odot}\,\mathrm{kpc}^{-3}\) sampled by \(10^{4}\) particles. The cubic box has a side length of 10 kpc and its particles have a velocity of \(2\,\mathrm{km}\,\mathrm{s}^{-1}\) pointing in a random direction. In Tab. 1, we describe the employed cross-sections. The scattering broadens the velocity distribution such that it evolves towards a Maxwell-Boltzmann distribution. We can characterise the width of the distribution of the absolute velocities by computing its variance.
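Because the scattering is elastic, \(\langle v^{2}\rangle=(2\,\mathrm{km\,s^{-1}})^{2}\) is conserved, so the variance of the final Maxwell-Boltzmann speed distribution follows in closed form. The snippet below is our own consistency check, which should reproduce the black line in Fig. 1:

```python
import numpy as np

v0 = 2.0                      # initial speed of every particle [km/s]
v2 = v0 ** 2                  # <v^2>, conserved by elastic scattering
# For a Maxwell-Boltzmann speed distribution with 1D dispersion s:
# <v^2> = 3 s^2 and <v> = sqrt(8/pi) s, so Var(v) = <v^2> (1 - 8/(3 pi)).
var_final = v2 * (1.0 - 8.0 / (3.0 * np.pi))
print(f"expected final variance: {var_final:.3f} (km/s)^2")   # ~0.605
```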
In Fig. 1 we show the results as a function of time. For frequent self-interactions, this has been previously studied by Kasselmann (2021). In line with his results, we find that the thermalisation rate evolves qualitatively differently for velocity-dependent self-interactions compared to a constant cross-section. The thermalisation process evolves faster at early times and slower at late times for the velocity-dependent self-interactions. For the isotropic cross-section, we find the same. Initially, the system evolves faster for the velocity-dependent cross-sections, because the cross-section evaluated at the typical relative velocity of the particles is larger compared to the velocity-independent cross-section. The lower thermalisation rate at late times, i.e. when the velocity distribution is already close to the Maxwell-Boltzmann distribution, stems mainly from a deviation at the high-velocity tail. The decrease of the cross-section with velocity makes velocity-dependent self-interactions less efficient in scattering
| name | type | \(\sigma_{0}/m\) [cm\(^{2}\) g\(^{-1}\)] | \(w\) [km s\(^{-1}\)] |
|---|---|---|---|
| f10 | frequent | 10 | – |
| r10 | rare | 10 | – |
| f4.5e6w0.1 | frequent | \(4.5\times 10^{6}\) | 0.1 |
| r4.5e6w0.1 | rare | \(4.5\times 10^{6}\) | 0.1 |

Table 1: The different cross-sections that we used for the thermalisation problem. The first column gives the name that we use in the paper to abbreviate the cross-section, followed by the type of self-interaction; here, “rare” corresponds to isotropic scattering. The third column gives \(\sigma_{0}/m\) and the last one \(w\) (see also Eq. 2).
particles to high velocities. In consequence, the thermalisation rate in a late stage is reduced.
### Isolated haloes
Here, we study the evolution of isolated haloes subject to velocity-dependent self-interactions. Firstly, we investigate the density profile of an isolated halo with a density following a Hernquist profile (Hernquist, 1990) and secondly, we do the same for a halo with an NFW profile (Navarro et al., 1996). For the two haloes, we also compare rare and frequent self-interactions.
#### 3.2.1 Hernquist Halo
We simulate the same Hernquist halo as first described by Robertson et al. (2017b). It has a mass of \(M=2.46\times 10^{14}\) M\({}_{\odot}\) and a scale radius of \(r_{s}=279\) kpc. We generate the initial conditions by sampling the halo up to \(r=400\,r_{s}\) using \(N=10^{7}\) particles. For the gravitational softening length we employ \(\epsilon=0.56\) kpc. The simulations include velocity-independent and velocity-dependent cross-sections, both for rSIDM and fSIDM. In detail, the cross-sections are shown in Tab. 2. It is worth noting that for the velocity-dependent simulations, the SIDM time-step constraint was tighter than the one from gravity, at least for a fraction of the particles. This led to a significant increase in computational costs. We determine the core size, \(r_{\rm core}\), as previously done by Robertson et al. (2017a) and Fischer et al. (2021a), by fitting a cored Hernquist profile. It is given as
\[\rho(r)=\frac{M}{2\pi}\frac{r_{s}}{(r^{4}+r_{\rm core}^{4})^{1/4}}\frac{1}{(r +r_{s})^{3}}. \tag{6}\]
As in the original Hernquist profile, \(M\) denotes the halo mass and \(r_{\rm s}\) the scale radius. The evolution of the core size is shown in Fig. 2 for the different DM models.1
Footnote 1: We found the exact core size to be sensitive to details of the optimisation procedure, which might be caused by a noisy likelihood. This might be the main source of different core sizes for the same halo in the literature (Robertson et al., 2017b; Fischer et al., 2021a; Correa et al., 2022).
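A sketch of such a fit is given below (our own illustration, not the analysis code; fitting \(\log_{10}\rho\) is our choice to tame the dynamic range, and `r_bins`, `rho_meas` stand for a binned density profile measured from the simulation particles):

```python
import numpy as np
from scipy.optimize import curve_fit

M_HALO, R_S = 2.46e14, 279.0          # Msun, kpc (Sec. 3.2.1)

def cored_hernquist(r, r_core):
    """Cored Hernquist density of Eq. (6) [Msun / kpc^3]."""
    return (M_HALO / (2.0 * np.pi) * R_S
            / (r**4 + r_core**4) ** 0.25 / (r + R_S) ** 3)

def fit_core_size(r_bins, rho_meas, r_core0=10.0):
    """Least-squares fit of r_core to a binned density profile."""
    logmodel = lambda r, rc: np.log10(cored_hernquist(r, rc))
    popt, _ = curve_fit(logmodel, r_bins, np.log10(rho_meas), p0=[r_core0])
    return popt[0]
```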
In the early stages, the density core grows due to self-interactions whose effect can be described as heat transfer (e.g. Lynden-Bell & Eggleton, 1980; Balberg et al., 2002) that follows the gradient of the velocity dispersion. As a result, the central region of the halo heats up and its density is decreasing. For the collisionless DM, we find a small core caused by gravitational two-body interactions, a process known as numerical core formation (e.g. Dehnen, 2001). At later stages, the core size is decreasing and the halo enters the collapse phase. In this phase, heat is only transported outward, as the central region cools it also contracts. Gravitational bound systems are characterised by a negative heat capacity. This is for example well known from star clusters but also applies to the haloes we study here. In consequence, the velocity dispersion at the central region of the halo is increasing and leads to a runaway process called the gravothermal catastrophe.
In previous studies it was found that the maximum core size that is reached during the halo's evolution is roughly independent of the strength of the cross-section (e.g. Kochanek & White, 2000) and also of its angular dependence (e.g. Robertson et al., 2017a; Fischer et al., 2021). In contrast, we find that the velocity-dependent cross-sections give a larger maximum core size. However, we have to note that this only occurs for sufficiently small values of \(w\). For the initial Hernquist halo, heat flows inwards for radii smaller than the radius of the maximal velocity dispersion, \(r(v_{\rm max}^{2})\), which should set the core formation time. In contrast, for radii larger than \(r(v_{\rm max}^{2})\),
| name | type | \(\sigma_{0}/m\) [cm\(^{2}\) g\(^{-1}\)] | \(w\) [km s\(^{-1}\)] |
|---|---|---|---|
| c0 | collisionless | 0.0 | – |
| f0.8 | frequent | 0.8 | – |
| r0.8 | rare | 0.8 | – |
| f1e5w100 | frequent | \(10^{5}\) | 100 |
| r1e5w100 | rare | \(10^{5}\) | 100 |

Table 2: The cross-sections that we employed for simulating a Hernquist halo. The columns are the same as in Tab. 1.
Figure 1: The variance for the distribution of absolute velocities of the thermalisation problem introduced by Fischer et al. (2021a) is shown. We display the results for different SIDM models as a function of time. In black we indicated the variance of the final Maxwell–Boltzmann distribution.
Figure 2: The size of the density core for a Hernquist halo as a function of time is shown when evolved with different DM models. We indicate the cross-section in the legend. The first number refers to \(\sigma_{0}/m\) in units of \(\rm cm^{2}\,g^{-1}\) and the second one to \(w\) in units of \(\rm km\,s^{-1}\) (see Tab. 2). The first two SIDM simulations are for a velocity-independent cross-section and the number gives \(\sigma_{\rm T}/m\).
heat flows outward, determining the core collapse time. The maximum core size should result from the ratio of total heat inflow to outflow. In consequence, a DM candidate that, compared to other DM models, is more efficient in transporting heat inward than outward would produce a larger maximum core size. We discuss this further in Sec. 3.2.3, after we have shown the results for the isolated NFW halo.
However, to gain further insights into the halo initially following a Hernquist profile, we first plot various quantities at the time of maximum core expansion in Fig. 3. The upper panel shows the density and velocity dispersion profiles, and the bottom panel displays quantities related to heat conductivity.
In the following, we describe how we compute the quantities of the bottom panel. Assuming identical particles the viscosity cross-section is given by
\[\sigma_{\nu}=4\pi\int_{0}^{1}\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\sin^{2 }\theta\,\mathrm{d}\cos\theta\,. \tag{7}\]
Based on this we can express the effective cross-section of Yang & Yu (2022) as
\[\sigma_{\mathrm{eff}}=\frac{3}{2}\frac{\langle v^{5}\sigma_{\nu}(v)\rangle}{ \langle v^{5}\rangle}\,. \tag{8}\]
They introduced the effective cross-section with the aim of matching differential cross-sections with various angular and velocity dependencies. It thus allows transferring constraints on the strength of self-interactions between various SIDM models. Here, the average is computed assuming the velocities are well described by a Maxwell-Boltzmann distribution. Next, we give the heat conductivity in terms of \(\sigma_{\mathrm{eff}}\). Strictly speaking, we do not specify the heat conductivity \(\kappa\), but use \(\kappa^{t}=m/\mathrm{k}_{\mathrm{B}}\,\kappa\), with \(m\) the DM particle mass and \(\mathrm{k}_{\mathrm{B}}\) the Boltzmann constant. This is commonly used in the gravothermal fluid model (e.g. Koda & Shapiro, 2011). Note that Kummer et al. (2019) took the angular dependence into account by expressing the heat conductivity in terms of the viscosity cross-section. Here we go further and use the effective cross-section for \(\kappa^{t}\). For the short-mean-free-path (smfp) regime it is given as
\[\kappa^{t}_{\mathrm{smfp}}=\frac{9\,b\,v}{4}\left(\frac{\sigma_{\mathrm{eff}}} {m}\right)^{-1}\quad\mathrm{with}\quad b=\frac{25\sqrt{\pi}}{32}\,. \tag{9}\]
Here, \(v\) denotes the one-dimensional velocity dispersion. In the long-mean-free-path (lmfp) regime the heat conductivity can be expressed as
\[\kappa^{t}_{\mathrm{lmfp}}=\hat{d}\,C\,\frac{v^{3}\rho}{4\pi\mathrm{G}}\left( \frac{\sigma_{\mathrm{eff}}}{m}\right)\quad\mathrm{with}\quad\hat{d}=\sqrt{ \frac{16}{\pi}},\ C\approx 0.75\,. \tag{10}\]
Here, \(\rho\) denotes the density and \(\mathrm{G}\) is the gravitational constant.
The Knudsen number, \(\mathrm{Kn}\), is usually used to distinguish between the lmfp and smfp regime and is defined as
\[\mathrm{Kn}=\frac{3}{2}\sqrt{\frac{4\pi\mathrm{G}}{\rho v^{2}}}\left(\frac{ \sigma_{\mathrm{eff}}}{m}\right)^{-1}\,. \tag{11}\]
Numerically \(\mathrm{Kn}>1\) corresponds to the lmfp regime and \(\mathrm{Kn}<1\) to the smfp regime.
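The quantities of the bottom panel of Fig. 3 can be evaluated with a short script. The sketch below computes \(\sigma_{\mathrm{eff}}\) (Eq. 8) by numerical quadrature over a Maxwell-Boltzmann distribution and then \(\kappa^{t}_{\mathrm{smfp}}\), \(\kappa^{t}_{\mathrm{lmfp}}\) and Kn (Eqs. 9-11). The specific velocity dependence assigned to \(\sigma_{\nu}(v)\), the example values of \(\rho\) and the 1D dispersion, and the unit choices are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import quad

G = 4.30e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def sigma_nu(v, sigma0=1e5, w=100.0):
    # Assumed velocity dependence of the viscosity cross-section [cm^2/g],
    # modelled after Eq. (2): sigma0 / (1 + v^2/w^2)^2.
    return sigma0 / (1.0 + (v / w) ** 2) ** 2

def sigma_eff(nu1d, **kw):
    """Effective cross-section of Yang & Yu (2022), Eq. (8): a <v^5>-weighted
    Maxwell-Boltzmann average; nu1d is the 1D velocity dispersion [km/s]."""
    a = np.sqrt(2.0) * nu1d          # scale of the relative-velocity distribution
    mb = lambda v: v**2 * np.exp(-v**2 / (2.0 * a**2))
    num = quad(lambda v: v**5 * sigma_nu(v, **kw) * mb(v), 0.0, 20.0 * a)[0]
    den = quad(lambda v: v**5 * mb(v), 0.0, 20.0 * a)[0]
    return 1.5 * num / den

def conductivities(rho, nu1d, **kw):
    """kappa^t in the smfp/lmfp regimes (Eqs. 9-10) and the Knudsen number
    (Eq. 11), in mixed (kpc, km/s, Msun) units; rho in Msun/kpc^3."""
    s_kpc = sigma_eff(nu1d, **kw) * 2.089e-10    # cm^2/g -> kpc^2/Msun
    b, d_hat, C = 25.0 * np.sqrt(np.pi) / 32.0, np.sqrt(16.0 / np.pi), 0.75
    k_smfp = 2.25 * b * nu1d / s_kpc
    k_lmfp = d_hat * C * nu1d**3 * rho / (4.0 * np.pi * G) * s_kpc
    kn = 1.5 * np.sqrt(4.0 * np.pi * G / (rho * nu1d**2)) / s_kpc
    return k_smfp, k_lmfp, kn

print(conductivities(rho=1e7, nu1d=500.0))       # example values
```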
From the upper panel of Fig. 3, we can see that the velocity dispersion at the time of maximum core expansion is roughly constant for radii smaller than \(r\langle v_{\mathrm{max}}^{2}\rangle\). However, the density core itself is much smaller, resulting in a steep density gradient at \(r\langle v_{\mathrm{max}}^{2}\rangle\).
The bottom panel shows the maximum core sizes for the velocity-dependent and velocity-independent cross-sections. It is visible that the maximum core size is smaller than \(r\langle v_{\mathrm{max}}^{2}\rangle\). Moreover, we can see that the Knudsen number increases as a function of radius and is always much larger than unity, implying that the halo is always in the lmfp regime. For radii smaller than \(r_{\mathrm{s}}\), the corresponding heat conductivity (\(\kappa^{t}_{\mathrm{lmfp}}\)) is larger for the velocity-independent cross-section. In contrast, \(\kappa^{t}_{\mathrm{smfp}}\) is larger for the velocity-dependent cross-section. If the cross-section decreases with velocity, small scattering velocities may play a more important role in the heat conduction, relative to large velocities, than for velocity-independent cross-sections (see also Sec. 3.1).
However, using the effective cross-section may eventually be problematic for extreme velocity dependencies. Depending on its velocity, a DM particle sees a different distribution of relative velocities and thus has a mean free path that depends on its velocity.
Figure 3: Various properties of the halo following initially a Hernquist profile are shown at the evolution stage when its density core is the largest. In the upper panel, we show the density (black) and the velocity dispersion (blue) as a function of radius. Moreover, the scale radius, \(r_{s}\), and the radius at which the velocity dispersion of the initial profile reaches its maximum, \(r\langle v_{\mathrm{max}}^{2}\rangle\), are indicated. The lower panel gives \(\kappa^{t}\) for the smfp (grey) and lmfp (black) regime (see Eq. 9 and Eq. 10) as well as the Knudsen number (see Eq. 11). These quantities are computed based on the effective cross-section, \(\sigma_{\mathrm{eff}}/m\). In addition, the maximum core sizes are shown for the runs with the frequent self-interactions, i.e. for the velocity-independent and velocity-dependent cross-sections. To compute the quantities that are shown as a function of radius we used the simulation with frequent self-interactions and without velocity-dependence.
Unfortunately, it is not understood how the evolution in the lmfp regime could be derived from first principles. This complicates a precise description of the heat conduction in the halo.
#### 3.2.2 NFW Halo
We studied the core formation in an isolated NFW halo using various DM models. These include velocity-independent cross-sections for fSIDM and rSIDM, each with \(\sigma_{\rm T}/m=10.0\,\rm cm^{2}\,g^{-1}\), and velocity-dependent fSIDM and rSIDM cross-sections with \(\sigma_{0}/m=5000.0\,\rm cm^{2}\,g^{-1}\), \(w=720\,\rm km\,s^{-1}\) and \(\sigma_{0}/m=2.5\times 10^{5}\,\rm cm^{2}\,g^{-1}\), \(w=180\,\rm km\,s^{-1}\). The cross-sections and the abbreviations we use for them are also shown in Tab. 3.
For the NFW halo we use the same initial conditions as used by Fischer et al. (2021) for fig. 5. Our halo has a virial mass of \(10^{15}\,\rm M_{\odot}\), a scale radius of \(300\,\rm kpc\) and a density parameter of \(\rho_{0}\equiv 4\rho(r_{\rm s})=2.9\times 10^{6}\,\rm M_{\odot}\,\rm kpc^{-3}\). The halo is sampled up to the virial radius (\(r_{\rm vir}=1626\,\rm kpc\)) and resolved by \(N=10^{6}\) particles. For the simulations we employ a gravitational softening length of \(\epsilon=0.56\,\rm kpc\).
We measure the core size by fitting a cored NFW profile2. It is given by,
Footnote 2: In the literature, other descriptions of a cored NFW profile exist (e.g. Read et al., 2016; Ray et al., 2022). Here we use the form given in Eq. (12) for the halo initially following an NFW profile.
\[\rho(r)=\frac{\rho_{0}}{\left(r^{4}+r_{\rm core}^{4}\right)^{1/4}}\,\frac{r_{ \rm s}}{\left(1+r/r_{\rm s}\right)^{2}}\,. \tag{12}\]
For the fitting procedure we have \(\rho_{0}\), \(r_{\rm s}\) and \(r_{\rm core}\) as free parameters. We maximise a likelihood based on Poisson statistics as described in sec. 4 of Fischer et al. (2021).
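A minimal sketch of such an unbinned Poisson-likelihood fit is given below. The mock particle sample, the starting values and the optimiser choice are assumptions; the actual implementation follows sec. 4 of Fischer et al. (2021).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

def cored_nfw(r, rho0, rs, rc):
    """Cored NFW density profile, Eq. (12)."""
    return rho0 * rs / ((r**4 + rc**4) ** 0.25 * (1.0 + r / rs) ** 2)

def neg_log_like(params, radii, r_max):
    rho0, rs, rc = np.exp(params)    # log-parameters keep the values positive
    # Unbinned Poisson likelihood: each particle contributes log(4 pi r^2 rho),
    # and the expected total count enters via the profile normalisation.
    norm = quad(lambda r: 4.0 * np.pi * r**2 * cored_nfw(r, rho0, rs, rc),
                0.0, r_max)[0]
    return norm - np.sum(np.log(4.0 * np.pi * radii**2
                                * cored_nfw(radii, rho0, rs, rc)))

# Mock particle radii drawn from a cored NFW halo via rejection sampling.
rng = np.random.default_rng(0)
r_max, rs_true, rc_true = 1600.0, 300.0, 40.0
r_try = rng.uniform(0.0, r_max, 400_000)
p = r_try**2 * cored_nfw(r_try, 1.0, rs_true, rc_true)
radii = r_try[rng.uniform(0.0, p.max(), r_try.size) < p][:50_000]

x0 = np.log([radii.size / 3e8, 200.0, 20.0])     # rough starting point
res = minimize(neg_log_like, x0, args=(radii, r_max), method="Nelder-Mead")
print("fitted (rho0, r_s, r_core):", np.exp(res.x))
```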
The core sizes for different DM models are shown in Fig. 4. First, we consider the cross-sections, f10, r10, f5e3w720 and r5e3w720. For the phase of the core formation and the onset of core collapse up to \(\approx 4\,\rm Gyr\) the core sizes are very similar. Only the velocity-dependent rSIDM cross-section yields slightly larger core sizes. Hence the momentum transfer cross-section provides a good match between fSIDM and rSIDM in the given case. Only at later stages of the halo evolution do differences between the models occur. When the core size is almost zero, it seems that small-angle scattering slows down the core collapse compared to isotropic scattering. These results are partially in line with previous work. Yang & Yu (2022) found that a constant and velocity-dependent cross-section behave
Figure 4: We display the core size for an NFW halo, which we simulated with different DM models. The abbreviations for the cross-sections are explained in Tab. 3.
Figure 5: We show the same as in Fig. 3, but for the NFW simulations. The maximum core size of the velocity-dependent model refers to the run with f2.5e5w180.
\begin{table}
\begin{tabular}{l c c c} name & type & \(\sigma_{0}/m\) & \(w\) \\ & & [\(\rm cm^{2}\,g^{-1}\)] & [\(\rm km\,s^{-1}\)] \\ \hline c0 & collisionless & 0 & – \\ f10 & frequent & 10 & – \\ r10 & rare & 10 & – \\ f5e3w720 & frequent & \(5\times 10^{3}\) & 720 \\ r5e3w720 & rare & \(5\times 10^{3}\) & 720 \\ f2.5e5w180 & frequent & \(2.5\times 10^{5}\) & 180 \\ r2.5e5w180 & rare & \(2.5\times 10^{5}\) & 180 \\ \end{tabular}
\end{table}
Table 3: The cross-sections that we employed for simulating an NFW halo are shown. The columns are the same as in Tab. 1.
qualitatively very similarly for most of the halo evolution but differ at the late stages of the collapse phase. They also found that the viscosity cross-section provides a better match between different angular dependencies than the momentum transfer cross-section. In the companion paper (Sabarish et al., in prep.), it is found that the viscosity cross-section can indeed provide a reasonable, but not perfect, match between isotropic scattering and a very anisotropic cross-section in the fSIDM limit. In contrast, for our setup with a much stronger cross-section, the momentum transfer cross-section provides a very good match regardless of the velocity dependence. However, we should point out that the quality of the match depends on the halo properties and the strength of the self-interactions, see, e.g. fig. 9 of Fischer et al. (2022) (we show this result again in Sec. 4.2.2). There, one can see for the larger cross-section that the momentum transfer matching yields a larger effect of fSIDM on the central densities of DM haloes at the high-mass end compared to rSIDM. For lower-mass haloes, this changes and rSIDM has a stronger effect on the central halo density. As Yang & Yu (2022) simulated NFW haloes with a mass of \(M_{200}=10^{7}\,\mathrm{M}_{\odot}\) and a concentration of \(c_{200}\approx 20\) (for details see their tab. 1), they probed a different regime than we do here. Hence, the quality of a matching procedure for the angular dependence could depend on the halo properties and the strength of the self-interactions. It is also important to note that the inner regions of our NFW halo are in the smfp regime or close to it (\(\mathrm{Kn}<1\)) and not in the lmfp regime for the velocity-independent cross-sections.
For the strongly velocity-dependent cross-section, i.e. the one with \(w=180\,\mathrm{km}\,\mathrm{s}^{-1}\), we find that the evolution differs qualitatively from the ones with a weaker velocity-dependence. The results are somewhat similar to the results for the Hernquist halo, the maximum core size becomes larger and the collapse time longer compared to the core formation time. However, the increase in the maximum core size is weaker compared to the Hernquist halo. This could be because the cross-section we have simulated is not as extremely velocity-dependent as for the Hernquist halo (\(w=100\,\mathrm{km}\,\mathrm{s}^{-1}\) for the Hernquist halo and \(w=180\,\mathrm{km}\,\mathrm{s}^{-1}\) for the NFW halo). Note that the NFW halo has a larger total mass and hence a larger velocity dispersion than the Hernquist halo, such that the two simulations cannot be directly compared. But when \(w\) is compared to the typical scattering velocity of the halo, the velocity dependence appears to be similar. In consequence, it is plausible that the difference in maximum core size stems primarily from a different reason such as the details of the density profile.
Analogously to the Hernquist halo, we have computed the same quantities as in Fig. 3, but for the NFW halo, and show them in Fig. 5. In contrast to the Hernquist halo, we find that the central region of the halo has a Knudsen number smaller than unity when simulated with the velocity-independent cross-section and thus would be considered to be in the smfp regime. In addition, the heat conductivity in the two regimes is more similar. But the Knudsen number varies strongly with the velocity dependence. As for the Hernquist halo, \(\kappa^{t}_{\mathrm{smfp}}\) has a larger value in the case of the velocity-dependent cross-section and \(\kappa^{t}_{\mathrm{lmfp}}\) is larger for the velocity-independent cross-section.
#### 3.2.3 Discussion of isolated halo evolution
In this last part on isolated haloes, we discuss the physics driving their evolution. During the evolution of the halo, the central velocity dispersion is increasing and the effective strength of the self-interactions may change according to the velocity dependence of the cross-section. An increasing velocity dispersion implies higher relative velocities of the DM particles and for a cross-section that decreases with velocity this leads to fewer scatterings.
The halo may reach its maximum core size when the gradient of the velocity dispersion has become zero. Afterwards, heat is only flowing outwards, which leads to a shrinking density core and the gravothermal collapse of the halo. While the density core is shrinking the central velocity dispersion is increasing. Given this increase in velocity dispersion, one would expect that the collapse is slowing down for a velocity-dependent cross-section compared to a velocity-independent one. However, in our simulations, we do not find an indication that the rate at which the density core is shrinking changes due to the velocity dependence (see Fig. 4). Instead, we only found that the core collapse time scale relative to the core formation time scale changes.
The evolution of the halo may not only be determined by the central region; radii up to \(r\langle v_{\mathrm{max}}^{2}\rangle\) and a bit beyond may also play a crucial role. A core-collapse rate that is insensitive to the velocity dependence might be caused by the relevant velocity dispersion staying roughly constant. Indeed, the velocity dispersion at larger radii is less affected during the evolution and may play a crucial role in the core collapse. Right from the beginning of the simulation, during core formation, heat flows outward at radii larger than \(r\langle v_{\mathrm{max}}^{2}\rangle\). This heat flow takes place at velocities that are larger than in the central region of the halo. In consequence, the ratio of heat inflow to outflow depends on the velocity dependence of the scattering. For example, this is visible in the core formation and core collapse times, which are set by heat inflow and outflow.
The cross-sections we have simulated lead to roughly the same core formation time. For strongly velocity-dependent cross-sections, less heat outflow takes place during that time. This can result in a larger maximum core size, as we found for the Hernquist halo (see Fig. 2). The maximum core size depends on the transition radius between heat inflow and outflow. Initially, this radius is set by \(r\langle v_{\mathrm{max}}^{2}\rangle\) but evolves according to the ratio of heat inflow to outflow. As we found, this evolution is only significantly affected by strongly velocity-dependent cross-sections.
Overall, it becomes clear that if the scattering is velocity-dependent, the evolution of an isolated halo can change qualitatively. However, we do not have a precise understanding of the physical mechanisms driving this difference. How effective the heat outflow in the lmfp regime is could depend on the gradient of the gravitational potential and on the ability to scatter particles to large velocities. It could be mainly the high-velocity particles exceeding the escape velocity and carrying energy away that drive the core collapse. In this context, the exact density profile may eventually matter. For example, the Hernquist and NFW profiles that we have investigated have a different slope in the outskirts, implying a different gradient of the gravitational potential. Further investigation is needed to fully understand the evolution of isolated haloes.
## 4 Cosmological simulations
We present our cosmological simulations in this section and show the results we obtain. First, we describe the simulations, followed by the analysis of the data. This includes many aspects such as the density and shape profiles of the DM haloes and the abundance of satellites.
### Simulations
We have run several simulations of CDM, rSIDM and fSIDM. For the SIDM models we use two different velocity dependencies, namely \(w=180\,\mathrm{km\,s}^{-1}\) and \(w=560\,\mathrm{km\,s}^{-1}\). For each of them we have models that differ in \(\sigma_{0}\) by one order of magnitude. Our simulations are run with fSIDM and a momentum-transfer-matched isotropic cross-section. The details of the DM models are given in Tab. 4 and their velocity dependence is plotted in Fig. 6. Here, we also show the scattering velocities inside the centres of haloes from three different mass bins, which we use in Sec. 4.2. The velocities are indicated with a Maxwell-Boltzmann distribution expressed per logarithmic velocity interval,
\[f_{\mathrm{log}}(v_{\mathrm{scat}})=\sqrt{\frac{2}{\pi}}\,\frac{v_{\mathrm{scat}}^{3}}{a^{3}}\,e^{-\frac{v_{\mathrm{scat}}^{2}}{2a^{2}}}\quad\mathrm{with}\quad a=\sqrt{2\,\nu^{2}}\,. \tag{13}\]
The distribution of scattering velocities, \(v_{\mathrm{scat}}\), depends on the squared one-dimensional velocity dispersion, \(\nu^{2}\), of the halo. In Appendix F, we put those DM models in the context of current observational constraints on the strength of DM self-interactions.
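A small sketch of Eq. (13), with an assumed value for the 1D velocity dispersion, illustrates where the bulk of the scattering velocities lies:

```python
import numpy as np

def f_log(v_scat, nu1d):
    """Maxwell-Boltzmann distribution of scattering velocities per
    logarithmic velocity interval, Eq. (13)."""
    a = np.sqrt(2.0 * nu1d**2)
    return np.sqrt(2.0 / np.pi) * v_scat**3 / a**3 * np.exp(-v_scat**2 / (2.0 * a**2))

nu1d = 300.0                          # assumed 1D velocity dispersion [km/s]
v = np.logspace(0, 4, 400)
pdf = f_log(v, nu1d)
print("most probable scattering velocity: %.0f km/s" % v[pdf.argmax()])
# Sanity check: the distribution integrates to ~1 over d(ln v).
print("normalisation:", (pdf[:-1] * np.diff(np.log(v))).sum())
```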
For the full-box cosmological simulations, we use the same ICs as Fischer et al. (2022). They are similar to box 4 of the Magneticum simulations 3 and have a comoving side length of \(48\,\mathrm{cMpc}\,h^{-1}\). The employed cosmological model is described by the following parameters: \(\Omega_{\mathrm{M}}=0.272\), \(\Omega_{\Lambda}=0.728\), \(h=0.704\), \(n_{s}=0.963\) and \(\sigma_{8}=0.809\) (WMAP7, Komatsu et al., 2011). Further properties can be found in Tab. 5.
Footnote 3: Magneticum: [http://www.magneticum.org](http://www.magneticum.org)
The DM haloes are identified using the friends-of-friends algorithm4, which is implemented in gadget-3. The mass of a halo, \(M\), is computed as the sum of the gravitationally bound particles. The virial radius, \(r_{\mathrm{vir}}\), and the virial mass, \(M_{\mathrm{vir}}\), are measured with the spherical-overdensity approach based on the overdensity predicted by the generalised spherical top-hat collapse model (e.g. Eke et al., 1996). Here, \(r_{\mathrm{vir}}\) is defined as the largest radius within which the mean enclosed density still exceeds the one of the top-hat collapse model, and \(M_{\mathrm{vir}}\) is the mass inside \(r_{\mathrm{vir}}\).
Footnote 4: A description of the friends-of-friends algorithm can, for example, be found in the work by More et al. (2011).
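As a sketch of the spherical-overdensity measurement, the following function returns \(r_{\mathrm{vir}}\) and \(M_{\mathrm{vir}}\) from particle radii. The fixed overdensity \(\Delta_{c}=200\) and the numerical value of the reference density are placeholder assumptions (the simulations use the overdensity of the generalised top-hat collapse model).

```python
import numpy as np

def virial_radius(radii, m_part, delta_c=200.0, rho_ref=1.4e2):
    """Spherical-overdensity radius: the largest radius at which the mean
    enclosed density still exceeds delta_c * rho_ref. radii are particle
    distances from the halo centre [kpc], m_part is the particle mass [Msun],
    rho_ref is a reference density [Msun/kpc^3] (placeholder value)."""
    r = np.sort(radii)
    m_enc = m_part * np.arange(1, r.size + 1)
    rho_mean = m_enc / (4.0 / 3.0 * np.pi * r**3)
    above = np.nonzero(rho_mean > delta_c * rho_ref)[0]
    if above.size == 0:
        return np.nan, np.nan
    i = above[-1]
    return r[i], m_enc[i]             # (r_vir, M_vir)

# Usage sketch with mock particle radii:
rng = np.random.default_rng(1)
radii = 2000.0 * rng.random(100_000) ** 1.5
r_vir, m_vir = virial_radius(radii, m_part=4.37e7)
print(f"r_vir = {r_vir:.0f} kpc, M_vir = {m_vir:.2e} Msun")
```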
We use SubFind (Springel et al., 2001; Dolag et al., 2009), which is implemented as part of gadget-3, to identify the substructure in the simulation. Every halo contains at least one subhalo, which is the primary subhalo located at the same position as the halo (determined by the location of the most gravitationally bound particle of the halo). The primary subhalo typically contains most of the particles that belong to the halo, but this is not necessarily the case.
### Results
In the following, we show the results of our cosmological simulations. The simulation setup we used is described in Sec. 4.1. We begin with the surface density of a massive halo (Section 4.2.1). Subsequently, we discuss the density profiles of the haloes in Section 4.2.2 and continue with their shapes (Section 4.2.3). We investigate the abundance of satellites (Section 4.2.4) as well as their diversity in terms of the circular velocity (Sec. 4.2.5). Finally, in Sec. 4.2.6, we study differences between frequent and rare self-interactions in the context of velocity-dependent scattering.
\begin{table}
\begin{tabular}{l c c c} name & type & \(\sigma_{0}/m\) & \(w\) \\ & & \([\mathrm{cm}^{2}\,\mathrm{g}^{-1}]\) & \([\mathrm{km\,s}^{-1}]\) \\ \hline c0 & collisionless & 0.0 & – \\ f0.1 & frequent & 0.1 & – \\ r0.1 & rare & 0.1 & – \\ f1 & frequent & 1.0 & – \\ r1 & rare & 1.0 & – \\ f10w180 & frequent & 10.0 & 180 \\ r10w180 & rare & 10.0 & 180 \\ f100w180 & frequent & 100.0 & 180 \\ r100w180 & rare & 100.0 & 180 \\ f0.3w560 & frequent & 0.3 & 560 \\ r0.3w560 & rare & 0.3 & 560 \\ f3w560 & frequent & 3.0 & 560 \\ r3w560 & rare & 3.0 & 560 \\ \end{tabular}
\end{table}
Table 4: The table shows the different cross-sections that we used for the cosmological simulations. Analogously to Tab. 1, we use the same columns. Note that the simulations of the first five DM models have been presented by Fischer et al. (2022).
\begin{table}
\begin{tabular}{l c c c} name & \(L_{\mathrm{box}}\) & \(N_{\mathrm{DM}}\) & \(m_{\mathrm{DM}}\) \\ & (\(\mathrm{cMpc}\,h^{-1}\)) & & (\(\mathrm{M}_{\odot}\,h^{-1}\)) \\ \hline hr & 48 & \(216^{3}\) & \(8.28\times 10^{8}\) \\ uhr & 48 & \(576^{3}\) & \(4.37\times 10^{7}\) \\ \end{tabular}
\end{table}
Table 5: The table gives the different simulations we run. The first column denotes the name, the second one the box size, the third one the number of numerical DM particles and the last one the mass of the numerical DM particles. Each setup was run with 8 different velocity-dependent cross-sections as described in Tab. 4.
Figure 6: We illustrate the cross-sections used for our cosmological simulations. In blue, we show the velocity-independent cross-sections from Fischer et al. (2022). The velocity-dependent cross-sections are displayed in orange (\(w=180\,\mathrm{km\,s}^{-1}\)) and purple (\(w=560\,\mathrm{km\,s}^{-1}\)). In green, we indicate typical scattering velocities. The Maxwell-Boltzmann distributions (see Eq. 13) correspond to the scattering velocities in the centres of the haloes from the three halo mass bins that we use in Sec. 4.2.
#### 4.2.1 Surface Density
In Fig. 7 we show the surface density of the same halo but in different DM models. It is the fourth most massive halo (\(M=9.3\times 10^{13}\,{\rm M}_{\odot}\,h^{-1}\)) in our simulation and nicely illustrates the effects of SIDM. They are most pronounced when comparing the two panels on the left-hand side, as the fSIDM simulation of the two has relatively strong self-interactions (\(\sigma_{\rm T}/m=1.0\,{\rm cm}^{2}\,{\rm g}^{-1}\)). Typical effects of SIDM that can be seen here are the formation of a density core, the rounder shape of haloes and the suppression of substructure. Many of the satellites visible in the CDM run do not exist in the fSIDM run. However, in the other SIDM runs shown here the suppression of the satellite abundance is weaker. There exist even objects for which no counterpart in the CDM simulation can be identified by eye. This is in particular the case for the velocity-dependent cross-section shown in the right-hand side panels. In the following sections, we quantify these self-interaction-induced changes in the DM distribution.
#### 4.2.2 Density Profiles
A quantity commonly measured for SIDM is the density profile of haloes, in particular the formation of a central density core that is characterised by a shallow gradient and a lower density compared to CDM (but see O'Neil et al., 2023). We have studied this in an idealised setup in Sec. 3. Within the cosmological context this has been measured by various authors (e.g. Stafford et al., 2020; Eckert, D. et al., 2022; Mastromarino et al., 2023) and used to constrain the strength of DM self-interactions (see Appendix F).
We investigate the DM density profile for the haloes of our cosmological simulations. In particular, we study the median density profile within three halo mass bins. This is shown in Fig. 8, where we indicate the median virial mass and virial radius of the haloes contained in the three mass bins. We show all cross-sections we have simulated; the ones with \(w=180\,{\rm km\,s}^{-1}\) are shown in orange and the ones with \(w=560\,{\rm km\,s}^{-1}\) in purple. The small cross-sections, i.e. the ones with the smaller \(\sigma_{0}/m\) for each \(w\), show hardly any core formation for the most massive haloes (left-hand panel). But for the less massive haloes, the core size increases in terms of the virial radius, \(r_{\rm vir}\). This is a consequence of the relative velocities between the DM particles being smaller for less massive systems. As a result, the particles typically scatter at smaller relative velocities, for which the interaction strength is larger compared to high velocities (see also Fig. 6).
In Fig. 9, we show the central density of the DM haloes as a
Figure 7: The surface density of the fourth most massive system in our simulation is shown. We cross-identified it among all simulations and show it from the same perspective. We rotate the system such that for CDM the semi-major axis is parallel to the \(x\)-axis and the semi-minor axis parallel to the \(y\)-axis. We scale the axes in terms of \(r_{1/2}\), the half mass radius of the primary subhalo in the CDM simulation. The surface density is indicated with a logarithmic colour scaling. We use the same for each panel. The abbreviation of the cross-section is given in the lower left corner of each panel and the detailed parameters can be looked up in Tab. 4.
function of their virial mass. For the velocity-independent cross-section (left-hand panel), we find that it is decreasing as a function of halo mass when self-interactions are present. When considering the velocity-dependent runs it becomes clear that the gradient with halo mass depends on the velocity-dependence of the self-interactions. For \(w=560\,\mathrm{km\,s}^{-1}\) there is no or only a weak trend with halo mass (middle panel). But for the \(w=180\,\mathrm{km\,s}^{-1}\) cross-section (right-hand panel), the central density is increasing with halo mass and thus the trend is opposite to the simulations with a constant cross-section.
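For reference, measuring the central density as in Fig. 9 only requires counting particles within \(0.01\,r_{\mathrm{vir}}\); a minimal sketch with assumed mock input values:

```python
import numpy as np

def central_density(radii, m_part, r_vir, frac=0.01):
    """Mean DM density within frac * r_vir (frac = 0.01 as in Fig. 9)."""
    r_in = frac * r_vir
    m_in = m_part * np.count_nonzero(radii < r_in)
    return m_in / (4.0 / 3.0 * np.pi * r_in**3)

# Usage sketch with mock particle radii (all values assumed):
rng = np.random.default_rng(5)
radii = 1500.0 * rng.random(200_000) ** 1.5
print(f"{central_density(radii, m_part=4.37e7, r_vir=1500.0):.3e} Msun/kpc^3")
```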
Note that we used the momentum transfer cross-section to match rSIDM and fSIDM. Had we used the viscosity cross-section instead, the fSIDM cross-section corresponding to the simulated rSIDM cross-section would only have 2/3 of its value. A detailed derivation of this factor is presented by Sabarish et al. (in prep.). This would imply larger central densities for the fSIDM cross-sections. In consequence, it probably would often provide a better matching, except for haloes with masses lower than \(M_{\mathrm{vir}}\approx 10^{13}\,\mathrm{M}_{\odot}\) simulated with the strong, velocity-independent scattering; here the matching would become worse. It should be noted that not all haloes used in Fig. 9 are relaxed, which makes the picture more complicated.
#### 4.2.3 Shapes
A commonly studied property of DM haloes is their shape. For SIDM, this has been investigated by several authors (e.g. Peter et al., 2013; Sameie et al., 2018; Robertson et al., 2019; Banerjee et al., 2020; Chua et al., 2020; Harvey et al., 2021; Despali et al., 2022; Shen et al., 2022). DM self-interactions significantly affect the shape of the haloes up to larger radii than the density profile (Fischer et al., 2022). Furthermore, how large the affected radii are depends on the strength of the self-interactions (Vargya et al., 2022).
To compute the shapes of our simulated DM haloes we proceed as previously described by Fischer et al. (2022). We compute the mass tensor of particles within an ellipsoidal selection volume using their mass, \(m\), and position, \(r\):
\[\mathbf{M}_{ij}=\sum_{k}m_{k}r_{k,i}r_{k,j} \tag{14}\]
Here, \(k\) denotes a particle and \(i\), \(j\) are the coordinate indices. The selection volume for the next iteration is determined by the eigenvalues and eigenvectors of the mass tensor. We iterate until the shape of the selection volume converges to the one inferred from the mass tensor. It is important to note that shapes close to the centre
Figure 8: We show the median density profile for haloes from three different mass bins. The results for the velocity-independent and velocity-dependent cross-sections are displayed together. However, we show the results only for fSIDM as the rSIDM results are similar. The density is plotted as a function of the radius in units of the virial radius. The shaded regions indicate the scatter among the haloes; the range between the 25th and 75th percentiles is displayed. The virial mass and the virial radius given in the panels indicate the median of the corresponding mass bin from the CDM simulation. All plots show the profiles for a redshift of \(z=0\) and are produced from the full cosmological box with the highest resolution. Note that we have used all particles, not only those that belong to the halo as identified by SubFind.
Figure 9: The central density of the DM haloes is shown as a function of their virial mass. We measure the central density as the mean density within a radius of \(0.01r_{\mathrm{vir}}\). In the left-hand side panel the simulations with a velocity-independent cross-section are shown (reprint of fig. 9 of Fischer et al., 2022). The middle panel gives the velocity-dependent scattering with \(w=560\,\mathrm{km\,s}^{-1}\) and the right-hand side panel displays the self-interactions with \(w=180\,\mathrm{km\,s}^{-1}\). Individual systems are indicated by “\(+\)” when evolved with the smaller cross-section. For the larger cross-section, we use “\(\times\)” and the CDM case is marked by “\(\times\)”. In addition, we computed the mean of the distribution as a function of virial mass, shown by the lines. The shaded regions give the corresponding standard deviation.
of the haloes cannot be measured accurately. The vanishing density gradient within the density core of SIDM haloes renders the shape undefined (Fischer and Valenzuela, 2023).
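The iterative shape measurement can be sketched as follows. The convergence tolerance, the volume normalisation of the selection ellipsoid, and the mock particle distribution are assumptions for illustration; the procedure itself follows Fischer et al. (2022).

```python
import numpy as np

def halo_shape(pos, mass, r_sel, n_iter=30):
    """Iterative shape measurement from the mass tensor (Eq. 14) of the
    particles inside an ellipsoidal selection volume whose volume equals a
    sphere of radius r_sel. Returns the axis ratios (c/a, b/a)."""
    axes, vecs = np.ones(3), np.eye(3)           # start from a sphere
    for _ in range(n_iter):
        # Ellipsoidal radius of each particle in the current eigenframe,
        # with the axes rescaled to conserve the selection volume.
        x = pos @ vecs
        norm = axes / axes.prod() ** (1.0 / 3.0)
        r_ell = np.sqrt(((x / norm) ** 2).sum(axis=1))
        sel = r_ell < r_sel
        # Mass tensor M_ij = sum_k m_k r_i r_j of the selected particles.
        M = (mass[sel, None, None]
             * pos[sel, :, None] * pos[sel, None, :]).sum(axis=0)
        eigval, vecs = np.linalg.eigh(M)
        new_axes = np.sqrt(eigval[::-1] / eigval[-1])   # a >= b >= c, a = 1
        vecs = vecs[:, ::-1]
        if np.allclose(new_axes, axes, rtol=1e-4):      # converged
            axes = new_axes
            break
        axes = new_axes
    return axes[2], axes[1]

# Usage sketch on a mock triaxial Gaussian blob with axis ratios 1:0.7:0.4:
rng = np.random.default_rng(2)
pos = rng.normal(size=(200_000, 3)) * np.array([1.0, 0.7, 0.4])
print(halo_shape(pos, np.ones(len(pos)), r_sel=1.5))
```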
In Fig. 10 we plot \(s=c/a\) as a function of the semi-major axis, \(a\), in units of the virial radius. The semi-minor axis is denoted by \(c\). In general, we find that SIDM makes the haloes more round, as one would expect, and that fSIDM and rSIDM are qualitatively very similar.
Moreover, we show the shape of the haloes as a function of mass in Fig. 11. Here, we compute the shape from the innermost particles within a volume equal to a sphere of radius \(0.078\,r_{\rm vir}\). For CDM we find that haloes become more ellipsoidal with increasing mass. This trend is well known in the literature (e.g. Jing & Suto, 2002; Allgood et al., 2006; Munoz-Cuartas et al., 2011; Despali et al., 2013, 2014). This can change when including self-interactions, especially for a velocity-independent cross-section, for which the effect of the self-interactions increases with halo mass (see left panel of Fig. 11). However, for the most massive systems in our simulation we find the haloes to become more elliptical even with SIDM. This might be due to a few objects which on average might be less relaxed than the ones at lower masses. Given a velocity-dependent cross-section, haloes become more elliptical with mass at the high-mass end, but the gradient is steeper compared to CDM, as self-interactions lead to rounder haloes at lower masses while at the high-mass end the shape becomes similar to CDM (middle and right panel).
#### 4.2.4 Satellites
The properties of satellite systems are a promising probe for studies of DM. Depending on the DM model, fewer or more satellites are predicted, and they may differ in their density profiles. This has been studied in the context of multiple DM models, including SIDM (e.g. Banerjee et al., 2020; Nadler et al., 2020, 2021; Bhattacharyya et al., 2022).
In Fig. 12 we show the number of satellites per logarithmic mass as a function of their mass in units of the virial mass of their host system. We find that DM self-interactions can reduce the abundance of satellites, and the number of less massive subhaloes is more strongly affected than that of the more massive satellites. Moreover, the momentum-transfer-matched frequent self-interactions lead to a stronger suppression than the isotropic scattering (as previously described for a constant cross-section in Fischer et al., 2022). All this seems to be independent of the velocity dependence. Interestingly, the difference between fSIDM and rSIDM shrinks for the strong velocity dependence. For the velocity-independent simulations (left-hand side panel) and the mildly velocity-dependent runs (\(w=560\,{\rm km\,s^{-1}}\), middle panel),
Figure 11: The shape of the DM haloes is shown as a function of their virial mass. The left-hand side panel gives the results for the velocity-independent cross-sections (previously shown in fig. 14 by Fischer et al., 2022). In the middle panel, we display the results for the velocity-dependent scattering with \(w=560\,{\rm km\,s^{-1}}\) and in the right-hand panel for \(w=180\,{\rm km\,s^{-1}}\). This figure is built analogously to Fig. 9.
Figure 10: We show the median shape, \(s=c/a\), of the DM haloes within three mass bins as a function of the semi-major axis, \(a\). Each panel displays a different mass bin with its median mass indicated. This figure is built analogously to the density profiles in Fig. 8. The shaded regions indicate the scatter among the haloes; the range between the 25th and 75th percentiles is displayed. We show it only for the collisionless DM and the strongest fSIDM model of each velocity dependence. In addition, we indicate at which radius the shape sensitivity (Fischer & Valenzuela, 2023) for the 25th percentile drops below a value of 25. This is indicative of a radius above which the shape measurements are reliable. Note that, in particular for CDM, the presence of satellites reduces the shape sensitivity.
the stronger rSIDM cross-section has a similar effect to the weak fSIDM cross-section. But for the strongly velocity-dependent run (\(w=180\,\mathrm{km\,s}^{-1}\), right-hand panel), the strong rSIDM cross-section is no longer similar to the weak fSIDM one but closer to the strong fSIDM one. Hence, we find a strong velocity dependence to reduce the differences between cross-sections with different angular dependencies.
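For concreteness, the satellite mass function of Fig. 12 amounts to a histogram in \(\log_{10}(M/M_{\rm vir})\) averaged over hosts; a minimal sketch with mock data (the binning and mass range are assumptions):

```python
import numpy as np

def satellite_mass_function(m_sat, m_host, bins=20):
    """Number of satellites per logarithmic mass ratio, dN/dlog10(M/M_vir),
    averaged over hosts. m_sat is a list of satellite-mass arrays (one per
    host) and m_host the corresponding host virial masses."""
    ratios = np.concatenate([m / mh for m, mh in zip(m_sat, m_host)])
    edges = np.linspace(-4.0, 0.0, bins + 1)     # log10(M / M_vir)
    counts, _ = np.histogram(np.log10(ratios), bins=edges)
    dlog = edges[1] - edges[0]
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, counts / (dlog * len(m_host))

# Usage sketch with mock power-law satellite masses for 100 hosts:
rng = np.random.default_rng(3)
hosts = rng.uniform(1e13, 1e14, 100)
sats = [h * 10.0 ** rng.uniform(-4.0, -1.0, rng.integers(5, 40)) for h in hosts]
centres, dndlogm = satellite_mass_function(sats, hosts)
print(dndlogm[:5])
```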
The difference between rSIDM and fSIDM may mainly arise from host-satellite scattering, as those interactions take place with a preferred direction and thus are far from an equilibrium state. Also, these interactions contribute significantly to the suppression of the satellite abundance (e.g. Zeng et al., 2022). To understand the reduced difference between rSIDM and fSIDM it is important to note that the host-satellite interactions take place at higher velocities than the scatterings among the satellite's own particles. Consequently, a velocity-dependent cross-section can reduce the host-satellite scattering compared to the interactions internal to the satellite and thus reduce the difference between rSIDM and fSIDM.
In addition, we find that the suppression of the satellite abundance for the mildly velocity-dependent cross-sections (middle panel) is less strong than for the other two velocity dependencies. We would not have expected this difference in strength from the density profiles
Figure 12: We show the number of satellites per logarithmic mass as a function of their total mass relative to the virial mass of their host (upper panels). In the lower panels we display the ratio of the DM models to CDM. All panels give the result of the 100 most massive groups in our full cosmological box. The left-hand side panels show the results for the velocity-independent cross-sections (previously shown in fig. 6 of Fischer et al., 2022). The middle panels give the velocity-dependent self-interactions with \(w=560\,\mathrm{km\,s}^{-1}\) and the right-hand side panels those with \(w=180\,\mathrm{km\,s}^{-1}\). All subhaloes, except for the primary one, within a radius of \(5\,r_{\mathrm{vir}}\) were considered. The results are for a redshift of \(z=0\). Note that the least resolved satellites used here contain about 100 particles.
Figure 13: For the 100 most massive haloes of our simulations, we show the cumulative number of satellites per halo as a function of radius (upper panels). We also give the ratio of the DM models to CDM (lower panels). The left-hand side panel shows the results for the velocity-independent cross-sections (previously shown in fig. 7 of Fischer et al., 2022). The middle panel gives the velocity-dependent self-interactions with \(w=560\,\mathrm{km\,s}^{-1}\) and the right-hand side panel those with \(w=180\,\mathrm{km\,s}^{-1}\). The results are shown for \(z=0\) and subhaloes were only considered if they are less massive than the primary subhalo and more massive than \(9.6\times 10^{10}\,\mathrm{M_{\odot}}\,h^{-1}\).
that we show in Sec. 4.2.2. Though, there is a velocity scale at which the mildly velocity-dependent cross-sections are weaker than the corresponding ones with a different velocity dependence (see Fig. 6). Interestingly, this becomes even more pronounced when computing the effective cross-section introduced by Yang & Yu (2022), see Appendix F. Given that the host-satellite scattering, which drives the suppression of the satellite abundance, preferentially takes place in this velocity regime, it could explain the different strengths of the satellite suppression.
In Fig. 13 we display the number of satellites as a function of the distance to their host in units of the host's virial radius. The upper panels show the cumulative number of satellites and the lower panels display the ratio to CDM. We note that the ratios at small distances are subject to a considerable amount of noise as they are computed from a small number of satellites. Here, we find again that self-interactions can suppress the number of satellites. The inner ones are more affected than the distant ones, and frequent self-interactions lead to a stronger suppression than rare scattering if the same momentum transfer cross-sections are compared. This is well visible for the velocity-independent cross-sections in the left-hand panel. The simulations with frequent self-interactions show roughly a reduction in the number of satellites twice as large as for the corresponding simulations with rare self-interactions. As in Fig. 12, we find that the difference between rSIDM and fSIDM becomes smaller for the strongest velocity dependence (\(w=180\,\mathrm{km\,s}^{-1}\)).
#### 4.2.5 Diversity of satellites
One of the small-scale issues is the diversity problem. It usually refers to the variation between the rotation curves of galaxies (e.g. Kamada et al., 2017; Ren et al., 2019; Zentner et al., 2022). To study their diversity, we focus on the circular velocity at a radius of 3.5 kpc instead of looking at the full profile. The velocity at 3.5 kpc is sensitive to the core formation or core collapse. In Fig. 14 we show the circular velocity at that radius for satellites more massive than \(\approx 4.9\times 10^{10}\,\mathrm{M}_{\odot}\,h^{-1}\) as a function of their mass. Note that we consider all subhaloes identified by SubFind as satellites if they are not a primary subhalo (see Sec. 2.1).
For the velocity-independent cross-sections (left-hand panel of Fig. 14) we find that self-interactions decrease the circular velocity at 3.5 kpc. This corresponds to the formation of a density core. For the larger cross-sections the circular velocity is lower, i.e. the density core is larger. Basically, the same applies for the cross-sections with \(w=560\,\mathrm{km\,s}^{-1}\) (middle panel of Fig. 14). But it is noticeable that the most massive subhaloes experience less suppression of \(v_{\mathrm{circ}}\) in the inner region. This is simply a consequence of the velocity dependence, as the DM particles in the more massive subhaloes have higher typical relative velocities. For the cross-section with the strong velocity dependence (\(w=180\,\mathrm{km\,s}^{-1}\)), we find qualitatively different results. For the more massive subhaloes, we find the suppression of the circular velocity as for the other simulations. But on average, the least massive objects show an increase in circular velocity for the stronger cross-sections compared to CDM. Moreover, the distribution of values for the circular velocity is broader compared to CDM. The other cross-sections do not show such a significant increase in diversity.
When comparing the results for rSIDM and fSIDM, we do not find a clear qualitative difference arising from the typical scattering angle of the self-interactions. On the contrary, the momentum transfer cross-section provides a matching that is not just acceptable but surprisingly accurate for the velocity-dependent cross-sections.
The diversity of rotation curves has been studied extensively with SIDM, and it has been shown that self-interactions can create more diverse density profiles. In particular, low-mass objects have been studied. Several papers have studied MW-like satellites and dwarf galaxies (e.g. Creasey et al., 2017; Zavala et al., 2019; Correa et al., 2022; Lovell & Zavala, 2023). It has been found in DM-only simulations that cross-sections with a strong velocity dependence can even trigger core collapse within satellites (e.g. Turner et al., 2021; Yang et al., 2023; Nadler et al., 2023). Especially for satellites, the core collapse can be enhanced by tidal stripping (e.g. Kahlhoefer et al., 2019; Nishikawa et al., 2020). This is in line with our finding of more compact objects at low masses for our strongly velocity-dependent cross-sections.
#### 4.2.6 Frequent vs. rare self-interactions
Finally, we want to investigate how the different DM models affect the satellites of our most massive haloes. Previously, we found that fSIDM can lead to a stronger suppression of the number of satellites than rSIDM does (Fischer et al., 2022). Identifying such differences is crucial to constrain the angular dependence of dark matter self-interactions. In contrast to our previous work, we investigate the maximum circular velocity in the satellites here, but show the number of satellites in Appendix E.
We cross-identify the haloes and their satellites among the simulations based on their particles. As we start from the same initial conditions, we can match the haloes with the same particles,
Figure 14: We show the circular velocity at 3.5 kpc for satellites with a mass of at least \(\approx 4.9\times 10^{10}\,\mathrm{M}_{\odot}\,h^{-1}\). We consider all satellites that are not the primary subhalo. The lines indicate the mean and the shaded regions the standard deviation for the corresponding DM models. This is analogous to Fig. 9, as are the markers.
identified based on their unique identification numbers. To evaluate how well two haloes match, we make use of the gravitational potential at the particles' locations. Particles at a lower gravitational potential are weighted more strongly to find the best matching analogue. Given a list of the halo particles sorted according to how deep they sit in the gravitational potential, starting with the one at the lowest potential, we compute weights for them. These weights are given as
\[w_{i}=\left(\frac{1}{i+1}\right)^{\alpha}. \tag{15}\]
Note that here we assume the first list index to be \(i=0\). The parameter \(\alpha\) allows for different weightings; we use \(\alpha=0.8\). In practice, we compute the weights for the CDM run only. This is because we use the CDM haloes as a benchmark and ask how well the SIDM haloes match them. The quality of a potential match is given by the sum of the weights \(w_{i}\) of the particles that the CDM halo and the SIDM halo have in common.
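A minimal sketch of this matching score, applying Eq. (15) to toy particle-ID lists (the candidate search over SIDM haloes is simplified here):

```python
import numpy as np

def match_quality(cdm_ids_sorted, sidm_ids, alpha=0.8):
    """Quality of a CDM->SIDM halo match: sum of the weights (Eq. 15) of the
    CDM particles, sorted by increasing gravitational potential, that the
    two haloes have in common."""
    weights = (1.0 / (np.arange(len(cdm_ids_sorted)) + 1.0)) ** alpha
    return weights[np.isin(cdm_ids_sorted, sidm_ids)].sum()

def best_match(cdm_ids_sorted, sidm_candidates, alpha=0.8):
    """Index of the SIDM halo that best matches the given CDM halo."""
    scores = [match_quality(cdm_ids_sorted, ids, alpha)
              for ids in sidm_candidates]
    return int(np.argmax(scores)), scores

# Usage sketch with toy particle-ID lists (deepest potential first):
cdm = np.array([7, 3, 11, 2, 9])
candidates = [np.array([3, 7, 9]), np.array([2, 11]), np.array([1, 4])]
print(best_match(cdm, candidates))
```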
For the analysis, we do not consider all haloes but apply several selection criteria. Firstly, the hosts and their satellites should be well resolved. We consider only the 13 most massive haloes and limit the selection further by requiring that we are able to match at least five satellites with a minimum mass of \(9.6\times 10^{10}\,\mathrm{M}_{\odot}\,h^{-1}\) (2200 particles). Furthermore, we require the haloes to be relaxed. Here, we assume a halo to be relaxed if the centre of mass and the most bound particle of the primary subhalo are separated by not more than 10% of the virial radius. In addition, we tested a further limitation by excluding haloes based on the ratio of the halo and primary subhalo mass. In practice, however, this did not exclude any halo, at least when requiring that the primary subhalo contains no less than 75% of the halo mass.
In Fig. 15, we display our results for how the central halo densities correlate with the relative change of the maximum circular velocity in the satellites. We show the average relative change multiplied by the average maximum circular velocity in the CDM satellites. Here, we use the maximum velocity as computed by SubFind. It is given by the maximum of the circular velocity, \(v_{\mathrm{circ}}=\sqrt{\mathrm{G}\,M(<r)/r}\), over the radial distance, \(r\), from the centre of the subhalo.
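A sketch of this measurement from particle data is given below; the mock radii and the particle mass are assumptions, and SubFind's actual implementation may differ in detail (e.g. in how the centre is defined).

```python
import numpy as np

G = 4.30e-6  # gravitational constant [kpc (km/s)^2 / Msun]

def vmax_and_vcirc_at(radii, m_part, r_eval=3.5):
    """Circular velocity profile v_circ(r) = sqrt(G M(<r) / r) from particle
    radii; returns v_max and v_circ at r_eval (3.5 kpc as in Sec. 4.2.5)."""
    r = np.sort(radii)
    m_enc = m_part * np.arange(1, r.size + 1)
    v_circ = np.sqrt(G * m_enc / r)
    return v_circ.max(), np.interp(r_eval, r, v_circ)

# Usage sketch with mock particle radii:
rng = np.random.default_rng(4)
radii = 200.0 * rng.random(50_000) ** 2.0
print(vmax_and_vcirc_at(radii, m_part=4.37e7))
```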
We find the maximum circular velocity in the satellites altered by the DM self-interactions. For the velocity-independent scattering it typically decreases with increasing cross-section. This implies that the satellites are less concentrated. In contrast, a velocity-dependent cross-section can also lead to a larger value for the maximum circular velocity. Whether this is the case or not depends in our model on the parameter \(w\), i.e. how strongly velocity-dependent the scattering is. It is worth pointing out that our selection criterion of subhaloes above a mass threshold that we can match might favourably pick subhaloes that have become more concentrated due to the velocity-dependent self-interactions. Thus the increase in maximum circular velocity may not be representative of all the subhaloes.
We find that frequent self-interactions tend to lead to a smaller maximum circular velocity than rare scattering. For the larger cross-sections we have simulated, we find that the maximum circular velocity for rare self-interactions compared to frequent ones is increased for the typical system (median) by \(\approx 8\%\) (velocity-independent), \(\approx 2\%\) (\(w=560\,\mathrm{km\,s}^{-1}\)), and \(\approx 1\%\) (\(w=180\,\mathrm{km\,s}^{-1}\)). This means that the difference between fSIDM and rSIDM decreases for our simulations with stronger velocity dependence. Hence, this is in line with our finding of a qualitative difference for the abundance of satellites in Sec. 4.2.4. However, the difference we find here might also largely be due to the fact that the more strongly velocity-dependent cross-section we study has a weaker effect on massive haloes. For example, this becomes visible when comparing the central densities. In consequence, the reduced qualitative difference between large- and small-angle scattering might be better visible from Fig. 12. But here we can see that the angular dependence matters not only for a constant cross-section but also for strongly velocity-dependent self-interactions, even if the subhaloes are becoming more compact on average.
We note that the analysis above is not based on a large statistical sample and thus the exact numbers may change. But we expect the qualitative trend to be the same. It is also worth pointing out that the less massive satellites might be affected more strongly by the self-interactions (see Fig. 12) and thus differences between models are larger for them. Hence, this should be followed up with simulations at a much higher spatial resolution.
## 5 Discussion
In this section, we discuss the assumptions and limitations of our simulations as well as the implications of our results. We begin with technical considerations and end by discussing what the next steps for a follow-up study may look like.
In contrast to our previous work (Fischer et al., 2022), we explored velocity-dependent cross-sections. We found that simulating those interactions requires a separate time-step criterion (i.e. different from the one of Fischer et al., 2021). Especially cross-sections with a strong velocity dependence, i.e. a small value for \(w\) (see Eq. 2), can be computationally very expensive compared to a velocity-independent cross-section with a similar effective cross-section. A more detailed discussion of building a time-step criterion can be found in Appendix B.
Figure 15: We show how the DM model affects the maximum circular velocity in the satellites and the host's central density. We have cross-identified the haloes in the different DM runs. The lines connect the same halo, i.e. they indicate how the properties of a halo change when varying the cross-section. The shown haloes are among the most massive ones; the details of the selection criterion are explained in the text.
When measuring the core sizes in Sec. 3.2, we found that the resulting fit is surprisingly sensitive to the optimisation method. This may limit the comparability of core sizes inferred by different authors. In particular, Correa et al. (2022) describe in their appendix B that results on the evolution of the core size differ in terms of the maximum core size in the literature.
The results of our cosmological simulations depend on the algorithms employed to identify haloes and their substructure. For this task, we used the built-in module SubFind (Springel et al., 2001; Dolag et al., 2009). There exist a number of codes that are capable of identifying substructure (e.g. Knollmann & Knebe, 2009; Maciejewski et al., 2009; Tweed, D. et al., 2009; Behroozi et al., 2012; Han et al., 2017; Elahi et al., 2019). These codes use different algorithms and are known to give somewhat different results (Knebe et al., 2013). In consequence, our results could change a bit when employing a different substructure finder.
In this paper, we aimed to understand how a velocity dependence of the self-interactions affects differences arising from the angular dependence of the cross-section. Very anisotropic cross-sections are typically expected to be velocity-dependent (e.g. Buckley & Fox, 2010; Loeb & Weiner, 2011; Bringmann et al., 2017). It is known that fSIDM and rSIDM differ mainly in systems that are far from equilibrium, such as mergers (Fischer et al., 2021), and in the abundance of satellites (Fischer et al., 2022). The evolution of those systems is governed by multiple velocity scales, where typically the larger velocity scale is the one mainly responsible for differences arising from the angular dependence of the self-interactions. Consequently, the difference becomes smaller when the self-interactions at large velocities are suppressed due to velocity-dependent scattering. We found this for the abundance of satellites. In consequence, it could be interesting to probe less massive systems for distinguishing rSIDM and fSIDM, as the velocity dependence could be weaker there. At least in the model employed in our study, a system with typical velocities smaller than \(w\) would only experience a weak velocity dependence (see Eq. 2). The relevant mass scales for the cross-sections we simulated are visible from the effective cross-section as a function of mass shown in Appendix F.
Beyond our studies of satellites, it is worth mentioning that very anisotropic cross-sections have mainly been studied in the context of merging galaxy clusters (e.g. Kahlhoefer et al., 2014; Harvey et al., 2015; Fischer et al., 2023; Wittman et al., 2023). At about the pericentre passage, such cross-sections can give rise to an effective drag force decelerating the DM component and creating an offset between the galaxies and the DM. Cross-sections that are velocity-dependent and strongly anisotropic have not been studied in the context of such mergers yet. Only a Bullet Cluster-like system has been simulated by Robertson et al. (2017) using a velocity-dependent anisotropic cross-section, but it does not fall within the limit of fSIDM. Studying merging systems with velocity-dependent fSIDM is crucial to understand their power to constrain such models and is the subject of a companion paper (Sabarish et al., in prep.).
Our simulations are all DM-only. On the one hand, this allows us to understand the qualitative differences between DM models better than simulations including further physical processes would. But on the other hand, it limits the possibility to compare the results to observations and derive constraints on the cross-section. Consequently, the next step would be to include baryonic physics, i.e. run hydrodynamical simulations. Several authors have found that taking baryons into account can reduce the differences between collisionless and self-interacting DM and thus would mitigate constraints derived from DM-only studies (e.g. Fry et al., 2015; Despali et al., 2022; Sirks et al., 2022; Mastromarino et al., 2023). SIDM can be more responsive to the baryon distribution than CDM in Milky Way-mass galaxies (e.g. Sameie et al., 2018; Sameie et al., 2021). In the presence of baryons, effects from SIDM can even be reversed, at least for a fraction of the haloes. It has been shown that for galaxies with Milky Way-like masses and above, the interplay of baryons and self-interactions can lead to cuspier density profiles than in CDM (e.g. Despali et al., 2019; Rose et al., 2022). In principle, baryons could also affect the ability to constrain the angular dependence with the abundance of satellites.
Aside from constraining the angular dependence, one would like to have a procedure to compare the effect of SIDM with different angular dependencies. This would allow transferring constraints between models that differ in their typical scattering angle. Yang & Yu (2022) introduced the effective cross-section for this purpose, where the angular matching is based on the viscosity cross-section. However, the quality of the matching may depend on the physical system, i.e. how relaxed the system is. But not only on this: we found that the momentum transfer cross-section can, at least for some setups, provide an excellent match (see Fig. 4), which excludes that the viscosity cross-section does so as well. However, this does not contradict the viscosity cross-section usually providing a better match. But it implies that the matching is more complicated and may depend on the properties of the astrophysical system. It may matter how strong the self-interactions are and whether the system evolves in the smfp or lmfp regime. In the latter, gravity plays an important role between two consecutive scattering events (assuming an isotropic cross-section) and thus may make the evolution of the halo and the matching of different angular dependencies sensitive to the details of the density profile.
## 6 Conclusions
In this paper, we have studied SIDM with velocity-dependent scattering, considering isotropic cross-sections and strongly forward-enhanced ones. For accurate modelling of velocity-dependent self-interactions, we introduced a new time-step criterion and enhanced the performance with an improved parallelisation scheme. To learn about qualitative differences arising from the velocity dependence, we first simulated the thermalisation problem, a simple test problem without gravity. Secondly, we studied the evolution of the density profile of isolated haloes, including Hernquist and NFW profiles. For the remainder of the paper, we focused on cosmological simulations and investigated the qualitative differences between the DM models concerning the velocity and angular dependence of the self-interactions. Our most important results can be summarised as follows:
* We found that velocity-dependent self-interactions lead to a slower population of the high-velocity tail of the Maxwell-Boltzmann distribution during thermalisation due to the suppressed cross-section at high velocities.
* The evolution of the density profile of isolated haloes is qualitatively affected by the velocity dependence, i.e. it is not self-similar. This can lead to a longer collapse time relative to the core formation time and a larger maximum core size. However, we found a significant difference between velocity-independent and velocity-dependent cross-sections only for strong velocity dependencies, i.e. when \(w\) is much smaller than the typical scattering velocity.
* The velocity dependence of the self-interactions controls whether the central density of haloes is increasing or decreasing as a function of halo mass.
* Given a strong velocity dependence (small \(w\)), frequent self-interactions can diversify the density profile similar to an isotropic cross-section. We found that the two angular dependencies can create haloes that are less compact as well as haloes that are more compact at the same subhalo mass.
* A strong velocity dependence of the cross-section, i.e. a small value of \(w\) can reduce the differences between fSIDM and rSIDM regarding the abundance of satellites.
The simulations we conducted were DM-only and allowed us to understand phenomenological differences arising from the velocity dependence of DM scattering. Our results may be instructive for more detailed studies of qualitative differences between SIDM models and helpful in designing more sophisticated simulations that include baryonic matter and additional physics such as cooling, star formation, AGN, and the associated feedback mechanisms. Undertaking such a study, to learn about the prospects of discriminating between rSIDM and fSIDM when baryonic physics is taken into account, is the subject of forthcoming work.
## Acknowledgements
We thank all participants of the Darkium SIDM Journal Club for helpful discussions. MSF thanks Lucas Kimmig for a fruitful discussion. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306, Germany's Excellence Strategy - EXC-2094 "Origins" - 390783311 and the Emmy Noether Grant No. KA 4662/1-2. Antonio Ragagnin acknowledges support from the grant PRIN-MIUR 2017 WSCC32. KD acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement ERC-2019-AdG 882679. The simulations have been carried out on the computing facility HPD-Cluster (2015) of the University of Hamburg and the computing facilities of the Computational Center for Particle and Astrophysics (C2PAP). Preprint numbers: DESY-23-154, TTP23-041. Software: NumPy (Harris et al., 2020), Matplotlib (Hunter, 2007), SciPy (Virtanen et al., 2020).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author. In addition, some of the data can be retrieved from our webpage: [https://www.darkium.org](https://www.darkium.org).
|
2310.17543 | On invariant distributions of Feller Markov chains with applications to
dynamical systems with random switching | We introduce simple conditions ensuring that invariant distributions of a
Feller Markov chain on a compact Riemannian manifold are absolutely continuous
with a lower semi-continuous, continuous or smooth density with respect to the
Riemannian measure. This is applied to Markov chains obtained by random
composition of maps and to piecewise deterministic Markov processes obtained by
random switching between flows. | Michel Benaïm, Oliver Tough | 2023-10-26T16:39:02Z | http://arxiv.org/abs/2310.17543v6 | On invariant distributions of Feller Markov chains with applications to dynamical systems with random switching
###### Abstract
We introduce simple conditions ensuring that invariant distributions of a Feller Markov chain on a compact Riemannian manifold are absolutely continuous with a lower semi-continuous, continuous or smooth density with respect to the Riemannian measure. This is applied to Markov chains obtained by random composition of maps and to piecewise deterministic Markov processes obtained by random switching between flows.
Footnote 1: Institut de Mathématiques, Université de Neuchâtel, Switzerland.
Footnote 2: Department of Mathematical Sciences, University of Bath, United Kingdom.
###### Contents
* 1 Introduction
* 2 Notation, hypotheses and basic results
* 2.1 On Assumption 2.2: a uniqueness result
* 3 Random maps
* 3.1 Expansion volume rates and spectral radius
* 3.2 Application to random maps
* 4 Piecewise deterministic Markov processes
* 4.1 A discrete kernel associated to \((Z_{t})_{t\geq 0}\)
* 4.2 Invariant distributions
* 4.3 Smooth invariant distributions on the torus
* 4.4 Smooth invariant distributions under fast switching
## 1 Introduction
The aim of this paper is to propose and discuss simple conditions guaranteeing that the invariant distributions of a Feller Markov chain on a compact space satisfy certain regularity properties, such as having lower semi-continuous, continuous or smooth densities with respect to a reference measure.
Our initial motivation comes from _piecewise deterministic Markov processes_ (PDMPs) generated by random switching between deterministic flows. The ergodic properties of this type of process have been the focus of much attention in the last decade, and conditions ensuring existence, uniqueness, and absolute continuity (with respect to a reference Riemannian measure) of invariant measures are now well understood ([2], [7], [8], [14], [11], [6]). Concerning the regularity (continuity, smoothness) of these densities, some partial results have been obtained in dimension one by Bakhtin, Hurth and Mattingly in [1], by Bakhtin, Hurth, Lawley and Mattingly in [3] and [4] for specific systems in dimension two, and by the present authors in [12] for systems under "sufficiently fast" switching. Also worth mentioning is Löcherbach's beautiful article [21] on certain PDMPs with jumps, in which techniques (similar to those in [3]) are used to prove regularity. However, beyond these cases, the problem remains largely open. One of our principal goals is to revisit these questions, and to provide a simple and general framework allowing - in particular - for the results of [3] and [12] to be extended.
The general idea of the paper can be roughly described as follows. Suppose \(P\) is a Feller Markov kernel on some compact metric space \(M\) and that \(\mathcal{C}(M)\) is a convex cone of measures embedded in some Banach space \(E.\) For instance, if \(M\) is a Riemannian manifold, \(\mathcal{C}(M)\) can be chosen to be the set of measures having a \(C^{r}\) (\(r\geq 0\)) density with respect to the Riemannian measure, and \(E=C^{r}(M).\)
Suppose that \(P=Q+\Delta\) where \(Q,\Delta\) are sub-Markov kernels such that \(Q\) maps the set of probability measures into \(\mathcal{C}(M)\) and \(\Delta\) maps \(\mathcal{C}(M)\) into itself. Then, it is not hard to show that if the spectral radius of \(\Delta\) (seen as
an operator on \(E\)) is \(<1\), invariant distributions of \(P\) lie in \(\mathcal{C}(M)\).
The paper explores and develops this idea. Section 2 sets the general framework, notation and hypotheses. Here we state and prove our general results, such as Theorem 2.8, which formalizes the idea described above, along with other results ensuring absolute continuity of the invariant distributions and lower semi-continuity of their densities (Theorems 2.6 and 2.11).
Section 3 considers the situation where \(P\) is induced by a random iterative system on a compact Riemannian manifold and provides conditions ensuring that the decomposition \(P=Q+\Delta\) holds with \(\mathcal{C}(M)\) the set of measures having a density (respectively a lower semi-continuous, or \(C^{r}\), density) with respect to the Riemannian measure. In the specific case where \(\Delta=\delta_{\phi}\) with \(\phi\) a local diffeomorphism, \(\Delta\) is nothing but the Ruelle transfer operator of \(\phi,\) and its spectral radius can be estimated in terms of certain topological (or measure-theoretic) invariants of \(\phi.\) This is done in subsection 3.1 and applied to specific examples in subsection 3.2.
Section 4 is devoted to PDMPs, as described above. We prove that under certain Hörmander conditions, there are finitely many ergodic measures that are absolutely continuous with respect to the Riemannian measure and whose densities are lower semi-continuous (Theorem 4.5). If the Hörmander condition only holds at an accessible point, such a measure is unique (Theorem 4.4). In subsection 4.3 we consider the situation of two transverse vector fields on the torus, and give a precise condition (involving the switching rates and the Floquet exponents of the linearly stable periodic orbits of the vector fields) ensuring that the invariant measures have a \(C^{k}\) density (Theorem 4.6). This result relies on the spectral radius estimate of the Ruelle transfer operator given in section 3.1 and substantially extends the results in [3]. The last subsection, 4.4, is devoted to general PDMPs under fast switching. We show how our approach provides a short proof that, under fast switching and a certain Hörmander condition, invariant densities are \(C^{r}.\)
## 2 Notation, hypotheses and basic results
Let \(M\) be a compact metric space equipped with its Borel sigma field \(\mathcal{B}(M)\).
We let \(\mathcal{M}(M)\) (respectively \(\mathcal{P}(M)\)) denote the set of non-negative (respectively, probability) measures over \(M.\)
A _convex cone_ of measures is a set \(\mathcal{C}(M)\subset\mathcal{M}(M)\) such that \(\alpha\mu+\beta\nu\in\mathcal{C}(M),\) for all \(\mu,\nu\in\mathcal{C}(M)\) and all \(\alpha,\beta\geq 0.\)
**Example 2.1**: Suppose that \(M\) is a Riemannian manifold with Riemannian measure \(m.\) Examples of convex cones in \(\mathcal{M}(M)\) include:
* \(\mathcal{M}_{ac}(M)\subset\mathcal{M}(M),\) the set of measures which are absolutely continuous with respect to \(m;\)
* \(\mathcal{M}_{ac}^{ls}(M)\subset\mathcal{M}_{ac}(M),\) the subset which have a lower semi-continuous density;
* \(\mathcal{M}_{ac}^{r}(M)\subset\mathcal{M}_{ac}(M),r\geq 0,\) the subset which have a \(C^{r}\) density.
A _bounded kernel_ on \(M\) is a family \(Q=\{Q(x,\cdot)\}_{x\in M}\) with \(Q(x,\cdot)\in\mathcal{M}(M)\) such that for all \(A\in\mathcal{B}(M),\) the mapping \(x\to Q(x,A)\) is measurable, and \(\sup_{x\in M}Q(x,M)<\infty.\) We say that \(Q\) is _non-degenerate_ if \(Q(x,M)>0\) for all \(x\in M;\)_sub-Markov_ if \(\sup_{x\in M}Q(x,M)\leq 1;\) and _Markov_ if \(Q(x,\cdot)\in\mathcal{P}(M)\) for all \(x\in M.\)
We let \(B(M)\) (respectively \(C^{0}(M)\)) denote the Banach space of bounded measurable (respectively continuous) real valued functions on \(M,\) endowed with the uniform norm \(\|f\|_{0}=\sup_{x\in M}|f(x)|.\)
A bounded kernel \(Q\) induces a bounded operator on \(B(M)\) defined by
\[Qf(x)=\int_{M}f(y)Q(x,dy),\]
for all \(f\in B(M).\) We call it _Feller_ if it maps \(C^{0}(M)\) into itself. It also induces an operator on \(\mathcal{M}(M)\) defined by
\[\mu Q(A)=\int\mu(dx)Q(x,A),\]
for all \(\mu\in\mathcal{M}(M)\) and \(A\in\mathcal{B}(M).\)
If \(Q\) is Markov, we let \(\mathsf{Inv}(Q)\) denote the set of _invariant probability measures_ of \(Q\); that is, the set of \(\mu\in\mathcal{P}(M)\) such that \(\mu Q=\mu.\) If \(Q\) is Markov and Feller, then \(\mathsf{Inv}(Q)\) is a nonempty convex compact (for the weak* topology) subset of \(\mathcal{P}(M)\) (see e.g [10], Corollary 4.21).
From now on, we let \(P\) denote a Markov Feller kernel and \(\mathcal{C}(M)\) a convex cone of measures. Our standing assumption is the following.
**Assumption 2.2** (Standing assumption): _The kernel \(P\) may be decomposed into \(P=Q+\Delta,\) whereby:_
**(i)**: \(Q\) _is a non-degenerate Feller sub-Markov kernel and_ \(\Delta\) _is a (possibly degenerate) sub-Markov kernel;_
**(ii)**: \({\cal M}(M)Q:=\{\mu Q:\,\mu\in{\cal M}(M)\}\subset{\cal C}(M);\)__
**(iii)**: \({\cal C}(M)\Delta:=\{\mu\Delta:\,\mu\in{\cal C}(M)\}\subset{\cal C}(M).\)__
In our applications, \({\cal C}(M)\) will be, as in Example 2.1, a set of measures having certain regularity properties. In words, Assumption 2.2 means that \(Q\) "creates" regularity, whilst \(\Delta\) "preserves" regularity.
It follows from Assumption 2.2 that \(\Delta\) is Feller and that
\[\sup_{x\in M}\Delta(x,M):=\rho<1.\]
Indeed, \(\Delta(x,M)=1-Q(x,M),\) and, by the Feller continuity and non-degeneracy of \(Q\) together with the compactness of \(M,\) \(\inf_{x\in M}Q(x,M)>0.\) In particular,
\[(I-\Delta)^{-1}:=\sum_{k\geq 0}\Delta^{k}\]
is also a Feller kernel and, since \(\Delta^{k}(x,M)\leq\rho^{k},\)
\[(I-\Delta)^{-1}(x,M)\leq\frac{1}{1-\rho}.\]
Here \(I=\Delta^{0}=\{\delta_{x}(\cdot)\}_{x\in M}.\)
The following result is a straightforward consequence of Assumption 2.2, and will be used repeatedly.
**Lemma 2.3**: _Let \(\Pi\in{\sf Inv}(P).\) Then, under Assumption 2.2\((i),\)_
\[\Pi=\Pi Q(I-\Delta)^{-1}=\sum_{k\geq 0}\Pi Q\Delta^{k}.\]
**Proof:** This follows directly from the equation \(\Pi=\Pi P\Leftrightarrow\Pi(I-\Delta)=\Pi Q.\)\(\Box\)
**Example 2.4**: Suppose that \(Q(x,dy)=\pi(dy)\) with \(\pi\in{\cal M}(M).\) Lemma 2.3 shows that
\[{\sf Inv}(P)=\{\pi(I-\Delta)^{-1}\}.\]
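The identity in Lemma 2.3 is easy to check numerically in a finite-state setting. The following sketch (our own toy example; the three-state chain and the split \(P=Q+\Delta\) are illustrative choices, not part of the general theory) verifies it with NumPy.

```python
import numpy as np

# Toy check of Lemma 2.3 on M = {0,1,2}: split a Markov matrix P into
# sub-Markov parts Q and Delta and verify Pi = Pi Q (I - Delta)^{-1}.
rng = np.random.default_rng(0)
P = rng.random((3, 3))
P /= P.sum(axis=1, keepdims=True)        # a Markov matrix on three states
a = 0.4
Q, Delta = (1 - a) * P, a * P            # P = Q + Delta, both sub-Markov

# stationary distribution Pi of P: left eigenvector for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Lemma 2.3: Pi = Pi Q (I - Delta)^{-1} (the Neumann series converges
# since the spectral radius of Delta here is a < 1)
rhs = pi @ Q @ np.linalg.inv(np.eye(3) - Delta)
assert np.allclose(pi, rhs)
```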
We say that \({\cal C}(M)\) is _stable by monotone convergence_ if for every sequence \((\mu_{n})_{n\geq 0}\) with \(\mu_{n}\in{\cal C}(M)\) and \(\mu_{n}\leq\mu_{n+1},\)\(\mu=\lim_{n\to\infty}\mu_{n}\) lies in \({\cal C}(M).\) Here, \(\mu=\lim_{n\to\infty}\mu_{n}\) simply means that \(\mu(A):=\lim_{n\to\infty}\mu_{n}(A)\in[0,\infty]\) for all \(A\in{\cal B}(M).\)
**Remark 2.5**: The sets \({\cal M}_{ac}(M)\) and \({\cal M}_{ac}^{ls}(M)\) as defined in Example 2.1 are stable by monotone convergence.__
A first useful (and immediate) consequence of Lemma 2.3 is the next result.
**Theorem 2.6**: _Assume Assumption 2.2 holds with \({\cal C}(M)\) stable by monotone convergence. Then \({\sf Inv}(P)\subset{\cal C}(M).\)_
**Corollary 2.7**: _Suppose \(M\) is a Riemannian manifold. Assume Assumption 2.2 holds with \({\cal C}(M)={\cal M}_{ac}^{ls}(M).\) Then:_
**(i)**: \({\sf Inv}(P)\subset{\cal C}(M);\)__
**(ii)**: _If_ \(\mu,\nu\in{\sf Inv}(P)\) _are ergodic, either_ \(\mu=\nu\) _or there exist nonempty disjoint open sets_ \(U,V\) _such that_ \(\mu(U)=\nu(V)=1.\) _In particular, if_ \(M\) _is connected and an invariant distribution has full support, then it is the unique invariant distribution of_ \(P.\)__
**Proof:** \((i)\) follows from Theorem 2.6 and Remark 2.5. We now turn to \((ii).\) By ergodicity, either \(\mu=\nu\) or \(\mu\) and \(\nu\) are mutually singular. By Theorem 2.6, \(\mu(dx)=h(x)m(dx)\) and \(\nu(dx)=g(x)m(dx)\) with \(h,g\) lower semi-continuous. Set \(U=\{x\in M\,:h(x)>0\}\) and \(V=\{x\in M\,:g(x)>0\}.\) Then \(U\) and \(V\) are open and \(\mu(dx)\geq\frac{h(x)}{g(x)}1_{V}(x)\nu(dx).\) So, if \(\mu\) and \(\nu\) are mutually singular, \(h\) has to be zero on \(V.\) \(\Box\)
Another useful (and immediate) consequence of Lemma 2.3 is given by the next result.
**Theorem 2.8**: _Assume Assumption 2.2 holds with \({\cal C}(M)\) a closed subset of some Banach space \((E,\|.\|_{E}).\) Assume furthermore that one of the two following conditions holds:_
**(i)**: _For all_ \(\mu\in{\cal C}(M),\sum_{k\geq 0}\|\mu\Delta^{k}\|_{E}<\infty;\)__
**(ii)**: \(\mu\to\mu\Delta\) _extends to a bounded operator on_ \(E,\) _whose resolvent set contains_ \(1.\)__
_Then \({\sf Inv}(P)\subset{\cal C}(M).\)_
**Remark 2.9**: In the following sections, this theorem will be used when \(M\) is a Riemannian manifold, \({\cal C}(M)={\cal M}_{ac}^{r}(M),\) and \(E\) is the Banach space of bounded signed measures whose density is \(C^{r}\) (naturally identified with \(C^{r}(M)\) equipped with the \(C^{r}\) norm).
**Remark 2.10**: A sufficient practical condition ensuring condition \((i)\) (and also \((ii)\)) in Theorem 2.8 is that \(\mu\to\mu\Delta\) extends to a bounded operator on \(E\) whose spectral radius,
\[{\cal R}(\Delta,E)=\lim_{n\to\infty}\|\Delta^{n}\|_{E}^{1/n},\]
is strictly less than \(1.\)__
### On Assumption 2.2: a uniqueness result
It is often the case that a Markov kernel \(P\) doesn't satisfy the standing assumption, Assumption 2.2, but that some power \(P^{k}\) of \(P\) (for some \(k\geq 1\)), or its \(a\)-resolvent
\[R_{a}=(1-a)\sum_{k\geq 0}a^{k}P^{k}\]
(for some \(0<a<1\)), does. Since
\[{\sf Inv}(R_{a})={\sf Inv}(P)\subset{\sf Inv}(P^{k}),\]
the conclusions of the previous theorems remain valid in these cases.
The next theorem illustrates this idea. Let \(P\) be a Feller Markov kernel which doesn't necessarily satisfy the standing assumption. A point \(p\in M\) is called _accessible_ (for \(P\)) if for every neighbourhood \(U\) of \(p\) and every \(x\in M,\)\(R_{a}(x,U)>0\) (for some, hence all \(0<a<1\)). The set of points which are accessible for \(P\) is the, possibly empty, compact set
\[\Gamma_{P}=\bigcap_{x\in M}{\sf supp}(R_{a}(x,\cdot)).\]
Point \(p\) is called a _weak Doeblin_ point, if there exists a neighbourhood \(V\) of \(p,\) a non-trivial measure \(\pi\in{\cal M}(M)\) and \(0<a<1,\) such that \(R_{a}(x,dy)\geq\pi(dy)\) for all \(x\in V.\) The measure \(\pi\) is called a _minorizing_ measure.
**Theorem 2.11**: _Let \({\cal C}(M)\) be a convex cone stable by monotone convergence. Suppose that \({\cal C}(M)P\subset{\cal C}(M)\) and that \(P\) possesses an accessible weak Doeblin point with a minorizing measure \(\pi\in{\cal C}(M).\) Then \(P\) has a unique invariant probability measure \(\Pi\) and \(\Pi\in{\cal C}(M).\)_
**Proof:** By assumption, there exists an open set \(V\) such that \(R_{a}(x,dy)\geq\pi(dy)\) for all \(x\in V\) and \(R_{a}(x,V)>0\) for all \(x\in M.\) Thus, by the Feller continuity of \(P\) (hence \(R_{a}\)), compactness and the Portmanteau theorem, \(R_{a}(x,V)\geq\delta>0\) for all \(x\in M\) and some \(\delta>0.\) It follows that \(R_{a}^{2}(x,dy)\geq\delta\pi(dy).\) By Theorem 2.6 and Example 2.4 applied to \(R_{a}^{2},\) we get that \(\mathsf{Inv}(P)\subset\mathsf{Inv}(R_{a}^{2})=\{\Pi\}\subset\mathcal{C}(M).\)\(\Box\)
## 3 Random maps
We suppose here that \(M\) is a compact \(d\)-dimensional connected Riemannian manifold. For \(k\geq 0,\) we let \(C^{k}(M)\) denote the space of \(C^{k}\) functions \(\rho:M\to\mathbb{R},\) equipped with the \(C^{k}\) topology (see e.g [18], Chapter 2). We let \(\|\cdot\|_{C^{k}(M)}\) denote a norm on \(C^{k}(M)\) making \(C^{k}(M)\) a Banach space. We let \(C^{k}(M,M)\) be the space of \(C^{k}\) maps from \(M\) into itself, equipped with the \(C^{k}\) topology and associated Borel \(\sigma\)-field.
We now let \(r\geq 1,\) and let \(\nu\) be a probability measure on \(C^{r}(M,M).\) Consider the chain on \(M\) induced by the random iterative system
\[X_{k+1}=\varphi_{k+1}(X_{k}),\]
where \((\varphi_{k})_{k\in\mathbb{N}}\) is a family of i.i.d random variables, independent of \(X_{0},\) having distribution \(\nu.\)
The kernel of this chain can then be written
\[P^{\nu}f(x)=\int_{C^{r}(M,M)}f(\varphi(x))\nu(d\varphi), \tag{1}\]
and is clearly Feller. For further reference, we call this kernel the _kernel induced by \(\nu.\)_
Throughout this section we shall take \(P:=P^{\nu},\) and assume that \(\nu\) may be written as
\[\nu:=(1-a)\nu_{0}+a\nu_{1},\]
whereby \(\nu_{0},\nu_{1}\) are two probability measures over \(C^{r}(M,M)\) and \(0<a<1.\) Thus we can write \(P=Q+\Delta\) with \(Q=(1-a)P^{\nu_{0}}\) and \(\Delta=aP^{\nu_{1}},\) where \(P^{\nu_{0}},P^{\nu_{1}}\) are defined like \(P^{\nu}\) with \(\nu_{0},\nu_{1}\) in place of \(\nu.\) We furthermore assume that \(\nu_{0},\nu_{1}\) satisfy the following hypotheses 3.1 and 3.3 below. These are natural hypotheses ensuring that the standing assumption, Assumption 2.2,
holds true with \({\cal C}(M)\) being one of the sets \({\cal M}_{ac}(M),{\cal M}_{ac}^{ls}(M)\) or \({\cal M}_{ac}^{r-1}(M)\) as defined in Example 2.1. To be concise, Assumption 3.1 assumes that \(\nu_{0}\) is the image measure of a finite dimensional \(C^{r}\) density by a submersion, while Assumption 3.3 assumes that \(\nu_{1}\) is supported by local diffeomorphisms.
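Before stating the hypotheses, here is a minimal simulation sketch of such a chain (all model choices are ours, for illustration only): \(M=S^{1},\) \(\nu_{0}\) the law of a random translation \(x\mapsto x+\theta\) with \(\theta\) drawn from a smooth density, and \(\nu_{1}=\delta_{\phi}\) with the local diffeomorphism \(\phi(x)=2x\ \mathrm{mod}\ 1.\)

```python
import numpy as np

# Simulation of X_{k+1} = varphi_{k+1}(X_k) on S^1 = [0,1) with
# nu = (1-a) nu_0 + a delta_phi (illustrative choices throughout).
rng = np.random.default_rng(1)
a = 0.3

def phi(x):
    return (2.0 * x) % 1.0               # local diffeomorphism, deg(phi) = 2

def step(x):
    if rng.random() < a:                 # with probability a, apply phi
        return phi(x)
    theta = rng.beta(2, 2)               # theta with smooth density 6t(1-t)
    return (x + theta) % 1.0             # random translation drawn from nu_0

x = rng.random()
xs = np.empty(100_000)
for k in range(xs.size):
    x = step(x)
    xs[k] = x
hist, _ = np.histogram(xs, bins=50, range=(0.0, 1.0), density=True)
# hist approximates the invariant density of P^nu on S^1
```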
**Assumption 3.1** (Standing assumption \(1\) for RDS): _There exist \(n\geq d,\) a smooth \(n\)-dimensional manifold \(\Theta\) with smooth Riemannian measure \(d\theta,\) a \(C^{r}\) probability density function \(h_{0}:\Theta\to\mathbb{R}_{+}\) with compact support \(\mathsf{supp}(h_{0}),\) and a \(C^{r}\) map_
\[\mathbf{\Phi}:M\times\Theta\mapsto M,\]
\[(x,\theta)\to\mathbf{\Phi}(x,\theta)=\mathbf{\Phi}_{\theta }(x)\]
_such that:_
**(i)**: \(\nu_{0}\) _is the image measure of_ \(h_{0}(\theta)d\theta\) _by the map_ \(\theta\to\mathbf{\Phi}_{\theta}.\) _That is_
\[P^{\nu_{0}}(f)(x)=\int_{\Theta}f(\mathbf{\Phi}_{\theta}(x))h_{0}( \theta)d\theta.\]
**(ii)**: _For all_ \(x\in M\) _and_ \(\theta\in{\sf supp}(h_{0}),\) _the map_ \(\partial_{\theta}\mathbf{\Phi}(x,\theta):T_{\theta}\Theta\mapsto T_{\mathbf{\Phi}(x,\theta)}M\) _is surjective._
**Proposition 3.2**: _Assume Assumption 3.1. Then, there exists a \(C^{r}\) map \(q:M\times M\mapsto\mathbb{R}_{+}\) such that_
\[P^{\nu_{0}}(x,dy)=q(x,y)m(dy).\]
_In particular, \({\cal M}(M)Q\subset{\cal M}_{ac}^{r}(M).\)_
**Proof:** We assume for notational convenience that \(\Theta=\mathbb{R}^{n},\) but the proof easily extends to the general case.
Claim: For all \(x^{*}\in M\) and \(\theta^{*}\in{\sf supp}(h_{0}),\) there exist neighbourhoods \(U(=U(x^{*},\theta^{*}))\) of \(x^{*}\) and \(V(=V(x^{*},\theta^{*}))\) of \(\theta^{*}\) such that for every \(C^{r}\) function \(\eta:\mathbb{R}^{n}\mapsto\mathbb{R}\) with compact support \({\sf supp}(\eta)\subset V,\) there exists a \(C^{r}\) map \(q_{\eta}:M\times M\to\mathbb{R}_{+}\) with the property that
\[\int_{\mathbb{R}^{n}}f(\mathbf{\Phi}(x,\theta))h_{0}(\theta)\eta(\theta)d\theta=\int_{M}q_{\eta}(x,y)f(y)m(dy)\]
for all \(x\in U,\) and \(f\in B(M).\)
We assume for the time being that the claim is proven. Fix \(x^{*}\in M.\) We extract from the family \(\{V(x^{*},\theta^{*}),\theta^{*}\in\mathsf{supp}(h_{0})\}\) a covering of \(\mathsf{supp}(h_{0})\) by open sets \(V_{i}=V(x^{*},\theta_{i}),i\in I,\) with \(I\) finite. Set \(U=\cap_{i\in I}U(x^{*},\theta_{i}).\) Using a partition of unity subordinate to \(\{V_{i}\}_{i\in I},\) \(h_{0}\) can be written as \(h_{0}=\sum_{i\in I}h_{0}\eta_{i},\) where \(\eta_{i}\) is smooth with compact support in \(V_{i},\) \(0\leq\eta_{i},\) and \(\sum_{i\in I}\eta_{i}=1.\) It then follows from the claim that for all \(x\in U,\)
\[P^{\nu_{0}}(x,dy)=\sum_{i\in I}q_{i}(x,y)m(dy),\]
where \(q_{i}:M\times M\to\mathbb{R}_{+}\) is \(C^{r}.\) This proves the proposition.
Proof of the claim: After a permutation of the canonical basis of \(\mathbb{R}^{n}\) we can assume that \(\theta=(\theta_{1},\theta_{2})\in\mathbb{R}^{d}\times\mathbb{R}^{n-d}\) where \(\partial_{\theta_{1}}\mathbf{\Phi}(x^{*},\theta^{*})\) has rank \(d.\) Thus, by the inverse function theorem, there exist open neighbourhoods \(U^{\prime}\) of \(x^{*}\) and \(V=V_{1}\times V_{2}\) of \(\theta^{*}=(\theta_{1}^{*},\theta_{2}^{*})\) such that the map
\[H:(\theta_{1},\theta_{2},x)\to(\mathbf{\Phi}(x,\theta),\theta_{2},x)\]
is a \(C^{r}\) diffeomorphism from \(V\times U^{\prime}\) onto its image \(W=H(V\times U^{\prime}).\) Its inverse is then given by \((y,\theta_{2},x)\to(\psi(y,\theta_{2},x),\theta_{2},x),\) where \(\psi:W\mapsto V_{1}\) is \(C^{r}.\)
Let \(U\) be a relatively compact neighbourhood of \(x^{*}\) with \(\overline{U}\subset U^{\prime},\) and let \(\eta:\mathbb{R}^{n}\to\mathbb{R}_{+}\) be a \(C^{r}\) function with compact support \(\mathsf{supp}(\eta)\subset V.\) Set \(K=\mathsf{supp}(\eta)\times\overline{U}\) and let \(\tilde{k}(x,y,\theta_{2})\) be a \(C^{r}\) function which coincides with
\[(\eta h_{0})(\psi(y,\theta_{2},x),\theta_{2})|\mathsf{det}\partial_{y}\psi(y, \theta_{2},x)|\]
on \(H(K)\) and is zero outside \(W.\) We define \(q_{\eta}:M\times M\mapsto\mathbb{R}_{+}\) by
\[q_{\eta}(x,y)=\int\tilde{k}(x,y,\theta_{2})d\theta_{2}.\]
Then \(q_{\eta}\) is \(C^{r}\) and by the change of variable formula,
\[\int f(\mathbf{\Phi}(x,\theta))(\eta h_{0})(\theta)g(x)d\theta m(dx)=\int q_{ \eta}(x,y)g(x)f(y)m(dx)m(dy)\]
for every continuous function \(g\) with support contained in \(U.\) This proves the claim. \(\Box\)
We define \(\mathsf{Diff}^{r}_{\mathsf{loc}}(M)\subset C^{r}(M,M)\) to be the (open) set of maps \(\varphi\in C^{r}(M,M)\) for which \(D\varphi(x):T_{x}M\mapsto T_{\varphi(x)}M\) is invertible at every point \(x\in M.\)
We let \(\varphi\in\mathsf{Diff}^{r}_{\mathsf{loc}}(M).\) Since \(M\) is compact and connected, \(\varphi^{-1}(y)\) is nonempty and finite for every \(y\in M,\) and its cardinality doesn't depend on \(y.\) We denote this cardinality by \(\mathsf{deg}(\varphi)\).
We let \(J(\varphi,x)>0\) denote the _Jacobian_ of \(\varphi\) at \(x\) with respect to \(m.\) If the tangent spaces \(T_{x}M\) and \(T_{\varphi(x)}M\) are equipped with orthonormal bases, then
\[J(\varphi,x)=|\mathsf{det}D\varphi(x)|.\]
The _transfer_ or _Ruelle-Perron-Frobenius_ operator induced by \(\varphi\) is the operator \(\mathcal{L}_{\varphi}\) acting on \(L^{1}(m)\) or \(C^{r-1}(M)\), defined by
\[\mathcal{L}_{\varphi}(\rho)(y)=\sum_{\{x\in\varphi^{-1}(y)\}}\frac{\rho(x)}{J (\varphi,x)}. \tag{2}\]
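For concreteness, a standard example (included here for orientation): take \(M=S^{1}=\mathbb{R}/\mathbb{Z}\) and \(\varphi(x)=2x\ \mathrm{mod}\ 1.\) Then \(\varphi\in\mathsf{Diff}^{r}_{\mathsf{loc}}(M),\) \(\mathsf{deg}(\varphi)=2,\) \(J(\varphi,x)=2,\) and (2) reads
\[\mathcal{L}_{\varphi}(\rho)(y)=\frac{1}{2}\left[\rho\left(\frac{y}{2}\right)+\rho\left(\frac{y}{2}+\frac{1}{2}\right)\right],\]
and a change of variables shows directly that \(\int_{M}\mathcal{L}_{\varphi}(\rho)\,dm=\int_{M}\rho\,dm.\)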
The fact that \(\mathcal{L}_{\varphi}\) maps \(C^{r-1}(M)\) into itself easily follows from the inverse function theorem. Indeed, for all \(y\in M\), there exist an open neighbourhood \(U\) of \(y\) and \(C^{r}\) diffeomorphisms \(\psi_{i}:U\mapsto\psi_{i}(U),i=1,\ldots,\mathsf{deg}(\varphi),\) such that for all \(z\in U,\)
\[\mathcal{L}_{\varphi}(\rho)(z)=\sum_{i=1}^{\mathsf{deg}(\varphi)}\frac{\rho( \psi_{i}(z))}{J(\varphi,\psi_{i}(z))}.\]
This expression also shows that \(\mathcal{L}_{\varphi}\) is a bounded operator on \(C^{r-1}(M).\) We let
\[\|\mathcal{L}_{\varphi}\|_{C^{r-1}(M)}=\sup_{\{\rho\,:\,\|\rho\|_{C^{r-1}(M)} \leq 1\}}\|\mathcal{L}_{\varphi}(\rho)\|_{C^{r-1}(M)}\]
denote its operator norm.
For \(0\leq k\leq r-1\), we let
\[\mathcal{R}(\mathcal{L}_{\phi},C^{k}(M))=\lim_{n\to\infty}\|(\mathcal{L}_{ \phi})^{n}\|_{C^{k}(M)}^{1/n} \tag{3}\]
be the spectral radius of \(\mathcal{L}_{\phi}\) on \(C^{k}(M)\).
**Assumption 3.3** (Standing assumption \(2\) for RDS): \[\nu_{1}(\mathsf{Diff}^{r}_{\mathsf{loc}}(M))=1.\]
**Proposition 3.4**: _Assume Assumption 3.3. If \(\mu\in\mathcal{M}_{ac}(M)\) has density \(\rho,\) then \(\mu P^{\nu_{1}}\in\mathcal{M}_{ac}(M)\) and its density is given by_
\[y\to\mathcal{L}_{\nu_{1}}(\rho)(y):=\int_{\operatorname{Diff}^{r}_{\text{loc}}(M )}(\mathcal{L}_{\varphi}\rho)(y)\nu_{1}(d\varphi).\]
_This density is lower semi-continuous whenever \(\rho\) is. In particular \(\mathcal{C}(M)\Delta\subset\mathcal{C}(M)\) with \(\mathcal{C}(M)=\mathcal{M}_{ac}^{ls}(M).\)_
_If in addition_
\[\int_{\operatorname{Diff}^{r}_{\text{loc}}(M)}\|\mathcal{L}_{\varphi}\|_{C^{r- 1}(M)}\nu_{1}(d\varphi)<\infty,\]
_then \(\mathcal{L}_{\nu_{1}}\) is a bounded operator on \(C^{r-1}(M)\) and_
\[\|\mathcal{L}_{\nu_{1}}\|_{C^{r-1}(M)}\leq\int_{\operatorname{Diff}^{r}_{\text {loc}}(M)}\|\mathcal{L}_{\varphi}\|_{C^{r-1}(M)}\nu_{1}(d\varphi).\]
_In particular \(\mathcal{C}(M)\Delta\subset\mathcal{C}(M)\) with \(\mathcal{C}(M)=\mathcal{M}_{ac}^{r-1}(M).\)_
**Proof:** For all \(f\in B(M),\)
\[\int_{M}P^{\nu_{1}}(f)(x)\rho(x)m(dx) =\int_{M}\left(\int_{\operatorname{Diff}^{r}_{\text{loc}}(M)}f(\varphi(x))\nu_{1}(d\varphi)\right)\rho(x)m(dx)\] \[=\int_{\operatorname{Diff}^{r}_{\text{loc}}(M)}\left(\int_{M}f(\varphi(x))\rho(x)m(dx)\right)\nu_{1}(d\varphi)\] \[=\int_{\operatorname{Diff}^{r}_{\text{loc}}(M)}\left(\int_{M}f(x)\mathcal{L}_{\varphi}(\rho)(x)m(dx)\right)\nu_{1}(d\varphi)\] \[=\int_{M}f(x)\left(\int_{\operatorname{Diff}^{r}_{\text{loc}}(M)}(\mathcal{L}_{\varphi}\rho)(x)\nu_{1}(d\varphi)\right)m(dx).\]
The second and last equalities follow from Fubini's theorem, and the third one follows from the change of variable formula. This proves the first assertion. If \(\rho\) is lower semi-continuous, so is \(\mathcal{L}_{\varphi}\rho.\) Thus, if \(y_{n}\to y,\)
\[\liminf_{n\to\infty}\int\mathcal{L}_{\varphi}\rho(y_{n})\nu_{1}(d\varphi)\geq \int\liminf_{n\to\infty}\mathcal{L}_{\varphi}\rho(y_{n})\nu_{1}(d\varphi)\geq \int\mathcal{L}_{\varphi}\rho(y)\nu_{1}(d\varphi)\]
by Fatou's Lemma. This shows that \(\frac{d\mu P^{\nu_{1}}}{dm}\) is lower-semicontinuous.
We now prove the last statement. For all \(\rho\in C^{r-1}(M),\) the mapping \(\mathcal{L}_{(\cdot)}\rho:\mathsf{Diff}^{r}_{\mathsf{loc}}(M)\to C^{r-1}(M), \varphi\mapsto\mathcal{L}_{\varphi}\rho\) is continuous, hence measurable. It is then Bochner measurable (see [16], Theorem 2, Section 1, Chapter 2) and the condition that \(\int_{\mathsf{Diff}^{r}_{\mathsf{loc}}(M)}\|\mathcal{L}_{\varphi}(\rho)\|_{C^ {r-1}(M)}\nu_{1}(d\varphi)<\infty\) makes it Bochner integrable ([16], Theorem 2, Section 2, Chapter 2). Properties of Bochner integrals ([16], Theorem 4, Section 2, Chapter 2) imply that
\[\Big{\|}\int_{\mathsf{Diff}^{r}_{\mathsf{loc}}(M)}\mathcal{L}_{\varphi}(\rho) \nu_{1}(d\varphi)\Big{\|}_{C^{r-1}(M)}\leq\int_{\mathsf{Diff}^{r}_{\mathsf{ loc}}(M)}\|\mathcal{L}_{\varphi}(\rho)\|_{C^{r-1}(M)}\nu_{1}(d\varphi).\]
This concludes the proof. \(\Box\)
We recall that \(P=P^{\nu}\) is given by (1). Corollary 2.7 and Theorem 2.8 applied to the present setting, combined with Propositions 3.2 and 3.4, imply the following.
**Theorem 3.5**: _Assume Hypotheses 3.1 and 3.3. Then, \(\mathsf{Inv}(P)\subset\mathcal{M}^{ls}_{ac}(M).\) If \(\mu\in\mathsf{Inv}(P)\) has full support, then \(\mathsf{Inv}(P)=\{\mu\}.\)_
**Theorem 3.6**: _Assume Hypotheses 3.1 and 3.3. If_
\[\int_{\mathsf{Diff}^{r}_{\mathsf{loc}}(M)}\|\mathcal{L}_{\varphi}\|_{C^{r-1}( M)}\nu_{1}(d\varphi)<\infty,\]
_and \(1/a\) is in the resolvent set of \(\mathcal{L}_{\nu_{1}}\) (on \(C^{r-1}(M)\)), then \(\mathsf{Inv}(P)\subset\mathcal{M}^{r-1}_{ac}(M).\)_
### Expansion volume rates and spectral radius
In this subsection and the following, we consider the case where
\[\nu_{1}=\delta_{\phi}\]
for some \(\phi\in\mathsf{Diff}^{r}_{\mathsf{loc}}(M),r\geq 1,\) so that \(\mathcal{L}_{\nu_{1}}\) is the transfer operator \(\mathcal{L}_{\phi}.\) When \(\phi\) is an _expanding_ map (see the definition below), the spectral properties of \(\mathcal{L}_{\phi}\) have been well understood since the seminal work of Ruelle [23]. We refer the reader to the excellent monograph [5] for a comprehensive introduction to the subject.
When \(\phi\) is non-expanding, it is still possible to give simple sufficient conditions ensuring that \(\frac{1}{a}\) lies in the resolvent of \(\mathcal{L}_{\phi},\) so that Theorem 3.6 applies.
This is the object of the next proposition, Proposition 3.10. Before stating this proposition we introduce certain quantities that will naturally appear in the estimate of the spectral radius of \(\mathcal{L}_{\phi}:\) the _expansion rate_ and the _expansion volume rates_ of \(\phi.\)
Let \(K\) be a nonempty, compact and forward invariant set (i.e \(\phi(K)\subset K\)). The _expansion constant_ of \(\phi\) at \(x\) is the positive number
\[EC(\phi,x)=\inf_{v\in T_{x}M\;\|v\|_{x}=1}\|D\phi(x)v\|_{\phi(x)}.\]
Here \(\|\cdot\|_{x}\) stands for the Riemannian norm on \(T_{x}M.\) Following Hirsch [19], define the (logarithmic) _expansion rate_ of \(\phi\) at \(K\) as
\[\mathcal{E}(\phi,K)=\lim_{n\to\infty}\frac{1}{n}\log(\min_{x\in K}EC(\phi^{n}, x)),\]
where the limit exists by subadditivity. The _expansion rate_ of \(\phi\) is defined as
\[\mathcal{E}(\phi)=\mathcal{E}(\phi,M).\]
We let \(\mathsf{Inv}(\phi)\) (respectively \(\mathsf{Inv}_{erg}(\phi)\)) denote the set of invariant (respectively ergodic) probability measures of \(\phi.\) By a theorem of Schreiber [24],
\[\mathcal{E}(\phi)=\inf_{\mu\in\mathsf{Inv}_{erg}(\phi)}\Lambda_{1}(\mu), \tag{4}\]
where \(\Lambda_{1}(\mu)\) stands for the smallest Lyapunov exponent for \((\phi,\mu).\)
For all \(k\geq 0,\) we analogously define the \(k\)_-expansion volume rate_ of \(\phi\) at \(K\) as
\[\mathcal{EV}_{k}(\phi,K)=\lim_{n\to\infty}\frac{1}{n}(\min_{x\in K}\left[\log (J(\phi^{n},x))+k\log(EC(\phi^{n},x))\right]),\]
and the \(k\)_-expansion volume rate_ of \(\phi\) as
\[\mathcal{EV}_{k}(\phi)=\mathcal{EV}_{k}(\phi,M). \tag{5}\]
Again, these limits exist by subadditivity.
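As a concrete illustration (our own; it is not needed in the sequel), let \(M=\mathbb{T}^{2}\) with the flat metric and let \(\phi\) be the hyperbolic automorphism induced by the symmetric matrix
\[A=\begin{pmatrix}2&1\\ 1&1\end{pmatrix},\qquad\mathsf{det}A=1,\qquad\lambda_{\pm}=\frac{3\pm\sqrt{5}}{2}.\]
Then \(J(\phi^{n},x)=1\) and \(EC(\phi^{n},x)=\lambda_{-}^{n}\) for all \(x,\) so that \(\mathcal{E}(\phi)=\log\lambda_{-}<0\) and \(\mathcal{EV}_{k}(\phi)=k\log\lambda_{-}.\) This agrees with (4) and (6), since every \(\mu\in\mathsf{Inv}_{erg}(\phi)\) has Lyapunov exponents \(\Lambda_{1}(\mu)=\log\lambda_{-}\) and \(\Lambda_{2}(\mu)=\log\lambda_{+}=-\log\lambda_{-}.\)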
The following characterization easily follows from a beautiful result due to Schreiber [25] on the growth rates of sub-additive functions.
**Proposition 3.7**: _The \(k\)-expansion volume rate of \(\phi\) is given by_
\[\mathcal{EV}_{k}(\phi)=\inf_{\mu\in\mathsf{Inv}_{erg}(\phi)}((k+1)\Lambda_{1}( \mu)+\Lambda_{2}(\mu)+\ldots+\Lambda_{d}(\mu)), \tag{6}\]
_where \(\Lambda_{1}(\mu)\leq\ldots\leq\Lambda_{d}(\mu)\) are the Lyapunov exponents of \((\phi,\mu)\) counted with their multiplicity._
**Proof:** Let \(F:M\times\mathbb{N}\to\mathbb{R}\) be defined as
\[F(x,n)=-\log J(\phi^{n},x)-k\log EC(\phi^{n},x).\]
Then \(F\) is continuous in \(x\) and subadditive with respect to \(\phi\), meaning that
\[F(x,n+1)\leq F(x,1)+F(\phi(x),n).\]
This directly follows from the properties \(J(\phi^{n+1},x)=J(\phi^{n},\phi(x))J(\phi,x)\) and \(EC(\phi^{n+1},x)\geq EC(\phi^{n},\phi(x))EC(\phi,x).\) Therefore, by Theorem 1 in [25],
\[\lim_{n\to\infty}\left(\sup_{x\in M}\frac{1}{n}F(x,n)\right) =\inf_{n>0}\left(\sup_{x\in M}\frac{1}{n}F(x,n)\right)\] \[=\sup_{\mu\in\mathsf{Inv}_{erg}(\phi)}\inf_{n>0}\frac{1}{n}\int_{M}F(x,n)\mu(dx).\]
For all \(\mu\in\mathsf{Inv}_{erg}(\phi)\) we have that
\[\frac{1}{n}\int_{M}F(x,n)\mu(dx)\] \[=-\frac{1}{n}\sum_{j=0}^{n-1}\int_{M}\log(J(\phi,\phi^{j}(x)))\mu(dx)-k\frac{1}{n}\int_{M}\log(EC(\phi^{n},x))\mu(dx)\] \[=-\int_{M}\log(J(\phi,x))\mu(dx)-k\frac{1}{n}\int_{M}\log(EC(\phi^{n},x))\mu(dx).\]
The first term on the right hand side is equal to \(-(\Lambda_{1}(\mu)+\ldots+\Lambda_{d}(\mu))\) by the multiplicative ergodic theorem, and the second term converges to \(-k\Lambda_{1}(\mu).\)\(\Box\)
**Remark 3.8**: We let
\[\omega_{\phi}(x)=\bigcap_{n\geq 0}\overline{\{\phi^{k}(x)\;:k\geq n\}}\]
be the _omega limit set_ of \(x\),
\[BC(\phi)=\overline{\{x\in M\::x\in\omega_{\phi}(x)\}}\]
the _Birkhoff center_ of \(\phi\), and
\[MC(\phi)=\overline{\bigcup_{\mu\in\mathsf{Inv}_{erg}(\phi)}\mathsf{supp}(\mu)}\]
the _minimal center of attraction_ of \(\phi.\) By the Poincaré recurrence theorem (see e.g. [22], Chapter 1), \(MC(\phi)\subset BC(\phi).\) Thus, equalities (4) and (6) imply that
\[{\cal E}(\phi)={\cal E}(\phi,BC(\phi))={\cal E}(\phi,MC(\phi))\]
and
\[{\cal E}{\cal V}_{k}(\phi)={\cal E}{\cal V}_{k}(\phi,BC(\phi))={\cal E}{\cal V}_ {k}(\phi,MC(\phi)).\]
These properties prove useful for computing or estimating the expansion and expansion volume rates in certain cases (see Examples 3.17 and 3.18 below).
**Remark 3.9**: We have that
\[d{\cal E}(\phi)\leq{\cal E}{\cal V}_{0}(\phi)\leq\log({\sf deg}(\phi)).\]
The first inequality follows from identities (4) and (6), while the second follows from the second statement in the next proposition.
Note that this has the consequence that
\[d{\cal E}(\phi)\leq{\cal E}{\cal V}_{0}(\phi)\leq 0\]
when \(\phi\) is a diffeomorphism. Observe also that if \({\cal E}{\cal V}_{0}(\phi)\leq 0,\) then \(k\mapsto{\cal E}{\cal V}_{k}(\phi)\) is nonincreasing.
We recall (see equation (3)) that for all \(0\leq k\leq r-1,\)\({\cal R}({\cal L}_{\phi},C^{k}(M))\) is the spectral radius of \({\cal L}_{\phi}\) on \(C^{k}(M).\)
**Proposition 3.10**: _We have the following:_
**(i)**: _if_ \({\cal E}(\phi)>0,\) _then_ \({\cal R}({\cal L}_{\phi},C^{r-1}(M))=1;\)__
**(ii)**: _if_ \({\cal E}(\phi)\leq 0,\) _then_
\[1\leq{\cal R}({\cal L}_{\phi},C^{r-1}(M))\leq{\sf deg}(\phi)\max_{0\leq k\leq r -1}e^{-{\cal E}{\cal V}_{k}(\phi)}.\]
**Remark 3.11**: The first assertion of this proposition is a direct consequence of the seminal work of Ruelle ([23]). Some details are given below.
Some of Ruelle's results have been extended by Campbell and Latushkin in [13] to the situation where \(\phi\) is no longer expanding but is a covering map (i.e. a local diffeomorphism, as in the present setting). They compute the
essential spectral radius of the transfer operator and provide an upper bound for the spectral radius in \(C^{0}(M)\) (in the present setting) given by
\[\exp\Big{(}\sup_{\mu\in\mathsf{Inv}_{erg}(\phi)}\Big{[}H(\mu)-\int_{ M}\log(J(\phi,x))\mu(dx)\Big{]}\Big{)} \tag{7}\] \[= \exp\Big{(}-\inf_{\mu\in\mathsf{Inv}_{erg}(\phi)}\Big{[}(\Lambda_{ 1}(\mu)+\ldots+\Lambda_{d}(\mu))-H(\mu)\Big{]}\Big{)},\]
where \(H(\mu)\) is the measure-theoretic entropy of \((\phi,\mu).\) They claim (see [13, Theorem 1]) that this upper bound is also an upper bound for the spectral radius in \(C^{r}(M)\) for \(r\geq 1.\) Although this result is true when \(\phi\) is expanding, it cannot be true when \(\phi\) is not expanding, as shown by the following simple example. The error in their proof comes from the fact that they rely on estimates (given in [23]) which are valid only for expanding maps.
The estimate given in Proposition 3.10, (ii), provides a correct estimate well-suited to non expanding maps.
**Example 3.12**: We take \(M=S^{1}=\mathbb{R}/\mathbb{Z},\) and suppose that \(\phi\) is a smooth, orientation preserving diffeomorphism with two fixed points, \(0\) and \(1/2,\) such that \(\phi\) coincides with
\[x\mapsto\frac{x}{\alpha}\]
on a neighbourhood of \(0,\) whereby \(\alpha>1\) and \(\phi^{\prime}(1/2)>1.\) The ergodic measures of \(\phi\) are the Dirac measures \(\delta_{0},\delta_{1/2},\) and for all \(k\geq 0,\)
\[\mathcal{EV}_{k}(\phi)=-\ln(\alpha)(k+1)<0.\]
Thus, by Proposition 3.10, \(\mathcal{R}(\mathcal{L}_{\phi},C^{r}(M))\leq\alpha^{r+1}\) for all \(0\leq r<\infty\). We now let \(\rho(x)=\sin(2\pi x)\) if \(r\) is odd, and \(\rho(x)=\cos(2\pi x)\) if \(r\) is even. Since \(\phi\) is a diffeomorphism and \(\phi^{n}\) coincides with \(x\mapsto x/\alpha^{n}\) on a neighbourhood of \(0,\) we have \(\mathcal{L}_{\phi^{n}}(\rho)(y)=\alpha^{n}\rho(\alpha^{n}y)\) for \(y\) close enough to \(0.\) Then
\[\|(\mathcal{L}_{\phi^{n}}(\rho))\|_{C^{r}(M)}:=\sum_{k=0}^{r}\|(\mathcal{L}_{\phi^{n}}(\rho))^{(k)}\|_{0}\geq|(\mathcal{L}_{\phi^{n}}(\rho))^{(r)}(0)|=(2\pi)^{r}\alpha^{n(r+1)}.\]
This implies that
\[\mathcal{R}(\mathcal{L}_{\phi},C^{r}(M))=\alpha^{r+1}\quad\text{for all}\quad 0 \leq r<\infty. \tag{8}\]
This simple example shows that the inequality in Proposition 3.10 can be an equality, for any \(r\).
The measure-theoretic entropy for any Dirac mass is \(0,\) whence we see that the Campbell-Latushkin upper bound in (7) is precisely \(\alpha\). However, the authors claim in [13, Theorem 1] that this same upper bound for the \(C^{r}\) spectral radius holds for all \(0\leq r<\infty,\) which cannot be true for any \(r\geq 1\) by (8).
**Proof of Proposition 3.10**
_Step 1._ If \({\cal E}(\phi)>0,\) then \(\inf_{x\in M}EC(\phi^{n},x)\geq\theta>1\) for some \(n\geq 1\) and some \(\theta>1.\) Thus, replacing \(\phi\) by \(\phi^{n},\) we can assume that \(EC(\phi,x)\geq\theta>1.\) This condition means that \(\phi\) is _expanding_. Then, by a theorem due to Ruelle ([23], Theorem 3.6 (ii); see also [5], Theorem 2.6), \(R={\cal R}({\cal L}_{\phi},C^{r-1}(M))\) is an eigenvalue of \({\cal L}_{\phi}\) associated to a positive eigenfunction \(\rho.\) Since \(\int_{M}\rho dm=\int_{M}({\cal L}_{\phi}\rho)dm,\) \(R\) must be \(1.\) This proves the first assertion.
_Step 2._ We now prove the left hand side inequality of assertion \((ii).\) Suppose for the sake of contradiction that \({\cal R}({\cal L}_{\phi},C^{r-1}(M))<1.\) Then
\[\lim_{n\to\infty}\|{\cal L}_{\phi}^{n}\|_{C^{r-1}(M)}=0,\]
so that \(\lim_{n\to\infty}\|{\cal L}_{\phi}^{n}1\|_{0}=0\) in particular. On the other hand, \(\int_{M}{\cal L}_{\phi}^{n}1dm=\int_{M}1dm=m(M)>0.\) This is a contradiction.
_Step 3._ Our last goal is to prove the right hand side inequality of assertion \((ii).\) It is first convenient to specify a norm on \(C^{k}(M),k\geq 0.\)
Throughout, \({\mathbb{R}}^{d}\) is equipped with the Euclidean norm. For all \(k\geq 1,\) let \(L^{k}_{sym}({\mathbb{R}}^{d})\) be the vector space of \(k\)-linear symmetric forms on \({\mathbb{R}}^{d}.\) If \(A:{\mathbb{R}}^{d}\to{\mathbb{R}}^{d}\) is a linear map and \(L\in L^{k}_{sym}({\mathbb{R}}^{d}),A^{*}L\in L^{k}_{sym}({\mathbb{R}}^{d})\) is defined by \(A^{*}L(u_{1},\ldots,u_{k})=L(Au_{1},\ldots,Au_{k}).\) The norm of \(L\in L^{k}_{sym}({\mathbb{R}}^{d})\) is defined as \(\|L\|=\sup\{|L(u_{1},\ldots,u_{k})|\,:u_{i}\in{\mathbb{R}}^{d},\|u_{i}\|\leq 1\}.\)
We consider \(U\subset{\mathbb{R}}^{d}\) open and \(f\in C^{k}(U):=\{f:U\to{\mathbb{R}},\,C^{k}\}.\) The \(k\)-th derivative of \(f\) is a continuous mapping \(D^{k}f:U\to L^{k}_{sym}({\mathbb{R}}^{d}).\) The following lemma will be used below. It follows from classical rules in differential calculus. Its verification is left to the reader.
**Lemma 3.13**: _Let \(k\geq 1,\) and \(U,V\) open subsets of \({\mathbb{R}}^{d}.\)_
**(i)**: _Let_ \(g\in C^{k}(U).\) _There exists a continuous function_ \(C_{g}:U\to{\mathbb{R}}_{+}\) _(depending on_ \(g\) _and_ \(k\)_) such that for all_ \(f\in C^{k}(U)\) _and_ \(x\in U,\)__
\[\|D^{k}(gf)(x)-g(x)D^{k}f(x)\|\leq C_{g}(x)\left(|f(x)|+\sum_{i=1}^{k-1}\|D^{ i}f(x)\|\right).\]
**(ii)**: _Let_ \(\Psi:U\to V\) _be a_ \(C^{k}\) _map. There exists a continuous function_ \(C^{\prime}_{\Psi}:U\to{\mathbb{R}}_{+}\) _(depending on_ \(\Psi\) _and_ \(k\)_) such that for all_ \(f\in C^{k}(V)\) _and_
\(x\in U,\)
\[\|D^{k}(f\circ\Psi)(x)-D\Psi(x)^{*}D^{k}f(\Psi(x))\|\] \[\leq C^{\prime}_{\Psi}(x)\left(|f(\Psi(x))|+\sum_{i=1}^{k-1}\|D^{i} f(\Psi(x))\|\right).\]
We now define a norm on \(C^{k}(M).\) Let \(W\) be the open ball in \(\mathbb{R}^{d}\) centered at the origin with radius \(2\) and let \(V\) be the open ball centered at the origin with radius \(1.\)
By the compactness of \(M\) there exists an atlas \(\{\alpha,\mathcal{O}_{\alpha}\}_{\alpha\in\aleph}\) with \(\aleph\) finite such that:
**(i)**: \(\alpha\) maps \(\mathcal{O}_{\alpha}\) diffeomorphically onto an open set in \(\mathbb{R}^{d}\) containing \(\overline{W};\)
**(ii)**: the open sets \(\mathcal{O}^{\prime}_{\alpha}=\alpha^{-1}(V),\alpha\in\aleph\), cover \(M.\)
If \(\rho\in C^{k}(M)\) and \(1\leq j\leq k\), we set
\[|\rho|_{j}=\sup_{\alpha\in\aleph,x\in\overline{V}}\|D^{j}(\rho\circ\alpha^{-1} )(x)\|\]
and
\[\|\rho\|_{k}=\|\rho\|_{0}+\sum_{j=1}^{k}|\rho|_{j}.\]
It is not hard to verify that \(\|.\|_{k}\) is a norm on \(C^{k}(M)\) inducing the \(C^{k}\) topology.
**Lemma 3.14**: _Let \(k\geq 1\) and let \(L:C^{k}(M)\to C^{k}(M)\) be a bounded operator. Suppose that there exist sequences \((a_{n})_{n\geq 0},(b_{n})_{n\geq 0},a_{n}\geq 0,b_{n}\geq 0\) such that for all \(n\geq 0\) and \(\rho\in C^{k}(M)\),_
\[|L^{n}\rho|_{k}\leq a_{n}|\rho|_{k}+b_{n}\|\rho\|_{k-1}.\]
_Then_
\[\mathcal{R}(L,C^{k}(M))\leq\max\left(\mathcal{R}(L,C^{k-1}(M)),\limsup_{n\to \infty}a_{n}^{1/n}\right).\]
**Proof:** For all \(\delta>0\), we set
\[\|\rho\|_{k,\delta}=\|\rho\|_{k-1}+\delta|\rho|_{k}.\]
Note that \(\|\rho\|_{k,\delta}\) and \(\|\rho\|_{k}\) are equivalent norms. In particular we have
\[\mathcal{R}(L,C^{k}(M))=\lim_{n\to\infty}\|L^{n}\|_{k,\delta}^{1/n}\leq\|L\|_{k,\delta}\]
for all \(\delta>0.\)
We now fix \(A>\limsup_{n\to\infty}a_{n}^{1/n}\) and \(R>\mathcal{R}(L,C^{k-1}(M)).\) Then, for some \(n\geq 0\) sufficiently large and all \(\delta>0,\)
\[\|L^{n}\rho\|_{k,\delta} \leq\|L^{n}\rho\|_{k-1}+\delta[a_{n}|\rho|_{k}+b_{n}\|\rho\|_{k-1}]\] \[\leq R^{n}\|\rho\|_{k-1}+\delta[A^{n}|\rho|_{k}+b_{n}\|\rho\|_{k-1}]\] \[\leq\max\left(R^{n}+\delta b_{n},A^{n}\right)\|\rho\|_{k,\delta}.\]
Thus
\[\mathcal{R}(L^{n},C^{k}(M))\leq\|L^{n}\|_{k,\delta}\leq\max\left(R^{n}+\delta b _{n},A^{n}\right).\]
Since \(\delta>0\) is arbitrary, this shows that
\[\mathcal{R}(L^{n},C^{k}(M))\leq\max\left(R^{n},A^{n}\right).\]
Thus,
\[\mathcal{R}(L,C^{k}(M))=\mathcal{R}(L^{n},C^{k}(M))^{1/n}\leq\max(A,R).\]
This concludes the proof. \(\Box\)
**Lemma 3.15**:
1. \(\mathcal{R}(\mathcal{L}_{\phi},C^{0}(M))\leq\mathsf{deg}(\phi)e^{-\mathcal{EV }_{0}(\phi)};\)__
2. _For all_ \(1\leq k\leq r-1,\mathcal{L}_{\phi}\) _satisfies the assumptions of Lemma_ 3.14 _with_ \[\limsup_{n\to\infty}a_{n}^{1/n}\leq\mathsf{deg}(\phi)e^{-\mathcal{EV}_{k}(\phi)}\]
**Proof:** Throughout this proof we set
\[j_{\phi}(x)=\frac{1}{J(\phi,x)}.\]
\((i):\) By the definition of \(\mathcal{L}_{\phi},\)
\[\|\mathcal{L}_{\phi}(\rho)\|_{0}\leq\mathsf{deg}(\phi)\sup_{x\in M}j_{\phi}(x )\|\rho\|_{0}\]
for all \(\rho\in C^{0}(M).\) Thus, replacing \(\phi\) by \(\phi^{n},\) we obtain that
\[\|\mathcal{L}_{\phi}^{n}(\rho)\|_{0}\leq\mathsf{deg}(\phi)^{n}\sup_{x\in M}j_{ \phi^{n}}(x)\|\rho\|_{0},\]
whence the result follows from the definition of \(\mathcal{EV}_{0}(\phi).\)
\((ii):\) To shorten notation we firstly consider the case where \(\mathsf{deg}(\phi)=1,\) so that \(\phi\) is a diffeomorphism with inverse \(\psi.\) Then \(\mathcal{L}_{\phi}(\rho)=(\rho\circ\psi)(j_{\phi}\circ\psi).\) Our first goal is to bound
\[|\mathcal{L}_{\phi}(\rho)|_{k}=\sup_{x\in\overline{V},\alpha\in\aleph}\|D^{k}( \mathcal{L}_{\phi}(\rho)\circ\alpha^{-1})(x)\|.\]
We let \(\alpha\in\aleph\) and \(\overline{x}\in\overline{V},\) and choose \(\beta\in\aleph\) such that \(\psi(\alpha^{-1}(\overline{x}))\in\mathcal{O}_{\beta}^{\prime}\) (recall that the sets \(\mathcal{O}_{\beta}^{\prime},\beta\in\aleph,\) cover \(M\)).
Set \(U=\alpha(\psi^{-1}(\mathcal{O}_{\beta}^{\prime})\cap\mathcal{O}_{\alpha}),f= \rho\circ\beta^{-1}:V\to\mathbb{R},g=j_{\phi}\circ\psi\circ\alpha^{-1}:U\to \mathbb{R}\) and \(\Psi=\beta\circ\psi\circ\alpha^{-1}:U\to V.\) Then on \(U\) we have
\[\mathcal{L}_{\phi}(\rho)\circ\alpha^{-1}=(f\circ\Psi)g.\]
Hence, relying on Lemma 3.13, one can find a smaller neighbourhood \(U_{\overline{x}}\subset U\) of \(\overline{x}\) and a constant \(C(\phi,\overline{x})\) (depending on \(\phi\) and \(\overline{x}\)) such that for all \(x\in U_{\overline{x}},\)
\[\|D^{k}(\mathcal{L}_{\phi}(\rho)\circ\alpha^{-1})(x)-g(x)D\Psi(x)^{*}D^{k}f(\Psi(x))\|\] \[\leq C(\phi,\overline{x})\left(|f(\Psi(x))|+\sum_{i=1}^{k-1}\|D^{i}f(\Psi(x))\|\right)\] \[\leq C(\phi,\overline{x})\|\rho\|_{k-1}.\]
We take constants \(0<c,c^{\prime}<\infty\) (depending only upon the atlas \(\{\alpha,\mathcal{O}_{\alpha}\}\)) such that for all \(\alpha\in\aleph,x\in\alpha^{-1}(\overline{W})\) and \(u\in T_{x}M\) we have
\[c^{\prime}\|u\|_{x}\leq\|D\alpha(x)u\|\leq c\|u\|_{x}.\]
Thus, defining \(c^{\prime\prime}=c/c^{\prime},\) for all \(x\in U_{\overline{x}}\) we have that
\[\|g(x)D\Psi(x)^{*}D^{k}f(\Psi(x))\|\leq g(x)\|D^{k}f(\Psi(x))\|\|D\Psi(x)\|^{k}\] \[\leq(c^{\prime\prime})^{k}g(x)\|D^{k}f(\Psi(x))\|\|D\psi(\alpha^{-1}(x))\|_{\alpha^{-1}(x)}^{k}\] \[=(c^{\prime\prime})^{k}j_{\phi}(\psi\circ\alpha^{-1}(x))\|D^{k}f(\Psi(x))\|EC(\phi,\psi\circ\alpha^{-1}(x))^{-k}\] \[\leq(c^{\prime\prime})^{k}\|D^{k}f(\Psi(x))\|\sup_{y\in M}j_{\phi}(y)EC(\phi,y)^{-k}.\]
Finally, since \(\overline{V}\) can be covered by finitely many neighbourhoods of the form \(U_{\overline{x}}\), we obtain that
\[|{\cal L}_{\phi}(\rho)|_{k}\leq(c^{\prime\prime})^{k}|\rho|_{k}\sup_{y\in M}\left[j_{\phi}(y)EC(\phi,y)^{-k}\right]+c_{\phi}\|\rho\|_{k-1},\]
where \(c^{\prime\prime}\) depends only upon the atlas \(\{\alpha,{\cal O}_{\alpha}\}\) and \(c_{\phi}\) depends on \(\phi.\) Replacing \(\phi\) by \(\phi^{n}\) gives
\[|{\cal L}_{\phi^{n}}(\rho)|_{k}\leq(c^{\prime\prime})^{k}|\rho|_{k}\sup_{y\in M}\left[j_{\phi^{n}}(y)EC(\phi^{n},y)^{-k}\right]+c_{\phi^{n}}\|\rho\|_{k-1}.\]
This proves the desired result.
The proof for \({\sf deg}(\phi)>1\) is similar, with the inverse of \(\phi\) replaced by the \({\sf deg}(\phi)\) local inverses. \(\Box\)
The proof of the right hand side inequality of Proposition 3.10 \((ii)\) now easily follows from Lemmas 3.14 and 3.15.
### Application to random maps
We recall that \(P=P^{\nu}\), as defined in the beginning of the present section.
**Theorem 3.16**: _We assume Assumption 3.1 and that \(\nu_{1}=\delta_{\phi}\) for some \(\phi\in{\sf Diff}^{r}_{\sf loc}(M).\)_
**(i)**: _If_ \({\cal E}(\phi)>0,\) _then_ \({\sf Inv}(P)\subset{\cal M}^{r-1}_{ac}(M)\) _for all_ \(a<1.\)__
**(ii)**: _If_ \({\cal E}(\phi)\leq 0,\) _then_ \({\sf Inv}(P)\subset{\cal M}^{r-1}_{ac}(M)\) _for all_ \(a<\min_{k=0,\ldots,r-1}\frac{e^{{\cal E}{\cal V}_{k}(\phi)}}{{\sf deg}(\phi)}.\)__
**Proof:** Theorem 3.16 follows from Theorem 3.6 and Proposition 3.10 \(\Box\)
As an illustration of this last result, consider two examples where \(\phi\) is a diffeomorphism, so that \(\min_{k=0,\ldots,r-1}\frac{e^{{\cal E}{\cal V}_{k}(\phi)}}{{\sf deg}(\phi)}=e ^{{\cal E}{\cal V}_{r-1}(\phi)},\) and where \({\cal E}{\cal V}_{r-1}(\phi)\) can be easily expressed.
**Example 3.17**: Suppose that \(\phi\) is a \(C^{r}\) diffeomorphism on \(M\) such that for all \(x\in M,\)
\[\omega_{\phi}(x)\subset{\sf Fix}(\phi):=\{p\in M\;:\phi(p)=p\}.\]
One can, for instance, imagine that \(\phi=\Phi^{1}\) is the time one map of a flow \(\{\Phi^{t}\}\) induced by a \(C^{r},r\geq 1,\) gradient vector field \(F=-\nabla V\) on \(M\) (or more generally a vector field having a strict Lyapunov function).
Here \(BC(\phi)=\mathsf{Fix}(\phi)\), so that by Remark 3.8,
\[\mathcal{EV}_{r-1}(\phi)=\mathcal{EV}_{r-1}(\phi,\mathsf{Fix}(\phi))=\inf_{p\in \mathsf{Fix}(\phi)}\log(J(\phi,p))+(r-1)\Lambda_{1}(p)\]
and
\[\mathcal{E}(\phi)=\mathcal{E}(\phi,\mathsf{Fix}(\phi))=\inf_{p\in\mathsf{Fix} (F)}\Lambda_{1}(p).\]
Here
\[J(\phi,p)=|\mathsf{det}D\phi(p)|\]
and
\[\Lambda_{1}(p)=\min\{\log(|z|)\ :z\mbox{ is an eigenvalue of }D\phi(p)\}.\]
Note that, in case \(\phi\) is the time one map of the flow induced by \(F=-\nabla V,\) we have \(\mathsf{Fix}(\phi)=\mathsf{Eq}(F)=F^{-1}(0),\) \(\log J(\phi,p)=\mathsf{div}_{p}(F)=-\Delta V(p),\) and \(\Lambda_{1}(p)\) is the smallest eigenvalue of the Hessian of \(-V\) at \(p.\)
**Example 3.18**: We suppose here that \(M=S^{2}\) and that \(\phi=\Phi^{1}\) where \(\{\Phi^{t}\}\) is induced by a \(C^{r}\) vector field \(F.\) We no longer assume that \(F\) is gradient-like but will assume that \(\mathsf{Eq}(F)\) is finite.
If \(p\in\mathsf{Eq}(F)\) we let
\[\Lambda_{1}(p)\leq\Lambda_{2}(p)\]
denote the real parts of the eigenvalues of \(DF(p).\) Note that
\[\mathsf{div}_{p}(F)=\Lambda_{1}(p)+\Lambda_{2}(p).\]
Given \(T>0,\) a \(T\)-_periodic orbit_ is an orbit \(\gamma=\{\Phi^{t}(p),t\in\mathbb{R}\}\) such that \(\Phi^{T}(p)=p\) and \(\Phi^{t}(p)\neq p\) for all \(0<t<T\). We let \(\mathsf{Per}_{T}(F)\) denote the set of such orbits and \(\mathsf{Per}(F)=\cup_{T>0}\mathsf{Per}_{T}(F).\)
If \(\gamma\in\mathsf{Per}_{T}(F)\) and \(p\in\gamma,\) \(D\Phi^{T}(p)\) has two (possibly equal) eigenvalues (that depend only on \(\gamma\)): 1 (corresponding to the eigenvector \(F(p)\)) and \(J(\Phi^{T},p).\) We let
\[\{\Lambda_{1}(\gamma),\Lambda_{2}(\gamma)\}=\left\{0,\frac{\log(J(\Phi^{T},p))}{T}\right\}\]
denote the logarithms of these eigenvalues, with the convention that \(\Lambda_{1}(\gamma)\leq\Lambda_{2}(\gamma).\) A periodic orbit, \(\gamma,\) is said to be _linearly stable_ if \(\Lambda_{1}(\gamma)<0.\) We let \(\mathsf{Per}_{-}(F)\) denote the set of linearly stable periodic orbits. Note that, although \(\mathsf{Per}(F)\) may be uncountable, \(\mathsf{Per}_{-}(F)\) is finite.
In the following lemma, Lemma 3.19, we implicitly identify an equilibrium point, \(p,\) with the orbit \(\{p\}=\{\Phi^{t}(p):t\in\mathbb{R}\}.\) Again, combined with Theorem 3.16, this gives simple conditions on \(a\) ensuring the smoothness of invariant distributions.
**Lemma 3.19**: _Suppose that \(F\) has finitely many equilibria. Let \(\mu\) be an ergodic probability measure for \(\phi.\) Then \(\int\log(J(\phi,x))\mu(dx)=\Lambda_{1}(\gamma)+\Lambda_{2}(\gamma)\) and \(\Lambda_{1}(\mu)=\Lambda_{1}(\gamma)\) for some equilibrium or periodic orbit \(\gamma.\) In particular,_
\[\mathcal{EV}_{r-1}(\phi)=\min_{\gamma\in\mathtt{Eq}(F)\cup\mathtt{Per}_{-}(F) }r\Lambda_{1}(\gamma)+\Lambda_{2}(\gamma)\]
_and_
\[\mathcal{E}(\phi)=\min_{\gamma\in\mathtt{Eq}(F)\cup\mathtt{Per}_{-}(F)} \Lambda_{1}(\gamma).\]
Proof.: By the Poincaré recurrence theorem and the Birkhoff ergodic theorem, there exists a set \(\Omega\subset M,\) with \(\mu(\Omega)=1,\) such that \(x\in\omega_{\phi}(x)\) (Poincaré) and \(\frac{1}{n}\sum_{k=0}^{n-1}\delta_{\phi^{k}(x)}\Rightarrow\mu\) (Birkhoff) for all \(x\in\Omega.\) Here \(\Rightarrow\) stands for weak* convergence.
We take \(p\in\Omega.\) We claim that \(p\) is either a periodic point (i.e. lies in a periodic orbit) or an equilibrium point for \(\{\Phi^{t}\}.\) Clearly \(\omega_{\phi}(p)\subset\omega_{\{\Phi^{t}\}}(p),\) the omega limit set of \(p\) for \(\{\Phi^{t}\}.\) Such a set is _internally chain recurrent_ for \(\{\Phi^{t}\}.\) Therefore, by a result proved in [9], Theorem 1.1, every point in \(\omega_{\{\Phi^{t}\}}(p)\) is either periodic or belongs to an _orbit cycle_. An orbit cycle is a finite sequence \(\Gamma=\gamma_{1},\ldots,\gamma_{m}\) of orbits such that the alpha limit set of \(\gamma_{i}\) (for \(\{\Phi^{t}\}\)) is an equilibrium \(e_{i-1}\) and its omega limit set is an equilibrium \(e_{i},\) with \(e_{0}=e_{m}.\) Therefore, because \(p\in\omega_{\{\Phi^{t}\}}(p),\) \(p\) is either a periodic or an equilibrium point. This proves the claim.
If \(p\) is an equilibrium, then \(\mu=\delta_{p},\)\(\int\log(J(\phi,y))\mu(dy)=\mathsf{div}_{p}(F)=\Lambda_{1}(p)+\Lambda_{2}(p),\) and \(\Lambda_{1}(\mu)=\Lambda_{1}(p).\) If \(p\) is \(T\)-periodic for \(\{\Phi^{t}\}\) and \(T=N/K\) is rational, then \(\mu=\frac{1}{N}\sum_{i=0}^{N-1}\delta_{\phi^{i}(p)}\) with \(\phi^{N}(p)=\phi^{TK}(p)=p.\) Thus
\[\int\log(J(\phi,y))\mu(dy)=\frac{1}{TK}\log(J(\Phi^{T},p)^{K})\] \[=\frac{1}{T}\log(J(\Phi^{T},p))=\Lambda_{1}(\gamma)+\Lambda_{2}( \gamma).\]
If \(T\) is irrational, then \(\mu=\frac{1}{T}\int_{0}^{T}\delta_{\Phi^{s}(p)}ds\) and again we have that
\[\int\log(J(\phi,y))\mu(dy)=\frac{1}{T}\int_{0}^{T}\log(J(\phi,\Phi^{s}(p)))ds\] \[=\frac{1}{T}\int_{0}^{T}\int_{0}^{1}\mathsf{Tr}(DF(\Phi^{s+u}(p)))\,du\,ds=\frac{1}{T}\int_{0}^{1}\int_{0}^{T}\mathsf{Tr}(DF(\Phi^{s+u}(p)))\,ds\,du\] \[=\frac{1}{T}\int_{0}^{T}\mathsf{Tr}(DF(\Phi^{u}(p)))\,du=\Lambda_{1}(\gamma)+\Lambda_{2}(\gamma).\]
\(\Box\)
## 4 Piecewise deterministic Markov processes
We let \(E\) be a finite set and \(\{F_{i}\}_{i\in E}\) be a family of \(C^{r}\) (\(r\geq 1\)) vector fields on \(M\) where \(M\) is, as before, a \(d\)-dimensional compact connected Riemannian manifold.
We set \(\mathbf{M}=M\times E.\) Then \(\mathbf{M}\) can be viewed as a \(d\)-dimensional compact manifold with \(\mathsf{card}(E)\) components. A map \(g:\mathbf{M}\mapsto\mathbb{R}\) is \(C^{k}\) if \(x\to g(x,i)=g_{i}(x)\) is \(C^{k}\) for all \(i\in E.\) A map \(g:\mathbf{M}\mapsto\mathbb{R}\cup\{\infty\}\) is lower semi-continuous if \(g_{i}\) is lower semi-continuous for all \(i\in E.\) The Riemannian measure on \(\mathbf{M}\) is given by \(\mathbf{m}=m\otimes\sum_{i\in E}\delta_{i},\) whereby \(m\) is the Riemannian measure on \(M\). The sets \(\mathcal{M}_{ac}(\mathbf{M}),\mathcal{M}_{ac}^{ls}(\mathbf{M})\) and \(\mathcal{M}_{ac}^{r}(\mathbf{M})\) are defined accordingly.
We let \((Z_{t}=(X_{t},I_{t}))_{t\geq 0}\) be a continuous time Feller Markov process living on \(\mathbf{M}\) whose infinitesimal generator \(\mathcal{A}\) acts on functions \(g\in C^{1}(\mathbf{M})\) according to the formula
\[\mathcal{A}g(x,i)=\langle F_{i}(x),\nabla g_{i}(x)\rangle_{x}+\sum_{j\in E} \alpha_{ij}(x)(g_{j}(x)-g_{i}(x)),\]
whereby:
**(i)**: \(\alpha_{ij}(x)\geq 0\) and (for convenience) \(\alpha_{ii}(x)=0\) for all \(i,j\in E;\)
**(ii)**: The matrix \((\alpha_{ij}(x))_{i,j\in E}\) is irreducible and \(C^{r-1}\) in \(x.\)
For further reference, we sometimes call the data \(\{\{F_{i}\}_{i\in E},(\alpha_{ij}(x))_{i,j\in E}\}\) the _characteristics_ of \((Z_{t})_{t\geq 0}.\)
An alternative pathwise description of the process is as follows. The component \((X_{t})_{t\geq 0}\) is a solution to the differential equation
\[\frac{dX_{t}}{dt}=F_{I_{t}}(X_{t}),\]
while \((I_{t})_{t\geq 0}\) is a jump process whose jump rates depend on \((X_{t}),\)
\[\mathsf{P}(I_{t+s}=j|\sigma(Z_{u},u\leq t),I_{t}=i)=\alpha_{ij}(X_{t})s+o(s).\]
In words, starting from \((x,i)\), \(X_{t}\) follows the ODE induced by \(F_{i}\) and switches to the ODE induced by \(F_{j}\) at rate \(\alpha_{ij}(X_{t}).\) Then \(X_{t}\) follows the ODE induced by \(F_{j}\) until it switches to the ODE induced by \(F_{k}\) at rate \(\alpha_{jk}(X_{t}),\) and so on.
This type of process falls under the broader category of _piecewise deterministic Markov processes_, introduced by Davis [15]. Their ergodic properties have been the focus of much attention in the last decade ([2], [7], [8], [1], [11], [3]).
### A discrete kernel associated to \((Z_{t})_{t\geq 0}\)
In order to use the results of the preceding sections, we firstly introduce a (discrete time) Markov kernel \(P\) whose invariant distributions are linked to the invariant distributions of \((Z_{t})_{t\geq 0}.\)
We let \(\{\Phi_{i}^{t}\}_{t\in\mathbb{R}}\) denote the flow induced by \(F_{i}.\) We fix \(\alpha>0\) sufficiently large so that
\[\sup_{x\in M}\sum_{j\in E}\alpha_{ij}(x)<\alpha. \tag{9}\]
Set \(A_{ij}(x)=\frac{\alpha_{ij}(x)}{\alpha}\) for \(i\neq j\) and \(A_{ii}(x)=1-\sum_{j\neq i}A_{ij}(x).\) Let \(A,K\) and \(P\) be the Markov operators on \(\mathbf{M}\) respectively defined by
\[Ag(x,i)=\sum_{j}A_{ij}(x)g(x,j), \tag{10}\]
\[Kg(x,i)=\int_{0}^{\infty}\alpha e^{-\alpha t}g(\Phi_{i}^{t}(x),i)dt \tag{11}\]
and
\[P=KA \tag{12}\]
**Remark 4.1**: The kernel \(P\) is the kernel of a discrete time chain \((X_{n},I_{n})_{n\geq 0}\) living on \(\mathbf{M}\) whose dynamics can be described as follows. Starting from \((x,i)\in\mathbf{M},\) we pick a random variable \(T\) having an exponential distribution with parameter \(\alpha,\) and set \(X_{1}=\Phi_{i}^{T}(x).\) We then choose \(I_{1}=j\) with probability \(A_{ij}(X_{1}).\)
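A minimal simulation sketch of this chain (the two flows on \(S^{1}\) and the constant switching matrix below are illustrative choices of ours, not part of the general setup):

```python
import numpy as np

# Chain of Remark 4.1 on M = S^1 x {0,1}: X_1 = Phi_i^T(x) with T ~ Exp(alpha),
# then I_1 = j with probability A_ij(X_1). Here Phi_i^t(x) = x + v_i t (mod 1)
# and A is x-independent; both are toy choices.
rng = np.random.default_rng(1)
v = np.array([0.3, -0.7])                # speeds of the two constant flows
A = np.array([[0.8, 0.2],
              [0.5, 0.5]])               # switching matrix A_ij
alpha = 2.0                              # rate alpha as in (9)

def step(x, i):
    T = rng.exponential(1.0 / alpha)     # T ~ Exp(alpha)
    x1 = (x + v[i] * T) % 1.0            # X_1 = Phi_i^T(x)
    i1 = rng.choice(2, p=A[i])           # I_1 = j with probability A_ij(X_1)
    return x1, i1

x, i = 0.0, 0
samples = np.empty((100_000, 2))
for n in range(samples.shape[0]):
    x, i = step(x, i)
    samples[n] = (x, i)
# the empirical law of `samples` approximates an invariant distribution of P
```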
Invariant distributions of the Markov kernel \(P\) and invariant distributions of the Markov process \((Z_{t})_{t\geq 0}\) are linked by the following result proved in [7, Proposition 2.4 and Lemma 2.6].
**Proposition 4.2**: _We let \((Z_{t})_{t\geq 0}\) be the piecewise-deterministic Markov process having characteristics \(\{\{F_{i}\}_{i\in E},(\alpha_{ij}(x))_{i,j\in E}\}.\) The mapping \(\mu\to\mu K\) maps homeomorphically \(\mathsf{Inv}(P)\) (respectively \(\mathsf{Inv}_{erg}(P),\) the set of ergodic probability measures of \(P\)) onto the set of invariant (respectively ergodic) probability measures for \((Z_{t})_{t\geq 0}.\) Its inverse homeomorphism is given by \(\mu\mapsto\mu A.\)_
_Moreover we have that \(\mathsf{supp}(\mu)=\mathsf{supp}(\mu K)\) for all \(\mu\in\mathsf{Inv}(P).\)_
By Liouville's formula, the transfer operator of \(\Phi_{i}^{t}\) (see Section 3) is given by
\[\mathcal{L}_{\Phi_{i}^{t}}(\rho)(x)=\rho(\Phi_{i}^{-t}(x))\exp\left[-\int_{0}^ {t}\mathsf{div}(F_{i})(\Phi_{i}^{-s}(x))ds\right] \tag{13}\]
for \(\rho\in L^{1}(m),\) where \(\mathsf{div}(F_{i})\) denotes the divergence of \(F_{i}\) on \(M.\) We also set
\[\mathcal{L}_{i}(\rho)(x)=\int_{0}^{\infty}\alpha e^{-\alpha t}\mathcal{L}_{ \Phi_{i}^{t}}(\rho)(x)dt \tag{14}\]
for \(\rho\in L^{1}(m)\). Observe that, using the notation of Proposition 3.4, \(\mathcal{L}_{i}:=\mathcal{L}_{\nu_{1}}\) where \(\nu_{1}\) is the measure on \(\mathsf{diff}_{loc}^{r}(M)\) given by \(\nu_{1}=\int_{0}^{\infty}\alpha e^{-\alpha t}\delta_{\Phi_{i}^{t}}dt.\)
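As a one-dimensional illustration of (13) (taken on \(\mathbb{R}\) rather than on a compact \(M,\) purely for concreteness), let \(F(x)=-\lambda x\) with \(\lambda>0,\) so that \(\Phi^{t}(x)=e^{-\lambda t}x,\) \(\Phi^{-t}(x)=e^{\lambda t}x\) and \(\mathsf{div}(F)\equiv-\lambda.\) Then (13) gives

\[\mathcal{L}_{\Phi^{t}}(\rho)(x)=\rho(e^{\lambda t}x)\exp\Big{[}\int_{0}^{t}\lambda\,ds\Big{]}=e^{\lambda t}\rho(e^{\lambda t}x),\]

which is indeed the density of the push-forward of \(\rho\,dm\) by \(\Phi^{t}\): the substitution \(y=e^{\lambda t}x\) shows that total mass is preserved, the Jacobian factor \(e^{\lambda t}\) accounting for the contraction of the flow.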
Associated to \(K\) is the transfer operator defined on \(L^{1}(\mathbf{m})\) by
\[\mathcal{K}\rho(x,i)=\mathcal{L}_{i}\rho_{i}(x).\]
The purpose of the next lemma is twofold. Firstly, it will be used to show that \(P\) satisfies Assumption 2.2, \((iii),\) with \(\mathcal{C}(\mathbf{M})\) one of the sets \(\mathcal{M}_{ac}(\mathbf{M}),\mathcal{M}_{ac}^{ls}(\mathbf{M})\) or \(\mathcal{M}_{ac}^{r}(\mathbf{M}).\) Secondly, it shows that the mapping \(\mu\to\mu K\) in Proposition 4.2 preserves these sets.
**Lemma 4.3**: _Suppose that \(\mu\in\mathcal{M}_{ac}(\mathbf{M})\) has density \(\rho\) with respect to \(\mathbf{m}.\) Then we have the following:_
**(i)**: \(\mu A\) _has density_ \(A^{t}\rho\) _given by_
\[A^{t}\rho(x,i)=\sum_{j}\rho_{j}(x)A_{ji}(x).\]
_If_ \(\rho\) _is lower semi-continuous or_ \(C^{k}\) _with_ \(0\leq k\leq r-1,\) _then so is_ \(A^{t}\rho.\)__
**(ii)**: \(\mu K\) _has a density given by_ \(\mathcal{K}\rho.\) _If_ \(\rho\) _is lower semi-continuous, then so is_ \(\mathcal{K}\rho.\)__
**(iii)**: _If we furthermore assume that_
\[\alpha>\max_{i\in E}\log\left(\mathcal{R}(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M)) \right), \tag{15}\]
_then_ \(\mathcal{K}\) _is a bounded operator on_ \(C^{r-1}(\mathbf{M})\) _and_
\[\mathcal{R}(\mathcal{K},C^{r-1}(\mathbf{M}))\leq\frac{\alpha}{\alpha-\max_{i \in E}\log\left(\mathcal{R}(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M))\right)}.\]
**Proof:**\((i)\) is immediate to verify and \((ii)\) easily follows from Proposition 3.4.
We now turn to \((iii)\). By classical results (see [17, Chapter V, Corollary 4.1] for example), \((t,x)\mapsto\Phi_{i}^{t}(x)\) is \(C^{r}.\) The form of \(\mathcal{L}_{\Phi_{i}^{t}}\) (see equation (13)) and the fact that \(\mathsf{div}(F_{i})\) is \(C^{r-1}\) imply that
\[\sup_{0\leq t\leq 1}\|\mathcal{L}_{\Phi_{i}^{t}}\|_{C^{r-1}(M)}\leq C\]
for some constant \(C<\infty\) (depending on \(r\)). For \(t\geq 0\), we write \(t=n+s\) for \(n\in\mathbb{N}\) and \(0\leq s\leq 1.\) Thus
\[\mathcal{L}_{\Phi_{i}^{t}}=\mathcal{L}_{\Phi_{i}^{1}}^{n}\circ\mathcal{L}_{ \Phi_{i}^{s}}.\]
Therefore for all \(\varepsilon>0\) there exists another constant \(C^{\prime}<\infty\) such that for all \(t\geq 0\) we have
\[\|\mathcal{L}_{\Phi_{i}^{t}}\|_{C^{r-1}(M)}\leq C^{\prime}e^{n(\log(R_{i})+ \varepsilon)}\leq C^{\prime}e^{t(\log(R_{i})+\varepsilon)}, \tag{16}\]
where \(R_{i}\) stands for \(\mathcal{R}(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M)).\) Proposition 3.4 then implies that \(\mathcal{L}_{i}\) is a bounded operator on \(C^{r-1}(M).\) We likewise have that \(\mathcal{K}\) is a bounded operator on \(C^{r-1}(\mathbf{M}).\)
We now establish the upper bound on the spectral radius. Note that for all \(n\in\mathbb{N}\) we have
\[\mathcal{K}^{n}\rho(x,i)=\mathbb{E}(\mathcal{L}_{\Phi_{i}^{S_{n}}}(\rho_{i})( x)),\]
where \(S_{n}=T_{1}+\ldots+T_{n}\) and \(\{T_{i}\}_{i\geq 1}\) is a sequence of independent random variables having an exponential distribution with parameter \(\alpha.\) Thus
\[\|\mathcal{K}^{n}\rho\|_{C^{r-1}(\mathbf{M})}\leq\max_{i\in E}C^{\prime}\mathbb{E}[e^{S_{n}(\log(R_{i})+\varepsilon)}]\|\rho\|_{C^{r-1}(\mathbf{M})}=\max_{i\in E}C^{\prime}\big{(}\mathbb{E}[e^{T_{1}(\log(R_{i})+\varepsilon)}]\big{)}^{n}\|\rho\|_{C^{r-1}(\mathbf{M})}.\]
Since \(\mathbb{E}[e^{T_{1}\beta}]=\alpha/(\alpha-\beta)\) for \(\beta<\alpha,\) taking \(n\)th roots, letting \(n\to\infty\) and then \(\varepsilon\to 0\) proves the result. \(\Box\)
### Invariant distributions
Let \(C_{pc}(\mathbb{R}_{+},E)\) be the set of piecewise continuous functions \(J:\mathbb{R}_{+}\to E\). Given \(J\in C_{pc}(\mathbb{R}_{+},E)\), we let \(t\to\Phi^{t}(x,J)\) denote the solution to the non autonomous differential equation
\[\frac{dx}{dt}=F_{J(t)}(x), \tag{17}\]
with initial condition \(x(0)=x.\) For all \(x\in M\), we define
\[\gamma^{+}(x)=\{\Phi^{t}(x,J)\,:t\geq 0\text{ and }J\in C_{pc}(\mathbb{R}_{+},E)\}.\]
We let \(\Gamma\) be the possibly empty, compact connected set defined by
\[\Gamma=\bigcap_{x\in M}\overline{\gamma^{+}(x)}.\]
Connectedness (as well as other topological properties of \(\Gamma\)) is proved in [7, Proposition 3.11] (see also the erratum [8]). By Proposition 3.13 in [7] we have
\[\Gamma_{P}=\Gamma\times E,\]
where \(\Gamma_{P}\) is _the accessible set_ (as defined in Section 2.1) of the kernel \(P\) given by (12).
We let \(r_{\max}\in\{1,2,\ldots\}\cup\{\infty\}\) be the maximal \(r\) such that all the \(F_{i}\)'s are \(C^{r}\). We define \(\mathbf{F}_{0}:=\{F_{i}\,:i\in E\}\) and inductively, for all \(n=1,\ldots,r_{\max}-1\), \(\mathbf{F}_{n}=\mathbf{F}_{n-1}\cup\{[F,G]:F\in\mathbf{F}_{0},G\in\mathbf{F}_{n-1}\},\) where \([F,G]\) is the Lie bracket of \(F\) and \(G.\)
We let \(n\leq r_{\max}-1.\) Inspired by the terminology used in [7] (see also [10, Chapter 6]), we say that a point \(p\in M\) satisfies the \(n\)-_weak bracket condition_ if \(\mathbf{F}_{n}(p):=\{G(p)\,:G\in\mathbf{F}_{n}\}\) spans \(T_{p}M.\) We say that \(p\) satisfies the _weak bracket condition_ if it satisfies the \(n\)-weak bracket condition, for some \(n\leq r_{\max}-1.\)
It was proved in [2] (for \(\alpha_{ij}(x)\) constant over \(x\)) and in [7] that for \(C^{\infty}\) vector fields (i.e. \(r_{\max}=\infty\)), the existence of a point \(p\in\Gamma\) at which the weak bracket condition holds implies that \((Z_{t})\) has a unique invariant distribution which is absolutely continuous with respect to \(\mathbf{m}.\) The next theorem also shows that its density is lower semi-continuous. A first version of this result, when \(\alpha_{ij}(x)\) is constant over \(x\), was proved in [12].
**Theorem 4.4**: _Assume there exists a point \(p\in\Gamma\) at which the weak bracket condition holds. Then \((Z_{t})\) has a unique invariant probability measure \(\Pi\) which is absolutely continuous with respect to \(\mathbf{m}\) and whose density \(\rho\) is lower semi-continuous. In addition, \(\mathsf{supp}(\Pi)=\Gamma\times E\) and for all \(i\in E,\)_
\[\mathsf{supp}(\rho_{i}):=\overline{\{x\in M:\,\rho_{i}(x)>0\}}=\Gamma.\]
**Proof:** We let \((p,i_{0})\in\Gamma\times E=\Gamma_{P}.\) By Theorems 4.1 and 4.4 in [7], \((p,i_{0})\) is a weak Doeblin point (as defined in Section 2.1) of \(P\) with a minorizing measure given by
\[\pi(dxdi)=c\mathbf{1}_{\mathcal{V}\times E}(x,i)\mathbf{m}(dxdi),\]
for some nonempty open set \(\mathcal{V}\subset M\) and \(c>0.\) This shows that \(\pi\in\mathcal{M}_{ac}^{ls}(\mathbf{M}).\) Therefore, by Theorem 2.11, \(P\) has a unique invariant distribution \(\mu\) having a lower semi-continuous density \(h.\) By Proposition 4.2 and Lemma 4.3, \(\Pi=\mu K\) is the unique invariant distribution of \((Z_{t})_{t\geq 0}\) and its density, \(\rho=\mathcal{K}h,\) is lower semi-continuous. Also \(\mu\) and \(\Pi\) have the same support.
Basic properties of the accessible set (see [10, Proposition 5.8 (iv)], for example) imply that \(\mathsf{supp}(\mu)\) (hence \(\mathsf{supp}(\Pi)\)) is equal to \(\Gamma_{P}.\) Clearly \(\mathsf{supp}(\Pi)\subset\mathsf{supp}(\rho).\) Conversely, if \(\rho_{i}(x)>\delta>0,\) by the lower semi-continuity of \(\rho,\) there exists a ball \(B(x,\varepsilon)\) such that \(\rho_{i}(y)>\delta\) for all \(y\in B(x,\varepsilon).\) Thus \(\Pi(B(x,\varepsilon)\times\{i\})>0.\) This proves the converse inclusion \(\mathsf{supp}(\rho)\subset\mathsf{supp}(\Pi).\)\(\Box\)
The next result considers the situation where \(\Gamma\) is empty but the weak bracket condition holds everywhere. It relies on the preceding result combined with ideas and results from [11].
**Theorem 4.5**: _We assume that the weak bracket condition holds at every point \(p\in M.\) Then \((Z_{t})_{t\geq 0}\) has finitely many ergodic probability measures \(\Pi^{1},\ldots,\Pi^{k}\). These are absolutely continuous with respect to \(\mathbf{m}\), with lower semi-continuous densities \(\rho^{1},\ldots,\rho^{k}\). For each \(j=1,\ldots,k\), the support of \(\Pi^{j}\) can be written \(\mathsf{supp}(\Pi^{j})=\Gamma^{j}\times E\), where \(\Gamma^{j}\) is a compact connected set. Furthermore, for all \(i\in E,\)_
\[\mathsf{supp}(\rho_{i}^{j}):=\overline{\{x\in M:\,\rho_{i}^{j}(x)>0\}}=\Gamma ^{j}.\]
**Proof:** The proof uses some results and ideas from control theory. For consistency with the terminology used in [7], we phrase it using differential inclusions. We let
\[\mathsf{co}(F)(x)=\{\sum_{i\in E}p_{i}F_{i}(x)\;:p_{i}\geq 0,\sum_{i\in E}p_{i}=1 \}\subset T_{x}M\]
be the convex hull of the family \(\{F_{i}(x)\}_{i\in E}.\) A solution to the differential inclusion
\[\dot{\eta}\in\mathsf{co}(F)(\eta) \tag{18}\]
is an absolutely continuous function \(\eta\in C^{0}(\mathbb{R},M)\) which satisfies \(\dot{\eta}(t)\in\mathsf{co}(F)(\eta(t))\) for almost all \(t\in\mathbb{R}.\) Such a differential inclusion induces a set-valued dynamical system defined as
\[\Psi_{t}(x)=\{\eta(t):\eta(0)=x\text{ and }\eta\text{ is a solution to (18)}\}.\]
We refer the reader to [7] for background and references. For \(I\subset\mathbb{R},\) we set \(\Psi_{I}(x)=\bigcup_{t\in I}\Psi_{t}(x).\) We call a set \(C\subset M\) a _compact invariant control set_ if \(C\) is nonempty, compact and \(C=\overline{\Psi_{[0,\infty)}(x)}\) for all \(x\in C.\) This is consistent with the terminology used in control theory (see, for instance, [11, Definition 2.4 and Theorem 2.2]). The set \(\Gamma\) previously defined is, when it exists, a compact invariant control set. This follows, for instance, from [7, Proposition 3.11]. Under the present assumption that the weak bracket condition holds at every point \(p\in M\), there are, by [11, Corollary 2.13], finitely many compact invariant control sets \(\Gamma^{1},\ldots,\Gamma^{k}.\) Furthermore we have the following:
**(i)**: for all \(j\in\{1,\ldots,k\}\)\(\overline{\mathsf{Int}(\Gamma^{j})}=\Gamma^{j};\)
**(ii)**: for each \(x\in M,\) there exists \(j\in\{1,\ldots,k\}\) such that \(\gamma^{+}(x)\cap\mathsf{Int}(\Gamma^{j})\neq\emptyset;\)
**(iii)**: for each \(j\in\{1,\ldots,k\}\) and \(x\in\Gamma^{j},\mathsf{Int}(\Gamma^{j})\subset\gamma^{+}(x).\)
It follows from \((i),\)\((iii)\) and the definition of a compact invariant control set that \(\Gamma^{j}=\bigcap_{x\in\Gamma^{j}}\overline{\gamma^{+}(x)}.\) The proof of Theorem 4.4 then applies verbatim to \(P\) restricted to \(\Gamma^{j}.\) This proves that \(P\) restricted to \(\Gamma^{j}\) has a unique, hence ergodic for \(P\), invariant distribution \(\Pi^{j}\) with density \(\rho^{j}\) enjoying the properties stated in the theorem.
To establish that the \(\Pi^{j}\)s are the only ergodic probability measures, it suffices to show that every \(\mu\in\mathsf{Inv}(P)\) is supported by \(\bigcup_{j=1}^{k}\Gamma^{j}.\) It easily follows from \((ii)\) that \(W=\bigcup_{j=1}^{k}\mathsf{Int}(\Gamma^{j})\) is accessible for \(P\), that is \(R_{a}(x,W)>0\) for all \(x\in M\) (this can, for instance, be deduced from the support theorem, [7, Theorem 3.4], and the erratum [8]). By the Feller continuity of \(R_{a}\) (inherited from the Feller continuity of \(P\)), the Portmanteau theorem and the compactness of \(M,\) we have that \(R_{a}(x,W)\geq\delta>0\) for all \(x\in M\), for some \(\delta>0.\) Since \(R_{a}(y,M\setminus\overline{W})=0\) for all \(y\in\overline{W},\) one obtains (compare [11, Theorem 4.7]) that
\[\mu(M\setminus\overline{W})=\mu R_{a}^{2}(M\setminus\overline{W})=\int_{M\setminus\overline{W}}(\mu R_{a})(dy)\,R_{a}(y,M\setminus\overline{W})\leq(1-\delta)\mu R_{a}(M\setminus\overline{W})=(1-\delta)\mu(M\setminus\overline{W}).\]
We therefore obtain that \(\mu(M\setminus\overline{W})=0.\)\(\Box\)
### Smooth invariant distributions on the torus
This section is motivated by the work of Bakhtin, Hurth, Lawley and Mattingly [3]. It retrieves and substantially extends their main result (see Remark 4.8).
Here we assume that \(M=\mathbb{T}^{2}=\mathbb{R}^{2}/\mathbb{Z}^{2}\) is the two-dimensional flat torus, \(E=\{1,2\},\) and that the vector fields \(F_{1},F_{2}\) are \(C^{r}\) with \(r\geq 2,\) and transverse everywhere, that is, \(\{F_{1}(p),F_{2}(p)\}\) spans \(T_{p}\mathbb{T}^{2}\) for all \(p.\) In particular \(F_{1},F_{2}\) never vanish. Moreover we assume that the jump rates are constant, that is
\[\alpha_{12}(x)=\alpha_{12}>0,\text{ and }\alpha_{21}(x)=\alpha_{21}>0.\]
Using the notation introduced in Example 3.18, we let \(\mathsf{Per}_{-}(F_{i})\) denote the (possibly empty) finite set of linearly stable periodic orbits of \(F_{i}.\) For \(\gamma\in\mathsf{Per}_{-}(F_{i})\) we let \(\Lambda_{1,i}(\gamma)<0\) denote the non-zero Floquet exponent of \(\gamma.\)
We shall establish here the following result.
**Theorem 4.6**: _We let \(1\leq k\leq r.\) Assume that for all \(i=1,2\) and \(\gamma\in\mathsf{Per}_{-}(F_{i})\)_
\[\min(\alpha_{12},\alpha_{21})>-k\Lambda_{1,i}(\gamma).\]
_Then, \((Z_{t})\) has finitely many ergodic probability measures (see Theorem 4.5), each of which has a \(C^{k-1}\) density with respect to \(\mathbf{m}.\)_
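To make the hypothesis quantitative: if every linearly stable periodic orbit of \(F_{1}\) and \(F_{2}\) has Floquet exponent \(\Lambda_{1,i}(\gamma)=-\lambda<0,\) the condition of Theorem 4.6 reads

\[\min(\alpha_{12},\alpha_{21})>k\lambda,\]

so faster switching buys more smoothness: with \(\lambda=1,\) rates above \(1\) already yield a continuous density (\(k=1\)), while a \(C^{2}\) density (\(k=3,\) assuming \(r\geq 3\)) requires both rates to exceed \(3.\)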
**Corollary 4.7**: _Suppose that \(F_{1},F_{2}\) have no periodic orbits. Then \((Z_{t})\) has a unique invariant distribution and its density is \(C^{r-1}.\)_
**Proof:** A fixed-point-free \(C^{2}\) flow with no periodic orbits on \(\mathbb{T}^{2}\) has dense orbits (see the proof of Proposition 4.13). The accessible set is then \(\mathbb{T}^{2}\) and uniqueness follows (see e.g. Theorem 4.4). The \(C^{r-1}\) continuity follows from Theorem 4.6. \(\Box\)
**Remark 4.8**: Using ideas inspired by Malliavin calculus, Bakhtin, Hurth, Lawley and Mattingly in [3] give a proof of Corollary 4.7 (when \(r=\infty\) and \(\alpha_{12}=\alpha_{21}\)) in the particular case where each of the flows induced by \(F_{1}\) and \(F_{2}\) possesses an invariant probability measure with an everywhere positive \(C^{\infty}\) density. This, it should be noted, is a strong assumption.
#### Proof of Theorem 4.6
The idea of the proof is to show that \(P^{n}\) (for \(n\) sufficiently large) satisfies the standing assumption, Assumption 2.2, and the assumptions of Theorem 2.8. We assume here that \(F_{1},F_{2}\) are \(C^{r}\) with \(r\geq 1.\) The assumption that \(r\geq 2\) will be required in Proposition 4.13.
We let \((X_{n},I_{n})_{n\geq 0}\) be the discrete-time Markov chain with kernel \(P\) (see Remark 4.1), and define \(\tau=\min\{k\geq 1:\;I_{k}\neq I_{0}\}\) to be the first switching time. For \(n\geq 2\) and \(1\leq k\leq n-1\) we set
\[P_{n,k}(f)(x,i)=\mathbb{E}\left[f(X_{n},I_{n})\mathbf{1}_{\tau=k}|(X_{0},I_{0 })=(x,i)\right],\]
and
\[\Delta_{n,n}(f)(x,i)=\mathbb{E}\left[f(X_{n},I_{n})\mathbf{1}_{\tau\geq n}|(X _{0},I_{0})=(x,i)\right].\]
Clearly we have that
\[P^{n}f=\sum_{k=1}^{n-1}P_{n,k}f+\Delta_{n,n}f.\]
We now decompose the (matrix) operator \(A\) as \(A=S+\bar{S}\) where \(S\) (corresponding to switching) is defined by
\[Sf(x,i)=A_{ij}f(x,j)\mbox{ with }j=3-i,\]
and \(\bar{S}\) (corresponding to not switching) is given by \(\bar{S}f(x,i)=A_{ii}f(x,i).\) It is readily seen that
\[P_{n,k}=(K\bar{S})^{k-1}KSP^{n-k}=(K\bar{S})^{k-1}[KSK]AP^{n-k-1}.\]
This simply expresses the fact that the first switch occurs at time \(k.\) We likewise have that
\[\Delta_{n,n}=(K\bar{S})^{n-1}P.\]
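For instance, for \(n=2\) the decomposition can be checked in one line: since \(A=S+\bar{S},\)

\[P^{2}=KAKA=K(S+\bar{S})KA=[KSK]A+(K\bar{S})P=P_{2,1}+\Delta_{2,2}.\]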
In the next three lemmas, we use the following convenient notation. We denote \(\mathcal{C}(\mathbf{M})=\mathcal{M}_{ac}^{r-1}(\mathbf{M}),\) and if \(\mu\in\mathcal{C}(\mathbf{M})\) has density \(\rho,\) then \(\|\rho\|_{C^{r-1}(\mathbf{M})}\) is denoted by \(\|\mu\|_{\mathcal{C}(\mathbf{M})}\). We also assume that the parameter \(\alpha\) that occurs in the definitions of \(A\) and \(K\) satisfies inequality (15).
**Lemma 4.9**: _We suppose that \(F_{1},F_{2}\) are transverse at every point \(p\in\mathbb{T}^{2}.\) For all \(\varepsilon>0,\)\(KSK\) can be decomposed as \(KSK=Q+\Delta\) where \(Q,\Delta\) are Feller sub-Markov kernels and satisfy:_
**(i)**: \(\mathcal{M}(\mathbf{M})Q\subset\mathcal{C}(\mathbf{M})\)_;_
**(ii)**: \(\mathcal{C}(\mathbf{M})\Delta\subset\mathcal{C}(\mathbf{M})\)_;_
**(iii)**: \(\|\mu\Delta\|_{\mathcal{C}(\mathbf{M})}\leq\varepsilon\|\mu\|_{\mathcal{C}( \mathbf{M})}\) _for all_ \(\mu\in\mathcal{C}(\mathbf{M}).\)__
**Proof:** We set \(j=3-i\) for \(i\in\{1,2\}.\) We note that
\[KSKf(x,i)=A_{ij}\int_{\mathbb{R}_{+}^{2}}f(\Phi_{j}^{t}\circ\Phi_{i}^{s}(x),j) \alpha^{2}e^{-\alpha(t+s)}dtds.\]
For all \(n>1,\) we let \(\eta_{n}:\mathbb{R}\mapsto\mathbb{R}_{+}\) be a \(C^{\infty}\) function such that \(\eta_{n}=1\) on \([\frac{1}{n},n],\)\(\eta_{n}=0\) on \(\mathbb{R}\setminus[\frac{1}{2n},2n],\) and \(0\leq\eta_{n}\leq 1.\) We set
\[Qf(x,i)=A_{ij}\int_{\mathbb{R}_{+}^{2}}f(\Phi_{j}^{t}\circ\Phi_{i}^{s}(x),j) \alpha^{2}e^{-\alpha(t+s)}\eta_{n}(t)\eta_{n}(s)dtds\]
and \(\Delta=KSK-Q.\) The assumption that \(F_{1},F_{2}\) are transverse makes the map \((t,s)\in\mathbb{R}_{+}^{2}\rightarrow\Phi_{j}^{t}\circ\Phi_{i}^{s}(x)\in \mathbb{T}^{2}\) a submersion for all \(x\in\mathbb{T}^{2}.\) Indeed, denoting \(y=\Phi_{i}^{s}(x),\) we have that
\[\left(\frac{\partial}{\partial t}\Phi_{j}^{t}\circ\Phi_{i}^{s}(x),\frac{ \partial}{\partial s}\Phi_{j}^{t}\circ\Phi_{i}^{s}(x)\right)=\left(D\Phi_{j}^ {t}(y)F_{j}(y),D\Phi_{j}^{t}(y)F_{i}(y)\right).\]
Proposition 3.2 implies that condition \((i)\) is satisfied. For the second assertion, we proceed as in the proof of Lemma 4.3\((iii).\) For all \(\mu\in\mathcal{C}(\mathbf{M})\) we have that
\[\|\mu\Delta\|_{\mathcal{C}(\mathbf{M})}\leq\|\mu\|_{\mathcal{C}(\mathbf{M})}\left[\int_{\mathbb{R}_{+}}\max_{i=1,2}\|\mathcal{L}_{\Phi_{i}^{t}}\|_{C^{r-1}(M)}\alpha e^{-\alpha t}(1-\eta_{n}(t))dt\right]^{2}\leq\|\mu\|_{\mathcal{C}(\mathbf{M})}\left[\int_{\mathbb{R}_{+}}C^{\prime}e^{-\beta t}(1-\eta_{n}(t))dt\right]^{2},\]
for some constants \(C^{\prime}<\infty\) and \(\beta>0\) (see equation (16)). For \(n\) sufficiently large, the right hand term can be made arbitrarily small, by monotone convergence. \(\Box\)
**Lemma 4.10**: _We assume that \(F_{1},F_{2}\) are transverse at every point \(p\in\mathbb{T}^{2}.\) Then for all \(n\geq 2,\) \(k=1,\ldots,n-1,\) and \(\varepsilon>0,\) \(P_{n,k}\) can be decomposed into \(P_{n,k}=Q_{n,k}+\Delta_{n,k}\), where \(Q_{n,k},\Delta_{n,k}\) are Feller sub-Markov kernels and satisfy:_
**(i)**: \(\mathcal{M}(\mathbf{M})Q_{n,k}\subset\mathcal{C}(\mathbf{M})\)_;_
**(ii)**: \(\mathcal{C}(\mathbf{M})\Delta_{n,k}\subset\mathcal{C}(\mathbf{M})\)_;_
**(iii)**: _for all_ \(\mu\in\mathcal{C}(\mathbf{M}),\|\mu\Delta_{n,k}\|_{\mathcal{C}(\mathbf{M})} \leq\varepsilon\|\mu\|_{\mathcal{C}(\mathbf{M})}.\)__
**Proof:** With \(Q,\Delta\) as in Lemma 4.9, we set
\[Q_{n,k}=(K\bar{S})^{k-1}QAP^{n-k-1},\quad\Delta_{n,k}=(K\bar{S})^{k-1}\Delta AP ^{n-k-1}.\]
Then we have
\[P_{n,k}=(K\bar{S})^{k-1}[KSK]AP^{n-k-1}=Q_{n,k}+\Delta_{n,k}.\]
Since \(\mathcal{M}(\mathbf{M})\) and \(\mathcal{C}(\mathbf{M})\) are invariant under the operators \(K,A,\bar{S},P,\) assertions \((i)\) and \((ii)\) follow directly from Lemma 4.9. We likewise have \(\|\mu\Delta_{n,k}\|_{\mathcal{C}(\mathbf{M})}\leq\varepsilon\|\mathcal{K}\|_{ C^{r-1}(\mathbf{M})}^{n-2}\|\mu\|_{\mathcal{C}(\mathbf{M})},\) by Lemma 4.9. Replacing \(\varepsilon\) by \(\varepsilon/\|\mathcal{K}\|_{C^{r-1}(\mathbf{M})}^{n-2},\) we obtain \((iii).\)\(\Box\)
Making \(\alpha\) larger if necessary, we assume as we may that
\[\alpha>\max_{i=1,2}\log\left(\mathcal{R}(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M))\right).\]
**Lemma 4.11**: _We have the following:_
**(i)**: \(\mathcal{C}(\mathbf{M})\Delta_{n,n}\subset\mathcal{C}(\mathbf{M});\)__
**(ii)**: _For all_ \(\varepsilon>0,\) _there exists_ \(C<\infty\) _such that for all_ \(\mu\in\mathcal{C}(\mathbf{M})\) _and_ \(n\geq 2\)_, we have_
\[\|\mu\Delta_{n,n}\|_{\mathcal{C}(\mathbf{M})}\leq C\left[\frac{\alpha-\min \left(\alpha_{12},\alpha_{21}\right)}{\alpha-\max_{i=1,2}\log\left(\mathcal{R }(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M))\right)}\right]^{n}e^{n\varepsilon}\| \mu\|_{\mathcal{C}(\mathbf{M})}.\]
**Proof:** We firstly observe that \(K\) and \(\bar{S}\) commute, so that \(\Delta_{n,n}=\bar{S}^{n-1}K^{n}A.\) We therefore have that
\[\|\mu\Delta_{n,n}\|_{\mathcal{C}(\mathbf{M})}\leq\|\bar{S}\|^{n-1 }\|\mathcal{K}^{n}\|_{C^{r-1}(\mathbf{M})}\|A^{t}\|\|\mu\|_{\mathcal{C}(\mathbf{ M})}\] \[=\max(1-\frac{\alpha_{12}}{\alpha},1-\frac{\alpha_{21}}{\alpha}) ^{n-1}\|\mathcal{K}^{n}\|_{C^{r-1}(\mathbf{M})}\|A^{t}\|\|\mu\|_{\mathcal{C}( \mathbf{M})}\]
for all \(\mu\in{\cal C}({\bf M}),\) whence the result follows from Lemma 4.3\((iii).\)\(\Box\)
**Theorem 4.12**: _Suppose that \(F_{1},F_{2}\) are \(C^{r},r\geq 1,\) transverse at every point \(p\in{\mathbb{T}}^{2},\) and that_
\[\min\left(\alpha_{12},\alpha_{21}\right)>\max_{i=1,2}\log\left({\cal R}({\cal L }_{\Phi_{i}^{1}},C^{r-1}(M))\right).\]
_Then every ergodic measure for \((Z_{t})\) has a \(C^{r-1}\) density with respect to \({\bf m}.\)_
**Proof:** Using the notation of the preceding lemmas, we write \(P^{n}=Q_{n}+\Delta_{n},\) whereby \(Q_{n}=\sum_{k=1}^{n-1}Q_{n,k}\) and \(\Delta_{n}=\sum_{k=1}^{n-1}\Delta_{n,k}+\Delta_{n,n}.\) Then \((Q_{n},\Delta_{n})\) satisfies the standing assumption, Assumption 2.2, and for \(n\) sufficiently large, there exists \(0\leq\theta<1\) such that \(\|\mu\Delta_{n}\|_{{\cal C}({\bf M})}\leq\theta\|\mu\|_{{\cal C}({\bf M})}\) for all \(\mu\in{\cal C}({\bf M}).\) Theorem 4.12 then follows from Theorem 2.8. \(\Box\)
We then obtain Theorem 4.6 as a consequence of Theorem 4.12 and the next proposition, Proposition 4.13, combined with the estimates given by Proposition 3.10.
For a \(C^{1}\) flow \(\{\Phi^{t}\}\) we define the expansion rate and \(k\)-expansion volume rate of \(\Phi\) to be the expansion rate and \(k\)-expansion volume rate of the time one map \(\Phi^{1},\) which we denote as \({\cal E}(\Phi)\) and \({\cal E}{\cal V}_{k}(\Phi)\) respectively.
**Proposition 4.13**: _We let \(F\) be a \(C^{2}\) vector field on \({\mathbb{T}}^{2}\) with no equilibria (i.e. \({\sf Eq}(F)=F^{-1}(0)=\emptyset\)) and let \(\{\Phi^{t}\}\) be the induced flow. Then_
\[{\cal E}(\Phi)=\min_{\{\gamma\in{\sf Per}_{-}(F)\}}\Lambda_{1}(\gamma),\]
_and_
\[{\cal E}{\cal V}_{k}(\Phi)=(k+1)\min_{\{\gamma\in{\sf Per}_{-}(F)\}}\Lambda_{ 1}(\gamma)\]
_for all \(k\geq 0,\) with the convention that the right hand sides are \(0\) whenever \({\sf Per}_{-}(F)=\emptyset.\)_
**Proof:** By Propositions 14.2.2 and 14.2.4 in Katok and Hasselblatt, [20], a fixed-point-free \(C^{2}\) flow on \({\mathbb{T}}^{2}\) must enjoy one of the following two properties:
**(a)**: either all recurrent points are periodic;
**(b)**: or there exists a closed transversal and every orbit crosses this transversal. Furthermore, the return map to this transversal is a \(C^{2}\) circle diffeomorphism \(f:S^{1}\mapsto S^{1}\) which, by the Denjoy Theorem ([20, Theorem 12.1.1]), is topologically conjugate to an irrational rotation.
If \(F\) has no periodic orbit then we are in case \((b)\). We then have that \(\mathcal{E}(\Phi)\leq 0\) by Remark 3.9. We now assume for contradiction that \(\mathcal{E}(\Phi)<-\lambda<0\). Then, by [24, Corollary 2], there exist two distinct points \(x,y\in\mathbb{T}^{2}\) such that \(\limsup_{t\to\infty}\frac{\log d(\Phi^{t}(x),\Phi^{t}(y))}{t}<-\lambda.\) This implies that the return map \(f\) has two distinct points \(\theta,\alpha\in S^{1}\) such that \(d(f^{n}(\theta),f^{n}(\alpha))\to 0\) as \(n\to\infty\). However \(f\) is topologically conjugate to a rotation and a rotation is an isometry, whence we obtain a contradiction.
If \(F\) has periodic orbits, then we are in case \((a).\) We let \(\mu\) be an ergodic probability measure for \(\Phi^{1}.\) By the Poincaré recurrence theorem and Birkhoff's theorem, there exists a point \(p\), recurrent for \(\Phi^{1},\) such that
\[\frac{1}{n}\sum_{k=1}^{n}\delta_{\Phi^{k}(p)}\Rightarrow\mu.\]
By \((a)\), \(p\) is \(T\)-periodic for \(\{\Phi^{t}\}\), for some \(T>0\). Thus, reasoning as in Example 3.18, either \(\mu=\frac{1}{N}\sum_{i=0}^{N-1}\delta_{\Phi^{i}(p)}\) for some \(N\in\mathbb{N}\) (if \(T\) is rational) or \(\mu=\frac{1}{T}\int_{0}^{T}\delta_{\Phi^{s}(p)}ds\) (if \(T\) is irrational). In both cases, \(\Lambda_{1}(\mu)\) equals the Floquet exponent \(\Lambda_{1}(\gamma)\) of the periodic orbit. The result then follows from Schreiber's theorem (equation (4)). \(\Box\)
**Remark 4.14**: The fact that \(\mathcal{E}(\Phi)=0\) when \(F\) has no periodic orbit answers a question raised by Moe Hirsch in [19]. An affirmative answer to this question is given in the introduction of Schreiber's paper [24], but the proof and the assumptions are not detailed in the paper. The result does actually follow directly from Schreiber's results as shown above, at least for \(C^{2}\) flows. The question is open for \(C^{1}\) flows.
### Smooth invariant distributions under fast switching
We return here to the general model of a PDMP (as described in the beginning of Section 4), but under the assumption that the rate matrix \((\alpha_{ij}(x))_{i,j\in E}\) takes the simple form
\[\alpha_{ij}=p_{j}\alpha\mathbf{1}_{i\neq j} \tag{19}\]
where \(\alpha>0,p_{j}>0\) and \(\sum_{j\in E}p_{j}=1.\)
We shall prove here the following result.
**Theorem 4.15**: _Let \((Z_{t})_{t\geq 0}\) be the PDMP corresponding to the characteristics \((\{F_{i}\}_{i\in E},(\alpha_{ij})_{i,j\in E})\), where \(\alpha_{ij}\) is given by (19). Suppose that the \(1\)-Bracket condition holds at every point \(x\in M.\) Then, there exists \(\alpha^{*}>0\) such that for all \(\alpha\geq\alpha^{*}\) the ergodic measures of \((Z_{t})\) (see Theorem 4.5) have a \(C^{r-1}\) density with respect to \({\bf m}.\)_
A version of this result (for slightly more general switching rates but under the assumption that there exists an accessible point), was established by the present authors in [12]. However, the proof given here is simpler and provides a good illustration of our general method.
We firstly observe that invariant distributions of \((Z_{t})_{t\geq 0}\) can be described in terms of invariant distributions of a discrete-time Markov chain living on \(M.\)
We let \(\bar{P}\) be the Markov kernel on \(M\) (not on \({\bf M}\)) defined by
\[\bar{P}(f)(x)=\sum_{i\in E}p_{i}\int_{0}^{\infty}\alpha e^{-\alpha t}f(\Phi_{i}^{t}(x))dt. \tag{20}\]
Recall that \(K\) is the operator given by (11). For \(\mu\in{\cal P}(M),\) we let \((\mu\otimes p)\in{\cal P}({\bf M})\) denote the probability measure defined by \(\mu\otimes p(dx\times\{i\})=p_{i}\mu(dx).\)
The next proposition is similar to Proposition 4.2.
**Proposition 4.16**: _Let \((Z_{t})_{t\geq 0}\) be the PDMP corresponding to the characteristics \((\{F_{i}\}_{i\in E},(\alpha_{ij})_{i,j\in E})\), where \(\alpha_{ij}\) is given by (19). Then the set of invariant distributions of \((Z_{t})_{t\geq 0}\) is given by_
\[\{(\mu\otimes p)K:\;\mu\in{\sf Inv}(\bar{P})\}.\]
**Proof:** We set \(A_{ij}=p_{j}\) for all \(i,j\) and let \(P=KA\) (see equation (12)). By Proposition 4.2 it suffices to show that \({\sf Inv}(P)=\{\mu\otimes p:\;\mu\in{\sf Inv}(\bar{P})\}.\) We let \(\mu\in{\sf Inv}(\bar{P})\) and \(f\in{\cal B}({\bf M}).\) Then we have that
\[(\mu\otimes p)Pf=\sum_{i\in E}p_{i}\int_{M}\left(\int_{\mathbb{R}_{+}}\sum_{j}p_{j}f_{j}(\Phi_{i}^{t}(x))\alpha e^{-\alpha t}dt\right)\mu(dx)\] \[=\sum_{j\in E}p_{j}\int_{M}\int_{\mathbb{R}_{+}}[\sum_{i\in E}p_{i}f_{j}(\Phi_{i}^{t}(x))]\alpha e^{-\alpha t}dt\,\mu(dx)\] \[=\sum_{j\in E}p_{j}\mu(\bar{P}f_{j})=\sum_{j\in E}p_{j}\mu f_{j}=(\mu\otimes p)f.\]
Thus we have that \(\mu\otimes p\in{\sf Inv}(P).\)
Conversely, we let \(\nu\in{\sf Inv}(P)\) and \(f\in{\cal B}({\bf M}).\) Then we have \(\nu K(Af)=\nu f,\) that is
\[\int_{M}(\sum_{j\in E}p_{j}f_{j}(x))\sum_{i\in E}\nu K(dx\times i)=\nu f.\]
This shows that \(\mu\otimes p=\nu,\) whereby
\[\mu(dx)=\sum_{i\in E}(\nu K)(dx\times i).\]
On the other hand, the relation \(\nu KA=\nu\) implies \((\nu K)AKf=\nu Kf\) for all \(f\in{\cal B}({\bf M}).\) If one applies this relation to the map \(f\) such that \(f_{i}(x)=g(x)\) for all \(i,\) it follows that \(\mu\bar{P}g=\mu g.\) Thus we have that \(\mu\in{\sf Inv}(\bar{P}).\)\(\Box\)
#### Proof of Theorem 4.15
In light of Proposition 4.16, it suffices to consider invariant distributions of the operator \(\bar{P}\) given by equation (20). To highlight the influence of the switching rate parameter \(\alpha,\) we rewrite \(\bar{P}\) as \(\bar{P}_{\alpha}.\)
We let \(n\geq 1,{\bf i}=(i_{1},\ldots,i_{n})\in E^{n},\) and \(h:({\mathbb{R}}_{+}^{*})^{n}\mapsto[0,1]\) be a \(C^{\infty}\) function. Set \(p_{\bf i}=p_{i_{1}}\cdots p_{i_{n}}\) and let \(\bar{P}_{\alpha,{\bf i},h}\) denote the sub-Markovian operator on \(M\) defined by
\[\bar{P}_{\alpha,{\bf i},h}f(x)=p_{\bf i}\int_{{\mathbb{R}}_{+}^{n}}f(\Phi_{i_ {n}}^{t_{n}/\alpha}\circ\cdots\circ\Phi_{i_{1}}^{t_{1}/\alpha}(x))e^{-|t|}h(t)dt,\]
whereby \(|t|=t_{1}+\ldots+t_{n}.\) If \(h\equiv 1,\) we write \(\bar{P}_{\alpha,{\bf i}}\) for \(\bar{P}_{\alpha,{\bf i},h}.\) Clearly we have that
\[\bar{P}_{\alpha,{\bf i}}=\bar{P}_{\alpha,{\bf i},1-h}+\bar{P}_{\alpha,{\bf i},h}\]
and
\[\bar{P}_{\alpha}^{n}=\sum_{{\bf i}\in E^{n}}\bar{P}_{\alpha,{\bf i}}.\]
**Lemma 4.17**: **(i)** _If \(\mu\in{\cal M}_{ac}(M)\) has density \(\rho\in L^{1}(m)\), then \(\mu\bar{P}_{\alpha,{\bf i},h}\) has density \(p_{\bf i}{\cal L}_{\alpha,{\bf i},h}(\rho)\), whereby_
\[{\cal L}_{\alpha,{\bf i},h}(\rho)=\int_{{\mathbb{R}}_{+}^{n}}{\cal L}_{\Phi_{ i_{n}}^{t_{n}/\alpha}}\circ\cdots\circ{\cal L}_{\Phi_{i_{1}}^{t_{1}/\alpha}}( \rho)e^{-|t|}h(t)dt.\]
_Moreover if we have \(\alpha>\max_{i\in E}\log\left(\mathcal{R}(\mathcal{L}_{\Phi_{i}^{1}},C^{r-1}(M))\right)\), then \(\mathcal{L}_{\alpha,\mathbf{i},h}\) is a bounded operator on \(C^{r-1}(M).\)_
**(ii)**: _We have that_
\[\limsup_{\alpha\to\infty}\|\mathcal{L}_{\alpha,\mathbf{i},h}\|_{C^{r-1}(M)} \leq\int_{\mathbb{R}_{+}^{n}}e^{-|t|}h(t)dt.\]
**Proof:** The proof of \((i)\) is similar to the proof of Lemma 4.3\((iii)\) (itself relying on Proposition 3.4). Its verification is left to the reader.
\((ii).\) We let \(\eta(t_{1},\ldots,t_{n})=\|\mathcal{L}_{\Phi_{i_{n}}^{t_{n}}}\|_{C^{r-1}(M)}\cdots\|\mathcal{L}_{\Phi_{i_{1}}^{t_{1}}}\|_{C^{r-1}(M)}.\) By the \(C^{r}\) continuity of the map \((t,x)\mapsto\Phi_{i}^{t}(x)\) (see, for example, [17, Chapter V, Corollary 4.1]), \(\Phi_{i}^{t}\to\Phi_{i}^{0}\) as \(t\to 0\) in the \(C^{r}\) topology. From this and the formula (13), it follows that \(\limsup_{t\to 0_{\mathbb{R}^{n}}}\eta(t)\leq 1.\) Therefore for all \(\varepsilon>0,\) there exists some \(\delta>0\) such that \(\eta(t)\leq 1+\varepsilon\) for all \(t\in\mathbb{R}_{+}^{n}\) such that \(|t|\leq\delta.\) Thus we have
\[\|\mathcal{L}_{\alpha,\mathbf{i},h}\|_{C^{r-1}(M)}\leq\int_{ \mathbb{R}_{+}^{n}}\eta(t/\alpha)e^{-|t|}h(t)dt\] \[\leq(1+\varepsilon)\int_{\mathbb{R}_{+}^{n}}e^{-|t|}h(t)\mathbf{1 }_{|t|\leq\alpha\delta}dt+\int_{\mathbb{R}_{+}^{n}}\eta(t/\alpha)h(t)e^{-|t|} \mathbf{1}_{|t|\geq\alpha\delta}dt.\]
When \(\alpha\to\infty,\) the first term on the right goes to \((1+\varepsilon)\int_{\mathbb{R}_{+}^{n}}e^{-|t|}h(t)dt\) while the second term goes to \(0\). This follows from the fact that \(\eta(t)\leq C^{\prime}e^{\beta|t|}\) for some \(\beta>0\) and \(C^{\prime}<\infty,\) by equation (16). Since \(\varepsilon>0\) is arbitrary, this concludes the proof. \(\Box\)
**Proposition 4.18**: _We suppose that there exist \(n\geq 1,\mathbf{i}=(i_{1},\ldots,i_{n})\in E^{n}\) and \(U\subset(\mathbb{R}_{+}^{*})^{n}\) a nonempty open set such that:_
**(i)**: \(\frac{1}{\alpha}U\subset U\) _for all_ \(\alpha\geq 1;\)__
**(ii)**: _for all_ \(x\in M,\) _the map_ \((t_{1},\ldots,t_{n})\to\Phi_{i_{n}}^{t_{n}}\circ\cdots\circ\Phi_{i_{1}}^{t_{1} }(x)\) _is a submersion on_ \(U.\)__
_Then, there exists \(\alpha^{*}\geq 1\) such that for all \(\alpha\geq\alpha^{*}\)\(\mathsf{Inv}(\bar{P}_{\alpha})\subset\mathcal{M}_{ac}^{r-1}(M).\)_
**Proof:** We let \(h:(\mathbb{R}_{+}^{*})^{n}\to[0,1]\) be a \(C^{\infty}\) nonzero function with compact support in \(U.\) We set \(Q_{\alpha}=\bar{P}_{\alpha,\mathbf{i},h}\) and \(\Delta_{\alpha}=\bar{P}_{\alpha,\mathbf{i},1-h}+\sum_{\mathbf{j}\in E^{n}\setminus\{\mathbf{i}\}}\bar{P}_{\alpha,\mathbf{j}}\) and take
\(\mathcal{C}(M)=\mathcal{M}_{ac}^{r-1}(M).\) By Proposition 3.2, \(\mathcal{M}(M)Q_{\alpha}\subset\mathcal{C}(M)\) for all \(\alpha\geq 1.\) By Lemma 4.17, \(\mathcal{C}(M)\Delta_{\alpha}\subset\mathcal{C}(M)\) and
\[\limsup_{\alpha\to\infty}\|\Delta_{\alpha}\|_{\mathcal{C}(M)}\leq[1-p_{ \mathbf{i}}\int_{(\mathbb{R}_{+}^{*})^{n}}e^{-(t_{1}+\ldots+t_{n})}h(t_{1}, \ldots,t_{n})dt_{1}\ldots dt_{n}]<1,\]
whereby \(\|\Delta_{\alpha}\|_{\mathcal{C}(M)}\) represents \(\|p_{\mathbf{i}}\mathcal{L}_{\alpha,\mathbf{i},1-h}+\sum_{\mathbf{j}\neq\mathbf{i}}p_{\mathbf{j}}\mathcal{L}_{\alpha,\mathbf{j},1}\|_{C^{r-1}(M)}.\) The proposition then follows from Theorem 2.8. \(\Box\)
By [12, Proposition 5.1], the 1-Bracket condition implies that the assumptions of Proposition 4.18 are satisfied. This concludes the proof of Theorem 4.15. \(\Box\)
**Acknowledgement:** The work of MB, and partially that of OT, was funded by the grant 200020-219913 from the Swiss National Foundation. The work of OT was also partially funded by the EPSRC MathRad programme grant EP/W026899/.
|
2301.01460 | Computational Models for High-Power Cyclotrons and FFAs | A summary of numerical modeling capabilities regarding high power cyclotrons
and fixed field alternating gradient machines is presented. This paper focuses
on techniques made available by the OPAL simulation code. | Andreas Adelmann, Chris T. Rogers | 2023-01-04T06:09:28Z | http://arxiv.org/abs/2301.01460v1 | # Computational Models for High-Power Cyclotrons and FFAs
###### Abstract
A summary of numerical modeling capabilities regarding high power cyclotrons and fixed field alternating gradient machines is presented. This paper focuses on techniques made available by the OPAL simulation code.
High Power Cyclotrons, High Power FFAs, Computational Models, OPAL
## 1 Overview on Computational Models
In all high-power particle accelerators, one of the major limitations is particle losses. Losses may be controlled, resulting in beam particles impinging on dedicated equipment such as collimators, or uncontrolled, resulting in beam particles striking other equipment around the accelerator. Uncontrolled losses can damage and activate any equipment in the accelerator and so must be minimized. Controlled losses need to be carefully considered and also minimized. The amount and cause of loss are investigated by modeling accelerators using simulation codes that numerically model the behaviour of beams. A review of available numerical codes can be found in the article of Smirnov [1]. In this paper, modeling capabilities available in OPAL are discussed in more detail [2].
### Single particle modeling
For conventional cyclotrons (and FFAs) the single particle tool box is established and many different code variants exist [1]. For cyclotrons (and horizontal FFAs) the existing tools seem to be mature and accurate. New machines like vertical FFAs, currently studied for example at the Rutherford Appleton Laboratory (RAL) [3], require non-trivial modifications to the existing codes. These modifications are under way, for example in the code OPAL [2], and are expected to be available in the second quarter of 2022.
Recently, in the context of very high field and ultra-compact H\({}^{-}\) cyclotrons, beam stripping losses of ion beams by interactions with residual gas and electromagnetic fields have been evaluated [4]. The beam stripping algorithm, implemented in OPAL, evaluates the interaction of hydrogen ions with residual gas and electromagnetic fields. In the first case, the cross sections of the processes are estimated according to the energy by means of analytical functions (see Sec. II-A of [4]). The implementation allows the user to set the pressure, temperature, and composition of the residual gas, which could be selected for the calculations as either molecular hydrogen (H\({}_{2}\)) or dry air in the usual proportion. For precise simulations, a two-dimensional pressure field map from an external file can be imported into OPAL, providing more realistic vacuum conditions.
Concerning electromagnetic stripping, the electric dissociation lifetime is evaluated through the theoretical formalism (see Sec. II-B of [4]). In both instances, the individual probability at each integration step for every particle is assessed.
A stochastic process is used to evaluate whether an interaction occurs. If it does, the particle is either stripped and removed from the beam or, optionally, transformed into a secondary heavy particle, depending on the interaction. In the latter case, the secondary particle continues its motion, but with the new particle properties.
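The per-step stochastic decision described above can be sketched as follows; the cross section, gas density, and particle bookkeeping are illustrative placeholders, not OPAL's actual implementation:

```python
import numpy as np

rng = np.random.default_rng()

def stripping_step(velocities, sigma_total, gas_density, dt, alive):
    """Monte Carlo beam-stripping decision for one integration step.

    Survival over a path length v*dt in a gas of number density n with
    total cross section sigma is exp(-n * sigma * v * dt); an interaction
    strips the particle.  All quantities here are illustrative placeholders.
    """
    path = np.linalg.norm(velocities, axis=1) * dt        # path length per step
    p_interact = 1.0 - np.exp(-gas_density * sigma_total * path)
    stripped = alive & (rng.random(len(path)) < p_interact)
    alive = alive & ~stripped   # remove from the beam, or hand the particle
    return alive, stripped      # over to a secondary-particle model
```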
### Large Scale Multiparticle Modeling
In general, modeling losses in high intensity accelerators requires 3D space-charge calculations and a sufficient number of simulation particles. Recent investigations [5] propose a sparse grid-based adaptive noise reduction strategy for electrostatic particle-in-cell (PIC) simulations. By projecting the charge density onto sparse grids, high-frequency particle noise is reduced and hence an optimal number of grid points and simulation particles can be obtained. For a 3D Penning trap simulation, a maximum speedup of 2.8 and a 15-fold memory reduction have been obtained. This method is already integrated into OPAL.
### Surrogate Model Construction
Cheap-to-evaluate surrogate models have gained a lot of interest lately. Statistical [6] or machine learning techniques are used [7]. These models can, for example, replace a computationally heavy model in a multi-objective optimization [8] or, in the future, be part of an on-line model. Some surrogate modeling algorithms may include an intrinsic estimator for the model uncertainty [9].
## 2 Physics Modeling
In this section we show latest additions to the open source code OPAL [2] regarding cyclotron and FFA modeling capabilities.
### Modeling H- Injection and Painting in Vertical and Horizontal FFAs
Fixed Field Accelerators (FFAs) have fixed magnetic fields, like cyclotrons, but increase bending field with momentum and hence more compact designs can be realized. FFAs offer the power efficiency of cyclotrons combined with the energy reach of synchrotrons.
FFAs have never been used for high power proton acceleration; however, the necessary models for such a design are available in OPAL. Single particle tracking has been benchmarked against the KURNS FFA [10]. A design for a 3-12 MeV H- FFA prototype ring is being pursued at RAL as a prototype for a MW-class neutron spallation source [3]. A scaling horizontal orbit excursion FFA (hFFA) and a vertical orbit excursion FFA (vFFA) are both under consideration. Both are non-isochronous machines using RF cavities with variable resonant frequency. Injection is planned using charge exchange of H\({}^{-}\) to H\({}^{+}\) and phase space painting.
In hFFAs, magnetic rigidity varies with radius. The dipole field varies as [11]
\[B_{z}(z=0)=B_{0}(\psi)\left(\frac{r}{r_{0}}\right)^{k}\,. \tag{1}\]
\(B_{0}(\psi)\) is the dipole field as a function of a normalised azimuthal coordinate \(\psi\), \(r\) is the radial coordinate, \(r_{0}\) is a nominal (user-defined) radius, and \(k\) is the field index. The field away from the midplane, at \(z\neq 0\), may be calculated using a recursion relation arising from consideration of Maxwell's equations in free space. OPAL has the capability to calculate the expansion to arbitrary order, within machine precision. The normalised azimuthal coordinate
\[\psi=\phi-\tan(\delta)\ln\left(\frac{r}{r_{0}}\right) \tag{2}\]
is a measure of distance around the ring. Here \(\phi\) is the geometrical azimuthal angle and \(\delta\) is the spiral angle; for a sector FFA magnet \(\delta=0\) and \(\psi=\phi\). The arrangement of fields in this way guarantees that single particle trajectories and optical parameters at all orders scale exactly with momentum.
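As a small illustration of Eqs. (1) and (2), the hFFA midplane field can be evaluated directly; the dipole profile \(B_{0}(\psi)\) and all numbers below are arbitrary illustrative choices, not a real lattice:

```python
import numpy as np

def hffa_midplane_field(r, phi, r0, k, delta, B0_of_psi):
    """Midplane field B_z(r, phi, z=0) of a scaling hFFA, Eqs. (1)-(2)."""
    psi = phi - np.tan(delta) * np.log(r / r0)
    return B0_of_psi(psi) * (r / r0) ** k

# Arbitrary 10-cell flutter profile with a 30 degree spiral angle.
Bz = hffa_midplane_field(r=4.05, phi=0.3, r0=4.0, k=8.0,
                         delta=np.radians(30.0),
                         B0_of_psi=lambda psi: 1.2 * (1.0 + 0.5 * np.cos(10.0 * psi)))
```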
In vFFAs, magnetic rigidity varies with height. As particles are accelerated, the closed orbit changes height. Successive acceleration kicks add incoherently, so overall the beam follows the closed orbit with no appreciable emittance growth. Rectangular vFFA magnets have been implemented in OPAL, with a dipole field that varies as [12]
\[B(x_{v}=0)=B_{0}(s_{v})e^{mz_{v}}\,. \tag{3}\]
\(z_{v}\) is the height, \(s_{v}\) is a nominal longitudinal coordinate and \(x_{v}\) is a nominal horizontal coordinate in the rectangular coordinate system of the magnet. \(B_{0}\) describes the dipole field variation with longitudinal distance. A tanh model is available for vFFA fields. \(m\) is the vFFA field index, roughly equivalent to the field index \(k\) in hFFAs. Fields away from the plane having \(x_{v}=0\) are calculated using a field expansion derived from consideration of Maxwell's laws. It is noted that the focusing in the magnet body is, to linear order, skew quadrupole. The fringe field has solenoid components parallel to \(s_{v}\) that may be significant for short magnets. This arrangement of fields guarantees that trajectories and optical functions are identical as momentum increases, barring a vertical displacement. In particular, the path length of the beam is independent of momentum, the momentum compaction factor is exactly 0 and ultra-relativistic particles are isochronous.
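A corresponding sketch for the vFFA field of Eq. (3) on the \(x_{v}=0\) plane; the tanh longitudinal profile and the parameter values are plausible stand-ins rather than OPAL's exact parameterization:

```python
import numpy as np

def vffa_midplane_field(s_v, z_v, B0=1.0, m=1.3, s_end=0.2, length=1.0):
    """B(x_v = 0) = B_0(s_v) * exp(m * z_v), Eq. (3), with tanh end fields."""
    profile = 0.5 * (np.tanh(s_v / s_end) - np.tanh((s_v - length) / s_end))
    return B0 * profile * np.exp(m * z_v)
```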
In order to model injection into the FFA, OPAL was extended with models for:
* horizontal & vertical FFA magnets as described above;
* variable frequency RF cavities;
* arbitrary order multipoles with maxwellian fringe fields;
* foil model (scattering and energy loss);
* pulsed injected beam; and
* pulsed multipoles.
All but the latter two features are available in the latest version of OPAL. This enabled a fully four-dimensional simulation of the injection system, including consideration of effects such as appropriate phasing of the pulsed dipoles and transverse breathing of the beam arising due to initial longitudinal mismatch at injection.
As an example, a schematic of an injection system and associated parameters for the 3-12 MeV test ring is shown for a horizontal FFA in Fig. 1. Owing to the compact nature of the ring, the injection system is spread across a number of cells. H\({}^{-}\) are brought into the ring and onto a foil. Bump magnets in the ring distort the proton closed orbit so that particles passing through the foil are returned to a nominal closed orbit. The foil is placed inside the defocusing (D) dipole magnet so that the distorted H\({}^{+}\) closed orbit and H\({}^{-}\) beam, initially separated, are brought onto the same trajectory. Electrons are stripped from the H\({}^{-}\) leaving H\({}^{+}\) (protons). The bump magnets are slowly varied, so that the proton closed orbit is moved away from the injection point for the H\({}^{-}\) and newly injected particles are at higher horizontal amplitude. In the H\({}^{-}\) injection line, pulsed magnets move the H\({}^{-}\) upwards so that newly injected particles are at higher vertical amplitude. Overall, a correlation is introduced between horizontal and vertical amplitude. Sample trajectories and bump magnet field strengths for the magnets in the ring are shown in Fig. 1. In this example, vertical bumpers are not considered; they are all kept at 0 T field. The beam following injection is shown in Fig. 2.
### Beam stripping interactions
Beam transmission optimization and loss characterization, where beam stripping interactions are a key issue, play an important role in the design and operation of compact cyclotrons. A beam stripping model has been implemented in the three-dimensional object-oriented parallel code OPAL-cycl, a flavor of the OPAL framework. The model includes Monte Carlo methods for interaction with residual gas and dissociation by electromagnetic stripping. The model has been verified with theoretical models and it has been applied to the AMIT cyclotron according to design conditions [4].
### Spiral inflector modeling
In [13] a spiral inflector model implemented in OPAL is presented that enables us to run highly realistic simulations of the spiral inflector system of a compact cyclotron (cf. Fig. 3). A new geometry class and field solver can handle the complicated boundary conditions posed by the electrode system in the central region of the cyclotron, both in terms of particle termination and calculation of self-fields. Results are benchmarked against the analytical solution of a coasting beam. As a practical example, the spiral inflector and the first revolution in a 1 MeV/amu test cyclotron, located at Best Cyclotron Systems, Inc., are modeled, and the simulation results are compared to measurements [14; 15]. In conclusion, OPAL can handle realistic and arbitrary boundary geometries. Simulated injection efficiencies and beam shape compare well with measured efficiencies and a preliminary measurement of the beam distribution after injection.
### Neighboring Turn Modeling
This article presents a hardware architecture independent implementation of an adaptive mesh refinement Poisson solver that is integrated into the electrostatic Particle-In-Cell beam dynamics code OPAL. The Poisson solver is solely based on second generation Trilinos packages to ensure the desired hardware portability. Based on the massively parallel framework AMReX, formerly known as BoxLib, the new adaptive mesh refinement interface provides several refinement policies in order to enable precise large-scale neighbouring bunch simulations in high intensity cyclotrons. The solver is validated with a built-in multigrid solver of AMReX and a test problem with analytical solution. The parallel scalability is presented as well as an example of a neighbouring bunch simulation that covers the scale of the later anticipated physics simulation [16].
Figure 2: Beam (left) after injection is completed, but still on a distorted orbit (right) following collapse of the bump. \(x\) is the position of the beam relative to the ring centre and \(y\) is the height of the particle above the midplane. Particles are coloured according to the injection turn.
## 3 Path Forward
While statistical and machine learning techniques have a lot of potential, high fidelity physics simulations will always be used to, for example, produce the training set. In case of high-intensity machines we will need large numbers of particles and the associated fine mesh to solve the PDE in question. It is imperative that we make use of existing and future high performance infrastructure.
Figure 4: Integrated projection of the electric field component \(E_{x}\) onto the xy-plane showing 7 adjacent particle bunches [16].
Figure 3: Spiral inflector with selected particle trajectories from an OPAL simulation. The beam enters axially (from the top) through an aperture (grey) and is bent into the mid-plane by a combination of the electrostatic field generated by the spiral electrodes (green and blue) and the cyclotron’s main magnetic field. Then it is accelerated by the two Dees (copper, Dummy-Dees not shown) [13].
A performance portable implementation [16] is of utmost importance. The OPAL collaboration [2] is in the process of completely rewriting the code according to the sketch in Fig. 5. With this new architecture we will be able to make efficient use of the Exascale architectures that will come online soon. The core algorithms of OPAL are already performance portable as demonstrated in [17].
## Acknowledgments
The authors acknowledge the OPAL developer team for their continued support of this open source, community-driven code.
|
2310.03536 | Minimal quantum dot based Kitaev chain with only local superconducting
proximity effect | The possibility to engineer a Kitaev chain in quantum dots coupled via
superconductors has recently emerged as a promising path toward topological
superconductivity and possibly nonabelian physics. Here, we show that it is
possible to avoid some of the main experimental hurdles on this path by using
only local proximity effect on each quantum dot in a geometry that resembles a
two-dot version of the proposal in New J. Phys. 15 045020 (2013). There is no
need for narrow superconducting couplers, additional Andreev bound states, or
spatially varying magnetic fields; it suffices with spin-orbit interaction and
a constant magnetic field, in combination with control of the superconducting
phase to tune the relative strengths of elastic cotunneling and an effective
crossed-Andreev-reflection-like process generated by higher-order tunneling. We
use a realistic spinful, interacting model and show that high-quality Majorana
bound states can be generated already in a double quantum dot. | William Samuelson, Viktor Svensson, Martin Leijnse | 2023-10-05T13:35:27Z | http://arxiv.org/abs/2310.03536v2 | # A minimal quantum dot-based Kitaev chain with only local superconducting proximity effect
###### Abstract
The possibility to engineer a Kitaev chain in quantum dots coupled via superconductors has recently emerged as a promising path toward topological superconductivity and possibly nonabelian physics. Here, we show that it is possible to avoid some of the main experimental hurdles on this path by using only local proximity effect on each quantum dot in a geometry that resembles a two-dot version of the proposal in New J. Phys. **15** 045020 (2013). There is no need for narrow superconducting couplers, additional Andreev bound states, or spatially varying magnetic fields; it suffices with spin-orbit interaction and a constant magnetic field, in combination with control of the superconducting phase to tune the relative strengths of elastic cotunneling and an effective crossed-Andreev-reflection-like process generated by higher-order tunneling. We use a realistic spinful, interacting model and show that high-quality Majorana bound states can be generated already in a double quantum dot.
## I Introduction
Efforts to engineer a topological superconducting phase hosting Majorana bound states (MBSs) [1; 2; 3; 4; 5; 6; 7] have led to encouraging experimental progress (see Refs. [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] for some examples). However, it has also become clear that the imperfections and defects that are inevitable in real materials may lead to the emergence of nontopological Andreev bound states (ABSs) that can mimic many experimental signatures of MBSs [19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. One way to avoid the problems associated with disorder is to build up a discrete Kitaev chain [31] from quantum dots (QDs) coupled via superconducting segments [32; 33]. Reaching a true topological phase requires long chains, but it was shown in Ref. [33] that states which share all the properties of topological MBSs appear already in a minimal (two-site) Kitaev chain, based on two spin-polarized QDs coupled via crossed Andreev reflection (CAR) and elastic cotunneling (ECT) mediated by a single narrow superconductor. The catch is that these states, which were called poor man's Majoranas (PMMs), only appear at fine-tuned sweet spots in parameter space, namely when both QD levels align with the chemical potential of the superconductor and the amplitudes for CAR and ECT are equal. Ref. [33] suggested tuning the ratio of CAR and ECT amplitudes via a spatially inhomogeneous magnetic field.
The realization of the minimal Kitaev chain and PMMs suffered from difficulties associated with both controlling the inhomogeneous magnetic field and with reaching sufficiently large CAR and ECT amplitudes, which ultimately determine the gap to excited states. Both these problems were solved by an elegant proposal to couple the QDs via an ABS in the superconducting region [34]. This can lead to much stronger CAR and ECT and furthermore allows tuning their ratio by controlling the energy of the ABS because of an interference effect. Furthermore, it was shown in Ref. [35] that high-quality PMMs also survive in the regime of large QD-ABS couplings, realistic Zeeman fields, and strong Coulomb interactions. The recent experimental breakthrough reported in Ref. [36] (see also Refs. [37; 38; 39; 40]) showed transport spectroscopy data that are fully consistent with PMMs. However, coupling the QDs via an ABS might not be possible in all material platforms, restricts the maximum possible gap to excited states and is associated with difficulties and extensive tuning, in particular when going to longer QD chains or when coupling several PMM systems to, for example, study nonabelian physics.
In this work, we investigate a different way to engineer a Kitaev chain which entirely eliminates the need to couple the QDs via a superconductor. Instead, each QD only has local coupling to a superconductor, while the different QDs are directly tunnel coupled to each other. An effective nonlocal pairing amplitude, needed to simulate the Kitaev chain, appears due to higher order tunneling where local Andreev reflection is followed by tunneling between the QDs. This geometry was originally introduced in Ref. [41], which we extend by including the effects of intra- and inter-QD Coulomb interactions and, importantly, show that if the two superconductors are connected in a loop, the superconducting phase difference controls the amplitude of the effective nonlocal superconducting pairing. This allows us to reach the sweet spot where PMMs appear already for two QDs, which is our focus.
While we are considering the case with both QDs coupled to a superconductor, a parallel work shows that PMMs can be created by alternating normal and superconducting QDs [42]. Tuning to the sweet spot is then achieved by either controlling the amplitude of the induced superconductivity or by tuning the strength or direction of the Zeeman field.
The paper is organized as follows. Section II introduces the model of two interacting spinful QDs, each coupled to a superconductor. Then, Sec. III analyses the noninteracting limit, where we obtain analytic conditions for reaching a PMM sweet spot. This is followed by considering both intra- and inter-QD Coulomb interactions in Sec. IV, which depending on the details can either make
it harder or easier to find PMM sweet spots. Finally, we summarize and conclude in Sec. V.
## II Locally proximitized double QD model
We consider a double QD where each QD is locally proximitized by a bulk superconductor. A magnetic flux through the loop controls the phase difference between the induced pairing amplitudes of the QDs, see Fig. 1. Furthermore, we consider a fully interacting system with local (intra-QD) and nonlocal (inter-QD) Coulomb interactions. Note that in other minimal Kitaev chain platforms, nonlocal Coulomb interactions are expected to be small due to the screening by the intermediate superconductor [33]. In our geometry, however, there is no intermediate superconductor, and we therefore have to take such interactions into account.
The system is modeled with the Hamiltonian
\[\begin{split} H&=\sum_{j}H_{j}+t\sum_{\sigma}(d_{ 1\sigma}^{\dagger}d_{2\sigma}+\text{H.c.})\\ &+t_{\text{so}}(d_{1\downarrow}^{\dagger}d_{2\uparrow}-d_{1 \uparrow}^{\dagger}d_{2\downarrow}+\text{H.c.})+U_{nl}N_{1}N_{2},\end{split} \tag{1}\]
where \(H_{j}\) is the Hamiltonian of QD \(j\),
\[\begin{split} H_{j}&=\sum_{\sigma}(\varepsilon_{j}+ \eta_{\sigma}V_{z,j})n_{j\sigma}\\ &+(\Delta_{j}e^{i\phi_{j}}d_{j\uparrow}^{\dagger}d_{j\downarrow} ^{\dagger}+\text{H.c.})+U_{l,j}n_{j\uparrow}n_{j\downarrow}.\end{split} \tag{2}\]
In Eqs. (1) and (2), \(d_{j\sigma}\) annihilates a spin-\(\sigma\) electron on QD \(j=1,2\) with single-particle energy \(\varepsilon_{j}\) relative to the chemical potential of the superconductors. We consider a single orbital on each QD and denote the total occupation on the \(j\)th QD with \(N_{j}=n_{j\uparrow}+n_{j\downarrow}\), where \(n_{j\sigma}=d_{j\sigma}^{\dagger}d_{j\sigma}\). A magnetic field induces a Zeeman splitting of \(2V_{z,j}\) between the spin states on each QD. Furthermore, \(\eta_{\uparrow(\downarrow)}=\mp 1\) such that the spin-up state is energetically favorable. The amplitudes for spin-conserving and spin-flip tunneling, which we choose to be real and positive, are given by \(t\) and \(t_{\text{so}}\), respectively. The spin-flip tunneling results from spin-orbit interactions (we take the spin-orbit field perpendicular to the Zeeman field [43]). Furthermore, we include proximity-induced superconductivity within the infinite-gap approximation with pairing terms of amplitude \(\Delta_{j}e^{i\phi_{j}}\), where \(\phi_{j}\) is the phase of the superconductor proximitizing QD \(j\)[44; 45; 46]. A magnetic flux \(\Phi\) controls the superconducting phase difference \(\delta\phi=\phi_{1}-\phi_{2}\). Finally, we include local Coulomb interactions \(U_{l,j}\) on each QD and nonlocal interactions \(U_{nl}\) between the QDs.
For simplicity, we consider \(V_{z,j},\Delta_{j}\), and \(U_{l,j}\) to be site-independent and drop their site index in the rest of the paper. However, we have verified that these assumptions do not qualitatively affect our results. Unless otherwise stated, we use \(t=\Delta/2\) and \(t_{\text{so}}=t/5\).
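To make the model concrete, the following minimal Python sketch (our own illustration, not part of the original work) builds the \(16\times 16\) many-body Hamiltonian of Eqs. (1) and (2) in a Jordan-Wigner representation of the four fermionic modes; the mode ordering and the gauge choice \(\phi_{1}=\delta\phi\), \(\phi_{2}=0\) are conventions of this sketch.

```python
import numpy as np

# Single-mode operators in the {|0>, |1>} occupation basis, plus the
# Pauli-Z factor used for Jordan-Wigner strings.
I2, sz = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilation operator for one mode

def mode_op(k, n=4):
    """Jordan-Wigner annihilation operator for fermionic mode k out of n."""
    out = np.array([[1.0]])
    for factor in [sz] * k + [a] + [I2] * (n - k - 1):
        out = np.kron(out, factor)
    return out

# Mode ordering (a convention of this sketch): d1up, d1dn, d2up, d2dn.
d = [mode_op(k) for k in range(4)]
dag = lambda M: M.conj().T

def hamiltonian(eps1, eps2, Vz, Delta, dphi, t, tso, Ul, Unl):
    """Many-body Hamiltonian of Eqs. (1)-(2), gauge phi1 = dphi, phi2 = 0."""
    H = np.zeros((16, 16), dtype=complex)
    for du, dd, eps, phi in [(d[0], d[1], eps1, dphi), (d[2], d[3], eps2, 0.0)]:
        nu, nd = dag(du) @ du, dag(dd) @ dd
        H += (eps - Vz) * nu + (eps + Vz) * nd        # eta_up = -1, eta_dn = +1
        pair = Delta * np.exp(1j * phi) * dag(du) @ dag(dd)
        H += pair + dag(pair)                         # local pairing, Eq. (2)
        H += Ul * nu @ nd                             # intra-QD Coulomb
    hop = t * (dag(d[0]) @ d[2] + dag(d[1]) @ d[3])   # spin-conserving tunneling
    so = tso * (dag(d[1]) @ d[2] - dag(d[0]) @ d[3])  # spin-flip tunneling
    H += hop + dag(hop) + so + dag(so)
    N1 = dag(d[0]) @ d[0] + dag(d[1]) @ d[1]
    N2 = dag(d[2]) @ d[2] + dag(d[3]) @ d[3]
    return H + Unl * N1 @ N2                          # inter-QD Coulomb
```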
The basic physics of the model is shown in Fig. 2. The energy difference \(\delta E\) between the lowest-energy odd and even states is plotted in Fig. 2(a) as a function of \(\varepsilon_{1}\) and \(\varepsilon_{2}\). The labels indicate the (QD1, QD2) ground-state occupations for vanishing \(\Delta,t\) and \(t_{\text{so}}\). Ground-state changes are accompanied by even-odd degeneracies (narrow white regions, \(\delta E=0\)). Sweet spots are found where two such degeneracy lines cross. We will investigate the conditions for such crossings below. Figure 2(b) shows the ground state parity (even or odd) around the upper left crossing in (a) and demonstrates that \(\delta\phi\) can be used to tune between a crossing (a sweet spot) and an avoided crossing (not a sweet spot).
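Maps like the one in Fig. 2(a) follow by diagonalizing this Hamiltonian separately in the even and odd fermion-parity sectors, since every term in Eq. (1) conserves parity. A short sketch building on the code above:

```python
# Fermion parity of each Fock basis state (popcount of occupied modes).
parity = np.array([bin(b).count("1") % 2 for b in range(16)])

def delta_E(H):
    """dE = E_odd - E_even between the two parity-sector ground states."""
    E_even = np.linalg.eigvalsh(H[np.ix_(parity == 0, parity == 0)])[0]
    E_odd = np.linalg.eigvalsh(H[np.ix_(parity == 1, parity == 1)])[0]
    return E_odd - E_even
```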
## III Noninteracting limit
In this section, we analyze the Hamiltonian in Eq. (1) with \(U_{l}=U_{nl}=0\). This allows for analytical results that serve as a starting point for the interacting case discussed in Sec. IV. We begin by showing how an effective Kitaev chain description emerges by considering the "Kitaev limit" \(V_{z},\Delta\gg t,t_{\text{so}}\). Then we discuss how the system can be tuned to a PMM sweet spot. The main conclusion is that control of the QD energy levels and the superconducting phase difference is sufficient to reach the sweet spot. Such a tuning procedure requires a Zeeman energy that is larger than the induced gap, but not by too much.
### The Kitaev limit
Figure 1: Setup consisting of two spinful QDs (1,2) with superconductivity induced by local tunneling to a bulk superconductor. The magnetic flux \(\Phi\) controls the phase difference between the induced superconducting pairing amplitudes, \(\delta\phi=\phi_{1}-\phi_{2}\). The QDs are coupled by both spin-conserving tunneling \(t\) and (spin-orbit induced) spin-flip tunneling \(t_{\text{so}}\).

Following Ref. [41], we explain how the Hamiltonian in Eq. (1) with \(U_{l}=U_{nl}=0\) and \(V_{z},\Delta\gg t,t_{\text{so}}\) maps to the minimal Kitaev chain
\[H_{K}=\varepsilon_{K,1}n_{K,1}+\varepsilon_{K,2}n_{K,2}+[t_{K}a_{2}^{\dagger}a_{1 }+\Delta_{K}a_{2}^{\dagger}a_{1}^{\dagger}+\text{H.c.}]. \tag{3}\]
Here, \(n_{K,j}=a_{j}^{\dagger}a_{j}\), where \(a_{j}\) annihilates a particle with energy \(\varepsilon_{K,j}\) at site \(j=1,2\) in the minimal Kitaev chain. By performing an appropriate gauge transformation, we can (in the two-site Kitaev model) choose the tunneling amplitude \(t_{K}\) and the amplitude of the \(p\)-wave pairing \(\Delta_{K}\) to be real and positive.
We first perform a Bogoliubov transformation of each QD. The BdG eigenstates of \(H_{j}\) become
\[a_{j} =\frac{e^{i\phi_{j}/2}}{\sqrt{2\beta_{j}}}\left(\sqrt{\beta_{j}- \varepsilon_{j}}\ d_{j\downarrow}^{\dagger}+e^{-i\phi_{j}}\sqrt{\beta_{j}+ \varepsilon_{j}}\ d_{j\uparrow}\right), \tag{4}\] \[b_{j} =\frac{e^{i\phi_{j}/2}}{\sqrt{2\beta_{j}}}\left(\sqrt{\beta_{j}- \varepsilon_{j}}\ d_{j\uparrow}^{\dagger}-e^{-i\phi_{j}}\sqrt{\beta_{j}+ \varepsilon_{j}}\ d_{j\downarrow}\right), \tag{5}\]
with energies
\[E_{a_{j}} =\beta_{j}-V_{z}, \tag{6}\] \[E_{b_{j}} =\beta_{j}+V_{z}, \tag{7}\]
where \(\beta_{j}=\sqrt{\varepsilon_{j}^{2}+\Delta^{2}}\) and \(j=1,2\).
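For a quadratic Hamiltonian, an eigenmode \(a\) with energy \(E\) satisfies the operator identity \([H,a]=-Ea\), which gives a direct numerical check of Eqs. (4) and (6) against the sketch in Sec. II (decoupled dots, \(U_{l}=U_{nl}=0\); the test values are arbitrary):

```python
# Check Eqs. (4) and (6) for QD 1: decoupled dots, U_l = U_nl = 0.
eps1, Vz, Delta, phi = 0.6, 1.25, 1.0, 0.4   # arbitrary test values
H0 = hamiltonian(eps1, 0.0, Vz, Delta, phi, t=0.0, tso=0.0, Ul=0.0, Unl=0.0)
beta = np.hypot(eps1, Delta)                 # beta_1 = sqrt(eps1^2 + Delta^2)
a1 = (np.exp(1j * phi / 2) / np.sqrt(2 * beta)) * (
    np.sqrt(beta - eps1) * dag(d[1])
    + np.exp(-1j * phi) * np.sqrt(beta + eps1) * d[0])
assert np.allclose(H0 @ a1 - a1 @ H0, -(beta - Vz) * a1)  # E_a1 = beta_1 - V_z
```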
By inverting Eqs. (4) and (5), the full Hamiltonian in Eq. (1) can be described in terms of the \(a_{j}\) and \(b_{j}\) operators. However, for \(V_{z}\gg t,t_{\text{so}}\), a low energy Hamiltonian can be found by projection onto the states with zero \(b_{j}\)-particles. As shown in Ref. [41], the projected Hamiltonian maps directly onto the Kitaev chain in terms of the low-energy \(a_{j}\) particles. The resulting effective Kitaev parameters (\(\varepsilon_{K,j},t_{K}\) and \(\Delta_{K}\)) depend on the physical parameters in Eq. (1), see below. However, we want to emphasize that a nonlocal pairing amplitude \(\Delta_{K}\) can be generated with only local proximity effect. The effective nonlocal pairing is generated by a third-order process, consisting of a local Andreev reflection onto a QD followed by one of the electrons tunneling to the other QD. Depending on how the QD energy levels are tuned, the electron either conserves or flips its spin during the tunneling process, see below.
### Tuning to the sweet spot
In the minimal Kitaev chain, one PMM localized entirely on each QD appears when tuning the system to the sweet spot \(\varepsilon_{K,1}=\varepsilon_{K,2}=0\) and \(t_{K}=\Delta_{K}\)[33]. To fulfill the first of these conditions, the \(a_{j}\) particles need to have zero energy (meaning that the corresponding state is aligned with the chemical potential of the bulk superconductor), i.e.,
\[\varepsilon_{K,j}=E_{a_{j}}=0\implies\sqrt{\varepsilon_{j}^{2}+\Delta^{2}}=V_ {z}, \tag{8}\]
see Eq. (6). Therefore, in the noninteracting model, the QD energy levels must be tuned such that
\[\varepsilon_{j}=\pm\sqrt{V_{z}^{2}-\Delta^{2}}\equiv\pm\varepsilon_{0}. \tag{9}\]
The solutions to \(\varepsilon_{j}\) in Eq. (9) can most easily be understood in the limit \(\Delta\to 0\) when the many-body eigenstates of \(H_{j}\) have definite charge and
\[a_{j}^{\dagger}\sim\begin{cases}d_{j\uparrow}^{\dagger},&\text{if }\varepsilon_{j}>0,\\ d_{j\downarrow}&\text{otherwise},\end{cases} \tag{10}\]
where we have dropped phase factors. Equation (10) tells us that, for the positive (negative) solution of \(\varepsilon_{j}\) in Eq. (9), one can add and remove a spin up (spin down) particle from QD \(j\) from the ground state without energy cost. Therefore, we refer to the choice of tuning the two QD levels to the same signs and the opposite signs in Eq. (9) as parallel and anti-parallel spin configurations, respectively. For a non-zero \(\Delta\), however, the excitations are instead given by Eq. (4) and consist of superpositions of particles and holes with opposite spins. Even though referring to the spins being anti-parallel or parallel is not entirely accurate in the general case, we use this terminology here.
Figure 2: Calculated stability diagrams of the locally proximitized double QD using \(V_{z}=1.25\Delta,U_{l}=2.5\Delta\), and \(U_{nl}=0.1\Delta\). a) \(\tanh\left(\delta E\right)\) as a function of the QD energy levels using the sweet spot phase difference \(\delta\phi=\delta\phi_{\star}\approx 0.66\pi\), where \(\delta E=E_{\text{odd}}-E_{\text{even}}\) is the energy difference between the odd and even parity ground states. The black box indicates the location of the anti-parallel spin configuration, which is our focus. The labels indicate the (QD1, QD2) ground-state occupation for \(\Delta=t=t_{\text{so}}=0\). b) Parity of the ground state as a function of the QD energy levels in the anti-parallel spin configuration, using the same parameters as in a), except the superconducting phase difference, which is varied by \(\pm\pi/2\).

In the anti-parallel spin configuration, the effective nonlocal pairing is generated by Andreev reflection followed by spin-conserving tunneling, while the effective \(t_{K}\) corresponds to spin-orbit induced spin-flip tunneling. Parallel spins, on the other hand, require a spin-flip tunneling process to generate an effective \(\Delta_{K}\), while the hopping term corresponds to spin-conserving tunneling. The anti-parallel spin configuration, therefore, amplifies the CAR-like process compared to parallel spins (if \(t>t_{\rm so}\)), which we will see is beneficial to create PMMs.
In the noninteracting case, anti-parallel spins mean that we tune the QD energy levels to \(\varepsilon_{2}=-\varepsilon_{1}=\varepsilon_{0}\), corresponding to the upper left corner in the stability diagram in Fig. 2(a) (although this plot includes interactions). The effective Kitaev parameters in Eq. (3) then become
\[\varepsilon_{K,1} =\varepsilon_{K,2}=0, \tag{11}\] \[t_{K} =t_{\rm so}\left|\cos\frac{\delta\phi}{2}+i\frac{\varepsilon_{0} }{V_{z}}\sin\frac{\delta\phi}{2}\right|,\] (12) \[\Delta_{K} =\frac{t\Delta}{V_{z}}\left|\sin\frac{\delta\phi}{2}\right|, \tag{13}\]
where we have taken the absolute values of the right-hand sides in Eqs. (12) and (13) since the phases can be gauged away. Note that \(\Delta_{K}\) decreases with increasing \(V_{z}\) unless we also increase \(\Delta\), so we should consider the limit \(V_{z},\Delta\gg t,t_{\rm so}\). Furthermore, we have relabeled \(t_{K}\longleftrightarrow\Delta_{K}\), which is the more natural labeling in the anti-parallel spin configuration. To see this, consider the terms proportional to \(a_{2}^{\dagger}a_{1}\) and \(a_{2}^{\dagger}a_{1}^{\dagger}\) in the projected Hamiltonian. Since \(a_{1}\approx d_{1\uparrow}\) and \(a_{2}^{\dagger}\approx d_{2\downarrow}\), we get \(a_{2}^{\dagger}a_{1}\approx d_{2\downarrow}d_{1\uparrow}\) and \(a_{2}^{\dagger}a_{1}^{\dagger}\approx d_{2\downarrow}d_{1\uparrow}^{\dagger}\), meaning that the tunneling term in the \(a\)-operators resembles nonlocal pairing in the original \(d\)-operators, and vice versa. The relabeling results in \(\Delta_{K}\propto t\Delta\), reinforcing the intuitive picture where the nonlocal pairing is generated by local Andreev reflection followed by spin-conserving tunneling between the QDs.
We also require \(t_{K}=\Delta_{K}\) to reach the sweet spot. From Eqs. (12) and (13), we note that \(t_{K}\) (\(\Delta_{K}\)) decreases (increases) monotonically with the superconducting phase difference \(\delta\phi\) as it is varied from \(0\) to \(\pi\), taking values within the ranges
\[t_{K} \in\left[t_{\rm so},\frac{t_{\rm so}\varepsilon_{0}}{V_{z}} \right], \tag{14}\] \[\Delta_{K} \in\left[0,\frac{t\Delta}{V_{z}}\right]. \tag{15}\]
Therefore, if
\[t_{\rm so}\varepsilon_{0}\leq t\Delta, \tag{16}\]
the superconducting phase difference can be used to tune to the sweet spot. Otherwise, \(t_{K}\) is always larger than \(\Delta_{K}\). For a given \(\Delta\), Eqs. (9) and (16) imply an upper bound for the Zeeman energy. Furthermore, according to Eq. (8), the Zeeman energy must be larger than \(\Delta\). \(V_{z}\) is therefore bounded by
\[\Delta\leq V_{z}\leq\Delta\sqrt{1+\left(\frac{t}{t_{\rm so}}\right)^{2}}\equiv V _{z,\rm max}. \tag{17}\]
If the inequality in Eq. (17) is fulfilled, the tuning procedure only involves the QD energy levels and the superconducting phase difference. Firstly, the QD levels can be tuned separately such that \(\varepsilon_{K,1/2}=0\). Then, tuning the superconducting phase difference to a sweet spot value \(\delta\phi_{\star}\) is sufficient to achieve \(\Delta_{K}=t_{K}\) and end up at the PMM sweet spot. The stability diagrams in Fig. 2(b) illustrate the tuning procedure. When increasing the superconducting phase difference, we can tune from having an even-parity dominated anti-crossing at \(\delta\phi<\delta\phi_{\star}\) (\(t_{K}>\Delta_{K}\)), to the PMM sweet spot at \(\delta\phi=\delta\phi_{\star}\) (\(t_{K}=\Delta_{K}\)), to an odd-parity dominated anti-crossing (\(t_{K}<\Delta_{K}\)) for even larger phase differences.
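Squaring and equating Eqs. (12) and (13), and using \(V_{z}^{2}-\varepsilon_{0}^{2}=\Delta^{2}\), gives the closed form \(\sin^{2}(\delta\phi_{\star}/2)=t_{\rm so}^{2}V_{z}^{2}/[\Delta^{2}(t^{2}+t_{\rm so}^{2})]\) for the Kitaev-limit sweet spot phase, which has a solution exactly when Eq. (17) holds. A small sketch of this estimate (our own helper, continuing the code above):

```python
def sweet_spot_phase(t, tso, Vz, Delta):
    """Kitaev-limit estimate of dphi_star from t_K = Delta_K, Eqs. (12)-(13).

    Returns None when V_z violates the bounds of Eq. (17).
    """
    if Vz < Delta:
        return None                 # eps_0 in Eq. (9) would be imaginary
    s = (tso * Vz) ** 2 / (Delta ** 2 * (t ** 2 + tso ** 2))
    if s > 1.0:                     # equivalent to Vz > V_{z,max}
        return None
    return 2.0 * np.arcsin(np.sqrt(s))

# Noninteracting estimate with the defaults t = Delta/2, t_so = t/5; note that
# Fig. 2 quotes dphi_star ~ 0.66*pi for the *interacting* full model instead.
print(sweet_spot_phase(t=0.5, tso=0.1, Vz=1.25, Delta=1.0))
```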
Since we measure parity in terms of electrons, the relation between having a \(\Delta_{K}\)- or \(t_{K}\)-dominated amplitude and the parity of the anti-crossing is opposite from the picture in the \(a_{j}\)-particles. This is a consequence of the anti-parallel spin configuration. For example, if \(t_{K}>\Delta_{K}\), the ground state is the odd parity state in terms of \(a_{j}\)-particles. However, there is an extra electron hiding at lower energy due to the anti-parallel spin arrangement, yielding an even electron-parity ground state in total.
### Energy gap and spin configuration
At the sweet spot in the minimal Kitaev chain, the degenerate ground states are separated by a gap \(E_{g}=2t_{K}=2\Delta_{K}\) to the next excited states [33]. To detect PMMs we need the gap to be much larger than the thermal energy. Furthermore, both qubit coherence times and the time-scale requirements for nonabelian operations will benefit from a large gap.
To provide an estimate of the resulting energy gap in our system, consider the situation when \(V_{z}=V_{z,\rm max}\). Then, \(t_{K}=\Delta_{K}\) at \(\delta\phi=\pi\), and we can obtain the energy gap by substituting \(V_{z}\) by \(V_{z,\rm max}\) in Eq. (15) and doubling the result:
\[E_{g}=\frac{2t\Delta}{V_{z,\rm max}}=\frac{2t}{\sqrt{1+(t/t_{\rm so})^{2}}}. \tag{18}\]
Compared with numerical results at the sweet spot in the full model, we find that Eq. (18) provides a good estimate of the gap in general.
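As a worked instance, with the default values \(t=\Delta/2\) and \(t_{\rm so}=t/5\) used throughout, Eq. (18) evaluates to

\[E_{g}=\frac{2t}{\sqrt{1+(t/t_{\rm so})^{2}}}=\frac{\Delta}{\sqrt{26}}\approx 0.196\,\Delta,\]

close to the gap \(E_{g}\approx 0.18\Delta\) quoted for the interacting sweet spot in Sec. V.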
If arranging the spins parallel instead, i.e., by tuning both QD energy levels to \(\varepsilon_{0}\) (or both to \(-\varepsilon_{0}\)), the bound on \(V_{z}\) in Eq. (17) and the energy gap in Eq. (18) are obtained by interchanging \(t\longleftrightarrow t_{\rm so}\). Anti-parallel spins, therefore, allow for larger Zeeman energies than parallel spins (if \(t>t_{\rm so}\)), while the energy gap is the same for both configurations. Since we have not found any advantages of the parallel spin configuration when \(t>t_{\rm so}\), we focus on the anti-parallel case. However, if \(t<t_{\rm so}\), the parallel spin configuration is superior, and we obtain almost identical results as for anti-parallel spins upon replacing \(t_{\rm so}/t\) by \(t/t_{\rm so}\).
### Relaxing the Kitaev limit
Due to the restrictions on the Zeeman energy in Eq. (17), we cannot increase \(V_{z}\) indefinitely without also increasing \(\Delta\), which in turn is bounded by the gap of the parent superconductor. Furthermore, superconductivity might get quenched by the large magnetic field even before the upper bound in Eq. (17). Another option to approach the Kitaev limit is to decrease the tunneling amplitude between the QDs. However, since \(E_{g}\propto t\) in Eq. (18), decreasing the tunneling amplitude (and keeping \(t_{\text{so}}/t\) constant) would eventually result in an inadequate energy gap. Practically, there is hence a trade-off between approaching the Kitaev limit (small \(t\)) on one hand and a large energy gap (large \(t\)) on the other.
It was shown in Ref. [35] that a large Zeeman energy is necessary to get well-separated PMMs, but our system is rather different and the precise relation between Zeeman energy and PMM separation might be different. Furthermore, we don't expect the expressions for the effective Kitaev parameters in Eqs. (6), (12) and (13) to be correct in general, since they were derived in the limit \(V_{z}\gg t,t_{\text{so}}\). Therefore, when relaxing the Kitaev limit, it becomes difficult to predict where a possible sweet spot is located. We will below introduce a PMM quality measure that we can numerically optimize to locate the best sweet spot and evaluate its quality.
## IV Interactions and Majorana quality
Having understood how the noninteracting model can be described by an effective minimal Kitaev chain in the Kitaev limit, we now discuss the influence of interactions. Then we define a PMM quality measure and discuss how we locate PMM sweet spots by numerical optimization. Finally, in Sec. IV.4, we use the optimization procedure to find experimentally relevant parameter regimes with high-quality PMMs.
### Local Coulomb interactions
With local Coulomb interactions, we can still intuitively understand the system as a minimal Kitaev chain in the Kitaev limit. However, local Coulomb interactions renormalize the effective Kitaev parameters, resulting in a new sweet spot location. By considering the many-body picture of \(H_{j}\) in Eq. (2), we can get a new estimate for the \(\varepsilon_{K,1}=\varepsilon_{K,2}=0\) sweet spot condition, providing an initial guess for the optimization procedure. We seek solutions of \(\varepsilon_{j}\) such that \(H_{j}\) has degenerate ground states. The ground state of \(H_{j}\) is either the pure spin-down state or the low-energy BCS-singlet. Solving for their degeneracy modifies the condition in Eq. (9) to
\[\varepsilon_{j}=\pm\sqrt{\left(V_{z}+\frac{U_{l}}{2}\right)^{2}-\Delta^{2}}- \frac{U_{l}}{2}. \tag{19}\]
To obtain the anti-parallel spin configuration, the QD levels are tuned to have opposite signs in Eq. (19). Furthermore, the lower-bound condition on \(V_{z}\) becomes \(V_{z}+U_{l}/2\geq\Delta\). Therefore, if \(U_{l}\geq 2\Delta\), there is no lower bound on \(V_{z}\) to achieve \(\varepsilon_{K,1}=\varepsilon_{K,2}=0\).
The local Coulomb interactions also affect the effective Kitaev tunneling \(t_{K}\) and \(p\)-wave pairing \(\Delta_{K}\), and we do not find simple, analytical expressions for them in the general case. Therefore, we cannot find an upper bound for \(V_{z}\) as in Eq. (17). As a rough estimate of the upper bound, we can consider the mean-field correction to the Zeeman energy due to the local Coulomb interaction \(V_{z}\to V_{z}+U_{l}/2\) and apply it to the noninteracting bound in Eq. (17). The estimated bound then becomes
\[\Delta\leq V_{z}+\frac{U_{l}}{2}\leq V_{z,\text{max}}. \tag{20}\]
### Nonlocal Coulomb interactions
To gain intuition about the effect of nonlocal interactions, we study the interacting minimal Kitaev chain
\[H_{K,\text{int}}=H_{K}+U_{K}n_{K,1}n_{K,2}, \tag{21}\]
where \(U_{K}\) is an intersite interaction. We now search for a PMM sweet spot in the interacting minimal Kitaev chain, where the odd and even parity ground states are degenerate, and one PMM is localized on each QD. As was shown in Ref. [47], such a sweet spot can be found at the point
\[\varepsilon_{K,1}=\varepsilon_{K,2}=-\frac{U_{K}}{2}, \tag{22}\] \[\Delta_{K}=t_{K}+\frac{U_{K}}{2}. \tag{23}\]
At the point in Eqs. (22) and (23), the odd and even parity ground states of \(H_{K,\text{int}}\) are degenerate, and all eigenstates are the same as at the sweet spot in the noninteracting case, analyzed in detail in Ref. [33]. Therefore, the same PMMs also appear in the interacting case. However, the PMMs in the interacting minimal Kitaev chain only map between the odd and even states in the ground state sector, not the whole spectrum (the excited states are not degenerate). Therefore, the PMMs in the interacting minimal Kitaev chain are weak Majoranas [48].
In the presence of nonlocal interactions \(U_{nl}\) in the full model, the sweet spot conditions are given by Eqs. (22) and (23), where \(U_{K}\) is an effective intersite interaction proportional to \(U_{nl}\). Furthermore, the nonlocal interactions renormalize the other effective Kitaev parameters. Therefore, we must rely on a numerical optimization procedure to locate PMM sweet spots when interactions are present.
### Majorana polarization and optimizing for the PMM sweet spot
To utilize an optimization procedure to locate PMM sweet spots, we need to define a PMM quality measure to optimize. In this work, we quantify PMM quality by ground state degeneracy and the Majorana polarization (MP), which provides a measure of the separation between the PMMs. A large MP is necessary to observe topologically protected nonabelian physics [49].
Since the Hamiltonian in Eq. (1) is complex, we need to generalize previous formulations of the MP [50, 35]. In Appendix A, we derive the expression
\[\text{MP}_{j}=\frac{\left|\sum_{\sigma,s}\left\langle e|\gamma_{j\sigma s}|o \right\rangle^{2}\right|}{\sum_{\sigma,s}\left|\left\langle e|\gamma_{j\sigma s }|o\right\rangle^{2}\right|}, \tag{24}\]
where \(|o\rangle\) and \(|e\rangle\) are the odd and even ground states and
\[\gamma_{j\sigma+} =d_{j\sigma}+d_{j\sigma}^{\dagger}, \tag{25}\] \[\gamma_{j\sigma-} =i(d_{j\sigma}-d_{j\sigma}^{\dagger}) \tag{26}\]
are the local Majorana operators on QD \(j\) with spin \(\sigma\). MP\({}_{j}\) takes values between \(0\) and \(1\), where \(1\) indicates that on QD \(j\), the lowest-energy excitation is entirely Majorana-like, while \(0\) indicates that it is entirely fermion-like. As our system is not symmetric, it is important to ensure that both MP\({}_{1}\) and MP\({}_{2}\) are large, so we define the total MP as
\[\text{MP}=\frac{\text{MP}_{1}+\text{MP}_{2}}{2}. \tag{27}\]
At the sweet spot, we would like the MP to be close to one and the energy difference between the odd and even ground states close to zero. For details on the numerical optimization, see Appendix B.
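A possible numerical implementation of Eqs. (24)-(27), reusing the Hamiltonian and parity machinery sketched in Sec. II (note that \(\mathrm{MP}_{j}\) is insensitive to the arbitrary overall phases of the sector ground states, as discussed in Appendix A):

```python
def sector_ground_state(H, p):
    """Ground state of parity sector p, embedded in the full Fock space."""
    idx = np.where(parity == p)[0]
    vals, vecs = np.linalg.eigh(H[np.ix_(idx, idx)])
    psi = np.zeros(len(H), dtype=complex)
    psi[idx] = vecs[:, 0]
    return psi

def majorana_polarization(H):
    """Total MP of Eq. (27), with MP_j computed from Eq. (24) for each QD."""
    e, o = sector_ground_state(H, 0), sector_ground_state(H, 1)
    mp = []
    for modes in [(0, 1), (2, 3)]:   # spin modes belonging to QD1 / QD2
        z2 = [np.vdot(e, g @ o) ** 2
              for k in modes
              for g in (d[k] + dag(d[k]), 1j * (d[k] - dag(d[k])))]
        mp.append(abs(sum(z2)) / sum(abs(z) for z in z2))
    return 0.5 * sum(mp)
```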
### Moderate Zeeman energies
Having separately discussed the effects of relaxing the Kitaev limit and of local and nonlocal interactions for creating PMMs in our model, we now study the system in its entirety. In particular, we want to find realistic parameter regimes such that we only need to fine-tune the QD energy levels and the phase difference to reach a PMM sweet spot. We will call such regions "PMM-compatible". To find PMM-compatible regimes, we perform the optimization procedure while varying \(U_{l}\) and either \(V_{z}\) or \(U_{nl}\) and see what regimes yield a large MP.
We first consider the case \(U_{nl}=0\). Varying the Zeeman energy \(V_{z}\) and the local Coulomb interaction \(U_{l}\), we optimize \(\varepsilon_{1},\varepsilon_{2}\) and \(\delta\phi\) for all combinations of \((U_{l},V_{z})\). Figure 3(a) shows the resulting heat map of \(1-\text{MP}\) at the optimized points. There is a distinct PMM-compatible region in the bright yellow and green area showing that we can create high-quality PMMs with moderate Zeeman energies. The PMM-compatible region is surrounded by darker blue regions with poor MP at large \(U_{l}\) and/or \(V_{z}\) in the top right and at small \(U_{l}\) and \(V_{z}\) in the bottom left. The lower dashed line in Fig. 3(a) shows the lower bound in Eq. (20), meaning that the \(\varepsilon_{K,1}=\varepsilon_{K,2}=0\) condition cannot be fulfilled for values of \(V_{z}\) and \(U_{l}\) below the line. This bound agrees well with the boundary to the PMM-compatible region.
Within the PMM-compatible region, increasing the Zeeman energy improves the MP. An increased local Coulomb interaction also improves the MP, and for \(V_{z}<\Delta\), a non-zero \(U_{l}\) is necessary to enter the PMM-compatible region. However, too large \(U_{l}\) and \(V_{z}\) both cause problems for tuning to a PMM sweet spot. We understand the problem for large \(V_{z}\) and/or \(U_{l}\) as the failure of fulfilling the \(t_{K}=\Delta_{K}\) condition while tuning \(\delta\phi\). In the Kitaev limit of the noninteracting model discussed in Sec. III, we derived a bound on \(V_{z}\) such that if \(V_{z}>V_{z,\text{max}}\), all \(\delta\phi\) yield \(t_{K}>\Delta_{K}\). We also estimated the corresponding upper bound in the presence of local Coulomb interactions in Eq. (20), which is included in Fig. 3(a) as the upper dashed line. To test if an insufficient \(\Delta_{K}\) is the problem for large \(U_{l}\), we study the purple marking in Fig. 3(a), lying outside the PMM-compatible region. In the corresponding stability diagram in Fig. 3(b), we see an anti-crossing with a dominating even ground state for all \(\delta\phi\). As discussed in Sec. III.2, an even-parity-dominated anti-crossing is consistent with \(t_{K}>\Delta_{K}\), indicating that the effective \(\Delta_{K}\) is too small to find a PMM sweet spot at large \(V_{z}\) and/or \(U_{l}\). In contrast, the blue marking in Fig. 3(a) corresponds to a crossing in the stability diagram in Fig. 3(b), indicating a sweet spot.
Next, we include nonlocal interactions \(U_{nl}\). In Fig. 4(a), we perform the optimization procedure at each point in the \((U_{l},U_{nl})\)-plane, fixing the Zeeman energy to \(V_{z}=1.25\Delta\). Despite nonlocal interactions, we still find a bright, PMM-compatible region. Figure 4(b) shows the stability diagrams at the blue and purple markings in Fig. 4(a). Note that the stability diagram corresponding to the high-MP case (blue marking) shows a tilted crossing in contrast to the straight crossing in the high-MP sweet spot in Fig. 3(b). The tilted crossing is an inherent feature of sweet spots in the interacting minimal Kitaev chain and is present even when MP \(=1\).
In Fig. 4(a), it is clear that the region with large \(U_{l}\) and/or \(U_{nl}\) is associated with poor MP. We seek a simple explanation for why large \(U_{nl}\) does not allow for PMM sweet spots based on the interacting Kitaev chain discussed in Sec. IV.2. In the interacting minimal Kitaev chain, the sweet spot condition of equal ECT and CAR becomes \(\Delta_{K}=t_{K}+U_{K}/2\), where \(U_{K}\propto U_{nl}\) in the Kitaev limit of our full system. Therefore, nonlocal interactions increase the minimum amplitude of \(\Delta_{K}\) required to reach a sweet spot, compared to when \(U_{nl}=0\). To support this reasoning, Fig. 4(b) shows a stability diagram also for the purple marking in Fig. 4(a). There, we see an even parity-dominated anti-crossing for all \(\delta\phi\), indicating that, indeed, \(\Delta_{K}<t_{K}+U_{K}/2\) for large nonlocal interactions.
In Appendix C, we provide additional data related to Figs. 3 and 4, consisting of heat maps of the optimized \(\varepsilon_{1},\varepsilon_{2}\), and \(\delta\phi\). Furthermore, we have calculated how close to degenerate the odd and even ground states are at the optimized sweet spots, as well as the energy gap to the lowest excited state (data not shown). Inside the PMM-compatible regions, the degeneracy is never broken by more than \(10^{-3}\Delta\) and the gap to excited states is rather constant and approximately given by Eq. (18). Therefore, we can trust that the optimization does not improve the MP by decreasing the energy gap or by failing to have degenerate ground states.
Finally, we briefly discuss the effect of varying the tunneling amplitudes \(t\) and \(t_{\rm so}\). Firstly, increasing the tunneling amplitudes while keeping \(t_{\rm so}/t\) fixed improves the energy gap but results in a smaller MP in the PMM-compatible region. Secondly, if \(t_{\rm so}/t\) is increased (fixing \(t\)), the energy gap improves, but the PMM-compatible region becomes smaller. This behavior can be understood in the Kitaev limit in the noninteracting model, where Eqs. (17) and (18) imply that an increased fraction of spin-orbit tunneling increases the gap but decreases \(V_{z,\rm max}\).
## V Conclusions
In this work, we have proposed to engineer a minimal Kitaev chain in a double QD with local superconducting proximity effect and spin-orbit interaction. Studying a spinful and fully interacting model, we have shown that tuning to a PMM sweet spot only requires precise control of the QD energy levels and the superconducting phase difference between the two QDs. Such a tuning procedure works if the Zeeman energy and the interactions are within a "PMM-compatible" region, where the local Coulomb interactions and the Zeeman energy are not too small or too large, and the nonlocal Coulomb interaction is small. Increasing the tunneling amplitude (keeping \(t_{\rm so}/t\) constant) increases the energy gap to excited states, but lowers the MP. Increasing \(t_{\rm so}\) while keeping \(t\) constant also increases the gap, but decreases the size of the PMM-compatible region.
As an example of a realistic set of parameter values with a sweet spot with reasonable MP and an adequate energy gap, consider the blue marking in Fig. 4. There, \(V_{z}=1.25\Delta,U_{l}=2\Delta\) and \(U_{nl}=0.2\Delta\), which results in a sweet spot with MP \(\approx 0.95\) and \(E_{g}\approx 0.18\Delta\). A larger \(V_{z}\) allows for higher MP. We have compared our model to the one in Ref. [35], where ECT and CAR are mediated by an intermediate QD coupled to a superconductor. We use the same values for the Zeeman energy, local Coulomb interaction, and the ratio between spin-conserving and spin-flip tunneling, but adjust the amplitude of the latter two so that the energy gap at the sweet spot is similar in both models. As long as one stays within the PMM-compatible region, the two models give similar values (within a few percent) for the MP.
Figure 3: a) \(1-\mathrm{MP}\) as a function of \(U_{l}\) and \(V_{z}\), using \(U_{nl}=0\). At each point, \(\varepsilon_{1},\varepsilon_{2}\), and \(\delta\phi\) are optimized to find a PMM sweet spot. The dashed lines represent the upper and lower bounds in Eq. (20), estimating where we can tune to a PMM sweet spot. For each marking in a), a stability diagram indicating the ground state parity is shown in b). The stability diagrams are centered around the optimized QD energy levels while using the optimized phase difference. Furthermore, the MP at each marking is shown above the corresponding stability diagram.

Figure 4: a) \(1-\mathrm{MP}\) as a function of \(U_{l}\) and \(U_{nl}\), using \(V_{z}=1.25\Delta\). At each point, \(\varepsilon_{1},\varepsilon_{2}\), and \(\delta\phi\) are optimized to find a PMM sweet spot. Furthermore, the region \(U_{nl}>U_{l}\) is blurred since it is unphysical in a capacitive model [51]. b) The MP at the sweet spot and the stability diagram showing the ground state parity at each marking in a).

One long-term future goal for minimal Kitaev chain platforms is to create longer Kitaev chains that host topologically protected MBSs. A factor to consider when going to longer chains is that the phases of the effective Kitaev tunneling and \(p\)-wave pairing amplitudes cannot be removed by a gauge transformation. Since we tune the superconducting phase in the system, we will naturally acquire phases on the effective Kitaev parameters, which can suppress the resulting energy gap [32]. However, the sweet spot condition relating the effective Kitaev tunneling and pairing amplitudes still only depends on their absolute values.
To conclude, we emphasize that our proposal does not depend on an intermediate superconductor between the two QDs. Instead, it only relies on local superconducting proximity effect, which lowers the experimental barrier for minimal Kitaev chain platforms. The lack of an intermediate superconductor brings nonlocal interactions between the QDs into play. However, we have shown that PMMs can still be created in a fully interacting system with a simple tuning procedure. Therefore, we are hopeful that the locally proximitized double QD will provide an alternative platform for minimal Kitaev chains and next-generation experiments with PMMs.
###### Acknowledgements.
We acknowledge enlightening discussions with Ruben Seoane Souto and Athanasios Tsintzis and funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 856526, the Swedish Research Council under Grant Agreement No. 2020-03412, and NanoLund.
## Appendix A Majorana polarization
In this section, we derive an expression for the Majorana polarization (MP), and the unnormalized MP (MPU) which provide two different measures of the separation between Majoranas. Previous formulations of the MP [35; 50] work for models where the ground states can be chosen to be real. Since we tune the superconducting phase, we require a more general formulation.
This derivation works by extracting the MBSs from the odd and even ground states and then defining a measure of their separation. One must keep in mind that the relative phase between the odd and even ground states influences the form of the Majorana operators that map between them. However, because of superselection, this phase is unphysical. We parametrize this freedom by \(\theta\) and in the end set it to whatever maximizes the separation between the MBSs.
The Majorana operators map between the odd and even ground states as
\[\ket{o}=e^{-i\theta}\gamma\ket{e}, \tag{10}\]
where \(\theta\) is the phase described above. Define the local Majorana operators
\[\gamma_{n+} \equiv d_{n}+d_{n}^{\dagger}, \tag{11}\] \[\gamma_{n-} \equiv i(d_{n}-d_{n}^{\dagger}), \tag{12}\]
where \(n\) is a multi-index referring to, e.g., the spin and site indices.
We assume that \(\gamma\) can be expressed as a linear combination of the local Majorana operators as
\[\gamma=\sum_{n,s}x_{n,s}\gamma_{n,s}, \tag{13}\]
where \(x_{n,s}\) are real. Strictly speaking, this derivation holds only for noninteracting theories, as we impose a restriction to single-particle Majoranas in our ansatz. We assume that corrections coming from interactions are negligible.
Combining Eqs. (10) and (13), leads to
\[x_{n,s}=\text{Re}\big{\{}e^{i\theta}z_{n,s}\big{\}}, \tag{14}\]
where we have defined
\[z_{n,s}=\bra{e}\gamma_{n,s}\ket{o}. \tag{15}\]
Choosing two different values for \(\theta\), with \(\Delta\theta=\pi/2\), gives two different Majorana operators. The wavefunction of the other Majorana is then given by the imaginary part as
\[y_{n,s}=\text{Im}\big{\{}e^{i\theta}z_{n,s}\big{\}}. \tag{16}\]
The better separated these two Majoranas are from each other, the better protected the system is from perturbations. Their separation \(S_{R}\) in a region \(R\) can be measured by
\[S_{R}(\theta)=\sum_{s,n\in R}\left(x_{n,s}^{2}-y_{n,s}^{2}\right)=\sum_{s,n\in R}\text{Re} \big{\{}e^{2i\theta}z_{n,s}^{2}\big{\}}, \tag{17}\]
which depends on the phase \(\theta\). Since \(\text{Re}\big{\{}e^{2i\theta}w\big{\}}\leq|w|\) for any complex \(w\), with equality when \(2\theta=-\arg w\), we can choose this phase so that the separation is maximal, where it then takes the value
\[\max_{\theta}S_{R}(\theta)=\left|\sum_{s,n\in R}z_{n,s}^{2}\right|\equiv \text{MPU}_{R}. \tag{18}\]
MPU is closely connected to the MP by
\[\text{MP}_{R}=\frac{\text{MPU}_{R}}{\sum_{s,n\in R}|z_{n,s}|^{2}}, \tag{19}\]
which reduces to the MP definition in Refs. [35; 50] when the ground states can be chosen to be real. From Eq. (19), it is clear why we refer to MPU as the unnormalized MP.
## Appendix B Optimization
By tuning \(\varepsilon_{1}\), \(\varepsilon_{2}\) and \(\delta\phi\), we seek a sweet spot where there is an exact energy degeneracy between the odd and even ground states and well-separated Majorana modes. In principle, one can impose an exact ground state degeneracy since there is no avoided crossing between odd and even states. In practice, we use a penalty factor that ensures that \(|\delta E|<10^{-3}\Delta\).
To quantify the separation of Majorana modes during the optimization, we use the symmetric _unnormalized_ Majorana polarization \(\text{MPU}=(\text{MPU}_{1}+\text{MPU}_{2})/2\). We found that the unnormalized version is more robust than the normalized version when the energy levels are tuned independently. In the PMM-compatible region it doesn't matter which measure is used; both measures result in very similar sweet spots.
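A minimal version of this optimization, reusing `hamiltonian`, `delta_E`, and `majorana_polarization` from the earlier sketches (the objective, penalty weight, optimizer, and starting point below are illustrative assumptions rather than the actual implementation, and the normalized MP is used in place of MPU for simplicity):

```python
from scipy.optimize import minimize

Vz, Delta, t, tso, Ul, Unl = 1.25, 1.0, 0.5, 0.1, 2.5, 0.1

def objective(p):
    eps1, eps2, dphi = p
    H = hamiltonian(eps1, eps2, Vz, Delta, dphi, t, tso, Ul, Unl)
    # soft penalty keeping |dE| below 1e-3 * Delta, as described above
    penalty = 1e3 * max(0.0, abs(delta_E(H)) - 1e-3 * Delta)
    return (1.0 - majorana_polarization(H)) + penalty

# Starting guess from the interacting level condition, Eq. (19), in the
# anti-parallel configuration (eps1 < 0 < eps2), with dphi from Fig. 2.
e0 = np.sqrt((Vz + Ul / 2) ** 2 - Delta ** 2)
res = minimize(objective, x0=[-e0 - Ul / 2, e0 - Ul / 2, 0.66 * np.pi],
               method="Nelder-Mead")
print(res.x, 1.0 - res.fun)   # sweet spot parameters and (roughly) the MP
```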
The code to reproduce our calculations can be found at [52].
## Appendix C Supplementary results
Figure 5 shows how \(\delta\phi\), \(\varepsilon_{1}\) and \(\varepsilon_{2}\) at the optimized sweet spot vary with the Zeeman splitting and the interactions. Note how \(\delta\phi\) gradually increases when approaching the boundary to the PMM-compatible region and that \(\delta\phi=\pi\) almost everywhere outside of it. From Sec. III.2, we know that \(\Delta_{K}\) increases monotonically with \(\delta\phi\) between \(0\) and \(\pi\) in the Kitaev limit of the noninteracting model. We can hence understand the behavior of \(\delta\phi\) in Fig. 5 as follows. Inside the PMM-compatible region, the \(\delta\phi\) which fulfills the sweet spot condition \(\Delta_{K}=t_{K}+U_{K}/2\) gradually increases until \(\delta\phi=\pi\) when hitting the boundary. Outside of the PMM-compatible region, the optimization finds \(\delta\phi=\pi\) to maximize \(\Delta_{K}\), but it still is not enough to fulfill the sweet spot condition. See the discussion in Sec. IV.4.
|
2302.03727 | A Cryogenic Readout IC with 100 KSPS in-Pixel ADC for Skipper
CCD-in-CMOS Sensors | The Skipper CCD-in-CMOS Parallel Read-Out Circuit (SPROCKET) is a
mixed-signal front-end design for the readout of Skipper CCD-in-CMOS image
sensors. SPROCKET is fabricated in a 65 nm CMOS process and each pixel occupies
a 50$\mu$m $\times$ 50$\mu$m footprint. SPROCKET is intended to be
heterogeneously integrated with a Skipper-in-CMOS sensor array, such that one
readout pixel is connected to a multiplexed array of nine Skipper-in-CMOS
pixels to enable massively parallel readout. The front-end includes a variable
gain preamplifier, a correlated double sampling circuit, and a 10-bit serial
successive approximation register (SAR) ADC. The circuit achieves a sample rate
of 100 ksps with 0.48 $\mathrm{e^-_{rms}}$ equivalent noise at the input to the
ADC. SPROCKET achieves a maximum dynamic range of 9,000 $e^-$ at the lowest
gain setting (or 900 $e^-$ at the lowest noise setting). The circuit operates
at 100 Kelvin with a power consumption of 40 $\mu W$ per pixel. A SPROCKET test
chip was submitted in September 2022, and test results will be presented at the
conference. | Adam Quinn, Manuel B. Valentin, Thomas Zimmerman, Davide Braga, Seda Memik, Farah Fahim | 2023-02-07T19:47:46Z | http://arxiv.org/abs/2302.03727v1 | # A Cryogenic Readout IC with 100 Ksps in-Pixel ADC for Skipper CCD-in-CMOS Sensors
###### Abstract
The Skipper CCD-in-CMOS Parallel Read-Out Circuit (SPROCKET) is a mixed-signal front-end design for the readout of Skipper CCD-in-CMOS image sensors. SPROCKET is fabricated in a 65 nm CMOS process and each pixel occupies a \(50\,\mathrm{\SIUnitSymbolMicro m}\times 50\,\mathrm{\SIUnitSymbolMicro m}\) footprint. SPROCKET is intended to be heterogeneously integrated with a Skipper-in-CMOS sensor array, such that one readout pixel is connected to a multiplexed array of nine Skipper-in-CMOS pixels to enable massively parallel readout. The front-end includes a variable gain preamplifier, a correlated double sampling circuit, and a 10-bit serial successive approximation register (SAR) ADC. The circuit achieves a sample rate of 100 ksps with 0.48 \(\mathrm{e^{-}_{rms}}\) equivalent noise at the input to the ADC. SPROCKET achieves a maximum dynamic range of 9,000 e\({}^{-}\) at the lowest gain setting (or 900 e\({}^{-}\) at the lowest noise setting). The circuit operates at 100 Kelvin with a power consumption of 40 \(\,\mathrm{\SIUnitSymbolMicro W}\) per pixel. A SPROCKET test chip was submitted in September 2022, and test results will be presented at the conference.
## I Introduction
Future high energy physics (HEP) and dark matter detection experiments [1], as well as quantum imaging applications, will continue to require extremely sensitive and low-noise particle detectors with larger area and thus higher data throughput. Charge-Coupled Device (CCD) cameras offer excellent performance in a range of scientific imaging applications [2][3]. In particular, Skipper-CCDs, and the Skipper-in-CMOS sensor currently being developed by Fermilab and SLAC, are a highly attractive class of detectors because they allow sub-single-electron noise to be achieved by reading out the same packet of charge many times [4][5][6]. However, traditional monolithic architectures in which charge is read out only from the edge of the CCD array cannot scale to megapixel array sizes without severely compromising readout speed. A promising alternative is offered by "hybrid" readout architectures which integrate a readout integrated circuit (ROIC) directly beneath the image sensor, thus allowing each Skipper CCD-in-CMOS pixel (or a multiplexed group of several pixels) to be bump bonded to a dedicated readout circuit (see Fig. 2).
Integrating readout electronics per-pixel imposes severe constraints on the design of the analog front-end [7]. The front-end must be compact and consume low static and dynamic power, particularly if operated at cryogenic temperatures. These constraints must be met without compromising the noise performance of the detector. To trade between area and readout speed, SPROCKET is designed to interface with a multiplexed block of sixteen pixels. This allows a readout circuit area of \(60\,\mathrm{\SIUnitSymbolMicro m}\times 60\,\mathrm{\SIUnitSymbolMicro m}\) for a pixel footprint of \(15\,\mathrm{\SIUnitSymbolMicro m}\times 15\,\mathrm{\SIUnitSymbolMicro m}\).
The first SPROCKET test chip, which is based on the design described in [8] with modifications to increase sampling rate by a factor of three, includes two 32x32 mini-arrays of pixels with per-array high-speed readout [9], as well as a test structure for de-embedding noise performance. This test chip features a pixel area of \(50\,\mathrm{\SIUnitSymbolMicro m}\times 50\,\mathrm{\SIUnitSymbolMicro m}\); this will be expanded to \(60\,\mathrm{\SIUnitSymbolMicro m}\times 60\,\mathrm{\SIUnitSymbolMicro m}\) for the final tapeout to enable bump bonding to the Skipper-CCD-in-CMOS ASIC. The mini-array architecture is shown in Fig. 1. The following sections describe the pixel design architecture and simulation results, a brief discussion of array-level integration, and directions for future work.
## II Design Architecture
Fig. 1: Functional block diagram of one SPROCKET mini-array consisting of 1024 pixels, including readout by a synchronous binary tree [9]

Fig. 2: Illustration of SPROCKET’s hybrid integration approach

Fig. 3 shows a schematic of SPROCKET. When the ROIC is bump bonded to a Skipper CCD-in-CMOS sensor, the node labeled "Bump Bond" will be connected to the source of the integrated source follower in the sensor's output stage. A bias current of approximately 5 \(\mu\)A will flow from the integrated source follower through the bump bond and into the NMOS termination shown in SPROCKET, such that the input to SPROCKET is a voltage-mode signal. The three stages of the SPROCKET front-end (preamplifier, correlated double-sampling circuit, and ADC) are presented in the following sections in the order of signal processing.
### _Preamplifier_
A preamplifier is placed immediately at the input of SPROCKET in order to minimize the system noise figure. The core of the amplifier is a gain-boosted cascode, and its gain is determined by the capacitive feedback network composed of _Cin_ and _Cfn_. A reset switch provides a bias to the high-impedance node at the input to the amplifier. _Cin_ and _Cfn_ are implemented as identical 3 fF metal-oxide-metal (MOM) capacitors, giving approximately unity gain.
The preamplifier has two special modes of operation. In _high-gain mode_, an additional 27 fF capacitor _Crange2_ (not shown) is switched in parallel with _Cin_. This increases the gain of the preamplifier to 10x. In _inject mode_, the bump bond node is directly connected to the global _Inject_ net, which allows the injection of a test signal.
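These gain settings follow from the usual closed-loop gain of a capacitive-feedback amplifier, \(|A_{v}|\approx C_{in,\text{tot}}/C_{fn}\) (a standard relation, quoted here for orientation rather than taken from [8]):

\[|A_{v}|\approx\frac{3\,\text{fF}}{3\,\text{fF}}=1,\qquad|A_{v,\text{high-gain}}|\approx\frac{(3+27)\,\text{fF}}{3\,\text{fF}}=10.\]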
### _Correlated Double-Sampling Circuit_
The preamplifier is immediately followed by a correlated double-sampling (CDS) circuit composed of two sampling capacitors _Cpre_ and _Csamp_ and their associated switches.
In the Pre-Sample phase, the CCD pedestal level is stored across _Cpre_, and in the Sample phase the signal (including pedestal) is stored across _Csamp_. The resulting voltage at the comparator input is:
\[V\left(C_{samp}\right)-V\left(C_{pre}\right)=\left(V_{sig}+V_{ped}\right)-V_{ ped}=V_{sig}\]
The complement switch \(\overline{\phi_{pre}}\) disconnects _Csamp_ from the output of the preamplifier during the Pre-Sample phase. This switch ensures that in the Pre-Sample phase, the preamplifier sees only the capacitive load of _Cpre_, rather than _Cpre_ + _Csamp_. Both _Csamp_ and _Cpre_ are relatively large (\(>100\) fF), so this technique avoids over-design in the preamplifier and ensures better pedestal cancellation by presenting similar impedances to the preamplifier in both phases [10].
The sample and pre-sample switches are implemented as transmission gates to mitigate charge injection. Residual charge injection contributes an estimated \(<0.5\) mV non-linearity over the dynamic range of the ADC.
### _Zero Input-Capacitance Comparator_
The sampled voltage serves as the input to a zero input-capacitance comparator, first presented in [11], which forms the core of the SPROCKET ADC. The novel contribution of this comparator is the use of input buffers to generate level-shifted versions of the comparator's positive and negative inputs, _OutBufp_ and _OutBufm_. These buffered signals can be used to shield the input nodes as shown in Fig. 3 because any capacitance between the input node and its corresponding buffered signal is bootstrapped. This technique has two major benefits: First, by tying the well potential of the buffer's input device to the buffered signal, nearly zero effective input capacitance is achieved, mitigating charge sharing of the sampled voltage. Second, trimming of \(C_{L}\) and \(C_{R}\) is accomplished by connecting trim capacitors to _OutBufm_ instead of ground when they are not selected, bootstrapping them.
Fig. 3: Simplified schematic of SPROCKET, modified from [8], with example waveforms from one 10-bit ADC acquisition.
Clearly, _OutBufm_ drives a much larger capacitive load than _OutBufp_. For this reason, the design of the comparator is augmented in SPROCKET by placing an additional common-drain buffer between _OutBufm_ and the trim capacitors. The additional driving capability of this buffer, along with modifications to the doping of switch devices in the capacitive DAC, allows the SAR ADC to operate at 100 KSPS, approximately four times faster than the ADC in [11].
### _Compact 100 KSPS Serial SAR ADC_
The architecture of the SPROCKET ADC is a ten bit serial successive approximation register (SAR) ADC. In order to achieve ten bit resolution within the power and area constraints of an in-pixel front-end, the ADC generates the successive approximation voltages using a charge redistribution DAC based on two capacitors \(C_{R}\) and \(C_{L}\). Six binary weighted trim capacitors are used to tune \(C_{L}=C_{R}\).
Once the capacitors are trimmed to have equal values, the ADC generates a reference voltage by briefly asserting _CapHi_ or _CapLo_ to charge \(C_{R}\) either to \(V_{ref}\) or to zero volts, and then asserting _Qequal_ to short the positive terminals of \(C_{R}\) and \(C_{L}\) together [11]. The final voltage developed at the positive terminal of \(C_{L}\) (the negative input to the comparator) after \(N\) phases is given by:
\[V_{C_{L}}(N)=\sum_{i=1}^{N}\frac{b_{i}\,V_{ref}}{2^{\,N-i+1}},\qquad b_{i}=\begin{cases}1,&\text{\emph{CapHi} asserted in phase }i,\\ 0,&\text{\emph{CapLo} asserted in phase }i.\end{cases}\]
The sequence and control logic, which performs functions such as shift left, shift right, and parallel data transfer, is shared among 32 ADCs. The data storage register and the DAC control register depend on the amount of charge deposited per pixel and contain unique data; therefore, they cannot be shared across pixels. However, since the DAC control register effectively uses a subset of the information in the data storage register, the circuit utilizes tristate buffers to reuse the data without duplicating the logic. Furthermore, in order to reduce the dynamic power consumption, the digital switching activity is limited to clocking only one register at a time by using a round-robin, "walking one" based architecture which sequentially selects the data to be used for the serial DAC operation as well as the register for latching the ADC output.
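As an illustration of the conversion scheme, a behavioral Python model (our own sketch, not the chip's actual control sequence): each phase charges \(C_{R}\) to \(V_{ref}\) or ground, the _Qequal_ charge sharing halves-and-accumulates, and regenerating the partial word serially for each bit trial yields an MSB-first binary search.

```python
def serial_dac(bits, vref=1.0):
    """Voltage on C_L after applying the phases in order (C_L = C_R assumed).

    Each phase charges C_R to vref (bit = 1) or 0 V (bit = 0) via CapHi/CapLo,
    then Qequal shorts the positive plates, averaging the two voltages.
    """
    v = 0.0
    for b in bits:
        v = 0.5 * (v + (vref if b else 0.0))
    return v

def sar_convert(vin, vref=1.0, nbits=10):
    """MSB-first successive approximation built on the serial DAC above."""
    word = [0] * nbits                     # word[0] = MSB, weight vref/2
    for k in range(nbits):
        word[k] = 1                        # tentatively set the next bit
        # regenerate the trial voltage serially; applying the LSB first gives
        # the most significant bits the largest weights (halved fewest times)
        if serial_dac(reversed(word), vref) > vin:
            word[k] = 0                    # comparator: trial too high
    return word

assert serial_dac([1, 0, 1]) == 0.625      # 1/2**3 + 0/2**2 + 1/2**1
assert sar_convert(0.3) == [0, 1, 0, 0, 1, 1, 0, 0, 1, 1]  # floor(0.3*1024)=307
```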
## III Simulation Results
The performance of SPROCKET has been verified using post-parasitic-extraction simulations. A Skipper CCD-in-CMOS sensor conversion gain of approximately \(110\,\mu V/e^{-}\) is based on conservative simulation results, and is used to convert voltage-domain results to a number of electrons. Table I summarizes the key specifications for SPROCKET and the results achieved in simulation.
## IV Array-Level Integration of SPROCKET
The SPROCKET front-end is monolithically integrated with a digital back-end. 2x2 clusters of SPROCKET front-ends are grouped into analog "islands" with separated deep N-wells, and the space between these N-wells is filled by synthesized digital logic. The back-end distributes digital control signals, and it contains six calibration registers per pixel for DAC trimming. It also implements the SAR operation described above and routes ADC output data to a serializer and LVDS transmitter on the periphery for off-chip transmission. Sequential read-out of all pixels is possible, but a parallel effort is designing compression algorithms to improve read-out speed.
A test chip was submitted in September 2022 containing two 32x32 mini-arrays of SPROCKET pixels as well as a test structure for the analog front-end. A layout capture of the full test chip is shown in Fig. 6, while a single SPROCKET pixel with annotated blocks is shown in Fig. 5.
In this prototype, the two compression algorithms implemented in the two mini-arrays are (1) full readout, i.e. no compression, and (2) zero-suppressed readout, in which only pixels with non-zero data are read. All control signals and reference voltages are provided off-chip, and reference currents are mirrored by an on-chip bias block to provide front-end bias voltages.
## V Test Results
The compact serial SAR ADC design described above was prototyped in a previous design [8]. Testing demonstrated that DNL \(<\) 0.5 LSB was achievable with an increase in tuning capacitance, which was implemented in SPROCKET.
The September 2022 test chip has been received, and testing is currently underway. Although this chip will not be heterogeneously integrated to a Skipper-CCD-in-CMOS sensor, several array pixels are connected to wirebond pads, enabling single-channel testing with a separate Skipper-CCD-in-CMOS chip which has been developed by Fermilab. Test results will be presented at the conference.
## VI Conclusions and Future Work
The SPROCKET 65nm ASIC delivers a low-noise, low-power, area-constrained solution for the massively parallel readout of Skipper CCD-in-CMOS arrays. The design has been fabricated and is currently undergoing test.
Future SPROCKET chips that are already in design will expand the readout speed by allowing the analog pile-up of multiple samples before digitization, and by implementing more sophisticated compression algorithms. Future goals include the integration of analog references and digital control signals on-chip and expansion of the array size, with the ultimate goal of producing a full-reticle Readout IC and heterogeneously integrating it with a co-designed Skipper-CCD-in-CMOS image sensor.
|
2301.02169 | Computational analysis of NM-polynomial based topological indices and
graph-entropies of carbon nanotube Y-junctions | Carbon nanotube Y-junctions are of great interest to the next generation of
innovative multi-terminal nanodevices. Topological indices are
graph-theoretically based parameters that describe various structural
properties of a chemical molecule. The entropy of a graph is a topological
descriptor that serves to characterize the complexity of the underlying
molecular graph. The concept of entropy is a physical property of a
thermodynamic system. Graph entropies are the essential thermophysical
quantities defined for various graph invariants and are applied to measure the
heterogeneity and relative stabilities of molecules. In this paper, several
neighborhood degree sum-based topological indices including graph-based
entropies of carbon nanotube Y-junction graphs are computed. | Sohan Lal, Vijay Kumar Bhat, Sahil Sharma | 2023-01-03T12:41:57Z | http://arxiv.org/abs/2301.02169v1 | ###### Abstract
Carbon nanotube Y-junctions are of great interest to the next generation of innovative multi-terminal nanodevices. Topological indices are graph-theoretically based parameters that describe various structural properties of a chemical molecule. The entropy of a graph is a topological descriptor that serves to characterize the complexity of the underlying molecular graph. The concept of entropy is a physical property of a thermodynamic system. Graph entropies are the essential thermophysical quantities defined for various graph invariants and are applied to measure the heterogeneity and relative stabilities of molecules. In this paper, several neighborhood degree sum-based topological indices including graph-based entropies of carbon nanotube Y-junction graphs are computed.
**Computational analysis of NM-polynomial based topological indices and graph-entropies of carbon nanotube Y-junctions**
Sohan Lal\({}^{1}\), Vijay Kumar Bhat\({}^{1,\ast}\), Sahil Sharma\({}^{1}\)
\({}^{1}\)School of Mathematics, Shri Mata Vaishno Devi University,
Katra-182320, Jammu and Kashmir, India.
[email protected], [email protected], [email protected]
**Keywords:** Armchair carbon nanotube, graph entropy, NM-polynomial, topological indices, Y-junction graph.
**MSC (2020):** 05C10, 05C35, 05C90
## 1 Introduction
Nanotechnology is currently popular because of its evolving electron-transfer properties and low-cost implementation. Nanotubes [1] were discovered in 1985 and carbon nanotubes [2] in 1991. In nanoscience and technology, branched (non-straight) carbon nanotubes such as L-, T-, X-, and Y-junctions have many applications in electronic devices, such as three-terminal transistors, multi-terminal nanoelectronics, switches, amplifiers, etc. [3, 4, 5, 6, 7, 8]. These junctions are a great option for the production of nanoscale electronic devices with better switching and reliable transport properties at room temperature. For more applications of carbon nanotube Y-junctions, we refer to [9, 10, 11].
The first proposed branched carbon nanotube was Y-shaped, commonly known as a Y-junction or three-terminal junction. These junctions are classified as armchair, zig-zag, or chiral depending on the chirality of the connected carbon nanotubes. They can also be single-walled or multi-walled, symmetric or asymmetric, capped or uncapped. A carbon nanotube is called uncapped if both of its ends are open. A Y-junction is called symmetric if the nanotubes joining in the Y shape are identical and the heptagons appear isolated and are distributed symmetrically. For various symmetric and asymmetric carbon nanotube Y-junctions, we refer to [12, 13, 14, 15].
A carbon nanotube Y-junction is formed by joining three identical carbon nanotubes in a Y-shaped pattern. These junctions contain exactly six hexagons as well as heptagons at the branching points. The first structural model of symmetrical single-walled armchair carbon nanotube Y-junctions was proposed by Chernozatonskii [16] and Scuseria [17], independently, in 1992. These junctions were experimentally observed [18] in 1995. For more applications and properties of carbon nanotube Y-junction graphs, we refer to [19, 20, 21].
Mathematical chemistry is a branch of theoretical chemistry that employs mathematical techniques to explain the molecular structure of a chemical molecule and its physicochemical properties. Molecular graphs are a visual representation of a chemical molecule with vertices representing atoms and edges representing bonds between the atoms [22]. Let \(G=(V(G),E(G))\) be a molecular graph with vertex set \(V(G)\) and edge set \(E(G)\). The order of a molecular graph \(G\) is defined as the total number of vertices in \(G\), denoted by \(|V(G)|\), and the number of edges in \(G\) is called size of \(G\), denoted by \(|E(G)|\). Any edge of the graph connecting its vertices \(u\) and \(v\), is denoted by \(e=uv\in E(G)\). Two vertices of graph \(G\) are said to be adjacent if there exists an edge between them. The degree of vertex \(v\in V(G)\), denoted by \(d(v)\), is defined as the number of vertices that are adjacent to
vertex \(v\), i.e., \(d(v)=|\{u:e=uv\in E(G)\}|\). The neighborhood degree sum of a vertex \(v\in V(G)\) is denoted by \(d_{n}(v)\) and is defined as the sum of the degrees of all vertices adjacent to \(v\), i.e., \(d_{n}(v)=\sum_{u:uv\in E(G)}d(u)\). The minimum cardinality of a set \(K\subseteq V(G)\) such that \(G\setminus K\) is a disconnected graph is called the connectivity (or vertex-connectivity) of a connected graph \(G\). A connected graph \(G\) is said to be \(k\)-connected if its connectivity is \(k\).
Topological indices are numerical values calculated from molecular graphs to describe various structural properties of a chemical molecule. They are frequently used to model many physicochemical properties in various quantitative structure-property/activity relationship (QSPR/QSAR) studies [23, 24, 25]. In 1947, the chemist Harold Wiener [26] initiated the concept of topological indices. Since then, various topological indices have been introduced, and a lot of research has been conducted toward computing these indices for different molecular graphs and networks. A topological index based on the degrees of the end vertices of an edge can predict various physicochemical properties of a molecule, such as heat of formation, strain energy, entropy, enthalpy, boiling point, and flash point, without resorting to any wet-lab experiments [24].
The Zagreb indices and their variations have been used to investigate molecular complexity, ZE-isomerism, and chirality [27]. In general, the Zagreb indices have shown applicability for deriving multilinear regression models. Ghorbani and Hosseinzadeh [28] introduced the third version of the Zagreb index and showed that it correlates well with the acentric factor and entropy of the octane isomers. Mondal et al. [29] introduced neighborhood degree sum-based topological indices, namely the neighborhood version of the forgotten topological index and the neighborhood version of the second modified Zagreb index, and discussed some mathematical properties and the degeneracy of these novel indices. For more neighborhood degree sum-based topological indices, their properties, and applications, we refer to [24, 30, 31].
The process of computing the topological indices of a molecular graph directly from their definitions is complex and time-consuming. Thus, for a particular family of graphs and networks, algebraic polynomials play an important role in reducing the computational time and complexity of computing the indices. In short, with the help of algebraic polynomials, one can easily compute various kinds of graph indices within a short span of time. The NM-polynomial plays a vital role in the computation of neighborhood degree sum-based topological indices. Let \(d_{n}(v)\) denote the neighborhood degree sum of vertex \(v\in V(G)\). Then, the neighborhood M-polynomial (NM-polynomial) of \(G\) is defined as [30, 32, 33]
\[NM(G;x,y)=\sum\limits_{i\leq j}\lvert E_{ij}(G)\rvert x^{i}y^{j} \tag{1}\]
where \(\lvert E_{ij}(G)\rvert\), \(i,j\geq 1\), is the number of edges \(e=uv\in E(G)\) such that \(d_{n}(u)=i\) and \(d_{n}(v)=j\).
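As a concrete illustration of Equation (1), the following sketch (ours, not from the paper; the helper name `nm_polynomial` and the toy path graph are assumptions for illustration) computes the NM-polynomial of a small graph directly from its edge list, by first forming the vertex degrees, then the neighborhood degree sums, and finally counting the edge classes \(E_{ij}(G)\).

```python
# A minimal sketch: computing NM(G; x, y) from an edge list, per Equation (1).
from collections import Counter
import sympy as sp

x, y = sp.symbols("x y")

def nm_polynomial(edges):
    """Return NM(G; x, y) for a simple graph given as a list of edges (u, v)."""
    deg = Counter()                       # vertex degrees d(v)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    dn = Counter()                        # neighborhood degree sums d_n(v)
    for u, v in edges:
        dn[u] += deg[v]
        dn[v] += deg[u]
    # count the edges in each class E_{ij}(G), with i <= j
    classes = Counter(tuple(sorted((dn[u], dn[v]))) for u, v in edges)
    return sp.Add(*[c * x**i * y**j for (i, j), c in classes.items()])

# toy example: the path graph P4, whose NM-polynomial is 2x^2y^3 + x^3y^3
print(nm_polynomial([(1, 2), (2, 3), (3, 4)]))
```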
Recently, various neighborhood degree sum-based topological indices have been computed via the NM-polynomial technique. For example, Mondal et al. [30, 34] obtained some neighborhood and multiplicative neighborhood degree sum-based indices of molecular graphs by using their NM-polynomials. Kirmani et al. [24] and Mondal et al. [35] investigated some neighborhood degree sum-based topological indices of antiviral drugs used for the treatment of COVID-19 via the NM-polynomial technique. Shanmukha et al. [36] computed the topological indices of porous graphene via the NM-polynomial method. For more neighborhood degree sum-based topological indices via NM-polynomials, we refer to [24, 35, 37, 38].
Some neighborhood degree sum-based topological indices and their derivations from the NM-polynomial are given in Table 1.
In chemical graph theory, the determination of the structural information content [39] of a graph is mostly based on a vertex partition of the graph that yields a probability distribution on its vertex set [40]. Based on such a probability distribution, the entropy of a graph can be defined. Thus, the structural information content of a graph is defined as the entropy of the underlying graph topology. The concept of graph entropy first appeared in [41], where molecular graphs were used to study the information content of an organism. Entropy-based methods are powerful tools for investigating various problems in cybernetics, mathematical chemistry, pattern recognition, and computational physics [22, 39, 42, 43, 44].
Entropy is a measure of randomness, uncertainty, heterogeneity, or lack of information in a system. Based on information indices, there are various approaches to deriving a graph entropy from the topological structure of a given chemical molecule [45]. For example, Trucco [39] and Rashevsky [41] defined graph entropies in terms of the degrees of vertices, extended degree sequences, and the number of vertices of a molecular graph. Tan and Wu [46] studied network heterogeneity by using vertex-degree-based entropies. Mowshowitz defined the entropy of a graph in terms of equivalence relations defined on the vertex set of a graph and discussed some properties related to structural information [47; 48; 49; 50].
Recently, Shabbir and Nadeem [51] defined graph entropies in terms of topological indices for the molecular graphs of carbon nanotube Y-junctions and developed the regression models between the graph entropies and topological indices. Nadeem et al. [52] calculated some degree-based topological indices for armchair carbon semicapped and capped nanotubes and investigated their chemical and physical properties. Baca et al. [53] computed some degree-based topological indices of a carbon nanotube network and studied its properties. Azeem et al. [54] calculated some M-polynomials based topological indices of carbon nanotube Y-junctions and their variants. Ahmad [55], studied some ve-degree based topological indices of carbon nanotube Y-junctions and discussed their properties. Ayesha [56] calculated the bond energy of symmetrical single-walled armchair carbon nanotube Y-junctions and developed regression models between bond energy and topological indices. Rahul et al. [57] calculated some degree-based topological indices and graph-entropies of graphene, graphyne, and graphdiyne by using Shannon's approach.
The above-mentioned literature and the applications of carbon nanotubes in the field of nanoscience and technology inspired us to further investigate the molecular structure of carbon nanotube Y-junctions and their variants. In addition, no work has been reported on NM-polynomial based topological indices and index-entropies of Y-junction graphs. Therefore, the main contributions of this study include the following:
* Computation of NM-polynomials of carbon nanotube Y-junction graphs.
* Computation of some neighborhood degree sum-based topological indices from NM-polynomials.
* Some graph index-entropies in terms of topological indices are defined and computed.
\begin{table}
\begin{tabular}{l l l}
\hline
Topological index & Formula & Derivation from \(NM(G;x,y)\) \\
\hline
Third version of Zagreb index [28]: \(NM_{1}(G)\) & \(\sum_{uv\in E(G)}(d_{n}(u)+d_{n}(v))\) & \((D_{x}+D_{y})(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood second Zagreb index [29]: \(NM_{2}(G)\) & \(\sum_{uv\in E(G)}d_{n}(u)d_{n}(v)\) & \((D_{x}D_{y})(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood second modified Zagreb index [30]: \({}^{nm}M_{2}(G)\) & \(\sum_{uv\in E(G)}\frac{1}{d_{n}(u)d_{n}(v)}\) & \((S_{x}S_{y})(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood forgotten topological index [29]: \(NF(G)\) & \(\sum_{uv\in E(G)}(d_{n}^{2}(u)+d_{n}^{2}(v))\) & \((D_{x}^{2}+D_{y}^{2})(NM(G;x,y))|_{x=y=1}\) \\
Third NDe index [30]: \(ND_{3}(G)\) & \(\sum_{uv\in E(G)}d_{n}(u)d_{n}(v)(d_{n}(u)+d_{n}(v))\) & \(D_{x}D_{y}(D_{x}+D_{y})(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood general Randić index [30]: \(NR_{\alpha}(G)\) & \(\sum_{uv\in E(G)}(d_{n}(u)d_{n}(v))^{\alpha}\) & \((D_{x}^{\alpha}D_{y}^{\alpha})(NM(G;x,y))|_{x=y=1}\) \\
Fifth NDe index [30]: \(ND_{5}(G)\) & \(\sum_{uv\in E(G)}\frac{d_{n}^{2}(u)+d_{n}^{2}(v)}{d_{n}(u)d_{n}(v)}\) & \((D_{x}S_{y}+S_{x}D_{y})(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood harmonic index [30]: \(NH(G)\) & \(\sum_{uv\in E(G)}\frac{2}{d_{n}(u)+d_{n}(v)}\) & \(2S_{x}T(NM(G;x,y))|_{x=y=1}\) \\
Neighborhood inverse sum indeg index [30]: \(NI(G)\) & \(\sum_{uv\in E(G)}\frac{d_{n}(u)d_{n}(v)}{d_{n}(u)+d_{n}(v)}\) & \(S_{x}TD_{x}D_{y}(NM(G;x,y))|_{x=y=1}\) \\
Sanskruti index: \(S(G)\) & \(\sum_{uv\in E(G)}\left(\frac{d_{n}(u)d_{n}(v)}{d_{n}(u)+d_{n}(v)-2}\right)^{3}\) & \(S_{x}^{3}Q_{-2}TD_{x}^{3}D_{y}^{3}(NM(G;x,y))|_{x=y=1}\) \\
\hline
\end{tabular}
where \(D_{x}=x\frac{\partial(NM(G;x,y))}{\partial x}\), \(D_{y}=y\frac{\partial(NM(G;x,y))}{\partial y}\), \(S_{x}=\int_{0}^{x}\frac{NM(G;t,y)}{t}\,dt\), \(S_{y}=\int_{0}^{y}\frac{NM(G;x,t)}{t}\,dt\), \(T(NM(G;x,y))=NM(G;x,x)\), and \(Q_{\alpha}(NM(G;x,y))=x^{\alpha}NM(G;x,y)\).
\end{table}
Table 1: Some neighborhood degree sum-based topological indices and their derivations from the NM-polynomial
* Comparative analysis of obtained topological indices and graph index-entropies of Y-junction graphs.
## 2 Aim and Methodology
We use the edge partition technique, graph-theoretical tools, combinatorial computation, and the degree counting method to derive our results. The neighborhood degree sums of the end vertices are used to generate the patterns of the edge partitions of the Y-junction graphs. Using such partitions, a general expression for the NM-polynomials is derived. Then, several neighborhood degree sum-based topological indices are obtained from these NM-polynomials with the help of Table 1. Also, graph index-entropies in terms of topological indices are defined by using edge-weight functions and computed for the Y-junction graphs.
The paper is structured as follows: In Section 3, we define topological index-based graph entropies. The Y-junction graphs and their constructions are described in Section 4. In Section 5, the general expression of the NM-polynomials and neighborhood degree sum-based topological indices of Y-junction graphs are presented. Section 6 describes the graph index-entropies of Y-junction graphs. The numerical analysis of the findings is discussed in Section 7. Finally, the conclusion is drawn and discussed in Section 8.
## 3 Definitions and Preliminaries
In this section, we define graph index-entropies in terms of an edge-weight function. In 2008, Dehmer [40] defined the entropy for a connected graph \(G\) as follows:
**Definition 1**.: [40] Let \(G=(V(G),E(G))\) be a connected graph of order \(n\) and \(g\) be an arbitrary information functional. Then the entropy of \(G\) is defined as
\[H_{g}(G)=-\sum\limits_{i=1}^{n}\frac{g(v_{i})}{\sum\limits_{i=1}^{n}g(v_{i})} log\bigg{(}\frac{g(v_{i})}{\sum\limits_{i=1}^{n}g(v_{i})}\bigg{)}. \tag{2}\]
Since the information functional defined on the vertex set of a graph is arbitrary, Dehmer's definition makes it possible to produce various graph entropies by varying the selection of the information functional. For such graph entropies, we refer to [58, 59, 60].
Let \(\beta:E(G)\rightarrow\mathbb{R}^{+}\cup\{0\}\) be an edge-weight function and let \(d_{n}(u)=\sum_{uv\in E(G)}d(v)\) denote the sum of the degrees of the vertices adjacent to \(u\in V(G)\) (also known as the neighborhood degree-sum of vertex \(u\)). Then, for eight different edge-weight functions, the third version of Zagreb index, neighborhood second Zagreb index, neighborhood forgotten topological index, neighborhood second modified Zagreb index, third NDe index, fifth NDe index, neighborhood harmonic index, and neighborhood inverse sum indeg index-entropies are defined in the following manner:
* **Third-version of Zagreb index-entropy:** If \(e=uv\) is an edge of a connected graph \(G\) and \(\beta_{1}(e)=d_{n}(u)+d_{n}(v)\) is an edge-weight function defined on \(E(G)\). Then, the third-version of Zagreb index is \[NM_{1}(G)=\sum\limits_{e=uv\in E(G)}\beta_{1}(e)=\sum\limits_{e=uv\in E(G)}d_{ n}(u)+d_{n}(v).\] (3) Equation (2) for this edge-weight function gives us \[H_{\beta_{1}}(G) = -\sum\limits_{e\in E(G)}\frac{\beta_{1}(e)}{\sum\limits_{e\in E( G)}\beta_{1}(e)}log\bigg{(}\frac{\beta_{1}(e)}{\sum\limits_{e\in E(G)}\beta_{1}(e)} \bigg{)}\] \[= -\frac{1}{\sum\limits_{e\in E(G)}\beta_{1}(e)}\sum\limits_{e\in E (G)}\beta_{1}(e)\bigg{(}log(\beta_{1}(e))-log\sum\limits_{e\in E(G)}\beta_{1} (e)\bigg{)}\]
\[= log\bigg{(}\sum_{e\in E(G)}\beta_{1}(e)\bigg{)}-\frac{1}{\sum_{e\in E(G)} \beta_{1}(e)}\sum_{e\in E(G)}\beta_{1}(e)log\beta_{1}(e).\]
On replacing \(\sum\limits_{e\in E(G)}\beta_{1}(e)\) by \(NM_{1}(G)\) in the above equation, we get the following third-version of Zagreb index-entropy
\[H_{\beta_{1}}(G)=log(NM_{1}(G))-\frac{1}{NM_{1}(G)}\sum_{e\in E(G)}\beta_{1}(e )log\beta_{1}(e). \tag{4}\]
Similarly, we define other graph index-entropies as follows:
* **Neighborhood second Zagreb index-entropy:** For \(\beta_{2}(e)=d_{n}(u)d_{n}(v)\), the neighborhood second Zagreb index and neighborhood second Zagreb index-entropy are \[NM_{2}(G)=\sum_{e=uv\in E(G)}d_{n}(u)d_{n}(v),\] (5) and \[H_{\beta_{2}}(G)=log(NM_{2}(G))-\frac{1}{NM_{2}(G)}\sum_{e\in E(G)}\beta_{2} (e)log\beta_{2}(e).\] (6)
* **Neighborhood forgotten topological index-entropy:** For \(\beta_{3}(e)=d_{n}^{2}(u)+d_{n}^{2}(v)\), the neighborhood forgotten topological index and neighborhood forgotten topological index-entropy are \[NF(G)=\sum_{e=uv\in E(G)}d_{n}^{2}(u)+d_{n}^{2}(v),\] (7) and \[H_{\beta_{3}}(G)=log(NF(G))-\frac{1}{NF(G)}\sum_{e\in E(G)}\beta_{3}(e)log \beta_{3}(e).\] (8)
* **Neighborhood second modified Zagreb index-entropy:** For \(\beta_{4}(e)=\frac{1}{d_{n}(u)d_{n}(v)}\), the neighborhood second modified Zagreb index and neighborhood second modified Zagreb index-entropy are \[{}^{nm}M_{2}(G)=\sum_{e=uv\in E(G)}\frac{1}{d_{n}(u)d_{n}(v)},\] (9) and \[H_{\beta_{4}}(G)=log(^{nm}M_{2}(G))-\frac{1}{{}^{nm}M_{2}(G)}\sum_{e\in E(G )}\beta_{4}(e)log\beta_{4}(e).\] (10)
* **Third NDe index-entropy:** For \(\beta_{5}(e)=d_{n}(u)d_{n}(v)\big{(}d_{n}(u)+d_{n}(v)\big{)}\), the third NDe index and third NDe index-entropy are \[ND_{3}(G)=\sum_{e=uv\in E(G)}d_{n}(u)d_{n}(v)\big{(}d_{n}(u)+d_{n}(v)\big{)},\] (11) and \[H_{\beta_{5}}(G)=log(ND_{3}(G))-\frac{1}{ND_{3}(G)}\sum_{e\in E(G)}\beta_{5} (e)log\beta_{5}(e).\] (12)
* **Fifth NDe index-entropy:** For \(\beta_{6}(e)=\frac{d_{n}(u)}{d_{n}(v)}+\frac{d_{n}(v)}{d_{n}(u)}\), the fifth NDe index and fifth NDe index-entropy are \[ND_{5}(G)=\sum_{e=uv\in E(G)}\frac{d_{n}(u)}{d_{n}(v)}+\frac{d_{n}(v)}{d_{n}(u )},\] (13) and \[H_{\beta_{6}}(G)=log(ND_{5}(G))-\frac{1}{ND_{5}(G)}\sum_{e\in E(G)}\beta_{6} (e)log\beta_{6}(e).\] (14)
* **Neighborhood harmonic index-entropy:** For \(\beta_{7}(e)=\frac{2}{d_{n}(u)+d_{n}(v)}\), the neighborhood harmonic index and neighborhood harmonic index-entropy are \[NH(G)=\sum_{e=uv\in E(G)}\frac{2}{d_{n}(u)+d_{n}(v)},\] (15) and \[H_{\beta_{7}}(G)=log(NH(G))-\frac{1}{NH(G)}\sum_{e\in E(G)}\beta_{7}(e)log\beta _{7}(e).\] (16)
* **Neighborhood inverse sum indeg index-entropy:** For \(\beta_{8}(e)=\frac{d_{n}(u)d_{n}(v)}{d_{n}(u)+d_{n}(v)}\), the neighborhood inverse sum indeg index and neighborhood inverse sum indeg index-entropy are \[NI(G)=\sum_{e=uv\in E(G)}\frac{d_{n}(u)d_{n}(v)}{d_{n}(u)+d_{n}(v)},\] (17) and \[H_{\beta_{8}}(G)=log(NI(G))-\frac{1}{NI(G)}\sum_{e\in E(G)}\beta_{8}(e)log \beta_{8}(e).\] (18)
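All eight index-entropies share the same template: compute the index \(I=\sum_{e}\beta(e)\) and then \(H_{\beta}=\log I-\frac{1}{I}\sum_{e}\beta(e)\log\beta(e)\). The following sketch (ours; the function names and the choice of base-10 logarithms, which matches Section 7, are assumptions for illustration) evaluates any such entropy from an edge partition given as counts of the classes \((d_{n}(u),d_{n}(v))\).

```python
# A minimal sketch: evaluating an index-entropy H_beta(G) from an edge
# partition {(d_n(u), d_n(v)): count}, following Equations (3)-(18).
import math

def index_entropy(edge_partition, beta):
    """H_beta = log(I) - (1/I) * sum_e beta(e) log beta(e), with I = sum_e beta(e)."""
    weights = [(beta(i, j), count) for (i, j), count in edge_partition.items()]
    index = sum(w * c for w, c in weights)              # the topological index I
    s = sum(c * w * math.log10(w) for w, c in weights)  # sum of beta(e) log beta(e)
    return math.log10(index) - s / index

# edge partition of the Y-junction graph J (proof of Theorem 1), here l = m = 2
l, m = 2, 2
partition_J = {(5, 5): 6*l, (5, 8): 12*l, (8, 8): 6*l,
               (8, 9): 12*l, (9, 9): 9*l**2 - 15*l + 9 + 36*m*l}
beta1 = lambda i, j: i + j             # third version of Zagreb weight
beta8 = lambda i, j: i * j / (i + j)   # inverse sum indeg weight
print(index_entropy(partition_J, beta1), index_entropy(partition_J, beta8))
```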
## 4 Y-Junction Graphs
The Y-junctions examined in this study are created by the covalent connection of three identical single-walled carbon nanotubes crossing at an angle of \(120^{\circ}\) and are uniquely determined by their chiral vector \(v=nv_{1}+nv_{2}\), where \(v_{1}\) and \(v_{2}\) are graphene sheet lattice vectors and \(n\) is a non-negative integer. Let \(m\geq 1\) and let \(n\geq 4\) be an even integer. Then, an uncapped symmetrical single-walled carbon nanotube Y-junction, denoted by \(Y^{m}(n,n)\), is made up of an armchair junction \(Y(n,n)\) and three identical single-walled armchair carbon nanotubes \(T_{m}(n,n)\), each of length \(m\) (layers of hexagons). In \(Y^{m}(n,n)\), we have \(\frac{3}{4}n^{2}-\frac{3}{2}n+5\) faces, including three openings (where the tubes meet the armchair junction), each of chirality \((n,n)\), six heptagons, and \(\frac{3}{4}n^{2}-\frac{3}{2}n-4\) hexagons. In addition, the tube \(T_{m}(n,n)\) contains \(2mn\) hexagonal faces.
Let \(n\), \(m\), and \(l\) be positive integers with \(m\geq 1\) and \(n=2l\) for some \(l\geq 2\), and let \(J=J^{m}(n,n)\) be the \(Y\)-junction graph of \(Y^{m}(n,n)\). It has \(9l^{2}-3l+2\) hexagonal rings along with six heptagons. The graph \(J\) is of order \(6l^{2}+18l+6+24ml\) and size \(9l^{2}+21l+9+36ml\). It has \(6l^{2}+12l+6+24ml\) vertices of degree three and \(12l\) vertices of degree two. Note that the graph \(J\) is a 2-connected graph.
Along with the 2-connected Y-junction graph \(J\), the 1-connected Y-junction graphs have also been taken into consideration. These graphs are obtained by adding pendants to the degree-2 vertices of the 2-connected graph \(J\). Note that each tube of \(J\) has \(2n\) vertices of degree 2. Therefore, the graph \(J\) has \(6n\) vertices of degree 2.
The graph obtained by connecting \(2n\) pendants to any one tube of \(J\) is denoted by \(J_{1}\), and we call it the second type Y-junction graph. The order and size of \(J_{1}\) are \(6l^{2}+22l+6+24ml\) and \(9l^{2}+25l+9+36ml\), respectively. The graph \(J_{2}\) is obtained by attaching \(4n\) pendants to any two tubes of \(J\), and we call it the third type Y-junction graph. In \(J_{2}\), we have \(6l^{2}+26l+6+24ml\) vertices and \(9l^{2}+29l+9+36ml\) edges. The graph obtained by joining \(6n\) pendants to all three tubes of \(J\) is denoted by \(J_{3}\), and we call it the fourth type Y-junction graph. It has \(6l^{2}+30l+6+24ml\) vertices and \(9l^{2}+33l+9+36ml\) edges. The carbon nanotube Y-junction graphs \(J\), \(J_{1}\), \(J_{2}\), and \(J_{3}\) are shown in Figure 1.
The edge partition of Y-junction graphs \(J\), \(J_{1}\), \(J_{2}\), and \(J_{3}\) based on the neighborhood degree-sum of end vertices of an edge is given in Table 2.
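As a quick consistency check on these counts, note that attaching the \(2n=4l\) pendants of one tube adds exactly \(4l\) to both the order and the size. The short symbolic sketch below (ours, for illustration) verifies this for \(J\) and \(J_{1}\); the analogous checks for \(J_{2}\) and \(J_{3}\) are identical.

```python
# Symbolic sanity check: J_1 has 4l more vertices and 4l more edges than J,
# one vertex and one edge per attached pendant.
import sympy as sp

l, m = sp.symbols("l m", positive=True)
order_J,  size_J  = 6*l**2 + 18*l + 6 + 24*m*l, 9*l**2 + 21*l + 9 + 36*m*l
order_J1, size_J1 = 6*l**2 + 22*l + 6 + 24*m*l, 9*l**2 + 25*l + 9 + 36*m*l

assert sp.simplify(order_J1 - order_J - 4*l) == 0   # 2n = 4l new pendant vertices
assert sp.simplify(size_J1 - size_J - 4*l) == 0     # one new edge per pendant
```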
## 5 NM-Polynomials and Topological Indices of Y-Junction Graphs
In this section, we develop the general expression of NM-polynomials for the Y-junction graphs and then recover various neighborhood degree-sum based topological indices from these polynomials.
**Theorem 1**.: _Let \(J\) be the Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then \(NM(J;x,y)=6lx^{5}y^{5}+12lx^{5}y^{8}+6lx^{8}y^{8}+12lx^{8}y^{9}+(9l^{2}-15l+9+ 36ml)x^{9}y^{9}\)._
Proof.: The Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube has \(9l^{2}+21l+9+36ml\) edges. Let \(E_{(i,j)}\) be the set of all edges with neighborhood degree sums of end vertices \(i,j\), i.e., \(E_{(i,j)}=\{uv\in E(J):d_{n}(u)=i,\ d_{n}(v)=j\}\).
Figure 1: Symmetrical uncapped single-walled armchair carbon nanotube Y-junction graphs
By means of structural analysis of \(J\), the edge set of \(J\) can be partitioned into five sets on the basis of neighborhood degree sum of end vertices as follows:
\(E_{(5,5)}=\{uv\in E(J):d_{n}(u)=5,\ d_{n}(v)=5\}\), \(E_{(5,8)}=\{uv\in E(J):d_{n}(u)=5,\ d_{n}(v)=8\}\), \(E_{(8,8)}=\{uv\in E(J):d_{n}(u)=8,\ d_{n}(v)=8\}\), \(E_{(8,9)}=\{uv\in E(J):d_{n}(u)=8,\ d_{n}(v)=9\}\), \(E_{(9,9)}=\{uv\in E(J):d_{n}(u)=9,\ d_{n}(v)=9\}\), and \(|E_{(5,5)}|=6l\), \(|E_{(5,8)}|=12l\), \(|E_{(8,8)}|=6l\), \(|E_{(8,9)}|=12l\), \(|E_{(9,9)}|=9l^{2}-15l+9+36ml\).
From Equation (1), the NM-polynomial of \(J\) is obtained as follows:
\[NM(J;x,y) = \sum_{i\leq j}|E_{(i,j)}|x^{i}y^{j}\] \[= |E_{(5,5)}|x^{5}y^{5}+|E_{(5,8)}|x^{5}y^{8}+|E_{(8,8)}|x^{8}y^{8}+|E_{(8,9)}|x^{8}y^{9}+|E_{(9,9)}|x^{9}y^{9}\] \[= 6lx^{5}y^{5}+12lx^{5}y^{8}+6lx^{8}y^{8}+12lx^{8}y^{9}+(9l^{2}-15l+9+36ml)x^{9}y^{9}.\]
**Theorem 2**.: _Let \(J\) be the Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then_
_(i)_ \(NM_{1}(J)=162l^{2}+246l+648ml+162\)__
_(ii)_ \(NM_{2}(J)=729l^{2}+663l+2916ml+729\)__
_(iii)_ \(NF(J)=1458l^{2}+1446l+5832ml+1458\)__
_(iv)_ \({}^{n\!m}M_{2}(J)=0.11l^{2}+0.62l+0.44ml+0.11\)__
_(v)_ \(NR_{\alpha}(J)=6l(25^{\alpha}+2(40)^{\alpha}+64^{\alpha}+2(72)^{\alpha})+81^{ \alpha}(9l^{2}-15l+9+36ml)\)__
_(vi)_ \(ND_{3}(J)=13122l^{2}+6702l+52488ml+13122\)__
_(vii)_ \(ND_{5}(J)=18l^{2}+44.86l+72ml+18\)__
_(viii)_ \(NH(J)=l^{2}+9.69l+4ml+1\)__
_(ix)_ \(NI(J)=40.5l^{2}+59.24l+162ml+40.5\)__
_(x)_ \(S(J)=1167.7l^{2}+714.23l+4670.9ml+1167.7\)_._
Proof.: Let \(f(x,y)=NM(J;x,y)=6lx^{5}y^{5}+12lx^{5}y^{8}+6lx^{8}y^{8}+12lx^{8}y^{9}+(9l^{2}-15l+9+36ml)x^{9}y^{9}\).
Then, we have
\(D_{x}(f(x,y))=30lx^{5}y^{5}+60lx^{5}y^{8}+48lx^{8}y^{8}+96lx^{8}y^{9}+9(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{y}(f(x,y))=30lx^{5}y^{5}+96lx^{5}y^{8}+48lx^{8}y^{8}+108lx^{8}y^{9}+9(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{x}^{2}(f(x,y))=150lx^{5}y^{5}+300lx^{5}y^{8}+384lx^{8}y^{8}+768lx^{8}y^{9}+81(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{y}^{2}(f(x,y))=150lx^{5}y^{5}+768lx^{5}y^{8}+384lx^{8}y^{8}+972lx^{8}y^{9}+81(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{x}D_{y}(f(x,y))=150lx^{5}y^{5}+480lx^{5}y^{8}+384lx^{8}y^{8}+864lx^{8}y^{9}+81(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\((D_{x}+D_{y})f(x,y)=60lx^{5}y^{5}+156lx^{5}y^{8}+96lx^{8}y^{8}+204lx^{8}y^{9}+18(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{x}D_{y}(D_{x}+D_{y})f(x,y)=1500lx^{5}y^{5}+6240lx^{5}y^{8}+6144lx^{8}y^{8}+14688lx^{8}y^{9}+1458(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\((D_{x}^{2}+D_{y}^{2})f(x,y)=300lx^{5}y^{5}+1068lx^{5}y^{8}+768lx^{8}y^{8}+1740lx^{8}y^{9}+162(9l^{2}-15l+9+36ml)x^{9}y^{9}\).

\(D_{x}^{\alpha}D_{y}^{\alpha}(f(x,y))=6l(25)^{\alpha}x^{5}y^{5}+12l(40)^{\alpha}x^{5}y^{8}+6l(64)^{\alpha}x^{8}y^{8}+12l(72)^{\alpha}x^{8}y^{9}+(81)^{\alpha}(9l^{2}-15l+9+36ml)x^{9}y^{9}\).
\[S_{x}S_{y}(f(x,y))=\frac{6l}{25}x^{5}y^{5}+\frac{12l}{40}x^{5}y^{8}+\frac{6l}{64}x ^{8}y^{8}+\frac{12l}{72}x^{8}y^{9}+\frac{(9l^{2}-15l+9+36ml)}{81}x^{9}y^{9}.\]
\[(S_{y}D_{x}+S_{x}D_{y})(f(x,y))=12lx^{5}y^{5}+\frac{267l}{10}x^{5}y^{8}+12lx^{8}y^{8}+\frac{145l}{6}x^{8}y^{9}+2(9l^{2}-15l+9+36ml)x^{9}y^{9}.\]
\[2S_{x}T(f(x,y))=\frac{6l}{5}x^{10}+\frac{24l}{13}x^{13}+\frac{3l}{4}x^{16}+ \frac{24l}{17}x^{17}+\frac{(9l^{2}-15l+9+36ml)}{9}x^{18}.\]
\[S_{x}TD_{x}D_{y}(f(x,y))=15lx^{10}+\frac{480l}{13}x^{13}+\frac{384l}{16}x^{16}+ \frac{864l}{17}x^{17}+\frac{81(9l^{2}-15l+9+36ml)}{18}x^{18}.\]
\[S_{x}^{3}Q_{-2}TD_{x}^{3}D_{y}^{3}(f(x,y))=\frac{93750l}{512}x^{8}+\frac{76800 0l}{1331}x^{11}+\frac{1572864l}{2744}x^{14}+\frac{4478976l}{3375}x^{15}+\frac{ 531441(9l^{2}-15l+9+36ml)}{4096}x^{16}.\]
Now, using Table 1 we have
(i) \(NM_{1}(J)=(D_{x}+D_{y})f(x,y)|_{x=y=1}=162l^{2}+246l+648ml+162\).
(ii) \(NM_{2}(J)=(D_{x}D_{y})f(x,y)|_{x=y=1}=729l^{2}+663l+2916ml+729\).
(iii) \(NF(J)=(D_{x}^{2}+D_{y}^{2})f(x,y)|_{x=y=1}=1458l^{2}+1446l+5832ml+1458\).
(iv) \({}^{nm}M_{2}(J)=(S_{x}S_{y})f(x,y)|_{x=y=1}=0.11l^{2}+0.62l+0.44ml+0.11\).
(v) \(NR_{\alpha}(J)=(D_{x}^{\alpha}D_{y}^{\alpha})f(x,y)|_{x=y=1}=6l(25^{\alpha}+ 2(40)^{\alpha}+64^{\alpha}+2(72)^{\alpha})+81^{\alpha}(9l^{2}-15l+9+36ml)\).
(vi) \(ND_{3}(J)=D_{x}D_{y}(D_{x}+D_{y})f(x,y)|_{x=y=1}=13122l^{2}+6702l+52488ml+13122\).
(vii) \(ND_{5}(J)=S_{y}D_{x}+S_{x}D_{y}(f(x,y))|_{x=y=1}=18l^{2}+44.86l+72ml+18\).
(viii) \(NH(J)=2S_{x}T(f(x,y))|_{x=y=1}=l^{2}+9.69l+4ml+1\).
(ix) \(NI(J)=S_{x}TD_{x}D_{y}(f(x,y))|_{x=y=1}=40.5l^{2}+59.24l+162ml+40.5\).
(x) \(S(J)=S_{x}^{3}Q_{-2}TD_{x}^{3}D_{y}^{3}(f(x,y))|_{x=y=1}=1167.7l^{2}+714.23l+46 70.9ml+1167.7\).
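The operator calculus above is mechanical, so it can be verified symbolically. The sketch below (ours, for illustration) applies \(D_{x}\) and \(D_{y}\) from Table 1 to \(NM(J;x,y)\) with SymPy and confirms items (i)-(iii) of Theorem 2.

```python
# Symbolic verification of NM1, NM2, and NF in Theorem 2 from NM(J; x, y).
import sympy as sp

x, y, l, m = sp.symbols("x y l m")
NM_J = (6*l*x**5*y**5 + 12*l*x**5*y**8 + 6*l*x**8*y**8 + 12*l*x**8*y**9
        + (9*l**2 - 15*l + 9 + 36*m*l)*x**9*y**9)

Dx = lambda f: sp.expand(x * sp.diff(f, x))   # D_x from Table 1
Dy = lambda f: sp.expand(y * sp.diff(f, y))   # D_y from Table 1

NM1 = (Dx(NM_J) + Dy(NM_J)).subs({x: 1, y: 1})
NM2 = Dx(Dy(NM_J)).subs({x: 1, y: 1})
NF = (Dx(Dx(NM_J)) + Dy(Dy(NM_J))).subs({x: 1, y: 1})

assert sp.expand(NM1 - (162*l**2 + 246*l + 648*m*l + 162)) == 0
assert sp.expand(NM2 - (729*l**2 + 663*l + 2916*m*l + 729)) == 0
assert sp.expand(NF - (1458*l**2 + 1446*l + 5832*m*l + 1458)) == 0
```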
**Theorem 3**.: _Let \(J_{1}\) be the second type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then \(NM(J_{1};x,y)=4lx^{3}y^{7}+4lx^{5}y^{5}+8lx^{5}y^{8}+2lx^{7}y^{7}+4lx^{7}y^{9}+4lx^{8}y^{8}+8lx^{8}y^{9}+(9l^{2}-9l+9+36ml)x^{9}y^{9}\)._

Proof.: Refer to Theorem 1 for proof.

**Theorem 4**.: _Let \(J_{1}\) be the second type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then_
_(i)_ \(NM_{1}(J_{1})=162l^{2}+314l+648ml+162\)
_(ii)_ \(NM_{2}(J_{1})=729l^{2}+957l+2916ml+729\)
_(iii)_ \(NF(J_{1})=1458l^{2}+2074l+5832ml+1458\)
_(iv)_ \({}^{nm}M_{2}(J_{1})=0.11l^{2}+0.72l+0.44ml+0.11\)
\((v)\)_ \(NR_{\alpha}(J_{1}) = 2l(2(21)^{\alpha}+2(25)^{\alpha}+4(40)^{\alpha}+(49)^{\alpha}+2(63) ^{\alpha}+2(64)^{\alpha}+4(72)^{\alpha})+(81)^{\alpha}(9l^{2}-9l+9\)_
\(+36ml)\)
_(vi)_ \(ND_{3}(J_{1})=13122l^{2}+12170l+52488ml+13122\)
_(vii)_ \(ND_{5}(J_{1})=18l^{2}+56.328l+72ml+18\)
_(viii)_ \(NH(J_{1})=l^{2}+3.98l+4ml+1\)
_(ix)_ \(NI(J_{1})=40.5l^{2}+75.15l+162ml+40.5\)
_(x)_ \(S(J_{1})=1167.7l^{2}+1178.92l+4670.9ml+1167.7\).
Proof.: Refer to Theorem 2 for proof.
**Theorem 5**.: _Let \(J_{2}\) be the third type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then \(NM(J_{2};x,y)=8lx^{3}y^{7}+2lx^{5}y^{5}+4lx^{5}y^{8}+4lx^{7}y^{7}+8lx^{7}y^ {9}+2lx^{8}y^{8}+4lx^{8}y^{9}+(9l^{2}-3l+9+36ml)x^{9}y^{9}\)._
Proof.: The third type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube has \(9l^{2}+29l+9+36ml\) edges. Let \(E_{(i,j)}\) be the set of all edges with neighborhood degree sums of end vertices \(i,j\), i.e., \(E_{(i,j)}=\{uv\in E(J_{2}):d_{n}(u)=i,\;d_{n}(v)=j\}\).
By means of structural analysis of \(J_{2}\), the edge set of \(J_{2}\) can be partitioned into eight sets on the basis of the neighborhood degree sums of end vertices as follows:
\(E_{(3,7)}=\{uv\in E(J_{2}):d_{n}(u)=3,\;d_{n}(v)=7\}\), \(E_{(5,5)}=\{uv\in E(J_{2}):d_{n}(u)=5,\;d_{n}(v)=5\}\), \(E_{(5,8)}=\{uv\in E(J_{2}):d_{n}(u)=5,\;d_{n}(v)=8\}\), \(E_{(7,7)}=\{uv\in E(J_{2}):d_{n}(u)=7,\;d_{n}(v)=7\}\), \(E_{(7,9)}=\{uv\in E(J_{2}):d_{n}(u)=7,\;d_{n}(v)=9\}\), \(E_{(8,8)}=\{uv\in E(J_{2}):d_{n}(u)=8,\;d_{n}(v)=8\}\), \(E_{(8,9)}=\{uv\in E(J_{2}):d_{n}(u)=8,\;d_{n}(v)=9\}\), \(E_{(9,9)}=\{uv\in E(J_{2}):d_{n}(u)=9,\;d_{n}(v)=9\}\), and \(|E_{(3,7)}|=8l\), \(|E_{(5,5)}|=2l\), \(|E_{(5,8)}|=4l\), \(|E_{(7,7)}|=4l\), \(|E_{(7,9)}|=8l\), \(|E_{(8,8)}|=2l\), \(|E_{(8,9)}|=4l\), \(|E_{(9,9)}|=9l^{2}-3l+9+36ml\).
From Equation (1), the NM-polynomial of \(J_{2}\) is obtained as follows:
\[NM(J_{2};x,y) = \sum_{i\leq j}\lvert E_{(i,j)}\rvert x^{i}y^{j}\] \[= \lvert E_{(3,7)}\rvert x^{3}y^{7}+\lvert E_{(5,5)}\rvert x^{5}y^{5 }+\lvert E_{(5,8)}\rvert x^{5}y^{8}+\lvert E_{(7,7)}\rvert x^{7}y^{7}+\lvert E _{(7,9)}\rvert x^{7}y^{9}+\] \[\lvert E_{(8,8)}\rvert x^{8}y^{8}+\lvert E_{(8,9)}\rvert x^{8}y^{9 }+\lvert E_{(9,9)}\rvert x^{9}y^{9}\] \[= 8lx^{3}y^{7}+2lx^{5}y^{5}+4lx^{5}y^{8}+4lx^{7}y^{7}+8lx^{7}y^ {9}+2lx^{8}y^{8}+4lx^{8}y^{9}+\] \[(9l^{2}-3l+9+36ml)x^{9}y^{9}.\]
**Theorem 6**.: _Let \(J_{2}\) be the third type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then (i) \(NM_{1}(J_{2})=162l^{2}+382l+648ml+162\) (ii) \(NM_{2}(J_{2})=729l^{2}+1251l+2916ml+729\) (iii) \(NF(J_{2})=1458l^{2}+2478l+5832ml+1458\) (iv) \({}^{nm}M_{2}(J_{2})=0.11l^{2}+0.819l+0.44ml+0.11\)_
_(v)_ \(NR_{\alpha}(J_{2})=2l(4(21)^{\alpha}+(25)^{\alpha}+2(40)^{\alpha}+2(49)^{\alpha}+ 4(63)^{\alpha}+(64)^{\alpha}+2(72)^{\alpha})+(81)^{\alpha}(9l^{2}-3l+9+36ml)\)
_(vi)_ \(ND_{3}(J_{2})=13122l^{2}+17638l+52488ml+13122\)
_(vii)_ \(ND_{5}(J_{2})=18l^{2}+65.56l+72ml+18\)
_(viii)_ \(NH(J_{2})=l^{2}+4.57l+4ml+1\)
_(ix)_ \(NI(J_{2})=40.5l^{2}+91.048l+162ml+40.5\)
_(x)_ \(S(J_{2})=1167.7l^{2}+1643.61l+4670.9ml+1167.7\).
Proof.: Refer to Theorem 2 for proof.
**Theorem 7**.: _Let \(J_{3}\) be the fourth type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then \(NM(J_{3};x,y)=12lx^{3}y^{7}+6lx^{7}y^{7}+12lx^{7}y^{9}+(9l^{2}+3l+9+36ml)x^{ 9}y^{9}\)._
Proof.: The fourth type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube has \(9l^{2}+33l+9+36ml\) edges. Let \(E_{(i,j)}\) be the set of all edges with neighborhood degree sums of end vertices \(i,j\), i.e., \(E_{(i,j)}=\{uv\in E(J_{3}):d_{n}(u)=i,\ d_{n}(v)=j\}\).
By means of structural analysis of \(J_{3}\), the edge set of \(J_{3}\) can be partitioned into four sets on the basis of the neighborhood degree sums of end vertices as follows:
\(E_{(3,7)}=\{uv\in E(J_{3}):d_{n}(u)=3,\ d_{n}(v)=7\}\), \(E_{(7,7)}=\{uv\in E(J_{3}):d_{n}(u)=7,\ d_{n}(v)=7\}\), \(E_{(7,9)}=\{uv\in E(J_{3}):d_{n}(u)=7,\ d_{n}(v)=9\}\), \(E_{(9,9)}=\{uv\in E(J_{3}):d_{n}(u)=9,\ d_{n}(v)=9\}\), and \(|E_{(3,7)}|=12l\), \(|E_{(7,7)}|=6l\), \(|E_{(7,9)}|=12l\), \(|E_{(9,9)}|=9l^{2}+3l+9+36ml\).
From Equation (1), the NM-polynomial of \(J_{3}\) is obtained as follows:
\[NM(J_{3};x,y) = \sum_{i\leq j}|E_{(i,j)}|x^{i}y^{j}\] \[= |E_{(3,7)}|x^{3}y^{7}+|E_{(7,7)}|x^{7}y^{7}+|E_{(7,9)}|x^{7}y^{9 }+|E_{(9,9)}|x^{9}y^{9}\] \[= 12lx^{3}y^{7}+6lx^{7}y^{7}+12lx^{7}y^{9}+(9l^{2}+3l+9+36ml)x^{ 9}y^{9}.\]
**Theorem 8**.: _Let \(J_{3}\) be the fourth type Y-junction graph of an uncapped symmetrical single-walled armchair carbon nanotube. Then (i) \(NM_{1}(J_{3})=162l^{2}+450l+648ml+162\) (ii) \(NM_{2}(J_{3})=729l^{2}+1545l+2916ml+729\) (iii) \(NF(J_{3})=1458l^{2}+3330l+5832ml+1458\) (iv) \({}^{nm}M_{2}(J_{3})=0.11l^{2}+0.92l+0.44ml+0.11\) (v) \(NR_{\alpha}(J_{3})=6l(2(21)^{\alpha}+(49)^{\alpha}+2(63)^{\alpha})+(81)^{ \alpha}(9l^{2}+3l+9+36ml)\) (vi) \(ND_{3}(J_{3})=13122l^{2}+23106l+52488ml+13122\) (vii) \(ND_{5}(J_{3})=18l^{2}+75.90l+72ml+18\) (viii) \(NH(J_{3})=l^{2}+5.090l+4ml+1\) (ix) \(NI(J_{3})=40.5l^{2}+106.95l+162ml+40.5\) (x) \(S(J_{3})=1167.7l^{2}+2085.95l+4670.9ml+1167.7\).
Proof.: Refer to Theorem 2 for proof.
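Theorems 2, 4, 6, and 8 all follow from the same operator evaluations, so a single loop can check them. The sketch below (ours, for illustration) verifies the \(NM_{1}\) formulas of all four Y-junction graphs from their NM-polynomials.

```python
# Checking NM1 = (Dx + Dy) NM |_{x=y=1} for J, J1, J2, and J3 at once.
import sympy as sp

x, y, l, m = sp.symbols("x y l m")
E99 = lambda c: (9*l**2 + c*l + 9 + 36*m*l) * x**9 * y**9
polys = {
    "J":  6*l*x**5*y**5 + 12*l*x**5*y**8 + 6*l*x**8*y**8 + 12*l*x**8*y**9 + E99(-15),
    "J1": (4*l*x**3*y**7 + 4*l*x**5*y**5 + 8*l*x**5*y**8 + 2*l*x**7*y**7
           + 4*l*x**7*y**9 + 4*l*x**8*y**8 + 8*l*x**8*y**9 + E99(-9)),
    "J2": (8*l*x**3*y**7 + 2*l*x**5*y**5 + 4*l*x**5*y**8 + 4*l*x**7*y**7
           + 8*l*x**7*y**9 + 2*l*x**8*y**8 + 4*l*x**8*y**9 + E99(-3)),
    "J3": 12*l*x**3*y**7 + 6*l*x**7*y**7 + 12*l*x**7*y**9 + E99(3),
}
linear_term = {"J": 246, "J1": 314, "J2": 382, "J3": 450}   # from the theorems
for name, f in polys.items():
    nm1 = (x*sp.diff(f, x) + y*sp.diff(f, y)).subs({x: 1, y: 1})
    assert sp.expand(nm1 - (162*l**2 + linear_term[name]*l + 648*m*l + 162)) == 0
```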
## 6 Graph Index-Entropies of Y-Junction Graphs
In this section, we compute the index-entropy of carbon nanotube Y-junctions in terms of neighborhood degree sum-based topological indices. We first compute index-entropies of the Y-junction graph \(J\) whose edge partition is given in Table 2.
* **Third-version of Zagreb index-entropy of \(J\)**
From part (i) of Theorem 2, we have
\[NM_{1}(J)=162l^{2}+246l+648ml+162. \tag{19}\]
Now, from Equation (4), the third-version of Zagreb index-entropy of \(J\) is
\[H_{\beta_{1}}(J)=log(NM_{1}(J))-\frac{1}{NM_{1}(J)}\sum_{e\in E(J)}\beta_{1}(e )log\beta_{1}(e). \tag{20}\]
Using Table 2 and Equation (19) in Equation (20), we get the required third-version of Zagreb index-entropy of \(J\) as follows:
\[H_{\beta_{1}}(J) = log(NM_{1}(J))-\frac{1}{NM_{1}(J)}\sum_{e\in E(J)}\beta_{1}(e) log\beta_{1}(e)\] \[= log(162l^{2}+246l+648ml+162)-\frac{1}{162l^{2}+246l+648ml+162} \bigg{(}6l(10)(log10)+\] \[12l(13)(log13)+6l(16)(log16)+12l(17)(log17)+(9l^{2}-15l+36ml+9)( 18)(log18)\bigg{)}\] \[= log(162l^{2}+246l+648ml+162)-\frac{1}{162l^{2}+246l+648ml+162} \bigg{(}60l(log10)+\] \[156l(log13)+96l(log16)+204l(log17)+(162l^{2}-270l+648ml+162)(log18 )\bigg{)}\] \[= log(162l^{2}+246l+648ml+162)-\frac{1}{162l^{2}+246l+648ml+162} \bigg{(}60l(1)+156l(1.1139433523)\] \[+96l(1.2041199827)+204l(1.2304489214)+(162l^{2}-270l+648ml+162)(1. 2552725051)\bigg{)}\] \[\approx log(162l^{2}+246l+648ml+162)-\frac{202.5l^{2}+261.78l+810ml +202.5}{162l^{2}+246l+648ml+162}.\]
* **Neighborhood second Zagreb index-entropy of \(J\)**
From part (ii) of Theorem 2, we have
\[NM_{2}(J)=729l^{2}+663l+2916ml+729. \tag{21}\]
By using the values given in Table 2 and Equation (21) in Equation (6), we get the required neighborhood second Zagreb index-entropy of \(J\) as follows:
\[H_{\beta_{2}}(J) = log(NM_{2}(J))-\frac{1}{NM_{2}(J)}\sum_{e\in E(J)}\beta_{2}(e)log \beta_{2}(e)\] \[= log(729l^{2}+663l+2916ml+729)-\frac{1}{729l^{2}+663l+2916ml+729} \bigg{(}6l(25)(log25)+\] \[12l(40)(log40)+6l(64)(log64)+12l(72)(log72)+(9l^{2}-15l+36ml+9)( 81)(log81)\bigg{)}\] \[\approx log(729l^{2}+663l+2916ml+729)-\frac{1391.22l^{2}+958.27l+ 5564.88ml+1391.22}{729l^{2}+663l+2916ml+729}.\]
Similarly, we compute the remaining index-entropies of \(J\). Table 3 shows some calculated graph index-entropies of \(J\).
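The closed forms above can be cross-checked numerically against a direct evaluation over the edge partition. The sketch below (ours; the chosen values \(l=m=2\) are an assumption for illustration) compares the two for \(H_{\beta_{1}}(J)\); the small gap reflects the rounding of the logarithm values in the closed form.

```python
# Direct vs. closed-form evaluation of the third-version Zagreb index-entropy of J.
import math

l, m = 2, 2
classes = [(10, 6*l), (13, 12*l), (16, 6*l), (17, 12*l),
           (18, 9*l**2 - 15*l + 9 + 36*m*l)]        # (beta_1 value, multiplicity)
NM1 = sum(w * c for w, c in classes)
H_direct = math.log10(NM1) - sum(c * w * math.log10(w) for w, c in classes) / NM1

H_closed = (math.log10(162*l**2 + 246*l + 648*m*l + 162)
            - (202.5*l**2 + 261.78*l + 810*m*l + 202.5)
              / (162*l**2 + 246*l + 648*m*l + 162))
print(H_direct, H_closed)   # the two values agree to about two decimal places
```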
In this way, the topological index-based entropies for the Y-junction graphs \(J_{1}\), \(J_{2}\), and \(J_{3}\) are calculated. The index-based entropies of \(J_{1}\), \(J_{2}\), and \(J_{3}\) are given in Tables 4, 5, and 6.
\begin{table}
\begin{tabular}{l c} \hline Entropy & Values of entropies \\ \hline \(H_{\beta_{1}}(J_{1})\) & \(\log(162l^{2}+314l+648ml+162)-\frac{203.31l^{2}+346.09l+813.24ml+203.31}{162l^{2}+314l+648ml+162}\) \\ \(H_{\beta_{2}}(J_{1})\) & \(\log(729l^{2}+957l+2916ml+729)-\frac{1391.22l^{2}+1523.54l+5564.88ml+1391.22}{729l^{2}+957l+2916ml+729}\) \\ \(H_{\beta_{3}}(J_{1})\) & \(\log(1458l^{2}+2074l+5832ml+1458)-\frac{3221.46l^{2}+3991l+12885.84ml+3221.46}{1458l^{2}+2074l+5832ml+1458}\) \\ \(H_{\beta_{4}}(J_{1})\) & \(\log(0.11l^{2}+0.72l+0.44ml+0.11)+\frac{0.207l^{2}+1.065l+0.897ml+0.207}{0.11l^{2}+0.72l+0.44ml+0.11}\) \\ \(H_{\beta_{5}}(J_{1})\) & \(\log(13122l^{2}+12170l+52488ml+13122)-\frac{41514.75l^{2}+15202.13l+166059.31ml+41514.75}{13122l^{2}+12170l+52488ml+13122}\) \\ \(H_{\beta_{6}}(J_{1})\) & \(\log(18l^{2}+56.328l+72ml+18)-\frac{5.41l^{2}+9.08l+21.67ml+5.41}{18l^{2}+56.328l+72ml+18}\) \\ \(H_{\beta_{7}}(J_{1})\) & \(\log(l^{2}+3.98l+4ml+1)+\frac{0.9l^{2}+3.21l+3.6ml+0.9}{l^{2}+3.98l+4ml+1}\) \\ \(H_{\beta_{8}}(J_{1})\) & \(\log(40.5l^{2}+75.15l+162ml+40.5)-\frac{26.37l^{2}+36.84l+105.48ml+26.37}{40.5l^{2}+75.15l+162ml+40.5}\) \\ \hline \end{tabular}
\end{table}
Table 4: Index-entropies of \(J_{1}\)
\begin{table}
\begin{tabular}{l c} \hline Entropy & Values of entropies \\ \hline \(H_{\beta_{1}}(J_{2})\) & \(\log(162l^{2}+382l+648ml+162)-\frac{203.31l^{2}+430.65l+813.24ml+203.31}{162l^{2}+382l+648ml+162}\) \\ \(H_{\beta_{2}}(J_{2})\) & \(\log(729l^{2}+1251l+2916ml+729)-\frac{1391.22l^{2}+2088.77l+5564.88ml+1391.22}{729l^{2}+1251l+2916ml+729}\) \\ \(H_{\beta_{3}}(J_{2})\) & \(\log(1458l^{2}+2478l+5832ml+1458)-\frac{3221.46l^{2}+5380.37l+12885.84ml+3221.46}{1458l^{2}+2478l+5832ml+1458}\) \\ \(H_{\beta_{4}}(J_{2})\) & \(\log(0.11l^{2}+0.819l+0.44ml+0.11)+\frac{0.096l^{2}+1.007l+0.99}{0.11l^{2}+0.819l+0.44ml+0.11}\) \\ \(H_{\beta_{5}}(J_{2})\) & \(\log(13122l^{2}+17638l+52488ml+13122)-\frac{41514.75l^{2}+51096.56l+166059.31ml+41514.75}{13122l^{2}+17638l+52488ml+13122}\) \\ \(H_{\beta_{6}}(J_{2})\) & \(\log(18l^{2}+65.56l+72ml+18)-\frac{5.41l^{2}+23.44l+21.67ml+5.41}{18l^{2}+65.56l+72ml+18}\) \\ \(H_{\beta_{7}}(J_{2})\) & \(\log(l^{2}+4.57l+4ml+1)+\frac{0.9l^{2}+3.61l+3.6ml+0.9}{l^{2}+4.57l+4ml+1}\) \\ \(H_{\beta_{8}}(J_{2})\) & \(\log(40.5l^{2}+91.048l+162ml+40.5)-\frac{26.37l^{2}+46.387l+105.48ml+26.37}{40.5l^{2}+91.048l+162ml+40.5}\) \\ \hline \end{tabular}
\end{table}
Table 5: Index-entropies of \(J_{2}\)
## 7 Numerical Results and Discussions
The numerical values of the topological indices and graph index-entropies of the Y-junction graphs are computed in this section for some values of \(l\) and \(m\). In addition, we plot line and bar graphs for comparison of the obtained results. Here, we use base-10 logarithms for the calculations.
The numerical values of the topological indices for the Y-junction graph \(J\) are given in Table 7. The logarithmic values of Table 7 are plotted in Figure 2. From the vertical axis of Figure 2, we can conclude that, for the Y-junction graph \(J\), the topological indices have the following order: \({}^{nm}M_{2}\leq NR_{-1/2}\leq NH\leq ND_{5}\leq NI\leq NM_{1}\leq NM_{2}\leq S\leq NF\leq ND_{3}\). The third NDe index has the most dominating nature compared to the other topological indices, whereas the neighborhood second modified Zagreb index grows the most slowly.
Table 8 shows some numerical values of the topological indices for the Y-junction graph \(J_{1}\). The logarithmic values of these topological indices are plotted in Figure 3. From Figure 3, we can conclude that the topological indices for the Y-junction graph \(J_{1}\) have the following order: \({}^{nm}M_{2}\leq NH\leq NR_{-1/2}\leq ND_{5}\leq NI\leq NM_{1}\leq NM_{2}\leq S\leq NF\leq ND_{3}\). Also, we see that the logarithmic values of \(NR_{-1/2}\) and \(NH\) for \(J_{1}\) are almost the same.
Table 9 shows some calculated values of the topological indices for the Y-junction graph \(J_{2}\). The logarithmic values of these indices are plotted in Figure 4. The vertical axis of Figure 4 shows the comparison clearly. Figure 4 shows that the logarithmic values of \(ND_{3}\) are extremely high when compared to the other topological indices of \(J_{2}\). From Figure 4, we also see that the graphs of \(NR_{-1/2}\) and \(NH\) almost coincide.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c} \hline \([l,m]\) & \(NM_{1}(J_{2})\) & \(NM_{2}(J_{2})\) & \(NF(J_{2})\) & \({}^{nm}M_{2}(J_{2})\) & \(NR_{-\frac{1}{2}}(J_{2})\) & \(ND_{3}(J_{2})\) & \(ND_{5}(J_{2})\) & \(NH(J_{2})\) & \(NI(J_{2})\) & \(S(J_{2})\) \\ \hline \([2,2]\) & 4166 & 17811 & 35574 & 3.948 & 30.49121 & 310838 & 509.12 & 30.14 & 1032.596 & 27809.32 \\ \([3,3]\) & 8598 & 37287 & 74502 & 7.517 & 60.23681 & 656526 & 1024.68 & 59.71 & 2136.144 & 58645.93 \\ \([4,4]\) & 14650 & 64053 & 128010 & 12.186 & 99.98241 & 1133434 & 1720.24 & 99.28 & 3644.692 & 101159.7 \\ \([5,5]\) & 22322 & 98109 & 196098 & 17.955 & 149.728 & 1741562 & 2595.8 & 148.85 & 5558.24 & 155350.8 \\ \([6,6]\) & 31614 & 139455 & 278766 & 24.824 & 209.474 & 2480910 & 3651.36 & 208.42 & 7876.788 & 221219 \\ \([7,7]\) & 42526 & 188091 & 376014 & 32.793 & 279.2192 & 3351478 & 4886.92 & 277.99 & 10600.34 & 298764.4 \\ \([8,8]\) & 55058 & 244017 & 487842 & 41.862 & 358.9648 & 4353266 & 6302.48 & 357.56 & 13728.88 & 387987 \\ \([9,9]\) & 69210 & 307233 & 614250 & 52.031 & 448.7104 & 5486274 & 7898.04 & 447.13 & 17262.43 & 488886.8 \\ \([10,10]\) & 84982 & 377739 & 755238 & 63.3 & 548.456 & 6750502 & 9673.6 & 546.7 & 21200.98 & 601463.8 \\ \hline \end{tabular}
\end{table}
Table 9: Numerical values of topological indices for Y-junction graph \(J_{2}\)
Figure 3: Graphical comparison among topological indices of Y-junction graph \(J_{1}\)
Table 10 shows some numerical values of the topological indices of the Y-junction graph \(J_{3}\). Figure 5 depicts the graphical comparison of these indices. Table 10 and Figure 5 show that the values of the topological indices strictly increase as the values of \(l\) and \(m\) increase.
From Tables 7, 8, 9, and 10, we see that as the values of \(l\) and \(m\) in the Y-junction graphs increase, the corresponding values of the topological indices grow very quickly.
A few values of the graph index-entropies of the Y-junction graph \(J\) are listed in Table 11 and illustrated in Figure 6. From Figure 6, we see that the entropy measures \(H_{\beta_{1}}\), \(H_{\beta_{2}}\), \(H_{\beta_{3}}\), and \(H_{\beta_{4}}\) almost
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \([l,m]\) & \(NM_{1}(J_{3})\) & \(NM_{2}(J_{3})\) & \(NF(J_{3})\) & \({}^{nm}M_{2}(J_{3})\) & \(NR_{-\frac{1}{2}}(J_{3})\) & \(ND_{3}(J_{3})\) & \(ND_{5}(J_{3})\) & \(NH(J_{3})\) & \(NI(J_{3})\) & \(S(J_{3})\) \\ \hline \([2,2]\) & 4302 & 18399 & 37278 & 4.15 & 31.6419 & 321774 & 529.8 & 31.18 & 1064.4 & 28694 \\ \([3,3]\) & 8802 & 38169 & 77058 & 7.82 & 61.9628 & 672930 & 1055.7 & 61.27 & 2183.85 & 59973 \\ \([4,4]\) & 14922 & 65229 & 131418 & 12.59 & 102.284 & 1155306 & 1761.6 & 101.36 & 3708.3 & 102929 \\ \([5,5]\) & 22662 & 99579 & 200358 & 18.46 & 152.605 & 1768902 & 2647.5 & 151.45 & 5637.75 & 157562 \\ \([6,6]\) & 32022 & 141219 & 283878 & 25.43 & 212.926 & 2513718 & 3713.4 & 211.54 & 7972.2 & 223873 \\ \([7,7]\) & 43002 & 190149 & 381978 & 33.5 & 283.247 & 3389754 & 4959.3 & 281.63 & 10711.7 & 301861 \\ \([8,8]\) & 55602 & 246369 & 494658 & 42.67 & 363.568 & 4397010 & 6385.2 & 361.72 & 13856.1 & 391526 \\ \([9,9]\) & 69822 & 309879 & 621918 & 52.94 & 453.889 & 5535486 & 7991.1 & 451.81 & 17405.6 & 492868 \\ \([10,10]\) & 85662 & 380679 & 763758 & 64.31 & 554.209 & 6805182 & 9777 & 551.9 & 21360 & 605887 \\ \hline \end{tabular}
\end{table}
Table 10: Numerical values of topological indices of Y-junction graph \(J_{3}\)
Figure 4: Graphical comparison among topological indices of Y-junction \(J_{2}\)
Figure 5: Graphical comparison among topological indices of Y-junction graph \(J_{3}\)
coincide.
The values of the index-entropies of the Y-junction graph \(J_{1}\) are listed in Table 12 and illustrated in Figure 7. From Table 12 and Figure 7, we find that the measures of the graph index-entropies \(H_{\beta_{1}}\), \(H_{\beta_{2}}\), \(H_{\beta_{3}}\), \(H_{\beta_{5}}\), \(H_{\beta_{6}}\), and \(H_{\beta_{8}}\) are almost the same.
Table 13 depicts some graph index-entropies of the Y-junction graph \(J_{2}\). The graphical comparison of the index-entropies of \(J_{2}\) is shown in Figure 8. From Figure 8, we see that the graph index-entropies of \(J_{2}\) increase as the values of \(l\) and \(m\) increase.
In Table 14, we list some graph index-entropies of the Y-junction graph \(J_{3}\). Figure 9 shows the graphical comparison among the index-entropies of \(J_{3}\). From Table 14 and Figure 9, we see that the index-entropies \(H_{\beta_{1}}\), \(H_{\beta_{2}}\), \(H_{\beta_{3}}\), \(H_{\beta_{6}}\), and \(H_{\beta_{8}}\) of \(J_{3}\) are almost the same. Also, Tables 11, 12, 13, and 14 show that the graph index-entropies of the Y-junction graphs increase as the values of \(l\) and \(m\) increase.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \([l,m]\) & \(H_{\beta_{1}}(J_{2})\) & \(H_{\beta_{2}}(J_{2})\) & \(H_{\beta_{3}}(J_{2})\) & \(H_{\beta_{4}}(J_{2})\) & \(H_{\beta_{6}}(J_{2})\) & \(H_{\beta_{6}}(J_{2})\) & \(H_{\beta_{6}}(J_{2})\) & \(H_{\beta_{6}}(J_{2})\) \\ \hline \([2,2]\) & 2.388128 & 2.375827 & 2.346055 & 1.633105 & 2.330917 & 2.391354 & 2.345766 & 2.35432 \\ \([3,3]\) & 2.606411 & 2.687189 & 2.666470 & 1.88376 & 2.665889 & 2.068832 & 2.650778 & 2.67237 \\ \([4,4]\) & 2.924515 & 2.916793 & 2.9007 & 2.074555 & 2.902766 & 2.902608 & 2.875969 & 2.9056747 \\ \([5,5]\) & 3.104655 & 3.098533 & 3.085384 & 2.229546 & 3.088295 & 3.106231 & 3.058583 & 3.09892 \\ \([6,6]\) & 3.254134 & 3.248888 & 3.237774 & 2.360107 & 3.240916 & 3.255465 & 3.204459 & 3.241908 \\ \([7,7]\) & 3.381681 & 3.377087 & 3.367462 & 2.473994 & 3.370602 & 3.388288 & 3.331363 & 3.371323 \\ \([8,8]\) & 3.402905 & 3.488816 & 3.480327 & 2.573399 & 3.48337 & 3.49301 & 3.442095 & 3.483078 \\ \([9,9]\) & 3.59151 & 3.587821 & 3.580028 & 2.662948 & 3.563139 & 3.592399 & 3.540300 & 3.563714 \\ \([10,10]\) & 3.680065 & 3.676703 & 3.659833 & 2.744042 & 3.672602 & 3.680861 & 3.625848 & 3.673185 \\ \hline \end{tabular}
\end{table}
Table 13: Numerical values of index-entropies of \(J_{2}\)
Figure 8: Graphical comparison among index-entropies of \(J_{2}\)
Figure 7: Graphical comparison among index-entropies of \(J_{1}\)
## 8 Conclusion and Future work
In this study, the general expressions of the NM-polynomials of carbon nanotube Y-junction graphs are derived. Also, various neighborhood degree sum-based topological indices are retrieved from these polynomials. In addition, eight graph entropies in terms of these topological indices have been defined and calculated for the Y-junction graphs. Furthermore, some numerical values of the topological indices and index-entropies of the Y-junction graphs are plotted for comparison. Since topological indices based on vertex degrees have a significant ability to predict various physicochemical properties and biological activities of a chemical molecule, the study's findings will be a viable option for predicting various physicochemical properties and understanding the structural problems of carbon nanotube Y-junctions.
We mention some possible directions for future research, including multiplicative topological indices, graph index-entropies, regression models between the index-entropies and the topological indices, metric and edge metric dimension, etc., to predict thermochemical data, physicochemical properties, and structural information of carbon nanotube Y-junctions.
**Data Availability**
No data was used to support the findings of this study.
**Conflicts of Interest**
There are no conflicts of interest declared by the authors.
**Funding Statement**
The authors received no specific funding for this study.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \([l,m]\) & \(H_{\beta_{1}}(J_{3})\) & \(H_{\beta_{2}}(J_{3})\) & \(H_{\beta_{3}}(J_{3})\) & \(H_{\beta_{4}}(J_{3})\) & \(H_{\beta_{5}}(J_{3})\) & \(H_{\beta_{6}}(J_{3})\) & \(H_{\beta_{7}}(J_{3})\) & \(H_{\beta_{8}}(J_{3})\) \\ \hline \([2,2]\) & 2.401692 & 2.38893 & 2.293488 & 2.17494 & 1.755856 & 2.404539 & 2.359174 & 2.40079 \\ \([3,3]\) & 2.706459 & 2.696449 & 2.7002 & 2.46933 & 2.242514 & 2.708584 & 2.660759 & 2.70624 \\ \([4,4]\) & 2.932101 & 2.924005 & 2.927043 & 2.687 & 2.571439 & 2.933776 & 2.884517 & 2.932298 \\ \([5,5]\) & 3.11224 & 3.104551 & 3.106972 & 2.86049 & 2.816575 & 3.112596 & 3.062409 & 3.111608 \\ \([6,6]\) & 3.25977 & 3.254033 & 3.25051 & 3.0365032 & 3.016781 & 3.360882 & 3.210468 & 3.206369 \\ \([7,7]\) & 3.33855 & 3.381543 & 3.38306 & 3.128925 & 3.117078 & 3.387542 & 3.338232 & 3.387369 \\ \([8,8]\) & 3.497215 & 3.492748 & 3.492407 & 3.237341 & 3.307302 & 3.49892 & 3.446407 & 3.498149 \\ \([9,9]\) & 3.563375 & 3.591345 & 3.562735 & 3.33372 & 3.425608 & 3.506141 & 3.544179 & 3.5064 \\ \([10,10]\) & 3.683569 & 3.679896 & 3.681147 & 3.420469 & 3.530087 & 3.684252 & 3.632058 & 3.684668 \\ \hline \hline \end{tabular}
\end{table}
Table 14: Numerical values of index-entropies of \(J_{3}\)
Figure 9: Graphical comparison among index-entropies of \(J_{3}\)
**Author's Contribution Statement**
The final draft was written by **Sohan Lal** and **Vijay Kumar Bhat**. The figures and tables were prepared by **Sohan Lal** and **Sahil Sharma**. All authors reviewed and edited the final draft.
|
2309.01242 | On input-to-state stability verification of identified models obtained
by Koopman operator | This paper proposes a class of basis functions for realizing the
input-to-state stability verification of identified models obtained from the
true system (assumed to be input-to-state stable) using the Koopman operator.
The formulated input-to-state stability conditions are in the form of linear
matrix inequalities. Two extensions are presented to relax the imposed
restrictions on the basis functions. Several numerical examples are provided to
demonstrate the efficacy of the proposed results. | Wenjie Mei, Dongzhe Zheng, Yu Zhou, Ahmad Taha, Chengyan Zhao | 2023-09-03T19:03:18Z | http://arxiv.org/abs/2309.01242v2 | # On input-to-state stability verification of identified models obtained by Koopman operator
###### Abstract
This paper proposes a class of basis functions for realizing the input-to-state stability verification of identified models obtained from the true system (assumed to be input-to-state stable) using the Koopman operator. The formulated input-to-state stability conditions are in the form of linear matrix inequalities. We also present extensions to relax the imposed restrictions on the basis functions. A numerical example is provided to demonstrate the efficacy of the proposed results.
keywords: Basis functions, input-to-state stability verification, Koopman operator +
Footnote †: journal: Journal of the Franklin Institute
## 1 Introduction
It has become easier to collect or measure data from networks and dynamical systems in recent years, making data-driven methods popular tools in technical systems. For nonlinear control applications, identification problems may become intractable and complex to implement, especially under significant noise or uncertainty. To that end, a theoretic framework called the _Koopman operator_[1] has been exploited for lifting complicated nonlinear dynamical systems to higher-dimensional linear models, which can be utilized for prediction, control, and stability analysis.
Such a framework is frequently combined with data-driven methods. Specifically, the collected data from the system trajectory may be used to approximate the Koopman operator, making the application of the Koopman operator more viable. Various related investigations can be found in the directions of its spectral properties [2], linear predictors [3], and neural network training [4], to mention a few examples.
In this work, under the realized system identification (defined as the determination, based on measured input/output data, of a system model within a specified class of dynamics [5]) of nonlinear systems (which can be well approximated by "quasi generalized Persidskii systems" [6]), we focus on the stability verification problem for the identified model derived via the Koopman operator theory in conjunction with a useful data-driven method, namely the _extended dynamic mode decomposition (EDMD)_[7]. Several recent studies involve system identification in the context of the Koopman operator, such as the adaptation of Koopman operator theory to identification [8], its application to soft robots [9], and the investigation of the convergence of a variant of the generator EDMD algorithm [10] for calculating a linear representation of the operator. Nevertheless, there still exist many gaps in the stability verification of identified models (where the true system is assumed to be stable) obtained by employing the Koopman operator theory.
For that purpose, this paper proposes a general scheme to examine whether an identified system modeled via the Koopman operator technique is input-to-state stable. To the authors' knowledge, few studies have addressed the stability analysis problem for identified systems obtained by using the Koopman operator theory. In practice, the true system is usually assumed to be input-to-state stable. However, due to many factors, such as noise and modeling errors, the input-to-state stability (ISS) of identified models cannot be guaranteed. This motivated the works on stability guarantees presented in, for example, [11] and this paper.
Due to the intricate forms and possibly highly dynamic nature of nonlinear systems, especially in the presence of external perturbations, their stability analyses may be difficult [12]. Among the available frameworks, the most widely utilized is the ISS concept [13; 14]. As stated above, the stability analysis of an identified model can also be tricky, since it may be a nonlinear system taking an unpredictable, and thus possibly complex, form. However, the ISS of the true system is necessary for robustness and makes system identification practical [11]. These observations inspire us to tackle the target problem in this paper.
The main contributions of this work can be summarized as follows:
i) If the basis functions selected for identification take complex forms (even if only one of them does), the ISS analysis of the resulting identified system (for example, one with many nonlinearities) can be laborious. To that end, we introduce a class of basis functions (which can be linear or the nonlinearities of generalized Persidskii dynamics) to facilitate the relevant stability analysis, since it accommodates an adaptable number of nonlinearities while preserving the generality of the functions. ii) Since it is also beneficial to relax the conditions imposed on the basis functions, we present two extensions of the considered class of functions that enlarge the selection range. In this way, the analyzable forms of the identified system can be further expanded. iii) Under a mild assumption (Assumption 1 in this note), we connect the considered class of nonlinear systems with the actual system. This enables us to bypass the obstruction of approximating the derivative of the input and to focus only on identifying the vector field of the true system.
The rest of this work is organized as follows. The lifting approach, the data-driven method for calculating Koopman operators, and the class of nonlinear systems under consideration are provided in Section 2. Section 4 presents the ISS conditions for identified systems, followed by the introduction of two extended forms of the basis functions. In Section 5, we show an example to illustrate the efficiency of the proposed results. The notation is provided next:
\(\mathbb{N}\), \(\mathbb{Z}\) and \(\mathbb{R}\) represent the sets of natural numbers, integers and real numbers, respectively, \(\mathbb{R}_{+}=\left\{s\in\mathbb{R}\mid s\geq 0\right\}\); \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m\times n}\) denote the vector spaces of \(n\)-tuples of real numbers and \(m\times n\) real matrices, respectively. The symbol \(\|\cdot\|\) refers to the Euclidean norm on \(\mathbb{R}^{n}\) (and the induced matrix norm \(\|A\|\) (or the Frobenius norm \(\|A\|_{F}\)) for a matrix \(A\in\mathbb{R}^{m\times n}\)). The set of \(n\times n\) diagonal matrices (with nonnegative diagonal elements) is denoted by \(\mathbb{D}^{n}\) (\(\mathbb{D}^{n}_{+}\)). We let \(O_{p\times n}\) refer to the \(p\times n\) zero matrix. For \(p\), \(n\in\mathbb{N}\) with \(p\leq n\), the notation \(\overline{p,n}\) is used to represent the set \(\{p,p+1,\ldots,n\}\); \(p\bmod n\) is the remainder of the Euclidean division of \(p\) by \(n\). \(\operatorname{vec}(A)\) represents the vectorization of a matrix \(A^{m\times n}\); for a vector \(\ell\in\mathbb{R}^{mn}\), \(\operatorname{vec}_{m\times n}^{-1}(\ell)=(\operatorname{vec}(I_{n})^{\top} \otimes I_{m})(I_{n}\otimes\ell)\in\mathbb{R}^{m\times n}\), where \(I_{n}\) denotes the \(n\times n\) identity matrix and \(\otimes\) denotes the Kronecker product. Let \(\lfloor\cdot\rfloor\) represent the floor function defined on \(\mathbb{R}\). \(C(U,R)\) denotes the space of continuous functions \(f\colon U\to R\), where \(U,R\) are metric spaces. For \(t_{1}\), \(t_{2}\in\mathbb{R}\), with \(t_{1}<t_{2}\), we denote by \(C^{1}_{n}([t_{1},t_{2}))\) the Banach space of continuously differentiable functions
with the norm \(\|\psi\|_{[t_{1},t_{2})}=\sup_{r\in[t_{1},t_{2})}\|\psi(r)\|+\sup_{r\in[t_{1},t_{2} )}\|\frac{d\psi(r)}{dr}\|<+\infty\).
## 2 Preliminaries
This section introduces the lifting approach based on the Koopman operator, which can be represented as a matrix after projection. On this basis, the relationship between the Koopman operator and system identification is clarified. Then, the _Extended Dynamic Mode Decomposition_ (EDMD) algorithm (see, _e.g._, [7]) is presented for approximating the Koopman operator so that system identification can be realized.
### Koopman operator via a matrix
Following the definitions given in the Appendices, for brevity, consider rewriting system (1) as the dynamics
\[\dot{\chi}(t)=\tilde{F}(\chi(t)):=\begin{bmatrix}F(x(t),u(t))\\ \dot{u}(t)\end{bmatrix},\quad t\in\mathbb{R}_{+} \tag{1}\]
with an extended state \(\chi=\begin{bmatrix}x^{\top}&u^{\top}\end{bmatrix}^{\top}\in\mathbb{R}^{n+m}\), which admits a unique solution \(\chi^{t}(\chi_{0})\) for any initial condition \(\chi_{0}\in\mathbb{R}^{n+m}\) defined for \(t\in[0,T_{\chi_{0}})\) with some \(T_{\chi_{0}}>0\). Then, the Koopman operator theory states that
\[K^{t}\tilde{H}=\tilde{H}\circ\chi^{t},\]
where \(\tilde{H}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}\) is an observable function in a space \(\tilde{\mathcal{H}}\) and \(K^{t}\) is the Koopman operator. In order to define a procedure for lifting the dimension of the model (1), so as to use the data-driven method presented in the sequel, we need to introduce a finite-dimensional subspace of \(\tilde{\mathcal{H}}\): \(\tilde{\mathcal{H}}^{N_{H}}:=\text{span }\{H_{1}(\chi)\ldots H_{N_{H}}(\chi)\}\), spanned by \(N_{H}\) linearly independent scalar-valued functions \(H_{1},\ldots,H_{N_{H}}\), which are called _lifting functions_[9]. Then the projection of \(\tilde{H}\) onto the space \(\tilde{\mathcal{H}}^{N_{H}}(\subset\tilde{\mathcal{H}})\) can be expressed as:
\[\mathbb{P}\;\tilde{H}(\chi)=a^{\top}\begin{bmatrix}H_{1}(\chi)\\ \vdots\\ H_{N_{H}}(\chi)\end{bmatrix}:=a^{\top}P(\chi)\in\mathbb{R},\;a\in\mathbb{R}^{ N_{H}},\]
and
\[\mathbb{P}\;K^{t}\tilde{H}(\chi)=b^{\top}P(\chi)\in\mathbb{R}, \quad b\in\mathbb{R}^{N_{H}}, \tag{2}\]
where \(P(\chi)=\left[H_{1}(\chi)\ldots H_{N_{H}}(\chi)\right]^{\top}\) and \(\mathbb{P}:\tilde{\mathcal{H}}\rightarrow\tilde{\mathcal{H}}^{N_{H}}\) is a projection operator. Note that to get (2), we used a property of the operator \(\mathbb{P}\), _i.e._, \(K^{t}:\tilde{\mathcal{H}}^{N_{H}}\rightarrow\tilde{\mathcal{H}}\), such that \(K_{\text{rep}}a=b\), where the matrix \(K_{\text{rep}}\) is a representation of \(K^{t}\)[8].
In short, the matrix \(K_{\text{rep}}\) is a linear representation of the nonlinear dynamic system (1). It can be calculated numerically using data-driven approaches, such as the EDMD, which will be briefly introduced later. Notice that if one selects lifting functions under the restriction of \(K^{t}\)_-invariance_, _i.e._, \(K^{t}(\tilde{\mathcal{H}}^{N_{H}})=\tilde{\mathcal{H}}^{N_{H}}\), then in (2) the projection operator \(\mathbb{P}\) can be ignored.
### Connecting Koopman operator and system identification
In this subsection, the relationship between the Koopman operator and system identification [8] is clarified, illustrating how to identify the vector field \(\tilde{F}\) by applying the Koopman operator technique.
For system (1), assume that the function \(\tilde{F}_{i}\) can be projected onto the space \(\tilde{\mathcal{H}}^{N_{F}}\) (here \(\tilde{\mathcal{H}}^{N_{F}}:=\text{span }\{G_{1}(\chi)\ldots G_{N_{F}}(\chi)\}\) with the linear independent _basis functions_\(G_{1}\ldots G_{N_{F}}\); note that \(N_{F}\) is not necessarily equal to \(N_{H}\)) as
\[\tilde{F}_{i}(\chi)=\sum_{j=1}^{N_{F}}\lambda_{ij}G_{j}(\chi),\quad\forall i \in\overline{1,n+m}, \tag{3}\]
where \(\tilde{F}_{i}\) is the \(i\)-th element of the function \(\tilde{F}\) defined in (1). By the Koopman operator theory, there is a Koopman infinitesimal generator \(L=\sum_{i=1}^{n+m}\sum_{j=1}^{N_{F}}\lambda_{ij}L_{ij}\) with the operators \(L_{ij}=G_{j}\cdot\frac{\partial}{\partial\chi_{i}}\) such that
\[\dot{P}(\chi) =\sum_{i=1}^{n+m}\dot{\chi}_{i}\cdot\frac{\partial P(\chi)}{ \partial\chi_{i}}=\sum_{i=1}^{n+m}\tilde{F}_{i}\cdot\frac{\partial P(\chi)}{ \partial\chi_{i}}\] \[=\sum_{i=1}^{n+m}\sum_{j=1}^{N_{F}}\left(\lambda_{ij}G_{j}\right) \cdot\frac{\partial P(\chi)}{\partial\chi_{i}}=\sum_{i=1}^{n+m}\sum_{j=1}^{N_ {F}}\lambda_{ij}\left(G_{j}\cdot\frac{\partial P(\chi)}{\partial\chi_{i}}\right)\] \[=LP(\chi),\]
which is a linear system so that
\[\boxed{P(\chi(T+t))=e^{Lt}P(\chi(T))=K^{t}P(\chi(T)).} \tag{4}\]
Until now, we have introduced the linear operator \(K^{t}\) and demonstrated the connections among the Koopman operator \(K^{t}\), the generators \(L_{ij}\) and the identification of \(\tilde{F}\). For the implementation of the system identification procedure, the forms of \(G_{j}\) should be properly selected by the designer. The remaining part of this section provides a way to calculate the values of \(\lambda_{ij}\) to actualize the system identification.
Note that the operator \(L\) can also be represented as a matrix
\[L_{\text{rep}}=\sum_{i=1}^{n+m}\sum_{j=1}^{N_{F}}\lambda_{ij}L_{ij,\;\text{rep}}, \tag{5}\]
which usually can be computed by
\[L_{\text{rep}}=\frac{1}{t}\ln(K_{\text{rep}}), \tag{6}\]
due to (4). In addition, by the relation (5), it can be deduced that
\[\Gamma:=\begin{bmatrix}\lambda_{11}&\dots&\lambda_{1N_{F}}&\dots&\lambda_{(n+m)1}&\dots&\lambda_{(n+m)N_{F}}\end{bmatrix}^{\top} \tag{7}\]
\[=\begin{bmatrix}\text{vec}(L_{11,\;\text{rep}})&\dots&\text{vec}(L_{1N_{F},\;\text{rep}})&\dots&\text{vec}(L_{(n+m)1,\;\text{rep}})&\dots&\text{vec}(L_{(n+m)N_{F},\;\text{rep}})\end{bmatrix}^{\dagger}\text{vec}(L_{\text{rep}}).\]
We will describe how \(K_{\text{rep}},L_{\text{rep}}\) can be obtained by the EDMD algorithm in the next section. Thus, if one has the values of \(L_{ij,\;\text{rep}}\), then it is direct to get the parameters \(\lambda_{ij}\) from (7).
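For concreteness, the recovery step (7) amounts to a single least-squares solve once the matrices \(L_{ij,\;\text{rep}}\) and \(L_{\text{rep}}\) are available. The following is a minimal NumPy sketch of this step; the function name and the ordering convention (index \(i\) outer, \(j\) inner, matching (7)) are our own illustrative choices.

```python
import numpy as np

def recover_lambdas(L_rep, L_ij_reps):
    """Recover the coefficient vector Gamma of (7).

    L_rep     : (N_H, N_H) matrix representation of the generator L.
    L_ij_reps : list of (N_H, N_H) matrices L_{ij, rep}, ordered with
                i = 1..n+m outer and j = 1..N_F inner, as in (7).
    """
    # Each column is the (column-major) vectorization of one generator matrix.
    A = np.column_stack([Lij.flatten(order="F") for Lij in L_ij_reps])
    # np.linalg.lstsq plays the role of the pseudoinverse (dagger) in (7).
    Gamma, *_ = np.linalg.lstsq(A, L_rep.flatten(order="F"), rcond=None)
    return Gamma
```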
It is worth mentioning that, in practice, preserving the \(K^{t}\)-invariance is challenging, since each operator \(L_{ij}\) generates terms that may increase the number of linearly independent lifting functions in the resulting space (see, _e.g._, in (2), the Koopman operator \(K^{t}:\tilde{\mathcal{H}}^{N_{H}}\to\tilde{\mathcal{H}}\) and \(\tilde{\mathcal{H}}^{N_{H}}\subset\tilde{\mathcal{H}}\)). The idea of this work is that, by selecting \(G_{j}\) and the elements \(P_{l}\) of \(P\) satisfying the sector condition formulated in Assumption 3 of the Appendices, there always exists an operator \(L_{ij}\) such that \(L_{ij}P=G_{j}\cdot\frac{\partial P}{\partial\chi_{i}}\), since the scalar-valued functions \(G_{j},P_{l}\) are in the semigroup \((\mathcal{F}(\mathbb{R}),\cdot)\), where \(\mathcal{F}(\mathbb{R})=\{f\in C(\mathbb{R},\mathbb{R})\mid f(0)=0,\nu f(\nu)>0,\;\forall\;\nu\neq 0\}\). This provides a possible scheme to ensure the \(K^{t}\)-invariance in theory, and it demonstrates one of the strengths of choosing \(G_{j}\) as functions satisfying Assumption 3, whose further merits will be interpreted at length in the sequel.
### Approximating Koopman operator
This section presents a data-driven approach: the EDMD algorithm (see [7] for reference). It can calculate a matrix \(K_{\text{rep}}\) to represent the Koopman operator.
For a given constant sampling time \(T_{c}>0\) and a series of input values \(\{u_{k}\}_{k=1}^{N+1}\) of length \(N\in\mathbb{N}\), collect pairs of the state \(x_{k}\) and the output \(y_{k}\) of the true system: \(\{(x_{k},u_{k}),(y_{k},u_{k+1})\}_{k=1}^{N}\) with \(u_{k+1}=\Theta u_{k}\), where \(\Theta\) is a left shift operator. Here \(y_{k}\) is not necessarily equal to \(x_{k+1}=x^{T_{c}}(x_{k},u)\) due to the presence of measurement noise. In the noise-free case, one can assume that \(y_{k}=x^{T_{c}}(x_{k},u)\).
The EDMD converts an approximation problem to a minimization one as follows:
\[(K_{\mathrm{rep}})^{\top}=\operatorname*{argmin}_{(K_{\mathrm{rep}})^{\top}} \sum_{k=1}^{N}\left\|P(y_{k},u_{k+1})-(K_{\mathrm{rep}})^{\top}P(x_{k},u_{k}) \right\|_{F}. \tag{8}\]
Then, the corresponding least square solution of (8) can be calculated by
\[K_{\mathrm{rep}}=\begin{bmatrix}P(x_{1},u_{1})^{\top}\\ \vdots\\ P(x_{N},u_{N})^{\top}\end{bmatrix}^{\dagger}\begin{bmatrix}P(y_{1},u_{2})^{ \top}\\ \vdots\\ P(y_{N},u_{N+1})^{\top}\end{bmatrix}, \tag{9}\]
where the symbol \(\dagger\) stands for the pseudoinverse.
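To make the computation above concrete, the following is a minimal NumPy/SciPy sketch of the EDMD step (8)-(9) together with the generator estimate (6). It assumes the lifting map \(P\) is available as a callable and that the real part of the matrix logarithm is meaningful for the data at hand; all names are illustrative.

```python
import numpy as np
from scipy.linalg import logm, pinv

def edmd(P, X, U, Y, U_next, T_c):
    """EDMD estimate of K_rep via the least-squares solution (9),
    followed by the generator estimate L_rep = ln(K_rep) / T_c from (6).

    P          : callable (x, u) -> lifted vector P(x, u) in R^{N_H}.
    X, Y       : (N, n) arrays of samples x_k and outputs y_k.
    U, U_next  : (N, m) arrays of inputs u_k and shifted inputs u_{k+1}.
    T_c        : constant sampling time.
    """
    Px = np.vstack([P(x, u) for x, u in zip(X, U)])       # rows P(x_k, u_k)^T
    Py = np.vstack([P(y, u) for y, u in zip(Y, U_next)])  # rows P(y_k, u_{k+1})^T
    K_rep = pinv(Px) @ Py                                 # eq. (9)
    L_rep = logm(K_rep).real / T_c                        # eq. (6); drop numerical imaginary residue
    return K_rep, L_rep
```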
So far, we have considered the nonlinear system (1) and given the general definitions of the lifting approach, the Koopman operator and its connection with identification, and the EDMD algorithm.
## 3 Problem statement
In this paper, we assume that the considered identified models take the form of (generalized) Persidskii dynamics, and the following assumption is imposed on the true system (1) (see Appendices for the detailed definition of (1)).
**Assumption 1**.: _Assume that the true system (1) is ISS and can be presented in the form_
\[\dot{x}=f(x)+Du, \tag{10}\]
_where \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a vector field and \(D\in\mathbb{R}^{n\times m}\) is a constant matrix._
**Remark 1**.: _Note that in this work, we use generalized Persidskii systems to approximate the true system (10). Assumption 1 is non-restrictive (since (10) is still a general nonlinear system) and makes the identification feasible, because the continuous function \(f\) can be approximated (or identified) arbitrarily well by a neural network represented by the generalized Persidskii dynamics (see [15] for the details of representing recurrent neural networks by generalized Persidskii systems; also, see the Universal Approximation Theorem [16; 17] for the fact that a recurrent neural network can approximate the function \(f\) arbitrarily well)._
_On the other hand, at least the generalized Persidskii system is more adaptable for approximating the true system (10) than the linearization of (10) (locally or globally), considering, for example, \(\dot{x}=-x\tanh(x)+Du=-x^{2}+\frac{x^{4}}{3}-\frac{2x^{6}}{15}+\cdots+Du\), and the latter is a particular case of the former one._
The main goal of this work is to propose a generic scheme to verify whether an identified model obtained by applying the Koopman operator theory is ISS, under the premise that the system (10) is ISS and its implicit form is known (_i.e._, Assumption 1 is verified).
## 4 Main results: Basis functions for ISS analysis of identified models
In this section, we propose a novel type of basis functions (belonging to the nonlinearity classes of generalized Persidskii systems) that is helpful in analyzing the ISS property of an identified model associated with the system (10); this is also the main contribution of this study.
### A class of basis functions
The considered kind of functions \(G_{j}\) (in (3)) is defined as follows.
**Definition 1**.: _The functions \(G_{j}\) satisfying Assumption 3 are called sector basis functions (SBFs)._
We then show the usefulness of SBFs in system identification. Firstly, to suppress unnecessary computation of the given input \(u\) during the solution process of (8), we further impose that the basis functions take the form \(G_{s}(\chi)=\varphi_{s}(x)+\psi_{s}(u),\ s\in\overline{1,nM+m}\), where the functions \(\varphi_{s}:\mathbb{R}^{n}\to\mathbb{R}\) and \(\psi_{s}:\mathrm{U}\to\mathbb{R}\). Moreover, let \(\psi_{s}(u)=0\) for \(s\in\overline{1,nM}\) and \(G_{s^{\prime}}(\chi)=u_{s^{\prime}-nM}\) for \(s^{\prime}\in\overline{nM+1,nM+m}\). We have
\[G(\chi):=\begin{bmatrix}\varphi_{1}(x)&\ldots&\varphi_{nM}(x)&u^{\top}\end{bmatrix}^ {\top}.\]
Here \(N_{F}=nM+m\). Then, for conciseness, we collect the SBFs \(\tilde{f}_{1},\ldots,\tilde{f}_{nM}\) in \(G(\chi)\) as:
\[f_{j^{\prime}}(x)=\begin{bmatrix}\varphi_{(j^{\prime}-1)n+1}(x)\\ \vdots\\ \varphi_{j^{\prime}n}(x)\end{bmatrix}=\begin{bmatrix}\tilde{f}_{(j^{\prime}-1) n+1}(x_{1})\\ \vdots\\ \tilde{f}_{j^{\prime}n}(x_{n})\end{bmatrix}\in\mathbb{R}^{n}\]
for all \(j^{\prime}\in\overline{1,M}\), so
\[G(\chi)=\begin{bmatrix}f_{1}(x)&\ldots&f_{M}(x)&u\end{bmatrix}^{\top}. \tag{11}\]
The identification of \(u\) is not considered since it is given. Hence, without loss of generality and by (3), we deal solely with identifying the vector field \(F\):
\[\tilde{F}(\chi)=\begin{bmatrix}F(x,u)\\ \dot{u}\end{bmatrix},\qquad F(x,u)=\begin{bmatrix}\Gamma_{1}&\dots&\Gamma_{M}&B\end{bmatrix}G(\chi)=\sum_{j^{\prime}=1}^{M}\Gamma_{j^{\prime}}f_{j^{\prime}}(x)+Bu.\]
Also, the matrices \(E_{j^{\prime}},\tilde{B}\) are used to extract the blocks \(\Gamma_{j^{\prime}}\) and \(B\) from the identified coefficients. Therefore, the identified system
\[\boxed{\dot{x}=\sum_{j^{\prime}=1}^{M}\Gamma_{j^{\prime}}f_{j^{\prime}}(x)+Bu} \tag{12}\]
is in the form of generalized Persidskii systems (3) (\(f_{1},\ \dots,\ f_{M}\) are SBFs).
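As an illustration, once the coefficient matrices are available from the identification step, the model (12) can be simulated directly. Below is a minimal SciPy sketch under the assumption that the \(\Gamma_{j^{\prime}}\) and \(B\) have been identified; the SBFs are passed as element-wise callables.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_identified(Gammas, B, fs, u, x0, t_span):
    """Integrate the identified generalized Persidskii model (12):
        dx/dt = sum_j Gamma_j f_j(x) + B u(t).

    Gammas : list of (n, n) matrices Gamma_1..Gamma_M.
    fs     : list of M element-wise SBFs mapping R^n -> R^n,
             e.g. [np.tanh, lambda x: x**3].
    u      : callable t -> input vector in R^m.
    """
    def rhs(t, x):
        dx = B @ u(t)
        for Gamma, f in zip(Gammas, fs):
            dx = dx + Gamma @ f(x)
        return dx
    return solve_ivp(rhs, t_span, x0, dense_output=True)
```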
The notable advantages of considering the generalized Persidskii system are that its stability properties have been well investigated in, for instance, [18; 6], and that letting the basis functions take the form of such a class of dynamics allows us to analyze the stability of the identified model (random selections may lead to a complex stability analysis) while preserving the generality of the basis functions to some extent. Apart from these, the theoretically infinite number of nonlinearities in generalized Persidskii systems can be well fitted to the possibly voluminous dimension of the lifting/basis functions.
### ISS property of the identified model
In this section, the ISS conditions of the identified system are formulated, based on the selection of SBFs, for verifying the ISS property of (12) under Assumption 1. The following theorem investigates the ISS of the identified model (12).
**Theorem 1**.: _Let Assumption 3 be satisfied with \(\phi\in\mathbb{N}\backslash\{0\}\). If there exist \(0\leq P=P^{\top}\in\mathbb{R}^{n\times n}\); \(\left\{\Lambda^{j}=\mathrm{diag}(\Lambda^{j}_{1},\dots,\Lambda^{j}_{n})\right\} _{j=1}^{M}\), \(\left\{\Xi^{k}\right\}_{k=1}^{M}\), \(\left\{\Upsilon_{s,z}\right\}_{0\leq s<z\leq M}\)\(\subset\mathbb{D}^{n}_{+}\); \(0<\Phi=\Phi^{\top}\in\mathbb{R}^{n\times n}\); \(\rho\in\mathbb{R}\) such that_
\[P+\rho\sum_{j=1}^{\mu}\Lambda^{j}>0,\ Q\leq 0,\ \sum_{k=1}^{\phi}\Xi^{k}+2 \sum_{s=0}^{\phi}\sum_{z=s+1}^{\phi}\Upsilon_{s,z}>0, \tag{13}\]
_where_
\[Q_{1,1}=O_{n\times n};\ Q_{j+1,j+1}=\Gamma_{j}^{\top}\Lambda^{j} +\Lambda^{j}\Gamma_{j}+\Xi^{j},\ j\in\overline{1,M}\] \[Q_{1,j+1}=P\Gamma_{j}+\Upsilon_{0,j},\ j\in\overline{1,M};\ Q_{1,M+2}=P,\] \[Q_{s+1,z+1}=\Gamma_{s}^{\top}\Lambda^{z}+\Lambda^{s}\Gamma_{z}+ \Upsilon_{s,z},\ s\in\overline{1,M-1},\ z\in\overline{s+1,M},\] \[Q_{j+1,M+2}=\Lambda^{j},\ Q_{M+2,M+2}=-\Phi,\]
_and \(\phi\) is defined in the Appendices, then a forward complete system (12) is ISS._
Proof. A similar proof can be found in [6, 18]. The ISS analysis of (12) can be performed using a Lyapunov function \(V(x)=x^{\top}Px+2\sum_{j\in\overline{1,M}}\sum_{i\in\overline{1,n}}\Lambda_{i}^{j}\int_{0}^{x_{i}}f_{j}^{i}(\nu)d\nu\), where \(f_{j}^{i}\) is the \(i\)-th element of \(f_{j}\), and \(0\leq P=P^{\top}\in\mathbb{R}^{n\times n}\) and \(\Lambda^{j}=\operatorname{diag}(\Lambda_{1}^{j},...,\Lambda_{n}^{j})\in\mathbb{D}_{+}^{n}\) are tuning matrices. They are selected in a way that ensures positive definiteness of \(V\) under Assumption 3 (see the first inequality of (13)); then there exist functions \(\alpha_{1}^{P,\Lambda^{1},\ldots,\Lambda^{M}},\alpha_{2}^{P,\Lambda^{1},\ldots,\Lambda^{M}}\in\mathcal{K}_{\infty}\) such that
\[\alpha_{1}^{P,\Lambda^{1},\ldots,\Lambda^{M}}(\|x\|)\leq V(x)\leq\alpha_{2}^{ P,\Lambda^{1},\ldots,\Lambda^{M}}(\|x\|)\]
for all \(x\in\mathbb{R}^{n}\). For instance, the function \(\alpha_{2}^{P,\Lambda^{1},\ldots,\Lambda^{M}}\) can always be taken as
\[\alpha_{2}^{P,\Lambda^{1},\ldots,\Lambda^{M}}(\tau)=\lambda_{\max}(P)\tau^{2} +2Mn\max_{j\in\overline{1,M},i\in\overline{1,n}}\left\{\Lambda_{i}^{j}\int_{0} ^{\tau}f_{j}^{i}(\nu)\;d\nu\right\}.\]
These recover the first relation in (2). Then consider the second condition in (2): taking the time derivative \(\dot{V}=\nabla V(x)\dot{x}\), we have
\[\dot{V}= \left[\begin{smallmatrix}x\\ f_{1}(x)\\ \vdots\\ f_{M}(x)\\ Bu\end{smallmatrix}\right]^{\top}Q\left[\begin{smallmatrix}x\\ f_{1}(x)\\ \vdots\\ f_{M}(x)\\ Bu\end{smallmatrix}\right]-\sum_{j=1}^{M}f_{j}(x)^{\top}\Xi^{j}f_{j}(x)-2\sum_{ j=1}^{M}x^{\top}\Upsilon_{0,j}f_{j}(x)\] \[-2\sum_{s=1}^{M-1}\sum_{z=s+1}^{M}f_{s}(x)^{\top}\Upsilon_{s,z}f_ {z}(x)+(Bu)^{\top}\Phi Bu\] \[\leq-\sum_{j=1}^{M}f_{j}(x)^{\top}\Xi^{j}f_{j}(x)-2\sum_{j=1}^{M} x^{\top}\Upsilon_{0,j}f_{j}(x)\] \[-2\sum_{s=1}^{M-1}\sum_{z=s+1}^{M}f_{s}(x)^{\top}\Upsilon_{s,z}f_ {z}(x)+(Bu)^{\top}\Phi Bu,\]
from which we see that the function of \(x\) on the right-hand side of the last inequality is radially unbounded due to the last LMI of (13) (this satisfies the second relation in (2)); recall that only the nonlinearities \(f_{1},\ldots,f_{\phi}\) are radially unbounded. By Theorem 2, the proof is complete.
From Section 4.1 to Theorem 1, we have placed mild restrictions on a problem with great freedom in order to obtain one that is analyzable in terms of ISS. Given the significance of stability verification for identified systems, and since the generality of the basis functions is retained, this price is acceptable. These points reflect the main novelty of this work.
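To make the verification step concrete, the following is a minimal feasibility sketch of the LMIs (13) for the special case \(M=\mu=\phi=1\) (a single SBF), assuming the CVXPY package and the SCS solver are available. Note that \(\rho\) is kept as a fixed parameter here, since treating it as a decision variable would make the first condition bilinear; the tolerance `eps` and all names are our own illustrative choices, so this is a sketch rather than a full implementation of the theorem.

```python
import cvxpy as cp
import numpy as np

def check_iss_lmi(Gamma1, rho=1.0, eps=1e-6):
    """Feasibility sketch of the LMIs (13) of Theorem 1 for M = mu = phi = 1.

    Gamma1 : (n, n) identified coefficient matrix of the single SBF f_1.
    Returns True if a certificate (P, Lambda^1, Xi^1, Upsilon_{0,1}, Phi)
    is found, i.e. the identified model (12) is verified to be ISS.
    """
    n = Gamma1.shape[0]
    I = np.eye(n)
    P = cp.Variable((n, n), symmetric=True)
    lam = cp.Variable(n, nonneg=True)   # diagonal of Lambda^1
    xi = cp.Variable(n, nonneg=True)    # diagonal of Xi^1
    ups = cp.Variable(n, nonneg=True)   # diagonal of Upsilon_{0,1}
    Phi = cp.Variable((n, n), symmetric=True)
    L, X, U = cp.diag(lam), cp.diag(xi), cp.diag(ups)

    # Block matrix Q of Theorem 1 with M = 1, so Q is (3n) x (3n).
    Q = cp.bmat([
        [np.zeros((n, n)), P @ Gamma1 + U,                P],
        [Gamma1.T @ P + U, Gamma1.T @ L + L @ Gamma1 + X, L],
        [P,                L,                             -Phi],
    ])
    constraints = [
        P >> 0,                       # 0 <= P = P^T
        P + rho * L >> eps * I,       # P + rho * Lambda^1 > 0
        (Q + Q.T) / 2 << 0,           # Q <= 0 (symmetrized for the DCP parser)
        xi + 2 * ups >= eps,          # Xi^1 + 2 * Upsilon_{0,1} > 0 (diagonal)
        Phi >> eps * I,               # Phi > 0
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```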
### Extensions of basis functions
This section demonstrates that extending the considered basis functions is possible to increase the selection range when one implements data-driven methods. Let us consider the first approach of relaxing the imposed form of basis functions:
1) _Translation of functions_: We start by defining a vector-valued function \(G(\nu)=\begin{bmatrix}g_{1}(\nu)&\ldots&g_{n}(\nu)\end{bmatrix}^{\top}=F(\nu)+ \ell=\begin{bmatrix}f_{1}(\nu)&\ldots&f_{n}(\nu)\end{bmatrix}^{\top}+\ell\) for all \(\nu\in\mathbb{R}^{n}\), where \(f_{1},\ldots,f_{n}\in\mathcal{F}(\mathbb{R})\) and \(\ell\in\mathbb{R}^{n}\). This formulation can also be expressed as: \(G(\nu)-\ell=F(\nu)\), where the functions \(f_{i}\) are in the semigroup \(\mathcal{F}(\mathbb{R})\). It is clear that \(g_{i}\) do not satisfy Assumption 3 if the \(i\)-th element of \(\ell\) is not equal to zero. Afterwards, we can extend the system (3) to \(\dot{x}(t)=A_{0}x(t)+\sum_{j^{\prime}=1}^{M}A_{j^{\prime}}G_{j^{\prime}}(x(t)) +u(t)\), which is essentially equivalent to \(\dot{x}(t)=A_{0}x(t)+\sum_{j^{\prime}=1}^{M}A_{j^{\prime}}F_{j^{\prime}}(x(t)) +u(t)+\sum_{j^{\prime}=1}^{M}A_{j^{\prime}}l_{j^{\prime}}\), where the constant vectors \(l_{1},\ldots,l_{M}\in\mathbb{R}^{n}\). Therefore, one can see that the generality of the functions \(G_{j^{\prime}}\) is greater than that of \(F_{j^{\prime}}\), which illustrates a possible direction of extensions. It is also worth mentioning that such a transformation does not affect the use of the lifting approach and the stability analysis introduced above.
2) _Change of independent variables of functions_: We consider generalizing the variables of the functions \(F_{j^{\prime}}\), for which the linear operators \(\mathcal{T}_{j^{\prime}}x:=R_{j^{\prime}}x\) are taken into account, where \(R_{j^{\prime}}\in\mathbb{R}^{k_{j^{\prime}}\times n}\) are constant matrices with appropriate dimensions (\(\mathcal{T}_{j^{\prime}}\) may be the identity operator, then the independent variables do not change). Thus, under the substitutions of \(x\) to \(\mathcal{T}_{j^{\prime}}x\) in the nonlinearities of the system (3), we can obtain the resulting system:
\[\dot{x}(t)=A_{0}x(t)+\sum_{j^{\prime}=1}^{M}A_{j^{\prime}}F_{j^{\prime}}(R_{j ^{\prime}}x(t))+u(t), \tag{14}\]
extending the model (3). For such a generalization, we require a minor revision of Assumption 3, provided next.
**Assumption 2**.: _Assume that for any \(i\in\overline{1,k_{j^{\prime}}}\) and \(j^{\prime}\in\overline{1,M}\), \(\nu f_{j^{\prime}}^{i}(\nu)>0\), for all \(\nu\neq 0\)._
Given this assumption, we can now formulate the ISS conditions for the system (14) in the following corollary.
**Corollary 1**.: _Let Assumption 2 be satisfied with \(\phi\in\mathbb{N}\backslash\{0\}\). If there exist \(0\leq P=P^{\top}\in\mathbb{R}^{n\times n}\); \(\Lambda^{j}=\mathrm{diag}(\Lambda^{j}_{1},\ldots,\Lambda^{j}_{k_{j}})\in\mathbb{ D}^{k_{j}}_{+}\)\((j\in\overline{1,M})\); \(\Xi^{s}\in\mathbb{D}^{k_{s}}_{+}\)\((s\in\overline{0,M})\), \(\Upsilon_{0,s}\in\mathbb{D}^{k_{s}}_{+}\)\((s\in\overline{1,M})\); \(\{\Upsilon_{s,r}\}_{r=s+1}^{M}\subset\mathbb{D}^{n}_{+}\)\((s\in\overline{1,M-1})\); \(\varrho\in\mathbb{R}\) and \(0<\Phi=\Phi^{\top}\in\mathbb{R}^{n\times n}\) such that_
\[P+\varrho\sum_{j=1}^{\mu}\Lambda^{j}>0,\quad Q=Q^{\top}=(Q_{a,\, b})_{a,\,b=1}^{M+2}\leq 0,\] \[\sum_{s=0}^{\phi}\Xi^{s}+2\sum_{s=0}^{\phi}\sum_{r=s+1}^{\phi} \Upsilon_{s,r}>0,\]
_where \(Q_{1,1}=A_{0}^{\top}P+PA_{0}+\Xi^{0};\;\;Q_{j+1,j+1}=A_{j}^{\top}R_{j}^{\top}\Lambda^{j}+\Lambda^{j}R_{j}A_{j}+\Xi^{j},\;j\in\overline{1,M};\;Q_{1,j+1}=PA_{j}+A_{0}^{\top}R_{j}^{\top}\Lambda^{j}+R_{j}^{\top}\Upsilon_{0,j},\;j\in\overline{1,M};\;Q_{s+1,r+1}=A_{s}^{\top}R_{r}^{\top}\Lambda^{r}+\Lambda^{s}R_{s}A_{r}+R_{s}^{\top}R_{s}\Upsilon_{s,r}R_{r}^{\top}R_{r},\;s\in\overline{1,M-1},\;r\in\overline{s+1,M};\;Q_{1,\,M+2}=P,\;Q_{M+2,\,M+2}=-\Phi,\;Q_{j+1,\,M+2}=\Lambda^{j}R_{j},\;j\in\overline{1,M}\), then the system (14) is ISS._
The proof of Corollary 1 follows the methodology used in [6] and is omitted from this paper, since only minor modifications are required.
## 5 Numerical Case Study
This section considers a generalized traffic system from [19], obtained from the Lighthill-Whitham-Richards (LWR) flow model [20] via an ODE approximation technique [21]. Such a model describes the evolution of traffic densities on prescribed highway segments and on/off-ramps. We first identified the system and then verified the ISS property to illustrate the efficacy of the proposed results.
The considered model can be expressed as:
\[\dot{x}(t)=\begin{bmatrix}A_{1}&A_{2}\\ O&A_{3}\end{bmatrix}x(t)+g(x)+B_{u}u(t)+d(t), \tag{15}\]
where \(x\) is the state vector; \(A_{1},A_{2},A_{3},B_{u}\) are constant matrices; each element of the function \(g\) is a quadratic polynomial in one or more variables containing only quadratic terms, _i.e._, \(g_{i}(x)=c_{1}x_{i}^{2}+c_{2}x_{j}^{2}+...\in\mathbb{R}\); and \(u(t),d(t)\) are the input and the disturbance with appropriate dimensions. We transformed the system (15) around its linearization so that the functions \(g_{i}\) satisfy Assumption 3. In the simulation, we let \(n=2,\ m=1,\ N_{F}=4,\ G(x)=\begin{bmatrix}x_{1}&x_{2}&x_{1}^{3}&x_{2}^{3}\end{bmatrix}^{\top},\ T_{c}=0.01,\ N=348,\ t\in[0,3.5]\), and found that the model (15) can be identified with high precision in the disturbance-free case (\(d(t)=0\); see Fig. 1). The ISS conditions in Theorem 1 were then verified by solving the LMIs. However, under the disturbance \(d(t)=10^{-1}\times\begin{bmatrix}\sin(t)\\ \tanh(t)\end{bmatrix}\), which acted during the system identification process, another simulation shows that the ISS property is not retained, as illustrated by Fig. 2. In this case, one may re-examine the working environment and perform the system identification under better conditions, or consider disturbance rejection.
## 6 Paper Summary and Future Work
This work proposed a beneficial class of functions, named _sector basis functions_, for identifying the vector field of a system. The identified system takes the form of a specific kind of nonlinear system whose input-to-state stability conditions were formulated as linear algebraic inequalities and can thus be constructively verified. Additionally, two directions for extending the basis functions were introduced. A simulation of macroscopic traffic dynamics was used to examine the proposed results. Future work includes further applications to power system dynamics and extending the theory to descriptor systems.
Figure 1: The system and estimated trajectories \(x(t)\) and \(\hat{x}(t)\) (no disturbance)
## Appendices
This section gives the used preliminaries for the general continuous-time nonlinear dynamical system:
\[\dot{x}(t)=F(x(t),u(t)),\quad t\in\mathbb{R}_{+}, \tag{.1}\]
where \(x(t)\in\mathbb{R}^{n}\) is the state vector; \(u(t)\in\mathrm{U}\subset\mathbb{R}^{m}\) is the given external input, \(u\in C^{1}_{m}([0,\infty))\); and the vector-valued nonlinearity is defined as \(F\in C(\mathbb{R}^{n}\times\mathrm{U},\mathbb{R}^{n})\), which is also locally Lipschitz continuous in \(x\). For an initial state \(x_{0}\in\mathbb{R}^{n}\), \(u\in C^{1}_{m}([0,\infty))\) and \(t\in\mathbb{R}_{+}\), the corresponding solution of system (.1) is denoted by \(x^{t}(x_{0},u)=x(t,x_{0},u)\). It is assumed that in (.1) such a solution is uniquely defined for any \(x_{0}\in\mathbb{R}^{n},u\in C^{1}_{m}([0,\infty))\) and all \(t\in\mathbb{R}_{+}\).
### Koopman operator
We present the definition of the Koopman operator [22]\(K^{t}:\mathcal{H}\to\mathcal{H}\) (\(t\in\mathbb{R}_{+}\)) associated with the system (.1) as
\[K^{t}H=H\circ x^{t},\]
where \(H:\mathbb{R}^{n}\to\mathbb{R}\) is an observable function (\(H\) belongs to a Banach space \(\mathcal{H}\) of such functions) and \(\circ\) denotes the composition operator. The Koopman operator is of interest since it can transform nonlinear dynamics into a linear representation of a theoretically infinite-dimensional system. Let \(\mathcal{K}=\{K^{t}\}_{t\geq 0}\) represent the \(C_{0}\)-semigroup of Koopman operators \(K^{t}\); then \(L:=\lim_{t\to 0^{+}}\frac{K^{t}-I}{t}\) stands for the infinitesimal generator of \(\mathcal{K}\), where \(I\) is the identity operator on the space \(\mathcal{H}\). Note that theoretically, \(\mathcal{H}\) can be infinite-dimensional, and \(K^{t}\) is a linear operator for each \(t\in\mathbb{R}_{+}\).

Figure 2: The system and estimated trajectories \(x(t)\) and \(\hat{x}(t)\) (disturbed)
### Input-to-state stability properties
**Definition 2**.: [13; 14] _A forward complete system (1) is said to be input-to-state stable (ISS) if there exist \(\beta\in\mathscr{K}\mathscr{L}\), \(\gamma\in\mathscr{K}\) such that \(\|x^{t}(x_{0},u)\|\leq\beta\left(\|x_{0}\|,t\right)+\gamma(\|u\|_{\infty}), \ \forall t\in\mathbb{R}_{+}\) for any \(x_{0}\in\mathbb{R}^{n}\) and \(u\in C^{1}_{m}([0,\infty))\). For the system (1), a smooth function \(V\colon\mathbb{R}^{n}\to\mathbb{R}_{+}\) is an ISS-Lyapunov function if there exist \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathscr{K}_{\infty}\) and \(\chi\in\mathscr{K}\) such that_
\[\alpha_{1}(\|x\|)\leq V(x)\leq\alpha_{2}(\|x\|), \tag{2}\] \[\|x\|\geq\chi(\|u\|)\quad\Rightarrow\quad\nabla V(x)F(x,u)\leq- \alpha_{3}(\|x\|)\]
_for all \(x\in\mathbb{R}^{n}\) and \(u\in\mathrm{U}\)._
**Theorem 2**.: [13; 14] _The system (1) is ISS if and only if it admits an ISS-Lyapunov function._
### Generalized Persidskii systems
We then introduce _generalized Persidskii dynamics_[6]:
\[\dot{x}(t)=A_{0}x(t)+\sum_{j^{\prime}=1}^{M}A_{j^{\prime}}F_{j^{\prime}}(x(t) )+u(t),\quad t\in\mathbb{R}_{+}, \tag{3}\]
where \(x=[x_{1},\ldots,x_{n}]^{\top}\in\mathbb{R}^{n}\) is the state, \(x(0)=x_{0}\); \(A_{s}\in\mathbb{R}^{n\times n}\), \(s\in\overline{0,M}\) are constant matrices; the input \(u=[u_{1},\ldots,u_{n}]^{\top}\in\mathscr{L}_{\infty}^{n}\), and the functions \(F_{j^{\prime}}\in C(\mathbb{R}^{n},\mathbb{R}^{n}),\ F_{j^{\prime}}(x)=[\ f_{j^{ \prime}}^{1}(x_{1})\ \ \ldots\ f_{j^{\prime}}^{n}(x_{n})\ ]^{\top},\ \forall j^{ \prime}\in\overline{1,M}.\) The nonlinearity \(F_{j^{\prime}}\) has a diagonal structure: each element of \(F_{j^{\prime}}\) (_i.e._, each function \(f_{j^{\prime}}^{i}\)) depends only on the respective coordinate \(x_{i}\), \(i\in\overline{1,n}\). We also impose a sector boundedness or a passivity condition on \(f_{j^{\prime}}^{i}\):
**Assumption 3**.: _For system (3), assume that for any \(i\in\overline{1,n}\) and \(j^{\prime}\in\overline{1,M}\), we have \(\nu f_{j^{\prime}}^{i}(\nu)>0\) for all \(\nu\neq 0.\)_
Under Assumption 3, with a reordering of nonlinearities and their decomposition, there exists an index \(\phi\in\overline{0,M}\) such that for all \(a\in\overline{1,\phi}\), \(i\in\overline{1,n}\): \(\lim_{\nu\to\pm\infty}f_{a}^{i}(\nu)=\pm\infty\), and that there exists \(\mu\in\overline{\phi,M}\) such that for all \(b\in\overline{1,\mu}\), \(i\in\overline{1,n}\): \(\lim_{\nu\to\pm\infty}\int_{0}^{\nu}f_{b}^{i}(\tau)d\tau=+\infty\).
Assumption 3 is not restrictive: for instance, the identity, the rectified linear unit (ReLU), tanh, and sigmoid functions are representative examples fulfilling it.
|
2303.04361 | Sample Efficient Multimodal Semantic Augmentation for Incremental
Summarization | In this work, we develop a prompting approach for incremental summarization
of task videos. We develop a sample-efficient few-shot approach for extracting
semantic concepts as an intermediate step. We leverage an existing model for
extracting the concepts from the images and extend it to videos and introduce a
clustering and querying approach for sample efficiency, motivated by the recent
advances in perceiver-based architectures. Our work provides further evidence
that an approach with richer input context with relevant entities and actions
from the videos and using these as prompts could enhance the summaries
generated by the model. We show the results on a relevant dataset and discuss
possible directions for the work. | Sumanta Bhattacharyya, Ramesh Manuvinakurike, Sahisnu Mazumder, Saurav Sahay | 2023-03-08T03:58:06Z | http://arxiv.org/abs/2303.04361v1 | # Sample Efficient Multimodal Semantic Augmentation for Incremental Summarization
###### Abstract
In this work, we develop a prompting approach for incremental summarization of task videos. We develop a sample-efficient few-shot approach for extracting semantic concepts as an intermediate step. We leverage an existing model for extracting the concepts from the images and extend it to videos and introduce a clustering and querying approach for sample efficiency, motivated by the recent advances in perceiver-based architectures. Our work provides further evidence that an approach with richer input context with relevant entities and actions from the videos and using these as prompts could enhance the summaries generated by the model. We show the results on a relevant dataset and discuss possible directions for the work.
## 1 Introduction
Summarization is the condensed form of a large document and has been widely used in many applications, _e.g._, understanding a long meeting/event, story summarization, etc. Abstractive summarization is challenging in the Natural Language Generation (NLG) domain, as it requires understanding all the salient information in the input document and rewriting it logically in a condensed manner, rather than selecting content from it (extractive). Recent advancements in transformer-based abstractive summarization have shown promising attempts Su et al. (2020); Hoang et al. (2019); Wang et al. (2020), with ideas ranging from two-stage methods and domain-adaptive training to plug-and-play topic models on top of the transformer. Despite these strong advancements in text-based summarization, there is huge potential for improving summarization from multimodal data. Since real-world data comes in multiple modes rather than a single mode like text, there has been increasing demand for bridging the gap between these modalities, _e.g._, in cross-modal search applications for video that utilize the text data associated with a video to search for relevant video content Otani et al. (2016); Song et al. (2011), which requires a complete understanding of the video without ignoring subtle differences Wang et al. (2012). Recent work Palaskar et al. (2021) suggests that learning semantic concepts as an intermediate step can help the model learn efficiently. Learning semantic concepts has always been beneficial in categorization tasks like scene recognition, video tagging, etc. Zhou et al. (2017); Ghadiyaram et al. (2019).
Recent advancements in vision-language models Radford et al. (2021); Alayrac et al. (2022) have shown immense potential for generating text-based descriptions from images/videos. In our context, we refer to these text-based descriptions as "semantic concepts". Our work utilizes the learning of these semantic concepts from the videos as an intermediate step. Feeding these semantic concepts, along with the transcriptions (semantic augmentation), as input to a pre-trained summarizer model enriches its performance. In this work, we address the problems of (i) generating semantically relevant annotations of a video (semantic concepts) using a fixed number of sampled frames from each video segment, and (ii) utilizing these semantic concepts along with the input transcription (semantic augmentation) to enrich the summarization output of pre-trained models (_i.e._, BART).
In summary, our contributions are the following:
* We propose a novel CLIP-based approach Radford et al. (2021) to generate semantic concepts from video frames.
* In order to maintain diversity in each batch, we propose a clustering-based batch creation approach.
* We have experimented with our proposed approach on the YouCook2 dataset Zhou et al. (2018a). The results demonstrate the efficiency of our approach.
## 2 Related work
Early attempts show promising ideas (_e.g._, the reinforcement approach and the copy-pointer mechanism) in abstractive summarization using advances in sequence-to-sequence models (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017; Henss et al., 2015). Although these approaches mainly focus on single-document summarization, there are also attempts at multi-document summarization (Yasunaga et al., 2017; Cao et al., 2015).
Recent advancements in deep learning and transformer-based models (Li et al., 2019; Liu et al., 2019) have achieved impressive performance on abstractive summarization tasks (Zhang et al., 2020; Raffel et al., 2020; Lewis et al., 2020; Zhu et al., 2020). Such transformer-based models are typically pre-trained on a large dataset and then fine-tuned on a smaller dataset to achieve impressive performance. There are also methods that improve summarization using auxiliary tasks. Since a summary should contain all the salient information, it should support generating answers to logical questions about the input document. Automatic question-answer (QA) generation in the process of summarization has shown promise in recent times (Guo et al., 2018; Dong et al., 2020). Such an automated QA generation method is used to verify whether the generated summary entails the same information as the content, by matching the answers generated from the content and from the summary.
Text generation from multimodal data has always been a challenging research area in the NLG domain. Tasks like video captioning (Zhou et al., 2018) or summarization involve generating a compressed textual description of the data (Palaskar et al., 2019). Recent developments show how these tasks can benefit from semantic representation learning in a latent space that provides general-purpose embeddings for downstream tasks (Lu et al., 2019; Hubert Tsai et al., 2017). Despite its performance, this approach is limited by controllability issues in tasks like summarization. As an alternative, there is also recent interest in utilizing reranking-based approaches (Pernes et al., 2022) in abstractive summarization, similar to machine translation (Bhattacharyya et al., 2020).
Evaluating the generated summaries is a challenging task, as there is no 'single' correct summary for a dialogue (Lloret et al., 2018). Numerous automatic metrics have been proposed for evaluating summaries (Lin, 2004; Yogatama et al., 2015; Jung et al., 2019; Zhang et al., 2019; Hashimoto et al., 2019; Gao et al., 2020; Sellam et al., 2020). Human evaluation of summaries is another popular approach, performed either by experts or by crowd-workers (Iskender et al., 2020; Dang, 2006; Khashabi et al., 2021).
Our approach does not contribute a new model architecture for summarization; instead, it intends to benchmark and adapt the training methodology for incremental temporal summarization tasks. We adopt the current state-of-the-art transformer architecture and utilize transfer learning to generate summaries. We also evaluate the summaries (generated by the experts) qualitatively using crowd-workers.
## 3 Task Formulation
### Image frame sampling
Since each video segment contains many image frames, depending on its duration, it is essential to sample a fixed number of image frames for computational efficiency; however, sampling a fixed number of frames from a pool of frames that describes the entire event is tricky (Shi et al., 2019). We designed various experiments with and without sampling and observed that the middle frames of a video segment are the best frames for capturing reasonable augmentation. For all the experiments we performed, we used three frames from each video, _i.e._, if N is the total number of image frames for a video, we use the \(N/2\), \(N/2-1\), and \(N/2+1\) frames. **For ease of understanding, we use "frames" to signify the three middle frames for the rest of the discussion.**
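A minimal helper implementing this middle-frame selection might look as follows; the 0-based indexing and the clipping for very short segments are our own assumptions.

```python
def middle_frames(frames):
    """Return the three middle frames of a video segment, i.e. the
    N/2 - 1, N/2 and N/2 + 1 frames (0-based, clipped for short segments)."""
    n = len(frames)
    mid = n // 2
    idx = sorted({max(0, mid - 1), mid, min(n - 1, mid + 1)})
    return [frames[i] for i in idx]
```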
For more details about the experiments we designed, please refer to Appendix 1. We have also designed a different network that learns to sample frames from the frame pool, but we leave this discussion to the future directions of our work.
### Clustering-based batch creation
In a single batch of data, instead of grouping similar event frames along with their corresponding event annotations, we performed k-means clustering on the encoded features of the image frames.
Since similar event frames in a single batch cannot provide enough diversity, the clustering lets us identify which event frames' features are dissimilar and use features from different clusters to create a batch. Since "frames" is a collection of three middle frames, we concatenate the features of these three middle frames and then perform the clustering. This concatenation operation preserves the temporal relation between the middle frames for a particular annotation. This strategy improved our performance in augmentation generation compared to keeping similar event frames together as depicted in the video data.
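The batch-creation step described above can be sketched as follows, assuming per-segment features for the three middle frames are precomputed; the round-robin draw across clusters and all names are illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_batches(frame_feats, batch_size, n_clusters, seed=0):
    """Cluster concatenated per-segment frame features with k-means and
    build batches by drawing from different clusters, so that each batch
    mixes dissimilar events.

    frame_feats : (S, 3, d) array of features for the three middle frames
                  of each of S video segments.
    Returns a list of index batches into the S segments.
    """
    # Concatenation preserves the temporal order of the three middle frames.
    feats = frame_feats.reshape(len(frame_feats), -1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(feats)
    rng = np.random.default_rng(seed)
    pools = [list(rng.permutation(np.where(labels == c)[0])) for c in range(n_clusters)]
    batches, current = [], []
    while any(pools):
        for pool in pools:                      # round-robin over clusters
            if pool:
                current.append(int(pool.pop()))
                if len(current) == batch_size:
                    batches.append(current)
                    current = []
    if current:
        batches.append(current)
    return batches
```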
### Perceiver Resampler
Recent developments in transformer architectures (Jaegle et al., 2021) show that we can scale transformers without the quadratic scaling concern in attention. The idea involves learning a predefined number of latent input queries, given as input to the transformer, which cross-attend to the features. A state-of-the-art vision-language model architecture (Alayrac et al., 2022) utilizes this concept (the perceiver resampler) to generate fixed-size embeddings from variable-length inputs.
Traditional transformer-based image and text encoders use different kinds of pooling layers (mean/linear) to generate fixed-size embeddings from variable-length inputs. We replace the last pooling layer with the perceiver resampler architecture to get a fixed-size output from both encoders in a fashion similar to Alayrac et al. (2022), keeping the encoder layers frozen. This approach can also scale to larger inputs while retaining the model's expressivity. As shown in Table 2, using a learnable attention-based layer to generate fixed-size embeddings improves the feature quality compared to a pooling layer: the Top-1 accuracy for correctly predicting the annotation (semantic concept/augmentation) of the frames is higher than that of the other approaches. A minimal sketch of such a resampler head is given after Table 2.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \# samples & TOP-1 & TOP-1 & TOP-3 & TOP-3 \\ & (Kmeans) & (Random) & (Kmeans) & (Random) \\ \hline
150 (10) & 0.2156 & 0.196 & 0.549 & 0.5098 \\ \hline
300 (10) & 0.2749 & 0.2745 & 0.6176 & 0.5098 \\ \hline
1500 (10) & 0.2941 & 0.2156 & 0.6176 & 0.598 \\ \hline
150 (20) & 0.480392 & 0.441176 & 0.803922 & 0.77451 \\ \hline
300 (20) & 0.470588 & 0.382353 & 0.784314 & 0.735294 \\ \hline
1500 (20) & 0.303922 & 0.245 & 0.647059 & 0.6372 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy of extracted semantic entities: K-means-based batching vs. random sampling.
\begin{table}
\begin{tabular}{c c} \hline architecture & accuracy \\ & (Top1\%/Top3\%) \\ \hline pre-trained encoder & 18/27 \\ pre-trained encoder+custom pooling & 21/32 \\
**pre-trained encoder+perceiver resampler** & 26/38 \\ \hline \end{tabular}
\end{table}
Table 2: In custom pooling, we replace the pooling layer of the pre-trained model with a learnable pooling layer. For the learnable parameters, the results are reported after 5 epochs on the YouCook2 dataset.
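For reference, a minimal perceiver-resampler pooling head in the spirit of Jaegle et al. (2021) and Alayrac et al. (2022) might look as follows in PyTorch; the number of latents and heads, and the final mean over latents, are illustrative choices rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """A fixed set of learned latent queries cross-attends to a
    variable-length token sequence and returns a fixed-size embedding."""
    def __init__(self, dim, num_latents=8, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                 # tokens: (B, T, dim), T may vary
        q = self.latents.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)  # latents attend to the tokens
        out = self.norm(out + q)               # residual + layer norm
        return out.mean(dim=1)                 # (B, dim) fixed-size embedding
```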
Figure 1: Shows the architecture of the system and the use of semantic augmentation for summarization.
## 4 Models
We develop a two-stage approach: (i) **Phase I:** learning the correct annotation from the video frames (frame to text); (ii) **Phase II:** using these augmentations along with the summarizer's input to generate the summarization (text to text) with pre-trained models (_i.e._, BART or distilBART).
### Phase I:
We used the CLIP model to generate annotations from the video frames. CLIP models are known for learning visual concepts from language supervision. CLIP involves two pre-trained encoders, for images and for text, trained to predict the correlation between an image and a text. For images, CLIP uses an architecture similar to ResNet-50, and for text, CLIP uses a masked self-attention Transformer. In order to train the CLIP model, we used frames (as discussed in Section 3.1) and the corresponding annotations as input. Our experiments on feeding the data into CLIP answer the following questions: (**i**) How can we efficiently create a batch of data that consists of diverse examples? (as discussed in Section 3.2) (**ii**) Which frames, from the pool of frames that describes a single event, should be used as input? (as discussed in Section 3.1)
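A minimal Phase I inference sketch with an off-the-shelf CLIP checkpoint from the `transformers` library is shown below; the frame file names and candidate annotations are hypothetical, and averaging the logits over the three middle frames is one simple aggregation choice.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical paths to the three middle frames of one video segment.
frames = [Image.open(f) for f in ["mid_prev.jpg", "mid.jpg", "mid_next.jpg"]]
candidates = ["pour the sauce over the noodles", "chop the onions", "fry the bacon"]

inputs = processor(text=candidates, images=frames, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # (3 frames, 3 candidate annotations)
# Average frame-to-annotation similarities over the three middle frames.
probs = logits.mean(dim=0).softmax(dim=-1)
print(candidates[int(probs.argmax())])
```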
### Phase II:
Phase II takes the semantic augmentation generated in Phase I, along with the transcription of the video, as input to the pre-trained model and generates the summarization (see the system flowchart in Figure 1).
Since each video is divided into segments based on the procedure, we can learn to predict the annotation (semantic concepts) for each of these segments from the video frames and use these concepts to augment the summarizer model's input, which is the transcript of the entire video, to generate summaries.
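A minimal Phase II sketch with a pre-trained distilBART summarizer from the `transformers` library follows; prepending the predicted concepts to the transcript is one simple augmentation scheme, and the strings shown are hypothetical.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

transcript = "okay so first we are going to chop the onions really fine and then ..."
concepts = ["chop the onions", "fry the bacon"]   # Phase I predictions per segment

# Semantic augmentation: prepend the predicted concepts to the transcript.
augmented = " ".join(concepts) + " " + transcript
print(summarizer(augmented, max_length=128, min_length=16)[0]["summary_text"])
```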
## 5 Experimental Setup
In Phase I, we finetuned the CLIP model for 1 epoch; for the remaining epochs, we trained only the learnable perceiver resampler part, keeping the encoder layers frozen. We observed that finetuning the CLIP model for more than 2 epochs heavily degrades prediction performance. Since the model is already pre-trained on huge datasets, we found finetuning for one epoch on the new dataset to be reasonable. For Phase II, since we use a pre-trained summarizer model, we adopt a similar strategy.
### Dataset
We use YouCook2 for all our experiments. Existing datasets of instructional videos are lacking in many aspects (_e.g._, limited videos, limited actions, etc.). The YouCook2 dataset is a collection of around 2000 cooking videos covering around 89 cooking recipes, with 14000 annotated clips, each paired with one descriptive sentence.
Unlike other datasets, as shown in Table 3, YouCook2 includes temporally localized procedure annotations with descriptions, along with long-duration videos. Each video contains 3 to 16 procedure annotations. These procedure segments preserve rich semantic information, which is useful for our task compared to other datasets. We randomly split the dataset into 67% for training, 8% for validation, and 25% for testing.
### Evaluation Metrics
Our experiments are evaluated with the widely used Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric for text-based summarization. It considers both the precision and the recall between predicted and target summaries. Recall is the proportion of words in the target summary that are reproduced by the predicted summary, and precision is the proportion of words generated by the predicted summary that appear in the target summary. ROUGE has several variants; as shown in Table 4, we evaluate on ROUGE-1 (R-1) / ROUGE-2 (R-2) / ROUGE-L (R-L), where precision and recall compare the uni-grams / bi-grams / longest common sub-sequence shared between the target and generated summaries. For summarization, recall is significant since it shows whether the generated summary captures all of the target summary's information. We gained a significant improvement in recall using our method compared to the existing pre-trained model.
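For reference, the ROUGE variants above can be computed with the `rouge-score` package as in the following sketch; the example strings are hypothetical.

```python
from rouge_score import rouge_scorer

target = "melt the butter and saute the garlic"
predicted = "saute the garlic in melted butter"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
for name, s in scorer.score(target, predicted).items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```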
\begin{table}
\begin{tabular}{c c} \hline Dataset & Duration \\ \hline YouCook & 140 minutes \\
50Salads & 320 minutes \\ Breakfast & 34.25 hours \\ \hline
**YouCook2** & **176 hours** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of other instructional video datasets
### Results and Analysis
Table 1 contains the results of the experiments on CLIP-based semantic augmentation with different batchings of the data. We found that clustering based on the image features improves performance more than clustering based on the text features. We also uniformly sampled frames from the pool of frames as input, which contains frames that do not contribute to the event depicted in the video (second experiment). We found that adding more frames from the initial and end positions of a video segment does not contribute much to the accuracy compared to the middle frames. One of the experiments (experiment 4) includes training the text encoder's second-to-last layer along with the perceiver resampler; we found the accuracy to be lower than keeping the encoder completely frozen and learning only the perceiver resampler layer.
Table 4 contains the results for the summarization output. We used the predicted semantic concepts along with the transcription as input to the pre-trained summarizer model. To isolate the effect of the augmentation, we did not fine-tune the pre-trained summarizer model with the semantic concepts. Our approach shows a significant improvement in all the metrics when we augment the input with the concepts predicted by the CLIP model.
## 6 Conclusion
We presented a two-stage multimodal abstractive video-to-text summarization model that takes advantage of extra semantic concepts alongside the summarizer input. We provided a detailed evaluation for each of our steps. We demonstrate that our method gains a significant improvement over the existing pre-trained summarizer model. We use the semantic augmentation generation step as an intermediate process. We also showed that methods such as adding a perceiver resampler layer and batching via k-means clustering with the temporal relation preserved can improve the accuracy of concept generation, which in turn improves the summary quality.
|
2301.04518 | Large Scale Qualitative Evaluation of Generative Image Model Outputs | Evaluating generative image models remains a difficult problem. This is due
to the high dimensionality of the outputs, the challenging task of representing
but not replicating training data, and the lack of metrics that fully
correspond to human perception and capture all the properties we want these
models to exhibit. Therefore, qualitative evaluation of model outputs is an
important part of model development and research publication practice.
Qualitative evaluation is currently under-served by existing tools, which do
not easily facilitate structured exploration of a large number of examples
across the latent space of the model. To address this issue, we present Ravel,
a visual analytics system that enables qualitative evaluation of model outputs
on the order of hundreds of thousands of images. Ravel allows users to discover
phenomena such as mode collapse, and find areas of training data that the model
has failed to capture. It allows users to evaluate both quality and diversity
of generated images in comparison to real images or to the output of another
model that serves as a baseline. Our paper describes three case studies
demonstrating the key insights made possible with Ravel, supported by a domain
expert user study. | Yannick Assogba, Adam Pearce, Madison Elliott | 2023-01-11T15:31:46Z | http://arxiv.org/abs/2301.04518v1 | # Large Scale Qualitative Evaluation of Generative Image Model Outputs
###### Abstract
Evaluating generative image models remains a difficult problem. This is due to the high dimensionality of the outputs, the challenging task of representing but not replicating training data, and the lack of metrics that fully correspond to human perception and capture all the properties we want these models to exhibit. Therefore, qualitative evaluation of model outputs is an important part of model development and research publication practice. Qualitative evaluation is currently under-served by existing tools, which do not easily facilitate structured exploration of a large number of examples across the latent space of the model. To address this issue, we present Ravel, a visual analytics system that enables qualitative evaluation of model outputs on the order of hundreds of thousands of images. Ravel allows users to discover phenomena such as mode collapse, and find areas of training data that the model has failed to capture. It allows users to evaluate both quality and diversity of generated images in comparison to real images or to the output of another model that serves as a baseline. Our paper describes three case studies demonstrating the key insights made possible with Ravel, supported by a domain expert user study.
Information visualization, Picture/Image Generation, Machine learning
## 1 Introduction
Generative image models are a class of neural network based models that aim to produce **novel**, **high-quality** and **diverse** images that faithfully model a target image distribution. A variety of architectures and training methods have been designed to learn such models, such as Generative Adversarial Networks (GANs) [1], Variational Auto-Encoders (VAEs) [2], Flow Based Models [3] and Diffusion Models [4].
Evaluating these models remains difficult [5]. The high dimensionality of the outputs and of the model architectures used makes likelihood estimates of model outputs difficult, and in some cases intractable. It has also been demonstrated that measures like average log likelihood do not always correlate with human perceptual judgments of sample quality [6]. Additionally, while we want models to capture the target distribution well, we do not want them to produce images that are actually in the training set (an issue commonly referred to as memorization).
A number of metrics have emerged in the literature around generative image models [7][8], with Frechet Inception Distance (FID) [9] being the most popular. However, issues have been identified with FID, leading to the development of more granular metrics such as precision and recall [10][11].
Single-number metrics such as FID, while necessary for forward progress in the field, do not capture the full range of qualities desired of these models. Because of this, human visual inspection often plays a critical role in the evaluation and dissemination of advances in generative image modeling. However with existing evaluation tools, practitioners can typically only look at a small fraction of the output space of these models, on the order of 10s to 100s of images (e.g., [12, 13, 14, 15, 16]).
Our interviews with domain experts confirm that human evaluation is a critical part of practitioner workflows. Some experts rely on human evaluation with crowd-sourced evaluators; however, they recognize that these evaluations are often expensive or time-consuming and are thus left to the final stages of evaluation, if done at all, leaving them to rely primarily on small-scale qualitative evaluation during the model development process.
At the same time, experts in the field are concerned about cherry-picking of results for publication, but typically have no means to expansively explore model outputs on the rare occasions that these are published alongside academic manuscripts.
To address these needs, we built a system called _Ravel_, which enables users to perform visual inspection of model outputs at scales up to three orders of magnitude greater than typical user workflows. We demonstrate usage of this system on datasets varying from 50k to 120k images. These dataset sizes are comparable to those used in standard _quantitative evaluation_ of generative image models.
Our primary contributions include:
* A visual analytics system that supports _multiple evaluation tasks_ (e.g. evaluating quality & diversity, discovering mode collapse or gaps in model output) for generative image models and is agnostic to model architecture and internals.
* _Interactive exploration of large generative image model datasets_, facilitated by clustering and the use of fine grained visualization of cluster metrics to guide qualitative evaluation.
* A user interface that uses _visual comparison_ driven by semantically meaningful embedding spaces to support reasoning about differences between image distributions and generate hypotheses about model behaviour.
## 2 Background
### _Generative Image Models_
The capabilities of generative image models have greatly increased over the last several years. Since the original GAN paper [1] that broke the dam on modelling of faces, we now have systems like BigGAN [12], StyleGAN [13], GLOW [14], VQ-VAE [15], CDM [16] and many others that present a wide variety of model architectures and training algorithms and are capable of producing very realistic images in a wide variety of domains.
### _Quantitative Metrics for Evaluating Generative Image Models_
In this section, we outline the most commonly cited metrics in the research literature:
* **Frechet Inception Distance**[9]: Uses a pre-trained InceptionV3 classifier [17] to generate embeddings for both real and generated images, then uses a statistical measure to compare the distribution of embeddings from the two sources. FID is the most popular metric in the literature. It requires a large number of samples to produce an accurate estimate (generally at least 50k generated images), and cannot detect memorization of the training set. Karras et al [18] point out that the texture bias in ImageNet based CNNs like InceptionV3 [19] imply that metrics derived from them will not capture all aspects of image quality.
* **Inception Score**[20]: Uses a pre-trained inception classifier to measure, _a)_ how well each generated image matches a single ImageNet class, and _b)_ if the full set of generated images has uniform coverage over all the ImageNet classes. Similar to FID, Inception Score requires a fairly large number of images and cannot be used to detect memorization. It also cannot measure intra-class diversity or detect mode collapse. See Barrat and Sharma [21] for a detailed discussion of issues with this metric. **Both Inception Score and FID** are scalar scores designed to capture both image quality and diversity, and thus cannot reveal if the model is trading off one of these properties for the other to achieve a better score. Minimal computational sketches of both FID and Inception Score are given after this list.
* **Precision and Recall**[10][11]: Decompose the comparison between the real and generated distributions into two quantities: precision measures how much of the generated distribution falls within the support of the real distribution - i.e. quality, while recall measures how much of the real distribution is covered by the generated distribution - i.e. diversity.
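For reference, given precomputed InceptionV3 embeddings for real and generated images, FID reduces to the Frechet distance between two Gaussians fitted to the embeddings. The following is a minimal NumPy/SciPy sketch (our own illustration, not the implementation used in the cited papers).

```python
import numpy as np
from scipy import linalg

def fid(real_emb, gen_emb):
    """Frechet distance between Gaussians fitted to (N, 2048) arrays of
    InceptionV3 embeddings of real and generated images."""
    mu_r, mu_g = real_emb.mean(0), gen_emb.mean(0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):        # drop numerical imaginary residue
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2.0 * covmean))
```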
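Similarly, the Inception Score can be sketched from classifier probabilities alone; the split-averaging below follows common practice, and the smoothing constant is an implementation detail we assume.

```python
import numpy as np

def inception_score(probs, n_splits=10):
    """Inception Score exp(E_x[KL(p(y|x) || p(y))]) from an (N, 1000)
    array of class probabilities over generated images, averaged over
    splits; returns (mean, std) across the splits."""
    scores = []
    for chunk in np.array_split(probs, n_splits):
        p_y = chunk.mean(axis=0, keepdims=True)               # marginal p(y)
        kl = (chunk * (np.log(chunk + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))
```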
These metrics are generally _computed over the entire dataset_, and thus have low granularity. Even when they indicate that a model is better or worse, they don't specify where in the distribution of generated images the improvements or regressions lie. Ravel increases the granularity of these metrics, providing a way to find specific clusters of images that score poorly on some metric.
### _Qualitative Evaluation / Visualization of Generative Image Model Outputs_
Due to the limitations of quantitative metrics discussed above, researchers also rely on visual inspection of model outputs to evaluate model performance. Visual inspection is typically performed during training, to monitor that the process has not immediately failed, as well as after training, to evaluate overall quality of the model. We validate and elaborate on this workflow and strategy with domain experts in Section 6.1. Although models often output 100s of thousands of images, our user study found that researchers are only able to inspect a small portion of images (in the 100s) during their analyses.
Visually impressive samples are paramount for successfully publishing model advancements in scientific venues. Relevant papers in the field of generative image models typically include 10s-100s of images [12][13][18][22]. This represents a very limited sample of the variety of images these models are typically trained to generate. Our user study also found that there is an assumption that authors "cherry-pick" the best images to include in their publications. Relatively few authors have published large datasets of un-cherry-picked output images alongside their publications.
There are currently no purpose-built interfaces that make these convenient to browse or examine. For example, [13][18] each publish 100k output images to a publicly accessible Google drive. These images are organized into 1000 sub-folders to make the interface more usable given the large number of files. To view this output, users must either click through the folders individually, or bring their own interface to browse the images.
## 3 Related Work
Borji [7][8] catalogues many of the metrics for automatic evaluation of generative models. Ravel utilizes the precision and recall metrics from [10] and [11], but does **not** propose any new metrics. We thus situate this work primarily in relation to work on **qualitative evaluation** and **interfaces** to explore generative model output.
**Crowd-worker Evaluation**
Denton et al. [23] use a small volunteer sample of human annotators to estimate quality by asking whether they can distinguish real from generated images. Zhou et al. [24] refine and scale this technique, asking crowd-sourced workers from Amazon's Mechanical Turk to make psychophysical judgments about real vs. generated images. While these methods are good at scaling up evaluation to larger dataset sizes, they are more time-consuming and expensive than manual inspection by researchers. Thus they are typically reserved for later stages of the evaluation pipeline. They also tend to focus on measures that can be evaluated on individual images (e.g. image quality), rather than corpus level properties (such as image diversity).
By contrast, Ravel is designed to support _researcher evaluation earlier in the development pipeline before the use of external raters_, and allows researchers to evaluate _both quality and diversity in the same interface_. It fits in between initial monitoring of training dynamics to ensure that the model is converging and larger scale human rater evaluation typically performed closer to model release.
**Explaining Model Internals**
Bau et al. [25] explore finding interpretable units (neurons) within GANs and visualizing the causal effects of ablating these neurons. Their method depends on having access to model internals and having a pre-trained object segmentation network to find objects within the scene to establish the causal relationship between neuron activation and network output. In [26], Bau et al. use a pre-trained segmentation network to compare the distribution of objects found in generated images with those found in a set of real images. This provides a measure of diversity of the model's outputs with respect to the objects that can be segmented by the pre-trained network. The authors also propose a method (Layer Inversion) to train networks that compute approximate _inversions_ of real images into the latent space of the model, to see what the network generates instead of the missing objects.
While these approaches are critical to better understanding of the internal mechanisms that drive model behavior, they are typically specific to a particular model architecture. Ravel treats models as black boxes and is thus _agnostic to model architecture_. Ravel does use pre-trained networks to compute vector representations of images, but is less sensitive to the final task the pre-trained network is trained to perform. For
example one could use the embeddings from the model under examination or any model that has learned semantically useful features such as InceptionV3.
**Online Exploration of Model Outputs**
White [27] explores a variety of ways to sample images from latent space that enable repeatable visual comparisons between models. They introduce a number of visualizations designed to examine how models perform with respect to _specific input images_ that are used to test model behaviour. In a follow-up work White & Loh [28] introduce a novel visual interface based on a spreadsheet metaphor that allows users to use geometric operations in the latent space to interactively query these models and thus explore their output.
While online methods enable users to explore specific hypotheses, they generally suffer from supporting relatively small exploration spaces due to the slow generation of images. Ravel focuses on offline analysis of generated images, which allows examining much larger datasets and can thus complement online methods as means of generating hypotheses for further, more targeted investigation.
**Embedding Based Visualizations**
A number of works have visualized embedding spaces of large, high-dimensional datasets [29][30][31][32].
Liu et al. [33] present a visual analysis system which uses the latent space learned by the _encoder_ of a VAE to explore variation in an existing image dataset. This task is conceptually similar to what we support in Ravel; however, we do not attempt to learn a new latent space, as we want to focus on the generator output and its latent space (rather than fixed input data).
Ravel builds on these earlier works visualizing embedding spaces, and incorporates clustering to make navigating these spaces more tractable. We focus on dataset comparison as a way to ground exploration of the output space and cater to the needs of the image generation use case.
Xiang et al. [34] present a visual analytics system for correcting labelling errors in large datasets. They visualize t-SNE projections of image embeddings _color coded by label to highlight data points that are mislabelled_. In contrast, Ravel is designed to support unconditional generation scenarios, where the generated (and ground truth images) _have no labels_. Thus, rather than relying on labels for grounding, Ravel enables comparative evaluation of datasets to allow grounding evaluation of generated images in the distribution of real images.
## 4 Approach & System Design
Ravel focuses on enabling visual inspection of model outputs in comparison with some baseline set of images, typically real 'ground truth' images. From our interviews (section 6.1), we know that practitioners perform visual analysis of image samples coming out of models, but typically on small numbers of images. _Our aim is to support and scale up those workflows to allow more systematic visual inspection from 10s or 100s of images to 10,000s or 100,000s of images_. We leverage four key concepts to facilitate this comparison across a large set of images:
1. **Image Embeddings**: Neural image embeddings provide a semantically rich vector space for images that allow computing similarity scores between pairs of images [35]. They have become the standard in evaluation pipelines for generative image models (section 4.1.2) and **we want to leverage the familiarity researchers have with embedding based metrics** in Ravel. The ability to compute semantic similarity between images allows us to group similar generated images together **and** allow comparison of those images to similar ground truth images. By default we compute the standard InceptionV3 embeddings used in metrics like FID, but can also add embeddings from other models including pretrained models or from the model under examination if those are available. A minimal sketch of this embedding step is shown after this list.
2. **Clustering Images**: In order to enable visual exploration of hundreds of thousands of images we need to group them into a smaller number of meaningful groups. We use unsupervised clustering of images to reduce the number of _top-level items_ the user has to consider from e.g. 100k images to 1000, 500 or even 250 clusters. We use k-means clustering and provide more details about this in the _pipeline details_ section below.
3. **Cluster Metrics**: Once we have reduced the number of top-level elements to a manageable number we wish to provide hints to the user as **to which clusters might be most interesting to explore first**, as well as **scaffold a repeatable workflow that can guide exploration**. We compute metrics over each cluster that enable sorting clusters into predictable order. Many of the metrics we compute are designed to surface differences between the generated data and the baseline data in a cluster, while others show general properties of the cluster itself. We provide more details about the metrics we compute in the _pipeline details_ section below. By sorting clusters by these metrics we allow discovery of outlier clusters as well as analyzing properties of different parts of the generative image space (e.g. seeing what kinds of output images have low precision).
4. **Interactive Visualization**: Finally we provide responsive interactive visualization of images, clusters and associated cluster metrics. Because we compute all the data offline, the user interface is quite responsive and enables fast and free exploration of a large number of data-points.
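To make item 1 concrete, the following is a minimal sketch of the embedding step, assuming TensorFlow/Keras and the standard pretrained InceptionV3; the array names are illustrative, not Ravel's actual code. Global average pooling yields the 2048-dimensional features referred to in Section 4.1.3.

```python
# Illustrative sketch: pooled InceptionV3 features as image embeddings.
import numpy as np
import tensorflow as tf

# include_top=False with average pooling gives one 2048-dim vector per image.
model = tf.keras.applications.InceptionV3(include_top=False, pooling="avg")

# Stand-in batch; real usage would load images and resize them to 299x299.
images = np.random.rand(8, 299, 299, 3).astype("float32") * 255.0
x = tf.keras.applications.inception_v3.preprocess_input(images)
embeddings = model.predict(x)  # shape (8, 2048)
```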
### _Pipeline Details_
#### 4.1.1 Clustering
Ravel clusters the image embeddings using the implementation of k-means clustering from scikit-learn. k-means was an attractive clustering algorithm to start with because it is a well known algorithm that has a single easily understood hyperparameter, namely the number of clusters. This allows the user to _directly specify values for this hyper-parameter that they believe make sense for their dataset_. In addition to this, compared to other algorithms with similar hyper parameter options such as spectral clustering, k-means scales well with the number of examples and number of clusters selected [36]. k-means also tends to produce fairly evenly sized clusters, which is helpful for displaying their contents in a uniform way.
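As a concrete illustration of this step (our sketch, not Ravel's exact code), the clustering reduces to a single scikit-learn call; the stand-in embeddings below take the place of real InceptionV3 features.

```python
# Illustrative sketch: k-means over image embeddings with scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 2048))  # stand-in InceptionV3 features

kmeans = KMeans(n_clusters=250, n_init=10, random_state=0).fit(embeddings)
labels = kmeans.labels_               # cluster id per image
centroids = kmeans.cluster_centers_   # (250, 2048), later projected with UMAP
```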
All clustering methods suffer from issues of imperfect cluster assignment, particularly at the boundaries of clusters. However, Ravel mitigates this issue somewhat by visualizing clusters according to their positions in latent space. This allows users to quickly discover and examine clusters that are near to each other and ascertain whether they are truly a single semantic cluster. For more details see the _dimensionality reduction_ section (4.1.3) below.
#### 4.1.2 Cluster Metrics
We compute a number of metrics for each cluster. Having multiple metrics provides different lenses through which to consider the clusters. For example one might be interested in where recall or precision are low, or alternatively where clusters are tightly packed or more spread out.
Clusters may have images from either the generated data or the baseline/ground truth; in the UI we refer to these data sources as _splits_ and typically display the _baseline data on the left_, though the user can change this interactively. Many metrics are geared toward exposing differences between the two splits that are currently being explored (a minimal sketch of the two distance-based metrics follows the list below).
* **Percent of Split 2:** The percentage of images in the cluster that come from the split displayed on the right (usually the generated data).
* **Recall:** Recall is a metric that describes what fraction of samples in the _ground truth_ split have support in the alternate split. We compute recall as defined in [11] but aggregate the per-image values for each cluster. When comparing generated images to real ones a high recall implies the model is producing images as diverse as the real data.
* **Precision:** Precision is a metric that describes what fraction of samples in the generated data have support in the ground truth. As above we compute precision as defined in [11] but aggregate the per-image values for each cluster. When comparing generated images to real ones a high precision implies better image quality/realism.
* **Distance between split centroids:** Measures the distance between centroids of the samples in a cluster that belong to each split. Larger values suggest that the data from the two splits are more visually different while smaller values suggest that the left and right split are more visually similar.
* **Median distance to centroid:** Median distance of samples in the cluster to the centroid of that cluster. Smaller values here imply more compact clusters with more similar samples, higher values suggest the cluster has a greater variety of images.
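The two distance-based metrics above reduce to simple vector arithmetic on the embeddings; the per-cluster precision and recall aggregation follows [11] and is omitted here. The sketch below is our own illustration, where `emb`, `labels` and `split` are assumed arrays of embeddings, cluster ids and 0/1 split membership.

```python
# Illustrative sketch of the two distance-based cluster metrics.
import numpy as np

def cluster_distance_metrics(emb, labels, split, cluster_id):
    mask = labels == cluster_id
    pts = emb[mask]
    centroid = pts.mean(axis=0)
    # Median distance to centroid: smaller means a more compact cluster.
    median_dist = np.median(np.linalg.norm(pts - centroid, axis=1))
    # Distance between split centroids: larger suggests the splits differ.
    c_left = pts[split[mask] == 0].mean(axis=0)
    c_right = pts[split[mask] == 1].mean(axis=0)
    centroid_gap = np.linalg.norm(c_left - c_right)
    return median_dist, centroid_gap

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64))        # stand-in embeddings
labels = rng.integers(0, 10, size=1000)  # stand-in cluster ids
split = rng.integers(0, 2, size=1000)    # 0 = baseline, 1 = generated
print(cluster_distance_metrics(emb, labels, split, cluster_id=3))
```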
#### 4.1.3 Dimensionality Reduction
In addition to visualizing clusters by the metrics described above, we also visualize the positions of cluster centroids in the embedding space itself. Since these are high dimensional embedding spaces we need to use dimensionality reduction in order to visualize them in 2D. We use the UMAP algorithm [37] to do this projection. Other options for dimensionality reduction include PCA [38] or t-SNE [39]. Compared to t-SNE, UMAP has been reported to preserve more of the global structure of the original space and is significantly more computationally efficient both as the number of dimensions increases and as the size of the dataset grows [37][40]. The InceptionV3 embedding we use is a 2048-dimensional embedding, making a computationally efficient method particularly attractive. While not as directly interpretable as linear methods like PCA, UMAP is better able to capture some of the complex non-linear relationships between images encoded in the embedding space.
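A minimal sketch of this projection step, assuming the umap-learn package is installed; the stand-in centroids replace the k-means centroids computed earlier.

```python
# Illustrative sketch: project cluster centroids to 2D with UMAP.
import numpy as np
import umap

rng = np.random.default_rng(0)
centroids = rng.normal(size=(250, 2048))  # stand-in k-means centroids

coords = umap.UMAP(n_components=2, random_state=0).fit_transform(centroids)
print(coords.shape)  # (250, 2): one point per cluster in the scatter plot
```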
### _User Interface_
The user interface is a browser-based application. We describe the design of the user interface and further discuss the utility of these affordances in the case study section.
Figure 1 shows the Ravel interface, divided into 3 main sections:
Fig. 1: The Ravel interface primarily consists of: A) Dataset & view options. B) Summary charts & linked cluster plots. C) Side by side image grids for visual comparison of clusters. This view shows a cluster comparing real images on the left to generated images on the right.
#### 4.2.1 (A): Dataset and View Controls
The user can select which embedding to use, the number of clusters and which split to show on the left or right.
#### 4.2.2 (B): Charts
There are three main kinds of data display in the charts section of the UI.
**Summary Statistics**
First is a static table showing summary statistics and metrics for the dataset as a whole. These include things like the number of samples in the whole dataset, the number of samples in each split as well as dataset level metrics such as Frechet distance1, recall and precision [11].
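For reference, the dataset-level Frechet distance in this table follows the standard closed-form expression between two Gaussians fitted to the embeddings; the sketch below is a generic implementation of that formula, not Ravel's code.

```python
# Generic Frechet distance between two sets of embeddings.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_emb, gen_emb):
    mu1, mu2 = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    s1 = np.cov(real_emb, rowvar=False)
    s2 = np.cov(gen_emb, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):  # discard tiny numerical imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(s1 + s2 - 2 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(500, 8)), rng.normal(size=(500, 8))))
```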
**Cluster Metric Plots**
Below the summary charts are a series of beeswarm plots for the per-cluster metrics that were computed (see _Cluster Metrics_ section above for details). Each beeswarm shows **each cluster as a dot** and plots the distribution of clusters over that metric. These charts are interactive, individual dots can be selected to show the images from that cluster in the sample viewer.
Footnote 1: When using the InceptionV3 embedding this is Frechet Inception Distance (FID).
A color encoding can also be applied by clicking on the rainbow icon toggle next to that plot. This color encoding is applied across all charts allowing comparison of one metric to another (Figure 3).
**UMAP Plots**
Below the beeswarm plots are two plots showing 2D UMAP projections of: a) all the clusters and b) images from the _currently selected cluster_ (See Figure 4). The Cluster UMAP view in particular allows browsing clusters by visual similarity rather than by the metric scores. Previews of individual images are shown on mouseover of points on the Samples UMAP plot.
**Highlight Classes**
In cases where the dataset does contain class labels, Ravel will display a search interface that allows users to search for and select any number of matching classes in the dataset.
If one or more classes are selected, Ravel will dim clusters that do not contain any images from those classes in the cluster plots (Figure 5).
All the cluster plot interactions are linked, enabling users to see where a cluster falls in multiple metrics or in embedding space.
Any chart can be collapsed by clicking on its title, which allows making more room for the charts the user finds most useful for their analysis.
#### 4.2.3 Sample Viewer
To the right of the charts is the sample viewer. This consists of a pair of scrollable views displaying image thumbnails from the selected cluster divided by split. If there are classes/labels associated with the images they are displayed below the thumbnail and images from the same class are grouped together, otherwise they are displayed in the order they were processed by the pipeline (effectively random). Users can click on images to get a larger view of the image and see any additional metadata.
Image thumbnails are displayed at a maximum size of 150x150px and a minimum size of 100x100px depending on how large the browser window is. On a 3840 x 2160 resolution display2 one can typically see 72 images per split for a total of 144 images in a single screenful. However one clear limitation is that if there are more images than this in a cluster, a user cannot see them all at once and has to scroll through to get a better understanding of the cluster. One way to mitigate this is to choose a higher number of clusters to view, which results in smaller, more granular clusters, at the cost of having more to explore.

Fig. 2: Beeswarm plot showing the distribution of cluster precision scores. Each dot is a cluster, with the currently selected dot shown in orange. A description of the metric can be accessed by clicking on the ? icon.

Fig. 3: Recall and Precision beeswarm plots colored by **precision** score. High recall, low precision clusters can be identified using the position encoding in the recall plot and the color encoding applied from the precision plot. To save space in the display we do not display a legend, as the relationship between high-low values and hue is directly visible in the chart whose rainbow icon was toggled, and exact values are of less importance than relative comparisons.

Fig. 4: Top: UMAP projection of **cluster centroids** in embedding space. Bottom: UMAP projection of the currently selected cluster; real samples are shown in blue and generated samples are shown in red.
Footnote 2: This resolution is the default \(4K\) resolution which is widely available. It is also close to the native resolution of a 2021 14 inch MacBook Pro (3024 x 1964)
## 5 Case Studies
In this section, we illustrate how the design features of Ravel support user exploration in a series of case studies. These case studies highlight discoveries the authors made when using the tool. Our 3 use cases are: 1) Unconditional image generation in a single domain (faces), 2) Class conditioned generation on ImageNet data and 3) Analyzing downstream use of a generative model for another task - in this case, super-resolution.
While Ravel supports the use of multiple embeddings for clustering and exploration, in this paper we focus primarily on _the same InceptionV3 embeddings used in FID score and other metrics_, as they have become a de-facto standard in the generative model space and are publicly available.
#### 5.0.1 Unconditional Image Generation
Unconditional generation refers to situations where a trained model is used to generate images with no 'conditioning' other than an input vector (often referred to as a 'noise' or 'Z' vector) drawn from a random distribution. In this case study, we generate 60k images from an implementation of the StyleGAN2 architecture 3[18] trained on the FFHQ dataset. The images are generated using Z vectors drawn from a random distribution with a mean of zero and a standard deviation of one. We load those images, along with 67,542 images from the training set for a total of 127,542 images.
Footnote 3: We want to make clear that we are using an independently trained StyleGAN2 and not the StyleGAN2 weights released by the original authors
_Are there images that this model generates that are not part of the input distribution?_
Our analyst opens the Ravel interface in her browser and sets the number of clusters to 250, with images from the training data on the left and generated images on the right. Knowing that low precision generally implies less realistic images, she begins by exploring clusters on the low end of the precision chart and notes that the clusters with the 10 lowest values each contain generated images with visually obvious defects. Seeing these images contrasted with ground truth data gives her a sense of what the model is struggling to capture in each category. For example she notes that the model has a difficult time modeling complex backgrounds, portraits where there is more than one face, or portraits with occluding objects like hands or microphones. Another problem that stands out to her is the difficulty the model has generating faces with face-paint as in Figure 6. Seeing these corrupted images juxtaposed with real images from the training data allows her to hypothesize why the model struggles with this; in particular, she observes that these types of images are relatively rare in the training data.
In examining 10 clusters, she has quickly scanned through approximately 4234 generated images and 2718 real images.
#### 5.0.2 Class Conditioned Image Generation
Class conditioned generation refers to models that have been trained to generate output images for a fixed set of distinct classes. Here, we look at directly comparing the output of two models trained on the same data and task: namely, generating images from 1000 classes of the ImageNet dataset [41] at a resolution of 128x128px. We use model implementations from Lucic et al's "Are GANs Created Equal? A Large-Scale Study" [42].
We set up a Ravel instance with 50k images generated by BigGAN 128 and BigGAN-deep 128 [12], and 50k images from the validation set for ImageNet4. Input vectors for each model are drawn from a random normal distribution, as described in [42], and no truncation is applied to the input vectors.

Fig. 5: Clusters containing at least one image with the selected classes are highlighted in all cluster plots.

Fig. 6: The StyleGAN2-based model struggles to model images of faces with facepaint and other colorful accessories occluding the face; it instead produces these artefacts.
Footnote 4: Data retrieved from [https://www.tensorflow.org/datasets/catalog/imagenet2012](https://www.tensorflow.org/datasets/catalog/imagenet2012)
Using these models demonstrates a workflow where researchers are trying to determine how one variant of a model, or an improved model architecture, behaves differently from some baseline model. This comparison is a common workflow in machine learning, typically achieved by reporting performance on key metrics such as FID. We demonstrate how Ravel can complement traditional metrics to find specific differences in model behaviour.
_How does BigGAN compare to BigGAN-deep in terms of diversity of output?_
Our analyst opens Ravel and sets the number of clusters to 500, with the left split showing output from BigGAN, and the right split showing output from BigGAN-deep.
It immediately jumps out to him that there are a number of clusters that have scores of 0 in the recall chart. He clicks on one of them and discovers it only contains images from BigGAN-deep. All 100 images from this model are from two classes, appear virtually identical, and are of low visual quality (see Figure 7). This is a classic case of mode collapse [43]; the model was unable to learn the true distribution for these classes and has 'collapsed' all output for these classes to a single image. In this case, both classes have been collapsed into the same output image. The analyst clicks on other clusters with zero recall and finds similar mode collapse in BigGAN-deep for the following classes: _"space bar", "typewriter keyboard", "pickelhaube", "mpspeed", "carbon", "chambered nautilus, pearly nautilus, nautilus", and "odometer"._
Using the "highlight classes" feature of Ravel 5, our analyst is able to find all clusters that contain images from these classes, and confirms that the BigGAN model _does not_ show mode collapse for these classes (Figure 8).
_What about classes where both models do okay at generation (i.e. neither model exhibits a pathological failure like mode collapse)?_
Our analyst now chooses to increase the granularity of clusters by setting the number of clusters to 1000. He turns on the _color by_ option of the **precision chart**, and then looks at the **recall chart** to find clusters with high recall but relatively low precision (though not as low as in the previous section). Looking at higher recall clusters allows finding ones that have at least some overlap in their distribution, while keeping an eye on precision suggests differences in quality. These appear as light yellow dots on the right side of the recall chart. He randomly selects one, and discovers a cluster of sea urchins. Both generators produce high quality outputs, however visual inspection shows that BigGAN-deep produces more diverse output (Figure 9).
A benefit of clustering by visual similarity, rather than grouping by label, is making it easier to discover overlap or leakage of visual semantics between classes. Using the same method as above, our analyst selects another cluster that consists of relatively realistic images of dishrags (Figure 10). He then highlights all clusters containing the 'dishrag' class and proceeds to individually examine all these clusters. His findings, shown in Figure 11 and Figure 12, suggest that other classes, namely 'handkerchief' and 'doormat', are 'leaking' into how each generator models dishrags.

Fig. 7: Mode collapse of the "space bar" and "typewriter keyboard" classes in the BigGAN-deep model. All 100 images in this cluster look identical as both classes have collapsed onto the same output.

Fig. 8: Output of the "space bar" and "typewriter keyboard" classes in BigGAN show much more variety. This model does not exhibit mode collapse for these classes.

Fig. 9: A cluster of sea urchin samples from two generators; both produce high quality images, but the generator on the right produces more diverse output. The images on the right show a greater variety of colors, textures and poses.
#### 5.0.3 Downstream Applications of Generative Models: Super-Resolution
In our final case study we consider the PULSE (Photo Upsampling via Latent Space Exploration) algorithm [22], a super-resolution algorithm that searches the manifold of a generative image model (in this case StyleGAN2) to create plausible upsampled images corresponding to low resolution input images. The post-publication release of this model received sharp critique, as users quickly discovered weaknesses in the model's ability to upsample images of people with non-caucasian features [44]. We use this example to illustrate how broader exploration of output manifolds could help researchers and practitioners who are using these kinds of models in downstream applications to understand their weaknesses and mitigate risks before release.
In this scenario, we use the original CelebA-HQ dataset from Karras et al. [13]. This dataset consists of 30,000 facial portraits of celebrities at 1024x1024px. We take CelebA-HQ images at 32x32px and upsample them back to 1024x1024px using the PULSE algorithm5. We invoke PULSE with the default settings provided by the authors in their model release 6, however we increased the number of steps we run the algorithm from 100 to 200 steps to increase the chances of PULSE producing a result for a given input.7
Footnote 5: This scale factor (24x) is within the range of scale factors (8x-64x) the original authors used in evaluation
Footnote 6: [https://github.com/adamian98/pulse](https://github.com/adamian98/pulse)
We then compare the original 1024x1024px images and the PULSE up-sampled ones using Ravel. In total, we have 30000 original CelebA-HQ images and 22092 images output from PULSE. 7908 images failed to produce any output after 200 steps of the PULSE algorithm.
_What kinds of images does PULSE struggle to produce any output for?_
Our analyst sets the number of clusters to 250, with the original CelebA-HQ images on the left split, and immediately notices a group of clusters on the lowest end of the precision chart. She notices that many clusters also have low recall and are mostly _composed of images only from the ground truth_ (see Figure 13). She clicks on some low-recall/low-precision clusters and observes a few categories where the algorithm struggles to produce any output. These include: people wearing hats or other headgear, images where the person has a microphone or hand in front of them, faces with a lot of facepaint or makeup, and images where people are wearing sunglasses. Each of these clusters has between 60-120 images and a few samples are shown in Figure 14.
_What kinds of images does PULSE struggle to produce high quality output for?_
Fig. 11: BigGAN cluster with a few dishrags but many colorful handkerchiefs that have similar patterns to those seen in the main dishrag cluster. This suggests some leakage in visual representation between the two concepts.
Fig. 12: BigGAN-deep cluster with a few dishrags but many doormats; this suggests a hypothesis for why many of the BigGAN-deep dishrags have a very rough texture.
Fig. 10: A cluster of ‘dishrag’ samples from BigGAN on the left and BigGAN-deep on the right, both produce somewhat realistic dishrags, but the generator on the right produces images with more diverse textures and colors.
She continues to explore low precision clusters by scanning left to right along the chart. One of the lowest precision clusters contains ground truth images of people wearing spectacles (and a few wearing sunglasses); however, from the algorithm output she sees a number of low quality (i.e. unrealistic) images that are likely up-scaled from ones where the subject is wearing sunglasses (Figure 15). She refines her hypothesis about what the algorithm does to portraits with sunglasses, determining that, _"if a person is wearing sunglasses, the algorithm often fails to produce any image, and when it does, it produces unrealistic output"_.
_What kinds of images does PULSE do well at upsampling?_
The precision chart can also be used to look at what images PULSE does well at. As our analyst browses clusters on the right side of the distribution, she observes that a majority of high quality outputs seem to be of lighter skin tone faces that look relatively young or middle aged.
Her manual inspection reveals some of the model's failure modes and strengths along demographic categories even though she does not have access to quantitative measures of performance across sensitive features such as skin tone or age.
## 6 Domain Expert User Study
We evaluated Ravel in a two-stage study, where the first stage identified domain experts' model evaluation goals, workflows and existing tools, and the second stage investigated the usability and utility of the Ravel UI. Participants were recruited from a convenience sample of full time employees at a large technology company, and there was diversity in gender identity (\(n_{female}\) = 1, \(n_{male}\) = 5), product area (5 different teams), and office location (\(n_{Israel}\) = 2, \(n_{UnitedStates}\) = 4). Stage one (n = 4) included four expert research scientists and software engineers who currently work on training and evaluating generative models, and stage two (\(n\) = 5) included three of the users from stage one plus two additional users who have experience working with generative models. Both evaluation stages used remote moderated Google Meet video calls to speak with users and allow them to share their screen to show how they interacted with Ravel.
### _Stage One: The Current State of Evaluation_
Stage one aimed to understand the current state of evaluation for generative image model output. Semi-structured interviews were used, with questions about participants' familiarity and day to day work with generative models, goals and motivation for model evaluation, as well as current workflows and practices. All four users currently work on generative models, and were familiar with StyleGAN and its variants, BigGAN and its variants, as well as other models trained on ImageNet and FFHQ. There was diversity in architectures they work with and the tasks they apply generative models to. Here, we report the most common practices among users.
#### 6.1.1 Evaluation Goals & Workflows
All four participants reported that publication of model performance/improvement was a primary goal for their work. All four participants also expressed that evaluating output image quality was critical to understanding model performance.
All four participants reported a mix of quantitative and qualitative evaluation methods. In general, their workflows could be characterized by a common pattern of: model training accompanied by limited visual inspection for sanity-checking \(\rightarrow\) examination of metrics \(\rightarrow\) continued training \(\rightarrow\) reexamination of metrics until a predetermined threshold is reached \(\rightarrow\) qualitative visual inspection of image output.
Two participants indicated that they believed the current "best practice" for determining image quality was human qualitative evaluation, often from crowdsourced studies. Importantly, all four participants also emphasized the ubiquitous reliance upon and limitations of FID scores in model evaluation. While one user suggested that "FID scores are much better than inception scores and other metrics", they also conceded that "there is not a gold standard for evaluation of generated images". All users indicated that they were dissatisfied with FID score as a primary evaluation metric, and that they had all experienced and read cases where they felt that FID score did not align with human inference, a sentiment supported in Zhou, et al. [24].
#### 6.1.2 Evaluation Tools
All four users mentioned using TensorBoard [45] or Colab8 (a computational notebook environment similar to Jupyter Notebook [46]) to examine model output. For Colab, the main strength reported was flexibility. The flexibility of Colab enables bespoke analyses that we elaborate on in section 6.2.5. Its main limitations were difficulties reusing code between projects and sharing results, especially with non-technical collaborators or stakeholders not directly involved with model training. For TensorBoard, the main strengths reported were ease of use during training to quickly visualize metrics and see sample output at different stages of training. The main limitations mentioned about TensorBoard were its latency when loading many images, and lack of customization compared to an open-ended tool like Colab.
Fig. 14: Each row contains samples from a different cluster that only has images from the ground truth data and none from the algorithm output. These are examples of images the algorithm struggles to upscale.
Fig. 15: Samples from a cluster of upscaled images of people wearing sunglasses.
#### 6.1.3 Qualitative Visual Evaluation Tasks
Determining image quality was reported as the most important task. Determining the diversity of images was also important to all four users, and one user mentioned that looking for the occurrence of mode collapse was an explicit part of their evaluation workflow. All users indicated that it was important to do more granular and bespoke image generation, including looking at samples within certain classes or samples close to each other in embedding space. One user discussed examining different levels of truncation for latent vectors to make decisions about both realism and aesthetics in the output. Users reported viewing a small number of images. One user reported looking at approximately 64 images, and never more than 100 images, per evaluation step of the model training pipeline. Another user explained that when they work with a team on an evaluation, they typically generate and inspect about 50 images. However, when working alone, this user said that they would inspect at least 200 images, noting that it was difficult to get a group consensus on more than 50 images at a time.
We determined four categories of critical evaluation tasks from this part of the study: image diversity, image quality, mode collapse, and a "catch-all" category of additional explorations of classes and samples.
### _Stage Two: Observing how Ravel is Used_
The objective of this stage was to determine how the Ravel UI can be used for the critical evaluation tasks identified in stage one. A brief slide deck and video explaining the UI components was sent to users upon confirmation of their participation, which they were asked to review before the study session. The study sessions themselves lasted for one hour and consisted of task based user exploration of the tool and semi-structured interview questions.
#### 6.2.1 Task one: Diversity
The first task we asked users to attempt was to decide whether BigGAN or BigGAN-deep was performing better in terms of the diversity of sea urchin images, in a setup mirroring the Ravel instance described in Section 5.0.2. This instance showed 128x128 pixel images from both models, with 50,000 images per model and 50 images per class from each model. The UI resolution was set to show the same number of images for each user: 5 images per row in each split.
Users were presented with an instance of Ravel with the "sea urchin" class selected in the _Highlighted Classes_ menu, showing results from BigGAN and BigGAN-deep in the left split and right split, respectively. On initial load the instance displayed the cluster containing most of the selected sea urchin images: 39 (out of 50) urchins from BigGAN and 45 (out of 50) urchins from BigGAN-deep.
All users started by attending to images in each split. No users examined metrics or asked about metrics as a first step. This emphasizes the importance of Ravel intuitively depicting some of the most important information for assessing image diversity: a salient grid of sample images within a cluster of interest. Three users immediately noted the number of samples in each split, paying close attention to which model had a greater number of samples in the cluster of interest. The most common flow involved inspecting individual images and counting or "eyeballing" the number of images in each model with unique background colors, sea urchin poses, and sea urchin colors (all users mentioned these three features). Three users scanned the metrics charts and clicked on the other highlighted clusters with sea urchins. All five users made the determination that BigGAN-deep was performing better in terms of the diversity of sea urchin images, with one user explicitly stating that this confirmed their prior expectations. This demonstrates consistency in how users complete this critical task with Ravel, and shows the potential for convergent decision making and operationalization of workflows in qualitative visual evaluation. It is also notable that Ravel helped confirm one user's prior expectations about BigGAN-deep's diversity performance, although that could also have been a potential biasing factor in their evaluation process.
#### 6.2.2 Task two: Quality
For the second task, users were asked to decide whether BigGAN or BigGAN-deep was performing better in terms of the quality of golden retriever images. Users were presented with the same Ravel instance as task one, but this time the "golden retriever" class was selected in the Highlighted Classes menu and a cluster showing most of the golden retriever images was selected: 46 (out of 50) in the BigGAN split and 46 (out of 50) in the BigGAN-deep split.
All users immediately noted many artifacts in output images from both models. All users clicked on individual images and narrated particular issues, such as the shape of dogs' noses, the number of legs, and the accuracy of the form and pose of the dogs in each sample. The most common workflow was to count or "eyeball" the number of artifacts in each split to make an initial determination, but evaluation workflows were notably more diverse between users for the rest of the task. Four users reported that this task was more difficult than the diversity task, while one user reported that it was easier. Some users indicated that FID and Inception score for each model would be an important part of making this determination in their typical workflow. Four out of the five users made the determination that BigGAN-deep also performed better in terms of the quality of golden retriever images, and one user did not make an explicit determination for this task. Once again, Ravel supported consistency in the primary image quality evaluation strategy across all users, but individual exploration varied after this initial decision. Some users validated their visual judgments by looking at recall and precision charts, but others simply explored the charts without forming additional opinions about the task.
#### 6.2.3 Task three: Mode Collapse
Following the Quality task, users were asked if they were familiar with the term mode collapse and if it was something they used to evaluate generative model output. Four out of five users were familiar with mode collapse and reported it as a useful discovery in the model evaluation process, while one user was unfamiliar with the term. Users who were familiar with mode collapse were then asked to use Ravel to determine whether it had occurred for either model within any class.
Because this task was not constrained to a single class, it was more open-ended, and thus more challenging for users to make a determination about. Workflows varied widely between users, as they were given no guidance about how to accomplish the task or make a determination. Several users mentioned that they expected BigGAN to exhibit more mode collapse after observing that BigGAN-deep had better diversity in the first task. Four users who found concrete instances of mode collapse (e.g. Fig. 7) did so by selecting clusters with the smallest values in the Recall chart. Four out of five users viewed the cluster samples displayed in Figure 7 at some point during their exploration, but one user did not identify it as mode collapse, and one user did not select it at all. Nevertheless,
this was a promising observation, and demonstrates further consistency of Ravel's usability and specific utility of the recall visualization. There was majority agreement that the recall chart can be used to identify mode collapse, and it was revealed to occur more often in BigGAN-deep.
#### 6.2.4 Task four: Additional exploration of classes, samples, and features
For their final task, users were asked to view an instance of Ravel showing generated images from a StyleGAN2-based model in the right split, and images from its ground-truth training data, FFHQ, in the left split. Users were told they could freely explore the interface, either repeating the quality and diversity assessments or trying a new task that they might be interested in.
This task was fully unguided, but most users started by assessing image quality for the StyleGAN2-based model. It was common for users to select clusters at the low and high ends of the Precision distribution. One user observed that clicking on a cluster with high precision revealed "typical FFHQ images...very good images without occlusion, faces looking at the camera, showing people with straight hair, and the model output is very similar to these images.". Four out of five users discovered and explicitly verbalized that the StyleGAN2-based model struggled to generate images of people with facepaint. They did this by clicking on clusters with low precision and noticing artifacts on many of the generated images. All four of these users expressed surprise upon making this new discovery about the model. This demonstrates that even without a specific prompt, many users will make the same kinds of discoveries and follow the same types of evaluation processes with Ravel. One of our participants was on the team that had originally trained the StyleGAN2-based model that we used and discovered that the model was unable to generate faces of people wearing a particular style of fuzzy winter hat that was fairly common in the ground truth data. He remarked that he wasn't aware of that inability and was pleasantly surprised to be able to discover it.
The other common tasks were "semantic explorations" of the clusters and the embedding space. Four out of five users examined images in the FFHQ split to make decisions about whether the clusters were semantically meaningful, and to explore the diversity of FFHQ. Two users pondered whether or not the embedding space was doing a good job of capturing what they, as humans, would group together. These users investigated the proximity of samples in the Samples UMAP chart and determined that the ImageNet feature space is not optimal for clustering faces, since they could not find consistent visual similarity between nearby samples in some of the clusters.
#### 6.2.5 Discussion of Expert Feedback
Users were overall impressed with the tool. All five users thought that Ravel would fit into their current evaluation process, and that it was especially useful for researchers who publish generative models. Here, we summarize additional feedback from the reflection portion of stage two.
Two users reported that their exploration made them doubt whether the Inceptionv3 features were good for evaluating face portrait generation. In reflection, one of these users stated that Ravel could be used to learn about the feature space itself, which could be broadly useful in generative image model evaluation.
Upon discovering mode collapse in BigGAN-deep, one user stated that Ravel would be useful for probing state-of-the-art models to learn which classes they can't generate images for, which could point to systematic failure modes for researchers to focus on improving.
One user noted that Ravel could help their team reach conclusions about model performance _more quickly than using Colab_, especially for understanding the diversity of images. They explained that using Colab required developing a bespoke visualization tool and manual calculations of FID in each cluster, whereas the same type of useful information was readily available in the Ravel UI. Two users emphasized that Ravel could help operationalize how researchers evaluate quality and diversity, one of which explained that Ravel could be "a forcing function for having a standard UI/pipeline for results", arguing that this would make it "easier to share results with someone on another team, or someone non-technical...even with my manager who doesn't have time to run my code". This confirms for us that there is a place for bespoke, purpose-built interfaces like Ravel that, while less flexible than Colab, are tailored to the common evaluation tasks that researchers perform.
Overall, the expert feedback from stage two was positive and enthusiastic, with several users expressing excitement about continued exploration in Ravel and incorporating it into their own workflows.
## 7 Limitations and Future Work
Our qualitative user study was performed with a small, domain expert sample from a single company, and therefore may not be an externally valid representation of all researcher experiences with generative image models. The study sessions were also limited to one hour per user, with each evaluation task time-boxed to 15 minutes or less. Richer and more diverse interactions and discoveries could be possible with a longer duration of tool use.
Two users wanted to see images at a higher resolution, and two other users wanted to see the original resolution of the images when examining them for artifacts; this is not an inherent issue with Ravel but is an important design affordance for the future. Two users wanted to be able to mark images in each split once they had viewed them, and enabling this would support the 'counting' based workflows we saw participants use to complete the tasks.
When using the class conditional model, users initially reported that they would prefer to see all images from a given class in the same view (i.e. clustering by class label) to make a judgement about that class. However on further exploration they noted it was useful to see 'outlier' images for a given label in context with the other images they are most similar to. This suggests that both workflows are important to support for class conditioned models and should be supported by tools like Ravel.
The authors also note a number of limitations of the system we observed while watching users use the tool:
Users cannot always interpret the 'meaning' of clusters (i.e., construct a rationale for why a set of images are clustered together). This is a general issue with unsupervised methods like clustering, but we found that users would often try to attach some semantically meaningful description to each cluster to ground their comparison.
Exploring ways to 'describe' clusters or summarize differences between clusters could be important future work to aid user comprehension. We also think exploring other clustering methods, in particular hierarchical methods, could be a particularly attractive means to produce clusters at different levels of granularity to help build understanding of groups within the dataset.
In describing the Sample viewer (section 4.2.3) we noted that one limitation is that not all of the images are visible in one screen if there are many images present in the cluster. While one mitigation is to decrease the size of clusters by increasing the number of clusters, we believe that future work could provide better ways to get the visual gestalt of the entire cluster in one view, possibly by adapting techniques such as those described in Activation Atlas [47], creating stacks of very similar images within a cluster, or other ways of sub-sampling or sorting to ensure that we display the maximum variety of a cluster in a single screen.
One user task that Ravel does not directly support is detecting memorization. One user commented that adding a real time nearest-neighbor search to the interface would likely make it useful for this task.
## 8 Conclusion
We presented Ravel, a visual analysis tool that enables researchers to perform large scale qualitative evaluation of generative model outputs. Our primary contributions included:
* A visual analytics system that supports _multiple evaluation tasks_ (e.g. evaluating quality & diversity, discovering mode collapse or gaps in model output) for generative image models and is agnostic to model architecture and internals.
* _Interactive exploration of large generative image model datasets_, facilitated by clustering and the use of fine grained visualization of cluster metrics to guide qualitative evaluation.
* A user interface that uses _visual comparison_ driven by semantically meaningful embedding spaces to support reasoning about differences between image distributions and generate hypotheses about model behaviour.
The expert users in our study were able to generate consistent insights about model behaviour including identifying areas of the _true data distribution the model was not capturing_, such as face paint or certain kinds of headgear in the StyleGAN2-based model or mode collapse in BigGAN-deep. This kind of insight is an example of one that is not possible to get from just looking at quantitative metrics like FID.
Our study participants confirmed our hypotheses that single number metrics are not fully sufficient measures of model performance. Ravel allows users to explore these metrics at finer granularity, revealing areas of model output where metrics like recall or precision do not capture problems in the generated images. Our users hypothesized that the underlying InceptionV3 embedding, used both in our tool and in the primary metrics in the field, may not attend to certain kinds of visual artefacts that are easily visible to humans. We believe that future work in this direction could enable better understanding of the limits of the embedding spaces themselves and how they affect both metrics and the workflows that use them.
## Acknowledgments
The authors wish to thank Mario Lucic, Marvin Ritter, Ben Poole, Han Zhang, Chitwan Saharia, James Wexler and Lucas Dixon for their help and feedback on this work. We also thank our study participants for their feedback.
|
2303.12503 | Optimum phase estimation with two control qubits | Phase estimation is used in many quantum algorithms, particularly in order to
estimate energy eigenvalues for quantum systems. When using a single qubit as
the probe (used to control the unitary we wish to estimate the eigenvalue of),
it is not possible to measure the phase with a minimum mean-square error. In
standard methods, there would be a logarithmic (in error) number of control
qubits needed in order to achieve this minimum error. Here show how to perform
this measurement using only two control qubits, thereby reducing the qubit
requirements of the quantum algorithm. Our method corresponds to preparing the
optimal control state one qubit at a time, while it is simultaneously consumed
by the measurement procedure. | Peyman Najafi, Pedro C. S. Costa, Dominic W. Berry | 2023-03-22T12:18:33Z | http://arxiv.org/abs/2303.12503v1 | # Optimum phase estimation with two control qubits
###### Abstract
Phase estimation is used in many quantum algorithms, particularly in order to estimate energy eigenvalues for quantum systems. When using a single qubit as the probe (used to control the unitary we wish to estimate the eigenvalue of), it is not possible to measure the phase with a minimum mean-square error. In standard methods, there would be a logarithmic (in error) number of control qubits needed in order to achieve this minimum error. Here we show how to perform this measurement using only two control qubits, thereby reducing the qubit requirements of the quantum algorithm. Our method corresponds to preparing the optimal control state one qubit at a time, while it is simultaneously consumed by the measurement procedure.
## I Introduction
Quantum phase estimation was originally applied in quantum algorithms for the task of period finding, as in Shor's algorithm Shor (1996). Later, quantum phase estimation was applied to the task of estimating eigenvalues for Hamiltonians in quantum chemistry Nielsen and Chuang (1997). The appropriate way to perform quantum phase estimation is different between these applications, due to the costing of the operations. In particular, for estimating eigenvalues, the cost of Hamiltonian simulation is (at least) proportional to the time of evolution, so the phase estimation procedure should attempt to minimise the total evolution time. At the same time the mean-square error in the estimate should be minimised.
As part of the phase estimation, the inverse quantum Fourier transform is used. This operation can be decomposed into a 'semiclassical' form Shor (1996), where one performs measurements on the control qubits in sequence, with rotations controlled according to the results of previous measurements. In the form of phase estimation as in Shor's algorithm, the control qubits would be in an equal superposition state, which is just a tensor product of \(|+\rangle\) states on the individual qubits. In that scenario, only one control qubit need be used at a time, because it can be prepared in the \(|+\rangle\) state, used as a control, then rotated and measured before the next qubit is used.
This procedure with the control qubits in \(|+\rangle\) states gives a probability distribution for the error as a sinc function, which has a significant probability for large errors. That is still suitable for Shor's algorithm, because it is possible to take large powers of the operators with relatively small cost, which suppresses the phase measurement error. On the other hand, for quantum chemistry where there is a cost of Hamiltonian simulation proportional to time, the large error of the sinc is a problem. Then it is more appropriate to use qubits in an entangled state Nielsen and Chuang (1997), which was originally derived in an optical context in 1996 Nielsen and Chuang (1997).
In 2000 we analysed the problem of how to perform measurements on these states in a Mach-Zehnder interferometer Nielsen and Chuang (1997). The same year, Jon Dowling introduced NOON states in the context of lithography Dowling (1997), and then in 2002 showed how NOON states may be used in interferometry for phase measurement Dowling (1997); Dowling (1998). A drawback to using NOON states is that they are highly sensitive to loss. In 2010 one of us (DWB) visited Jon Dowling's group to work on the problem of how to generate states that are more resistant to loss and effectively perform measurements with them. This resulted in the publication (separately from Jon) Dowling (1997), followed by our first joint publication Dowling (1997). We continued collaborating with Jon for many years on phase measurement Dowling (1997), as well as state preparation Dowling (2000), and Boson-sampling inspired cryptography Nielsen and Chuang (2000).
In separate work, we showed how to combine results from multiple NOON states in order to provide highly accurate phase measurements suitable for quantum algorithms Nielsen and Chuang (2000). Phase measurement via NOON states is analogous to taking a \(|+\rangle\) state and performing a controlled \(U^{N}\) on a target system in quantum computing. The photons in the arms of the interferometer are analogous to the control qubit in quantum computing, with the phase shift from \(U^{N}\) instead arising from an optical phase shift between the arms of the interferometer. The NOON state gives very high frequency variation of the probability distribution for the phase, rather than a probability distribution with a single peak. In 2007 we showed how to combine the results from NOON states with different values of \(N\) in order to provide a phase measurement analogous to the procedure giving a sinc distribution in quantum algorithms Nielsen and Chuang (2000). (It was experimentally demonstrated with multiple passes through the phase shift rather than NOON states.)
A further advance in [15] was to show how to use an adaptive procedure, still with individual \(|+\rangle\) states, in order to give the 'Heisenberg limited' phase estimate. That is, rather than the mean-square error scaling as it does for the sinc, it scales as it does for the optimal (entangled) control state. This procedure still only uses a single control qubit at a time, so is suitable for using in quantum algorithms where the number of qubits available is strongly limited; this is why it was used, for example, in [16]. On the other hand, although it gives the optimal scaling, the constant factor is not optimal, and improved performance is provided by using the optimal entangled state.
In this paper we show how to achieve the best of both worlds. That is, we show how to provide the optimal phase estimate (with the correct constant factor), while only increasing the number of control qubits by one. It is therefore suitable for quantum algorithms with a small number of qubits, while enabling the minimum complexity for a given required accuracy.
In Section II we discuss the optimal state for phase estimation and how its usage can be combined with the semiclassical quantum Fourier transform. Then in Section III, we introduce an orthogonal basis of states for subsets of qubits, and prove a recursive form. Finally, in Section IV we show how the recursive form can be translated into a sequence of two-qubit unitaries to create the optimal state.
## II Phase measurement using optimal quantum states
### The optimal states
The optimal states for phase estimation from [5] are of the form
\[|\psi_{\text{opt}}\rangle=\sqrt{\frac{2}{N+2}}\sum_{n=0}^{N}\sin\left(\frac{ \pi(n+1)}{N+2}\right)|n\rangle, \tag{1}\]
where \(N\) is the total photon number in two modes, and \(n\) is the photon number in one of the modes, as for example in a Mach-Zehnder interferometer. It is also possible to consider the single-mode case where \(N\) is a maximum photon number and \(|n\rangle\) a Fock state.
In either case a physical phase shift of \(\phi\) results in a state of the form
\[|\psi_{\text{opt}}\rangle=\sqrt{\frac{2}{N+2}}\sum_{n=0}^{N}e^{in\phi}\sin \left(\frac{\pi(n+1)}{N+2}\right)|n\rangle. \tag{2}\]
The ideal 'canonical' phase measurement is then a positive operator-valued measure (POVM) with elements [17]
\[\frac{N+1}{2\pi}|\check{\phi}\rangle\langle\check{\phi}|\,d\check{\phi}, \tag{3}\]
where
\[|\check{\phi}\rangle=\frac{1}{\sqrt{N+1}}\sum_{n=0}^{N}e^{in\check{\phi}}|n\rangle. \tag{4}\]
Here we are using \(\check{\phi}\) for the result of the measurement, as distinct from the actual phase \(\phi\). Such a canonical measurement typically cannot be implemented using standard linear optical elements, though it can be approximated with adaptive measurements [6].
It is easily seen that the error distribution after the measurement is then
\[\frac{1}{\pi(N+2)}\left(\frac{\cos\bigl{(}(\check{\phi}-\phi)(1+N/2)\bigr{)} \sin(\pi/(2+N))}{\cos(\pi/(2+N))-\cos\bigl{(}\check{\phi}-\phi\bigr{)}}\right) ^{2}. \tag{5}\]
In contrast, if one were to use the state with an equal distribution over basis states, then the error probability distribution would be close to a sinc
\[\frac{1}{2\pi(N+1)}\frac{\sin^{2}((N+1)(\check{\phi}-\phi)/2)}{\sin^{2}(( \check{\phi}-\phi)/2)}. \tag{6}\]
The error distributions for these two states are shown in Figure 1. The central peak for the equal superposition state is a little narrower, but it has large tails in the distribution, whereas the probabilities of large errors for the optimal state are strongly suppressed.
The optimal state (1) is optimal for minimising a slightly different measure of error than usual. The Holevo variance for a distribution can be taken as [18]
\[|\langle e^{i\check{\phi}}\rangle|^{-2}-1. \tag{7}\]
This measure has the advantages that it is naturally modulo \(2\pi\), as is suitable for phase, and approaches infinity for a flat distribution (with no phase information). Moreover it approaches the usual variance for suitably narrowly peaked distributions. To account for any bias in the estimate, one can alternatively use the measure
\[\langle\cos\bigl{(}\check{\phi}-\phi\bigr{)}\rangle^{-2}-1. \tag{8}\]
This measure is analogous to the mean-square error. One could also take the measure, as in [5],
\[2[1-\langle\cos\bigl{(}\check{\phi}-\phi\bigr{)}\rangle], \tag{9}\]
and the optimisation problem is equivalent. The optimal state (1) gives a minimum Holevo variance of
\[\tan^{2}\left(\frac{\pi}{N+2}\right). \tag{10}\]
It is also possible to consider minimisation of the mean-square error, but there is not a simple analytic solution [19].
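As a quick numerical check of Eq. (10), note that for a state with real amplitudes \(c_{n}\), the canonical measurement gives \(\langle e^{i(\check{\phi}-\phi)}\rangle=\sum_{n}c_{n}c_{n+1}\), so the Holevo variance is \((\sum_{n}c_{n}c_{n+1})^{-2}-1\). The following minimal Python sketch (illustrative only; the function names are arbitrary) compares the optimal state with the equal superposition state:

```python
import numpy as np

def holevo_variance(c):
    # Holevo variance |<e^{i(phi_check - phi)}>|^{-2} - 1 of the canonical
    # measurement on a state with real amplitudes c_n, using
    # <e^{i(phi_check - phi)}> = sum_n c_n c_{n+1}.
    s = np.sum(c[:-1] * c[1:])
    return 1.0 / s**2 - 1.0

for N in (10, 100, 1000):
    n = np.arange(N + 1)
    c_opt = np.sqrt(2.0 / (N + 2)) * np.sin(np.pi * (n + 1) / (N + 2))  # Eq. (1)
    c_flat = np.ones(N + 1) / np.sqrt(N + 1)   # equal superposition state
    print(N, holevo_variance(c_opt),           # matches tan^2(pi/(N+2)), Eq. (10)
          np.tan(np.pi / (N + 2))**2,
          holevo_variance(c_flat))             # larger, due to the sinc tails
```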
### Phase measurement with the inverse Fourier transform
In the case of phase measurements in quantum computing, \(\phi\) would instead be obtained from a unitary operator \(U\) with eigenvalue \(e^{i\phi}\). If the target system is in the corresponding eigenstate of \(U\), denoted \(|\phi\rangle\), and state \(|n\rangle\) is used to control application of \(U^{n}\), then the \(\phi\)-dependent state from Eq. (2) is again obtained. In practice, the integer \(n\) is represented in binary in ancilla qubits. Then the least significant bit, \(n_{1}\), is used to control \(U\), the next bit, \(n_{2}\), is used to control \(U^{2}\), and so forth. In general,
\[n=\sum_{j=1}^{m}n_{j}2^{j-1}, \tag{11}\]
and \(|n_{j}\rangle\) is used to control \(U^{2^{j-1}}\). This procedure is depicted in Figure 2.
Here we have taken \(m\) to be the number of bits. In practice, it is convenient to take \(N+1\) to be a power of 2, so \(N=2^{m}-1\). In order to estimate the phase, one wishes to perform the canonical measurement on the ancilla qubits. To explain this, it is convenient to consider the POVM with \(N+1\) states \(|\check{\phi}_{j}\rangle\) with \(\check{\phi}_{j}=2\pi j/(N+1)\) for \(j=0\) to \(N\).
Figure 1: The probability distribution for the error in phase measurements with \(N=10\) and the optimal state (1) (blue) and the equal superposition state (orange). The left shows the linear scale and the right shows a log plot.
Then the states \(|\check{\phi}_{j}\rangle\) are mutually orthogonal. Such a projective measurement can then be obtained if one can perform the unitary operation
\[\sum_{j=0}^{N}|j\rangle\langle\check{\phi}_{j}|. \tag{12}\]
That is, it maps the state \(|\check{\phi}_{j}\rangle\) to a computational basis state \(|j\rangle\), so a measurement in the computational basis gives the result for the phase. This operation is the inverse of the usual quantum Fourier transform, which would map from \(|j\rangle\) to \(|\check{\phi}_{j}\rangle\).
If one aims to obtain the original POVM, one can randomly (with equal probability) select \(\delta\phi\in[0,2\pi/(N+1)]\), and choose the states with \(\check{\phi}_{j}=2\pi j/(N+1)+\delta\phi\). Then perform a measurement in the basis \(|\check{\phi}_{j}\rangle\) with this randomly chosen offset. The complete measurement, including the random choice of \(\delta\phi\), is then equivalent to the POVM with the set of outcomes over a continuous range of \(\check{\phi}\). This approach can be used in order to give a measurement that is covariant (has an error distribution independent of the system phase \(\phi\)). In practice it is not usually needed, so we will not consider it further in this paper.
In order to obtain the estimate for the phase, one should therefore perform the inverse quantum Fourier transform on the control qubits. The inverse quantum Fourier transform can be performed in a semiclassical way, by performing measurements on successive qubits followed by controlled rotations [3]. The usual terminology is the 'semiclassical Fourier transform', though this is the inverse transform. An example with three qubits is given in Figure 3. The bottom (least significant) qubit is measured first. The result is used to control a phase rotation on the middle qubit. Then the middle qubit is measured, and the results of both measurements are used to control phase rotations on the top qubit. The net result is the same as performing the inverse quantum Fourier transform and measuring in the computational basis.
A further advantage of this procedure is that the controlled \(U\) operations are also performed in sequence, which means that the two sequences can be matched. That is, we have the combined procedure as shown in Figure 4. In the case where the control register is prepared in an equal superposition state, the control qubits are unentangled. This means that preparation of each successive qubit can be delayed until it is needed, as shown in Figure 5.
What this means is that only one control qubit need be used at once. The preparation of the next control qubit can be delayed until after measurement of the previous one, and that qubit can be reset and reused. That is useful in quantum algorithms with a limited number of qubits available, and is also useful in quantum phase estimation. In that case, one can replace the control qubits with NOON states with photon numbers that are powers of 2. Then these NOON states can be measured in sequence to give a canonical measurement of phase, even though a canonical measurement of phase would not be possible on a single two-mode state. In [15] we demonstrated this, using multiple passes through a phase shift rather than NOON states.
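A minimal classical simulation of this one-control-qubit-at-a-time procedure is sketched below (an illustration; the function name and the feedback bookkeeping are ours, and we assume the simulator is given the eigenphase \(\phi\) directly). For a phase that is an exact \(m\)-bit binary fraction the bits are recovered deterministically; otherwise the accuracy is limited by the sinc distribution discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sequential_qpe(phi, m):
    # One |+> control qubit at a time: qubit j controls U^(2^(j-1)).
    # Measuring from the highest power down, with the feedback rotation of
    # the semiclassical inverse QFT, estimates phi/(2 pi) = 0.b1 b2 ... bm.
    feedback, bits = 0.0, []
    for j in range(m, 0, -1):
        theta = 2.0**(j - 1) * phi - feedback
        b = int(rng.random() < np.sin(theta / 2.0)**2)  # P(1) after final H
        bits.append(b)
        feedback = feedback / 2.0 + np.pi * b / 2.0
    bits.reverse()                                      # b1 is most significant
    return 2 * np.pi * sum(b / 2.0**(i + 1) for i, b in enumerate(bits))

phi = 2 * np.pi * 11 / 16            # exactly representable with m = 4 bits
print(sequential_qpe(phi, 4), phi)   # recovered exactly for dyadic phases
```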
Figure 3: An example of the semiclassical Fourier transform on three qubits, where the bottom qubit corresponds to the least significant bit.
Figure 2: The circuit for a controlled power of \(U\), where \(|n\rangle\) on the control qubits gives \(U^{n}\) on the target register. With the target prepared in an eigenstate \(|\phi\rangle\) of \(U\), the phase shift \(e^{in\phi}\) is obtained.
The drawback now is that, even though it is possible to perform the canonical measurement, a suboptimal state is being used. We would like to be able to perform measurements achieving that minimum Holevo phase variance. In [15] we showed that, by using multiple NOON states of each number it is possible to obtain the desired scaling with total photon number, even though there is a different constant factor so the true minimum error is not achieved.
### Performing phase measurement with two control qubits
Up until this point this section has been a review of prior work. What is new here is that we show how to prepare the optimal state for phase measurement in a sequential way, so that the number of qubits that need be used at once is minimised. We will show how the optimal state can be prepared using a sequence of two-qubit operations, as in Figure 6.
When the optimal state is prepared in this way, its preparation may be delayed until the qubits are needed, as shown in Figure 7. This is illustrated with three control qubits, where introduction of the third qubit can be delayed until the first qubit is measured. In general, with more control qubits, introduction of each additional qubit can be
Figure 4: The combined procedure with controlled \(U^{2^{j}}\) operations to give phase kickback, together with the semiclassical Fourier transform. The final controlled phase rotation is just shown as \(R\), because the angle of rotation is controlled by the combined results of the first two measurements.
Figure 5: The combined procedure with preparation of control ancillas delayed until they are needed.
Figure 6: Preparation of the optimal state with a sequence of two-qubit operations.
delayed until after measurement of the qubit two places down, so only two control qubits need be used at once.
The reason why it is possible to prepare the optimal state in this way is that it is a superposition of two unentangled states. The sine is a combination of positive and negative complex exponentials, as
\[|\psi_{\rm opt}\rangle=\frac{1}{2i}\sqrt{\frac{2}{N+2}}\sum_{n=0}^{N}\left[ \exp\left(i\frac{\pi(n+1)}{N+2}\right)-\exp\left(-i\frac{\pi(n+1)}{N+2} \right)\right]|n\rangle. \tag{13}\]
When \(N=2^{m}-1\), we can write this as
\[|\psi_{\rm opt}\rangle = \frac{e^{i\pi/M}}{2i}\sqrt{\frac{2}{M}}\sum_{n_{1},\cdots,n_{m}=0}^{1}e^{i\pi\sum_{j=1}^{m}n_{j}2^{j-1}/M}|n_{1}\cdots n_{m}\rangle-\frac{e^{-i\pi/M}}{2i}\sqrt{\frac{2}{M}}\sum_{n_{1},\cdots,n_{m}=0}^{1}e^{-i\pi\sum_{j=1}^{m}n_{j}2^{j-1}/M}|n_{1}\cdots n_{m}\rangle \tag{14}\] \[= \frac{e^{i\pi/M}}{2i}\sqrt{\frac{2^{m+1}}{M}}\bigotimes_{j=1}^{m}\left(\sum_{n_{j}=0}^{1}\frac{e^{i\pi n_{j}2^{j-1}/M}}{\sqrt{2}}|n_{j}\rangle\right)-\frac{e^{-i\pi/M}}{2i}\sqrt{\frac{2^{m+1}}{M}}\bigotimes_{j=1}^{m}\left(\sum_{n_{j}=0}^{1}\frac{e^{-i\pi n_{j}2^{j-1}/M}}{\sqrt{2}}|n_{j}\rangle\right)\] \[= \sqrt{\frac{2^{m-1}}{M}}\bigotimes_{j=1}^{m}\left(\sum_{n_{j}=0}^{1}\frac{e^{-i\pi(-1)^{n_{j}}2^{j-2}/M}}{\sqrt{2}}|n_{j}\rangle\right)+\sqrt{\frac{2^{m-1}}{M}}\bigotimes_{j=1}^{m}\left(\sum_{n_{j}=0}^{1}\frac{e^{i\pi(-1)^{n_{j}}2^{j-2}/M}}{\sqrt{2}}|n_{j}\rangle\right),\]
where \(M=N+2=2^{m}+1\). In the last line we have used
\[\sum_{n_{j}=0}^{1}\frac{e^{i\pi n_{j}2^{j-1}/M}}{\sqrt{2}}|n_{j}\rangle = \frac{1}{\sqrt{2}}\left(|0\rangle+e^{i\pi 2^{j-1}/M}|1\rangle\right) \tag{15}\] \[= e^{i\pi 2^{j-2}/M}\,\frac{1}{\sqrt{2}}\left(e^{-i\pi 2^{j-2}/M}|0\rangle+e^{i\pi 2^{j-2}/M}|1\rangle\right)\]
and then
\[e^{i\pi/M}\prod_{j=1}^{m}e^{i\pi 2^{j-2}/M} = e^{i\pi/M}e^{i\pi\sum_{j=1}^{m}2^{j-2}/M} \tag{16}\] \[= e^{i\pi/M}e^{i\pi(2^{m}-1)/2M}\] \[= e^{i\pi(2^{m}+1)/2M}\] \[= e^{i\pi/2}\] \[= i.\]
In order to write the optimal state in a more compact way we define the following
\[c\coloneqq\sqrt{\frac{2^{m-1}}{M}},\]
Figure 7: The combined procedure with preparation of control ancillas in the optimal state delayed until they are needed.
\[|\phi_{j}^{+}\rangle \coloneqq\frac{e^{i\pi 2^{j-2}/M}|0\rangle+e^{-i\pi 2^{j-2}/M}|1 \rangle}{\sqrt{2}},\] \[|\phi_{j}^{-}\rangle \coloneqq\frac{e^{-i\pi 2^{j-2}/M}|0\rangle+e^{i\pi 2^{j-2}/M}|1 \rangle}{\sqrt{2}}. \tag{17}\]
Then the optimal state in Eq. (14) can be written as
\[|\psi_{\mathrm{opt}}\rangle=c\bigotimes_{j=1}^{m}|\phi_{j}^{+}\rangle+c \bigotimes_{j=1}^{m}|\phi_{j}^{-}\rangle. \tag{18}\]
That is, it is an equally weighted superposition of two states, which are each unentangled between all qubits. What this means is that any bipartite split of the state will have Schmidt number \(2\), so the entanglement across the bipartite split can be represented on a single qubit on one side. We use that principle in the state preparation. At any stage, after performing the two-qubit operation between qubit \(j\) and \(j+1\), there will be the correct bipartite entanglement in the split between qubits up to \(j\) and qubits from \(j+1\) to \(m\). However, at that stage qubits from \(1\) to \(j-1\) have not been initialised yet, so the entanglement across the bipartite split (for qubits \(1\) to \(j\)) is represented just on qubit \(j\).
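This decomposition is easy to verify numerically. The sketch below (illustrative only, with qubit \(m\) taken as the most significant bit of \(n\)) builds the tensor-product states from Eq. (17), forms the superposition of Eq. (18), and compares it with the direct form of Eq. (1):

```python
import numpy as np
from functools import reduce

m = 4
N = 2**m - 1
M = N + 2                           # M = 2^m + 1

def phi_j(j, sign):
    # Single-qubit factors of Eq. (17); sign = +1 or -1.
    a = sign * np.pi * 2.0**(j - 2) / M
    return np.array([np.exp(1j * a), np.exp(-1j * a)]) / np.sqrt(2)

# Tensor products with qubit m as the most significant bit of n.
plus  = reduce(np.kron, [phi_j(j, +1) for j in range(m, 0, -1)])
minus = reduce(np.kron, [phi_j(j, -1) for j in range(m, 0, -1)])
c = np.sqrt(2.0**(m - 1) / M)
psi_sum = c * (plus + minus)        # Eq. (18)

n = np.arange(N + 1)
psi_direct = np.sqrt(2.0 / M) * np.sin(np.pi * (n + 1) / M)   # Eq. (1)
print(np.allclose(psi_sum, psi_direct))                        # True
```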
## III Recursive construction of the optimum state
In this section, we show how to create the optimal state Eq. (18) recursively. We introduce the partial tensor product states
\[|\phi_{[\ell]}^{+}\rangle\coloneqq\bigotimes_{j=1}^{\ell}|\phi_{j}^{+} \rangle,\qquad|\phi_{[\ell]}^{-}\rangle\coloneqq\bigotimes_{j=1}^{\ell}|\phi _{j}^{-}\rangle. \tag{19}\]
Because \(|\psi_{\mathrm{opt}}\rangle\) is a linear combination of \(|\phi_{[m]}^{\pm}\rangle\), the state of qubits \(1\) to \(\ell\) can be represented as a linear combination of \(|\phi_{[\ell]}^{\pm}\rangle\). In order to describe the operations needed to prepare the state \(|\psi_{\mathrm{opt}}\rangle\), we need to describe the state of qubits \(1\) to \(\ell\) in terms of orthogonal states, which we will denote by \(|\Phi_{[\ell]}^{\pm}\rangle\). These orthogonal (but not normalised) states are
\[|\Phi_{[\ell]}^{+}\rangle\coloneqq|\phi_{[\ell]}^{+}\rangle+|\phi_{[\ell]}^{- }\rangle,\qquad|\Phi_{[\ell]}^{-}\rangle\coloneqq|\phi_{[\ell]}^{+}\rangle-| \phi_{[\ell]}^{-}\rangle. \tag{20}\]
It is possible to prove that these states are orthogonal as in the following Lemma.
**Lemma 1**.: _The states \(|\Phi_{[\ell]}^{+}\rangle\) and \(|\Phi_{[\ell]}^{-}\rangle\) defined in Eq. (20), are orthogonal:_
\[\langle\Phi_{[\ell]}^{-}|\Phi_{[\ell]}^{+}\rangle=0. \tag{21}\]
Proof.: From Eq. (17) we have
\[\langle\phi_{j}^{-}|\phi_{j}^{+}\rangle=\cos\bigl{(}\pi 2^{j-1}/M\bigr{)}, \tag{22}\]
which is real. In turn, that implies \(\langle\phi_{[\ell]}^{-}|\phi_{[\ell]}^{+}\rangle\) is real, and equal to \(\langle\phi_{[\ell]}^{+}|\phi_{[\ell]}^{-}\rangle\). Moreover, because the \(|\phi_{j}^{\pm}\rangle\) are normalised, so are \(|\phi_{[\ell]}^{\pm}\rangle\). Therefore we obtain
\[\langle\Phi_{[\ell]}^{-}|\Phi_{[\ell]}^{+}\rangle=\langle\phi_{[\ell]}^{+}| \phi_{[\ell]}^{+}\rangle-\langle\phi_{[\ell]}^{-}|\phi_{[\ell]}^{-}\rangle+ \langle\phi_{[\ell]}^{+}|\phi_{[\ell]}^{-}\rangle-\langle\phi_{[\ell]}^{-}| \phi_{[\ell]}^{+}\rangle=0 \tag{23}\]
Here, the first two terms cancel because they are both \(1\) (due to normalisation) and the second two terms cancel because they are equal (because they are real).
Next we wish to show that there is a simple recurrence relation for the states \(|\Phi_{[\ell]}^{\pm}\rangle\), in their normalised form
\[|\widetilde{\Phi}_{[\ell]}^{\pm}\rangle\coloneqq\frac{|\Phi_{[\ell]}^{\pm} \rangle}{\sqrt{\langle\Phi_{[\ell]}^{\pm}|\Phi_{[\ell]}^{\pm}\rangle}}. \tag{24}\]
We will use the recurrence relation to derive the sequence of two-qubit operations to prepare the initial state. The result is as follows.
**Lemma 2**.: _The states \(|\widetilde{\Phi}^{+}_{[\ell]}\rangle\) and \(|\widetilde{\Phi}^{-}_{[\ell]}\rangle\) defined in Eq. (20) and Eq. (24), have recurrence relation_
\[|\widetilde{\Phi}^{\pm}_{[\ell+1]}\rangle=\mu^{(0,\pm)}_{\ell}|\widetilde{\Phi }^{\pm}_{[\ell]}\rangle|+\rangle+\mu^{(1,\pm)}_{\ell}|\widetilde{\Phi}^{\mp}_{[ \ell]}\rangle|-\rangle, \tag{25}\]
_where_
\[\mu^{(0,\pm)}_{\ell} =\cos\bigl{(}2^{\ell-1}\pi/M\bigr{)}P^{0,\pm}, \tag{26}\] \[\mu^{(1,\pm)}_{\ell} =i\sin\bigl{(}2^{\ell-1}\pi/M\bigr{)}P^{1,\pm},\] (27) \[P^{s,\pm} =\sqrt{\frac{1\pm(-1)^{s}\prod_{j=1}^{\ell}\cos(2^{j-1}\pi/M)}{1 \pm\prod_{j=1}^{\ell+1}\cos(2^{j-1}\pi/M)}}. \tag{28}\]
Proof.: To prove this, we start by noting that
\[\cos\bigl{(}2^{j-2}\pi/M\bigr{)} =\langle+|\phi^{\pm}_{j}\rangle,\] \[i\sin\bigl{(}2^{j-2}\pi/M\bigr{)} =\langle-|\phi^{+}_{j}\rangle,\] \[-i\sin\bigl{(}2^{j-2}\pi/M\bigr{)} =\langle-|\phi^{-}_{j}\rangle. \tag{29}\]
Therefore we can see that
\[\langle+|\Phi^{+}_{[\ell+1]}\rangle=\cos\bigl{(}2^{\ell-1}\pi/M \bigr{)}|\Phi^{+}_{[\ell]}\rangle,\] \[\langle-|\Phi^{+}_{[\ell+1]}\rangle=i\sin\bigl{(}2^{\ell-1}\pi/M \bigr{)}|\Phi^{-}_{[\ell]}\rangle, \tag{30}\]
which implies
\[|\Phi^{+}_{[\ell+1]}\rangle=\cos\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{+}_{[ \ell]}\rangle|+\rangle+i\sin\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{-}_{[\ell]} \rangle|-\rangle. \tag{31}\]
Similarly, we find
\[\langle+|\Phi^{-}_{[\ell+1]}\rangle=\cos\bigl{(}2^{\ell-1}\pi/M \bigr{)}|\Phi^{-}_{[\ell]}\rangle,\] \[\langle-|\Phi^{-}_{[\ell+1]}\rangle=i\sin\bigl{(}2^{\ell-1}\pi/M \bigr{)}|\Phi^{+}_{[\ell]}\rangle, \tag{32}\]
which implies
\[|\Phi^{-}_{[\ell+1]}\rangle=\cos\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{-}_{[ \ell]}\rangle|+\rangle+i\sin\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{+}_{[\ell]} \rangle|-\rangle. \tag{33}\]
This gives us recurrence relations for \(|\Phi^{\pm}_{[\ell]}\rangle\), which can be written
\[|\Phi^{\pm}_{[\ell+1]}\rangle=\cos\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{\pm}_{[ \ell]}\rangle|+\rangle+i\sin\bigl{(}2^{\ell-1}\pi/M\bigr{)}|\Phi^{\mp}_{[\ell]} \rangle|-\rangle. \tag{34}\]
Let us define the normalisation
\[\mathcal{N}^{\pm}_{[\ell]}=\sqrt{\langle\Phi^{\pm}_{[\ell]}|\Phi^{\pm}_{[\ell ]}\rangle}. \tag{35}\]
In terms of this, the recurrence relation for the normalised states is
\[|\widetilde{\Phi}^{\pm}_{[\ell+1]}\rangle=\cos\bigl(2^{\ell-1}\pi/M\bigr)\frac{\mathcal{N}^{\pm}_{[\ell]}}{\mathcal{N}^{\pm}_{[\ell+1]}}|\widetilde{\Phi}^{\pm}_{[\ell]}\rangle|+\rangle+i\sin\bigl(2^{\ell-1}\pi/M\bigr)\frac{\mathcal{N}^{\mp}_{[\ell]}}{\mathcal{N}^{\pm}_{[\ell+1]}}|\widetilde{\Phi}^{\mp}_{[\ell]}\rangle|-\rangle. \tag{36}\]
The normalisation can be determined using
\[\langle\phi^{-}_{[\ell]}|\phi^{+}_{[\ell]}\rangle=\prod_{j=1}^{\ell}\cos\bigl{(} \pi 2^{j-1}/M\bigr{)}, \tag{37}\]
which gives
\[(\mathcal{N}^{\pm}_{[\ell]})^{2}=\langle\Phi^{\pm}_{[\ell]}|\Phi^{\pm}_{[\ell] }\rangle=2\pm 2\prod_{j=1}^{\ell}\cos\bigl{(}\pi 2^{j-1}/M\bigr{)}. \tag{38}\]
That gives us the ratios of norms
\[\frac{\mathcal{N}^{\pm}_{[\ell]}}{\mathcal{N}^{\pm}_{[\ell+1]}}=P^{0,\pm},\qquad\frac{\mathcal{N}^{\mp}_{[\ell]}}{\mathcal{N}^{\pm}_{[\ell+1]}}=P^{1,\pm}. \tag{39}\]
Hence Eq. (36) is the form of the recurrence relation required.
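The recurrence can also be checked numerically, as in the following sketch (illustrative only; here qubit 1 is the first tensor factor, so that Eq. (25) appends qubit \(\ell+1\) as the last factor):

```python
import numpy as np
from functools import reduce

m = 4
M = 2**m + 1

def phi_j(j, sign):
    # Single-qubit factors of Eq. (17); sign = +1 or -1.
    a = sign * np.pi * 2.0**(j - 2) / M
    return np.array([np.exp(1j * a), np.exp(-1j * a)]) / np.sqrt(2)

def Phi(l, s):
    # Normalised |Phi^{±}_{[l]}> of Eqs. (20) and (24); s = +1 or -1.
    p = reduce(np.kron, [phi_j(j, +1) for j in range(1, l + 1)])
    q = reduce(np.kron, [phi_j(j, -1) for j in range(1, l + 1)])
    v = p + s * q
    return v / np.linalg.norm(v)

def mu(l, b, s):
    # Coefficients mu^{(b,±)}_l of Eqs. (26)-(28); b in {0, 1}.
    prod = lambda k: np.prod([np.cos(2.0**(j - 1) * np.pi / M)
                              for j in range(1, k + 1)])
    P = np.sqrt((1 + s * (-1)**b * prod(l)) / (1 + s * prod(l + 1)))
    ang = 2.0**(l - 1) * np.pi / M
    return np.cos(ang) * P if b == 0 else 1j * np.sin(ang) * P

ket_p = np.array([1, 1]) / np.sqrt(2)    # |+>
ket_m = np.array([1, -1]) / np.sqrt(2)   # |->
for l in range(1, m):
    for s in (+1, -1):
        lhs = Phi(l + 1, s)
        rhs = (mu(l, 0, s) * np.kron(Phi(l, s), ket_p)
               + mu(l, 1, s) * np.kron(Phi(l, -s), ket_m))
        assert np.allclose(lhs, rhs)
print("Eq. (25) verified for l = 1, ...,", m - 1)
```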
## IV Preparing optimum state with two-qubit unitaries
In the previous section, we showed that it is possible to construct an orthonormal basis for the state on qubits \(1\) to \(\ell\) as \(|\widetilde{\Phi}^{\pm}_{[\ell]}\rangle\), which satisfy the recursive relation
\[|\widetilde{\Phi}^{+}_{[\ell+1]}\rangle =\mu^{(0,+)}_{\ell}|\widetilde{\Phi}^{+}_{[\ell]}\rangle|+ \rangle+\mu^{(1,+)}_{\ell}|\widetilde{\Phi}^{-}_{[\ell]}\rangle|-\rangle,\] \[|\widetilde{\Phi}^{-}_{[\ell+1]}\rangle =\mu^{(0,-)}_{\ell}|\widetilde{\Phi}^{-}_{[\ell]}\rangle|+ \rangle+\mu^{(1,-)}_{\ell}|\widetilde{\Phi}^{+}_{[\ell]}\rangle|-\rangle\,, \tag{40}\]
for constants \(\mu^{(s,\pm)}_{\ell}\). Moreover, it is obvious from the definitions that \(|\widetilde{\Phi}^{+}_{[m]}\rangle=|\psi_{\rm opt}\rangle\). Therefore, the optimum state may be constructed in a recursive way from \(|\widetilde{\Phi}^{\pm}_{[m-1]}\rangle\) on qubits \(1\) to \(m-1\), which can be constructed from states \(|\widetilde{\Phi}^{\pm}_{[m-2]}\rangle\) on qubits \(1\) to \(m-2\), and so forth.
In order to prepare the optimal state, we can apply a stepwise procedure where the principle is to use a single qubit to flag which of \(|\Phi^{\pm}_{[\ell]}\rangle\) is to be prepared on the remaining qubits \(1\) to \(\ell\). We start from qubits \(m\) and \(m-1\) and work back to qubits \(1\) and \(2\). It is convenient to describe the operations as acting on qubits initialised as \(|+\rangle\). Then we initially perform an operation on qubits \(m\) and \(m-1\) that maps
\[U_{m-1}|+\rangle|+\rangle=\mu^{(0,+)}_{m-1}|+\rangle|+\rangle+\mu^{(1,+)}_{m-1 }|-\rangle|-\rangle. \tag{41}\]
The principle of this operation is that it corresponds to the recursion
\[|\psi_{\rm opt}\rangle=|\widetilde{\Phi}^{+}_{[m]}\rangle=\mu^{(0,+)}_{m-1}|\widetilde{\Phi}^{+}_{[m-1]}\rangle|+\rangle+\mu^{(1,+)}_{m-1}|\widetilde{\Phi}^{-}_{[m-1]}\rangle|-\rangle\,. \tag{42}\]
At this stage we only have two qubits, so the states \(|\widetilde{\Phi}^{\pm}_{[m-1]}\rangle\) on qubits \(1\) to \(m-1\) need to be represented by \(|\pm\rangle\) on a single qubit.
It is trivial to see that unitary \(U_{m-1}\) exists; it can explicitly be performed by rotating qubit \(m\) as
\[|+\rangle\mapsto\mu^{(0,+)}_{m-1}|+\rangle+\mu^{(1,+)}_{m-1}|-\rangle. \tag{43}\]
We can alternatively describe the operation as having the matrix form in the \(|\pm\rangle\) basis
\[U_{m-1}\equiv\begin{pmatrix}\mu^{(0,+)}_{m-1}&*&*&*\\ 0&*&*&*\\ 0&*&*&*\\ \mu^{(1,+)}_{m-1}&*&*&*\end{pmatrix}, \tag{44}\]
where \(*\) indicates entries where the value is unimportant.
We then perform \(U_{m-2}\) on qubits \(m-1\) and \(m-2\), down to \(U_{1}\) on qubits \(1\) and \(2\). The unitary \(U_{\ell}\) needs to map
\[U_{\ell}|+\rangle|+\rangle =\mu^{(0,+)}_{\ell}|+\rangle|+\rangle+\mu^{(1,+)}_{\ell}|-\rangle|-\rangle\] \[U_{\ell}|+\rangle|-\rangle =\mu^{(0,-)}_{\ell}|-\rangle|+\rangle+\mu^{(1,-)}_{\ell}|+\rangle|-\rangle. \tag{45}\]
This corresponds to the recursion given in Eq. (40), and the states \(|\pm\rangle\) on qubit \(\ell\) are being used to represent \(|\widetilde{\Phi}^{\pm}_{[\ell]}\rangle\) on qubits \(1\) to \(\ell\). This operation has the matrix entries
\[U_{\ell}\equiv\begin{pmatrix}\mu^{(0,+)}_{\ell}&0&*&*\\ 0&\mu^{(1,-)}_{\ell}&*&*\\ 0&\mu^{(0,-)}_{\ell}&*&*\\ \mu^{(1,+)}_{\ell}&0&*&*\end{pmatrix}. \tag{46}\]
This operation may be achieved in the following way. Define the single-qubit rotations \(V^{(0,+)}_{\ell}\) and \(V^{(0,-)}_{\ell}\) to act as
\[V^{(0,+)}_{\ell}|+\rangle =\mu^{(0,+)}_{\ell}|+\rangle+\mu^{(1,+)}_{\ell}|-\rangle\] \[V^{(0,-)}_{\ell}|+\rangle =\mu^{(0,-)}_{\ell}|-\rangle+\mu^{(1,-)}_{\ell}|+\rangle\,. \tag{47}\]
Then \(U_{\ell}\) corresponds to the controlled operation
\[U_{\ell}=V_{\ell}^{(0,+)}\otimes\left|+\right\rangle\left\langle+\right|+V_{\ell }^{(0,-)}\otimes\left|-\right\rangle\left\langle-\right|. \tag{48}\]
This method could be used for \(U_{m-1}\), though the method described above is simpler.
After performing this sequence of unitaries, we then need to map \(\left|\pm\right\rangle\) to \(\left|\widetilde{\Phi}_{[1]}^{\pm}\right\rangle\) on qubit \(1\). This is a simple single-qubit unitary operation, which can be combined with \(U_{1}\) to give the correct final state with a sequence of two-qubit unitary operations. Thus our recursive expression for the states gives us a sequence of two-qubit unitaries to create the optimal state.
To be more specific about what operation is needed,
\[\left|\phi_{[1]}^{+}\right\rangle=\frac{e^{i\pi/2M}|0\rangle+e^{-i\pi/2M}|1 \rangle}{\sqrt{2}}, \tag{49}\]
so that
\[|\Phi_{[1]}^{+}\rangle =\sqrt{2}\left(\cos(\pi/2M)|0\rangle+\cos(\pi/2M)|1\rangle\right), \tag{50}\] \[|\Phi_{[1]}^{-}\rangle =i\sqrt{2}\left(\sin(\pi/2M)|0\rangle-\sin(\pi/2M)|1\rangle \right). \tag{51}\]
That gives the normalised states
\[|\widetilde{\Phi}_{[1]}^{+}\rangle=|+\rangle,\qquad|\widetilde{\Phi}_{[1]}^{ -}\rangle=i|-\rangle. \tag{52}\]
Therefore the operation needed is an \(i\) phase shift on \(\left|-\right\rangle\).
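As a check of the full construction, the following sketch (illustrative only; the unspecified columns marked \(*\) in Eq. (46) are completed here by one explicit orthonormal choice, with no particular gate decomposition implied) applies \(U_{m-1},\dots,U_{1}\) to qubits initialised in \(|+\rangle\), applies the final \(i\) phase shift on qubit 1, and confirms that the optimal state of Eq. (1) is obtained:

```python
import numpy as np

m = 4
M = 2**m + 1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def mu(l, b, s):
    # Coefficients mu^{(b,±)}_l of Eqs. (26)-(28); b in {0, 1}, s = +1 or -1.
    prod = lambda k: np.prod([np.cos(2.0**(j - 1) * np.pi / M)
                              for j in range(1, k + 1)])
    P = np.sqrt((1 + s * (-1)**b * prod(l)) / (1 + s * prod(l + 1)))
    ang = 2.0**(l - 1) * np.pi / M
    return np.cos(ang) * P if b == 0 else 1j * np.sin(ang) * P

def U(l):
    # Two-qubit gate of Eqs. (45)-(46) on (qubit l+1, qubit l); A is written
    # in the |++>, |+->, |-+>, |--> basis, first symbol being qubit l+1.
    A = np.zeros((4, 4), dtype=complex)
    A[0, 0], A[3, 0] = mu(l, 0, +1), mu(l, 1, +1)   # input |+>_{l+1}|+>_l
    A[1, 2], A[2, 2] = mu(l, 0, -1), mu(l, 1, -1)   # input |->_{l+1}|+>_l
    A[0, 1], A[3, 1] = np.conj(A[3, 0]), -np.conj(A[0, 0])   # unused inputs,
    A[1, 3], A[2, 3] = np.conj(A[2, 2]), -np.conj(A[1, 2])   # completed to a unitary
    T = np.kron(H, H)          # converts between |0/1> and |+/-> bases
    return T @ A @ T

state = np.full(2**m, 2.0**(-m / 2), dtype=complex)   # all m qubits in |+>
for l in range(m - 1, 0, -1):                          # U_{m-1}, ..., U_1
    op = np.kron(np.eye(2**(m - l - 1)),
                 np.kron(U(l), np.eye(2**(l - 1))))
    state = op @ state
G = H @ np.diag([1, 1j]) @ H                           # i phase on |->, Eq. (52)
state = np.kron(np.eye(2**(m - 1)), G) @ state

n = np.arange(2**m)
target = np.sqrt(2.0 / M) * np.sin(np.pi * (n + 1) / M)   # Eq. (1), N = 2^m - 1
print(np.allclose(state, target))                          # True
```

Only the two columns acted on by \(|+\rangle\) inputs matter, since qubit \(\ell\) always enters in \(|+\rangle\); the completion of the remaining two columns is arbitrary.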
## V Conclusion
We have shown how to create the optimal state for phase estimation, in the sense of minimising the Holevo variance, using a sequence of two-qubit operations. When combining this sequential process with the semiclassical quantum Fourier transform, we can entangle new qubits after measuring control qubits in such a way that only two control qubits are needed at once. This means that the qubit that is measured can be reset and used as the new qubit to be entangled, minimising the need for ancilla qubits.
In quantum algorithms where phase estimation is needed with a small number of logical qubits this is ideal. Previously one either used many entangled control qubits, increasing the size of the quantum computer needed, or a single control qubit, which significantly increases the error. In our method the number of control qubits is only increased by \(1\), while giving the minimal error. Here the quantity being exactly minimised is the Holevo variance, which is very close to the mean-square error (MSE) for sharply peaked distributions. If one were interested in minimising MSE, then these states give the same leading-order term for MSE as the minimum MSE [19], so these states are still suitable.
Our method of preparing the state, although it has been derived for the specific case of the optimal state for minimising Holevo variance, could also be applied to other states that are a superposition of two unentangled states. The crucial feature is that the Schmidt number is \(2\) for any bipartite split across the qubits. One could also consider states with larger Schmidt number, and use a larger number of qubits as controls. That could potentially be used for states that are optimal for minimising other measures of error. For example, one could consider methods of approximating Kaiser windows or the digital prolate spheroidal sequence, as is suitable for optimising confidence intervals [20].
Another interesting question is whether this procedure could be demonstrated with photons. A scheme with optimal phase states for \(N=2\) using two photons was demonstrated in [21]. With the preparation scheme we have outlined, it would potentially be possible to demonstrate these states with larger \(N\), though it would require entangling operations that might require nonlinear optical elements.
###### Acknowledgements.
DWB worked on this project under a sponsored research agreement with Google Quantum AI. DWB is also supported by Australian Research Council Discovery Projects DP190102633, DP210101367, and DP220101602.
## Author Declarations
### Conflict of interest
The authors have no conflicts to disclose.
## Data Availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
|
2306.08385 | NodeFormer: A Scalable Graph Structure Learning Transformer for Node
Classification | Graph neural networks have been extensively studied for learning with
inter-connected data. Despite this, recent evidence has revealed GNNs'
deficiencies related to over-squashing, heterophily, handling long-range
dependencies, edge incompleteness and particularly, the absence of graphs
altogether. While a plausible solution is to learn new adaptive topology for
message passing, issues concerning quadratic complexity hinder simultaneous
guarantees for scalability and precision in large networks. In this paper, we
introduce a novel all-pair message passing scheme for efficiently propagating
node signals between arbitrary nodes, as an important building block for a
pioneering Transformer-style network for node classification on large graphs,
dubbed as \textsc{NodeFormer}. Specifically, the efficient computation is
enabled by a kernelized Gumbel-Softmax operator that reduces the algorithmic
complexity to linearity w.r.t. node numbers for learning latent graph
structures from large, potentially fully-connected graphs in a differentiable
manner. We also provide accompanying theory as justification for our design.
Extensive experiments demonstrate the promising efficacy of the method in
various tasks including node classification on graphs (with up to 2M nodes) and
graph-enhanced applications (e.g., image classification) where input graphs are
missing. | Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, Junchi Yan | 2023-06-14T09:21:15Z | http://arxiv.org/abs/2306.08385v1 | # NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
###### Abstract
Graph neural networks have been extensively studied for learning with interconnected data. Despite this, recent evidence has revealed GNNs' deficiencies related to over-squashing, heterophily, handling long-range dependencies, edge incompleteness and particularly, the absence of graphs altogether. While a plausible solution is to learn new adaptive topology for message passing, issues concerning quadratic complexity hinder simultaneous guarantees for scalability and precision in large networks. In this paper, we introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes, as an important building block for a pioneering Transformer-style network for node classification on large graphs, dubbed as NodeFormer. Specifically, the efficient computation is enabled by a kernelized Gumbel-Softmax operator that reduces the algorithmic complexity to linearity w.r.t. node numbers for learning latent graph structures from large, potentially fully-connected graphs in a differentiable manner. We also provide accompanying theory as justification for our design. Extensive experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs (with up to 2M nodes) and graph-enhanced applications (e.g., image classification) where input graphs are missing. The codes are available at [https://github.com/qitianwu/NodeFormer](https://github.com/qitianwu/NodeFormer).
## 1 Introduction
Relational structure inter-connecting instance nodes as a graph is ubiquitous from social domains (e.g., citation networks) to natural science (protein-protein interaction), where graph neural networks (GNNs) [32; 19; 14; 36] have shown promising power for leveraging such data dependence as geometric priors. However, there is increasing evidence challenging the core GNN hypothesis that propagating information along observed graph structures will necessarily produce better node-level representations for prediction on each individual instance node. Conflicts with this premise lead to commonly identified deficiencies with GNN message-passing rules w.r.t. heterophily [53], over-squashing [2], long-range dependencies [8], and graph incompleteness [11], etc.
Moreover, in graph-enhanced applications, e.g., text classification [46], vision navigation [12], physics simulation [30], etc., graph structures are often unavailable though individual instances are strongly inter-correlated. A common practice is to artificially construct a graph via some predefined rules (e.g., \(k\)-NN), which is agnostic to downstream tasks and may presumably cause the misspecification of GNNs' inductive bias on input geometry (induced by the local feature propagation design).
Natural solutions resort to organically combining learning optimal graph topology with message passing. However, one critical difficulty is the _scalability_ issue with \(O(N^{2})\) (where \(N\) denotes
#nodes) computational complexity, which is prohibitive for large networks (with \(10K\sim 1M\) nodes). Some existing approaches harness neighbor sampling [51], anchor-based adjacency surrogates [4] and hashing schemes [43] to reduce the overhead; however, these strategies may sacrifice model precision and still struggle to handle graphs with million-level nodes. Another obstacle lies in the increased degrees of freedom due to at least an \(N\times N\) all-pair similarity matrix, which may result in large combinatorial search space and vulnerability to over-fitting.
In this work, we introduce a novel all-pair message passing scheme that can scale to large systems without compromising performance. We develop a kernelized Gumbel-Softmax operator that seamlessly synthesizes _random feature map_[27] and approximated sampling strategy [16], for distilling latent structures among all the instance nodes and yielding moderate gradients through differentiable optimization. Though such a combination of two operations involving randomness could potentially result in mutual distortion, we theoretically prove that the new operator can still guarantee a well-posed approximation for concrete variables (discrete structures) with the error bounded by feature dimensions. Furthermore, such a design can reduce the algorithmic complexity of learning new topology per layer to \(O(N)\) by avoiding explicit computation for the cumbersome all-pair similarity.
The proposed module opens the door to a new class of graph networks, i.e., NodeFormer (_Scalable Transformers for Node Classification_), that is capable of efficiently propagating messages between arbitrary node pairs in flexible layer-specific latent graphs. And to accommodate input graphs (if any), we devise two simple techniques: a relational bias and an edge-level regularization loss, as guidance for properly learning adaptive structures. We evaluate our approach on diverse node classification tasks ranging from citation networks to images/texts. The results show its promising power for tackling heterophily, long-range dependencies, large-scale graphs, graph incompleteness and the absence of input graphs. The contributions of this paper are summarized as follows:
\(\bullet\) We develop a kernelized Gumbel-Softmax operator which is proven to serve as a well-posed approximation for concrete variables, particularly the discrete latent structure among data points. The new module can reduce the algorithmic complexity for learning new message-passing topology from quadratic to linear w.r.t. node numbers, without sacrificing the precision. This serves as a pioneering model that successfully scales graph structure learning to large graphs with million-level nodes.
\(\bullet\) We further propose NodeFormer, a new class of graph networks with layer-wise message passing as operated over latent graphs potentially connecting all nodes. The latter are optimized in an end-to-end differentiable fashion through a new objective that essentially pursues sampling optimal topology from a posterior conditioned on node features and labels. To our knowledge, NodeFormer is the first Transformer model that scales all-pair message passing to large node classification graphs.
\(\bullet\) We demonstrate the model's efficacy by extensive experiments over a diverse set of datasets, including node classification benchmarks and image/text classification, where significant improvement over strong GNN models and SOTA structure learning methods is shown. Besides, it successfully scales to large graph datasets with up to 2M nodes where prior arts failed, and reduces the time/space consumption of the competitors by up to 93.1%/80.6% on moderate sized datasets.
## 2 Related Works
**Graph Neural Networks**. Building expressive GNNs is a fundamental problem in learning over graph data. With Graph Attention Networks (GAT) [36] as an early attempt, there are many follow-up works, e.g., [22; 42], considering weighting the edges in input graph for enhancing the expressiveness. Other studies, e.g., [28; 52] focus on sparsifying input structures to promote robust representations. There are also quite a few approaches that propose scalable GNNs through, e.g., subgraph sampling [48], linear feature mapping [39], and channel-wise transformation [49], etc. However, these works cannot learn new edges out of the scope of input geometry, which may limit the model's receptive fields within local neighbors and neglect global information.
**Graph Structure Learning.** Going beyond observed topology, graph structure learning targets learning a new graph for message passing among all the instances [54]. One line of work is similarity-driven where the confidence of edges are reflected by some similarity functions between node pairs, e.g., Gaussian kernels [43], cosine similarity [4], attention networks [17], non-linear MLP [7] etc. Another line of work optimizes the adjacency matrix. Due to the increased optimization difficulties, some sophisticated training methods are introduced, such as bi-level optimization [11], variational
approaches [10; 20], Bayesian inference [51] and projected gradient descent [18]. To push further the limits of structure learning, this paper proposes a new model NodeFormer (for enabling scalable node-level Transformers) whose merits are highlighted via a high-level comparison in Table 1. In particular, NodeFormer enables efficient structure learning in each layer, does not require input graphs and successfully scales to graphs with 2M nodes.
**Node-Level v.s. Graph-Level Prediction.** We emphasize upfront that our focus is on _node-level_ prediction tasks involving a single large graph such that scalability is paramount, especially if we are to consider arbitrary relationships across _all_ nodes (each node is an instance with label and one can treat all the nodes non-i.i.d. generated due to the inter-dependency) for structure-learning purposes. Critically though, this scenario is quite distinct from _graph-level_ classification tasks whereby each i.i.d. instance is itself a small graph and fully connecting nodes _within_ each graph is computationally inexpensive. While this latter scenario has been explored in the context of graph structure learning [38] and all-pair message passing design, e.g., graph Transformers [9], existing efforts do not scale to the large graphs endemic to node-level prediction.
## 3 NodeFormer: A Transformer Graph Network at Scale
Let \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) denote a graph with \(\mathcal{N}\) a node set (\(|\mathcal{N}|=N\)) and \(\mathcal{E}\subseteq\mathcal{N}\times\mathcal{N}\) an edge set (\(|\mathcal{E}|=E\)). Each node \(u\in\mathcal{N}\) is assigned with node features \(\mathbf{x}_{u}\in\mathbb{R}^{D}\) and a label \(y_{u}\). We define an adjacency matrix \(\mathbf{A}=\{a_{uv}\}\in\{0,1\}^{N\times N}\) where \(a_{uv}=1\) if edge \((u,v)\in\mathcal{E}\) and \(a_{uv}=0\) otherwise. Without loss of generality, \(\mathcal{E}\) could be an empty set in case of no input structure. There are two common settings: transductive learning, where testing nodes are within the graph used for training, and inductive learning which handles new unseen nodes out of the training graph. The target is to learn a function for node-level prediction, i.e., estimate labels for unlabeled or new nodes in the graph.
**General Model and Key Challenges.** We start with the observation that the input structures may not be the ideal ones for propagating signals among nodes and instead there exist certain latent structures that could facilitate learning better node representations. We thus consider the updating rule
\[\tilde{\mathbf{A}}^{(l)}=g(\mathbf{A},\mathbf{Z}^{(l)};\omega), \quad\mathbf{Z}^{(l+1)}=h(\tilde{\mathbf{A}}^{(l)},\mathbf{A},\mathbf{Z}^{(l) };\theta), \tag{1}\]
where \(\mathbf{Z}^{(l)}=\{\mathbf{z}^{(l)}_{u}\}_{u\in\mathcal{N}}\) and \(\tilde{\mathbf{A}}^{(l)}=\{\tilde{a}^{(l)}_{uv}\}_{u,v\in\mathcal{N}}\) denotes the node representations and the estimated latent graph of the \(l\)-th layer, respectively, and \(g\), \(h\) are both differentiable functions aiming at 1) structure estimation for a layer-specific latent graph \(\tilde{\mathbf{A}}^{(l)}\) based on node representations and 2) feature propagation for updating node representations, respectively. The model defined by Eqn. 1 follows the spirit of Transformers [35] (where in particular \(\tilde{\mathbf{A}}^{(l)}\) can be seen as an attentive graph) that potentially enables message passing between any node pair in each layer, which, however, poses two _challenges_:
\(\bullet\)**(Scalability)**: How to reduce the prohibitive quadratic complexity for learning new graphs?
\(\bullet\)**(Differentiability)**: How to enable end-to-end differentiable optimization for discrete structures?
Notice that the first challenge is non-trivial in node-level prediction tasks (the focus of our paper), since the latent graphs could potentially connect _all the instance nodes_ (e.g., from thousands to millions, depending on dataset sizes), which is fairly hard to guarantee both precision and scalability.
| **Models** | **Parameterization** | **Expressivity** | **Input Graphs** | **Inductive** | **Complexity** | **Largest Demo** |
| --- | --- | --- | --- | --- | --- | --- |
| LDS-GNN [11] | Adjacency | Fixed | Required | No | \(O(N^{2})\) | 0.01M |
| ProGNN [18] | Adjacency | Fixed | Required | No | \(O(N^{2})\) | 0.02M |
| VGGN [10] | Adjacency | Fixed | Required | No | \(O(N^{2})\) | 0.02M |
| BGCN [51] | Adjacency | Fixed | Required | No | \(O(N^{2})\) | 0.02M |
| GLCN [17] | Function | Fixed | Not necessary | Yes | \(O(N^{2})\) | 0.02M |
| IDGL [4] | Function | Fixed | Required | Yes | \(O(N^{2})\) or \(O(Nm)^{\dagger}\) | 0.1M |
| NodeFormer (Ours) | Function | Layer-wise | Not necessary | Yes | \(O(N)\) or \(O(E)\) | 2M |

Table 1: Comparison of popular graph structure learning approaches for _node-level tasks_ where, in particular, the graph connects all instance nodes and one's target is prediction on each individual node. For _parameterization_, 'Function' means learning through a functional mapping and 'Adjacency' means directly optimizing the graph adjacency. For _expressivity_, 'Fixed' means learning one graph shared by all propagation layers and 'Layer-wise' means learning graph structures per layer. The _largest demo_ means the largest # nodes of the datasets used. \(\dagger\) \(m\) denotes # anchors (i.e., a subset of nodes).
### Efficient Learning Discrete Structures
We describe our new message-passing scheme with an efficient kernelized Gumbel-Softmax operator to resolve the aforementioned challenges. We assume \(\mathbf{z}_{u}^{(0)}=\mathbf{x}_{u}\) as the initial node representation.
**Kernelized Message Passing.** We define a full-graph attentive network that estimates latent interactions among instance nodes and enables corresponding densely-connected message passing:
\[\tilde{a}_{uv}^{(l)}=\frac{\exp((W_{Q}^{(l)}\mathbf{z}_{u}^{(l)})^{\top}(W_{K}^ {(l)}\mathbf{z}_{v}^{(l)}))}{\sum_{w=1}^{N}\exp((W_{Q}^{(l)}\mathbf{z}_{u}^{(l) })^{\top}(W_{K}^{(l)}\mathbf{z}_{w}^{(l)}))},\quad\mathbf{z}_{u}^{(l+1)}=\sum_ {v=1}^{N}\tilde{a}_{uv}^{(l)}\cdot(W_{V}^{(l)}\mathbf{z}_{v}^{(l)}), \tag{2}\]
where \(W_{Q}^{(l)}\), \(W_{K}^{(l)}\) and \(W_{V}^{(l)}\) are learnable parameters in the \(l\)-th layer. We omit the non-linear activation (after aggregation) for brevity. The updating for \(N\) nodes in one layer using Eqn. 2 requires prohibitive \(\mathcal{O}(N^{2})\) complexity. Also, given large \(N\), the normalization in the denominator would shrink attention weights to zero and lead to gradient vanishing. We call this problem _over-normalizing_.
To accelerate the full-graph model, we observe that the _dot-then-exponentiate_ operation in Eqn. 2 can be converted into a pairwise similarity function:
\[\mathbf{z}_{u}^{(l+1)}=\sum_{v=1}^{N}\frac{\kappa(W_{Q}^{(l)}\mathbf{z}_{u}^{ (l)},W_{K}^{(l)}\mathbf{z}_{v}^{(l)})}{\sum_{w=1}^{N}\kappa(W_{Q}^{(l)} \mathbf{z}_{u}^{(l)},W_{K}^{(l)}\mathbf{z}_{w}^{(l)})}\cdot(W_{V}^{(l)} \mathbf{z}_{v}^{(l)}), \tag{3}\]
where \(\kappa(\cdot,\cdot):\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) is a positive-definite kernel measuring the pairwise similarity. The kernel function can be further approximated by random features (RF) [27], which serve as an unbiased estimation via \(\kappa(\mathbf{a},\mathbf{b})=\langle\Phi(\mathbf{a}),\Phi(\mathbf{b})\rangle_{\mathcal{V}}\approx\phi(\mathbf{a})^{\top}\phi(\mathbf{b})\), where the first equation is by Mercer's theorem with \(\Phi:\mathbb{R}^{d}\rightarrow\mathcal{V}\) a basis function and \(\mathcal{V}\) a high-dimensional vector space, and \(\phi(\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{m}\) is a low-dimensional feature map with random transformation. There are many potential choices for \(\phi\), e.g., Positive Random Features (PRF) [6]
\[\phi(\mathbf{x})=\frac{\exp{(-\|\mathbf{x}\|_{2}^{2}/2)}}{\sqrt{m}}[\exp(\mathbf{w}_{1}^{\top}\mathbf{x}),\cdots,\exp(\mathbf{w}_{m}^{\top}\mathbf{x})], \tag{4}\]
where \(\mathbf{w}_{k}\sim\mathcal{N}(0,I_{d})\) is i.i.d. sampled random transformation. The RF converts dot-then-exponentiate operation into inner-product in vector space, which enables us to re-write Eqn. 3 (assuming \(\mathbf{q}_{u}=W_{Q}^{(l)}\mathbf{z}_{u}^{(l)}\), \(\mathbf{k}_{u}=W_{K}^{(l)}\mathbf{z}_{u}^{(l)}\) and \(\mathbf{v}_{u}=W_{V}^{(l)}\mathbf{z}_{u}^{(l)}\) for simplicity):
\[\mathbf{z}_{u}^{(l+1)}=\sum_{v=1}^{N}\frac{\phi(\mathbf{q}_{u})^{\top}\phi( \mathbf{k}_{v})}{\sum_{w=1}^{N}\phi(\mathbf{q}_{u})^{\top}\phi(\mathbf{k}_{w })}\cdot\mathbf{v}_{v}=\frac{\phi(\mathbf{q}_{u})^{\top}\sum_{v=1}^{N}\phi( \mathbf{k}_{v})\cdot\mathbf{v}_{v}^{\top}}{\phi(\mathbf{q}_{u})^{\top}\sum_{w =1}^{N}\phi(\mathbf{k}_{w})}. \tag{5}\]
The key advantage of Eqn. 5 is that the two summations are shared by every \(u\), so that one only needs to compute them once and re-use them for all nodes. Such a property enables \(\mathcal{O}(N)\) computational complexity for full-graph message passing, which paves the way for learning graph structures among large-scale instances. Moreover, one can notice that Eqn. 5 avoids computing the \(N\times N\) similarity matrix, i.e., \(\{\tilde{a}_{uv}^{(l)}\}_{N\times N}\), required by Eqn. 2, thus also reducing the learning difficulties.
Nevertheless, Eqn. 5 still suffers from the aforementioned over-normalizing issue. The crux is that the message passing operates on a weighted fully-connected graph where, in fact, only some of the edges are important. Also, such a deterministic way of aggregating features over all the instances may increase the risk of over-fitting, especially when \(N\) is large. We next resolve these issues by distilling a sparse structure from the fully-connected graph.
**Differentiable Stochastic Structure Learning.** The difficulty lies in how to enable differentiable optimization for discrete graph structures. The weight \(\tilde{a}_{uv}^{(l)}\) given by Eqn. 2 could be used to define a categorical distribution for generating latent edges from distribution \(\text{Cat}(\boldsymbol{\pi}_{u}^{(l)})\) where \(\boldsymbol{\pi}_{u}^{(l)}=\{\pi_{uv}^{(l)}\}_{v=1}^{N}\) and \(\pi_{uv}^{(l)}=p(v|u)=\tilde{a}_{uv}^{(l)}\). Then in principle, we can sample over the categorical distribution multiple times for each node to obtain its neighbors. However, the sampling process would introduce discontinuity and hinder back-propagation. Fortunately, we notice that Eqn. 3 can be modified to incorporate the reparametrization trick [16] to allow differentiable learning:
\[\mathbf{z}_{u}^{(l+1)}=\sum_{v=1}^{N}\frac{\exp((\mathbf{q}_{u}^{\top}\mathbf{k}_ {v}+g_{v})/\tau)}{\sum_{w=1}^{N}\exp((\mathbf{q}_{u}^{\top}\mathbf{k}_{w}+g_{ w})/\tau)}\cdot\mathbf{v}_{v}=\sum_{v=1}^{N}\frac{\kappa(\mathbf{q}_{u}/\sqrt{ \tau},\mathbf{k}_{v}/\sqrt{\tau})e^{g_{v}/\tau}}{\sum_{w=1}^{N}\kappa(\mathbf{q }_{u}/\sqrt{\tau},\mathbf{k}_{w}/\sqrt{\tau})e^{g_{w}/\tau}}\cdot\mathbf{v}_{v}, \tag{6}\]
where \(g_{u}\) is i.i.d. sampled from a Gumbel distribution and \(\tau\) is a temperature coefficient. Eqn. 6 is a continuous relaxation of sampling one neighbor node for \(u\) over \(\text{Cat}(\mathbf{\pi}_{u}^{(l)})\) and \(\tau\) controls the closeness to hard discrete samples [23]. Following similar reasoning as Eqn. 3 and 5, we can yield
\[\mathbf{z}_{u}^{(l+1)}\approx\sum_{v=1}^{N}\frac{\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\phi(\mathbf{k}_{v}/\sqrt{\tau})e^{g_{v}/\tau}}{\sum_{w=1}^{N}\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\phi(\mathbf{k}_{w}/\sqrt{\tau})e^{g_{w}/\tau}}\cdot\mathbf{v}_{v}=\frac{\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\sum_{v=1}^{N}e^{g_{v}/\tau}\phi(\mathbf{k}_{v}/\sqrt{\tau})\cdot\mathbf{v}_{v}^{\top}}{\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\sum_{w=1}^{N}e^{g_{w}/\tau}\phi(\mathbf{k}_{w}/\sqrt{\tau})}. \tag{7}\]
Eqn. 7 achieves message passing over a sampled latent graph (where we only sample once for each node) and still guarantees linear complexity as Eqn. 5. In practice, we can sample \(K\) times (e.g., \(K=5\)) for each node and take an average of the aggregated results. Due to space limit, we defer more details concerning the differentiable sampling-based message passing to Appendix A. Besides, in Fig. 5 and Alg. 1 of Appendix A, we present an illustration for node embedding updating in each layer, from a matrix view that is practically used for implementation.
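To make Eqn. 4 and Eqn. 7 concrete, a minimal numpy sketch of one layer of the kernelized Gumbel-Softmax message passing is given below (our own illustration, not the official implementation; the dimensions, names and the toy input are arbitrary). The two node-wise sums are computed once per Gumbel sample and shared by all query nodes, giving \(\mathcal{O}(N)\) cost:

```python
import numpy as np

rng = np.random.default_rng(0)

def prf(x, W):
    # Positive random features of Eqn. 4: x is [N, d], W is [m, d] with
    # i.i.d. standard normal rows w_k.
    return np.exp(x @ W.T - 0.5 * np.sum(x**2, axis=1, keepdims=True)) \
           / np.sqrt(W.shape[0])

def nodeformer_layer(Q, K, V, tau=0.25, n_samples=5, m_feat=64):
    # All-pair message passing of Eqn. 7: O(N) per Gumbel sample, since the
    # sums over v and w are computed once and reused for every u.
    W = rng.standard_normal((m_feat, Q.shape[1]))
    phi_q = prf(Q / np.sqrt(tau), W)                    # [N, m]
    phi_k = prf(K / np.sqrt(tau), W)                    # [N, m]
    out = np.zeros_like(V)
    for _ in range(n_samples):                          # average K samples
        g = rng.gumbel(size=(K.shape[0], 1))            # one g_v per node
        w = np.exp(g / tau) * phi_k                     # [N, m]
        num = phi_q @ (w.T @ V)                         # shared sum over v
        den = phi_q @ w.sum(axis=0)                     # shared sum over w
        out += num / den[:, None]
    return out / n_samples

N, d = 1000, 16
Z = rng.standard_normal((N, d))
out = nodeformer_layer(Z, Z, Z)    # updated node representations, [N, d]
```

Setting the Gumbel samples \(g_{v}=0\) and \(\tau=1\) recovers the deterministic kernelized message passing of Eqn. 5.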
### Well-posedness of the Kernelized Gumbel-Softmax Operator
One reasonable concern for Eqn. 7 is whether the RF approximation for kernel functions maintains the well-posedness of Gumbel approximation for the target discrete variables. As a justification for the new message-passing function, we next answer two theoretical questions: 1) How is the approximation capability of RF for the original dot-then-exponentiate operation with Gumbel variables in Eqn. 6? 2) Does Eqn. 7 still guarantee a continuous relaxation of the categorical distributions? We formulate the results as follows and defer proofs to Appendix B.
**Theorem 1** (Approximation Error for Softmax-Kernel).: _Assume \(\|\mathbf{q}_{u}\|_{2}\) and \(\|\mathbf{k}_{v}\|_{2}\) are bounded by \(r\), then with probability at least \(1-\epsilon\), the gap \(\Delta=\left|\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\phi(\mathbf{k}_{v}/\sqrt{\tau})-\kappa(\mathbf{q}_{u}/\sqrt{\tau},\mathbf{k}_{v}/\sqrt{\tau})\right|\), where \(\phi\) is defined by Eqn. 4, will be bounded by \(\mathcal{O}\left(\sqrt{\frac{\exp(6r/\tau)}{m\epsilon}}\right)\)._
We can see that the error bound of RF for approximating the original softmax-kernel function depends on both the dimension of the feature map \(\phi\) and the temperature \(\tau\). Notably, the error bound is independent of the node number \(N\), which implies that the approximation ability is insensitive to dataset sizes.
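This is easy to probe empirically. The following sketch (our own illustration; the radius \(r=1\), dimensions and sample counts are arbitrary) Monte-Carlo estimates the gap \(\Delta\) between the PRF estimate and the exact softmax kernel as the feature dimension \(m\) grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def prf(x, W):
    # Positive random features of Eqn. 4 for a single vector x.
    return np.exp(W @ x - 0.5 * x @ x) / np.sqrt(W.shape[0])

d, tau = 16, 0.25
q = rng.standard_normal(d); q /= np.linalg.norm(q)   # ||q|| = r = 1
k = rng.standard_normal(d); k /= np.linalg.norm(k)
exact = np.exp(q @ k / tau)                          # softmax kernel value
for m_feat in (64, 256, 1024, 4096):
    gaps = []
    for _ in range(200):
        W = rng.standard_normal((m_feat, d))
        est = prf(q / np.sqrt(tau), W) @ prf(k / np.sqrt(tau), W)
        gaps.append(abs(est - exact))
    print(m_feat, np.mean(gaps))    # gap shrinks roughly as 1/sqrt(m)
```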
The second question is non-trivial since Eqn. 7 involves the randomness of the Gumbel variables and of the random transformation in \(\phi\), which _cannot_ be decoupled. We define \(c_{uv}=\frac{\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\phi(\mathbf{k}_{v}/\sqrt{\tau})e^{g_{v}/\tau}}{\sum_{w=1}^{N}\phi(\mathbf{q}_{u}/\sqrt{\tau})^{\top}\phi(\mathbf{k}_{w}/\sqrt{\tau})e^{g_{w}/\tau}}\) as the result from the kernelized Gumbel-Softmax and \(\mathbf{c}_{u}=\{c_{uv}\}_{v=1}^{N}\) denotes the sampled edge vector for node \(u\). We can arrive at the result as follows.
**Theorem 2** (Property of Kernelized Gumbel-Softmax Random Variables).: _Suppose \(m\) is sufficiently large, we have the convergence property for the kernelized Gumbel-Softmax operator_
\[\lim_{\tau\to 0}\mathbb{P}(c_{uv}>c_{uv^{\prime}},\forall v^{\prime}\neq v)= \frac{\exp(\mathbf{q}_{u}^{\top}\mathbf{k}_{v})}{\sum_{w=1}^{N}\exp(\mathbf{q} _{u}^{\top}\mathbf{k}_{w})},\quad\lim_{\tau\to 0}\mathbb{P}(c_{uv}=1)= \frac{\exp(\mathbf{q}_{u}^{\top}\mathbf{k}_{v})}{\sum_{w=1}^{N}\exp(\mathbf{q} _{u}^{\top}\mathbf{k}_{w})}.\]
It shows that when i) the dimension of feature map is large enough and ii) the temperature goes to zero, the distribution from which latent structures are sampled would converge to the original categorical distribution.
_Remark_.: The two theorems imply a trade-off between RF approximation and Gumbel-Softmax approximation w.r.t. the choice of \(\tau\). A large \(\tau\) would help to reduce the burden on kernel dimension \(m\), and namely, small \(\tau\) would require a very large \(m\) to guarantee enough RF approximation precision. On the other hand, if \(\tau\) is too large, the weight on each edge will converge to \(\frac{1}{N}\), i.e., the model nearly degrades to mean pooling, while a small \(\tau\) would endow the kernelized Gumbel-Softmax with better approximation to the categorical distribution. Empirical studies on this are presented in Appendix E.
### Input Structures as Relational Bias
Eqn. 7 does not leverage any information from observed geometry which, however, is often recognized important for modeling physically-structured data [3]. We therefore accommodate input topology (if any) as relational bias via modifying the attention weight as \(\tilde{a}_{uv}^{(l)}\leftarrow\tilde{a}_{uv}^{(l)}+\mathbb{I}[a_{uv}=1]\sigma(b ^{(l)})\)
where \(b^{(l)}\) is a learnable scalar as relational bias for any adjacent node pairs \((u,v)\) and \(\sigma\) is a certain (bounded) activation function like sigmoid. The relational bias aims at assigning adjacent nodes in \(\mathcal{G}\) with proper weights, and the node representations could be accordingly updated by
\[\mathbf{z}_{u}^{(l+1)}\leftarrow\mathbf{z}_{u}^{(l+1)}+\sum_{v,a_{uv}=1}\sigma (b^{(l)})\cdot\mathbf{v}_{v}. \tag{8}\]
Eqn. 8 increases the algorithmic complexity for message passing to \(\mathcal{O}(N+E)\), albeit within the same order-of-magnitude as common GNNs operating on input graphs. Also, one can consider higher-order adjacency as relational bias for better expressiveness at some expense of efficiency, as similarly done by [1]. We summarize the feed-forward computation of NodeFormer in Alg. 1.
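A sketch of the relational-bias update of Eqn. 8 is given below (our own illustration; representing \(\mathcal{E}\) as a \(2\times E\) edge-index array is an assumption). The scatter-add over observed edges costs \(\mathcal{O}(E)\):

```python
import numpy as np

def add_relational_bias(z_new, V, edge_index, b):
    # Eqn. 8: each node u additionally aggregates sigmoid(b) * v_v over its
    # observed neighbours v; edge_index is a [2, E] array of (u, v) pairs
    # with a_uv = 1.
    u, v = edge_index
    z_new = z_new.copy()
    np.add.at(z_new, u, (1.0 / (1.0 + np.exp(-b))) * V[v])  # scatter-add
    return z_new
```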
### Learning Objective
Given training labels \(\mathbf{Y}_{tr}=\{y_{u}\}_{u\in\mathcal{N}_{tr}}\), where \(\mathcal{N}_{tr}\) denotes the set of labeled nodes, the common practice is to maximize the observed data log-likelihood which yields a supervised loss (with \(C\) classes)
\[\mathcal{L}_{s}(\mathbf{Y}_{tr},\hat{\mathbf{Y}}_{tr})=-\frac{1}{N_{tr}}\sum_{u\in\mathcal{N}_{tr}}\sum_{c=1}^{C}\mathbb{I}[y_{u}=c]\log\hat{y}_{u,c}, \tag{9}\]
where \(\mathbb{I}[\cdot]\) is an indicator function. However, this alone may not suffice to generalize well, because graph topology learning increases the degrees of freedom while the number of training labels is comparatively limited. Therefore, we additionally introduce an edge-level regularization:
\[\mathcal{L}_{e}(\mathbf{A},\tilde{\mathbf{A}})=-\frac{1}{NL}\sum_{l=1}^{L} \sum_{(u,v)\in\mathcal{E}}\frac{1}{d_{u}}\log\pi_{uv}^{(l)}, \tag{10}\]
where \(d_{u}\) denotes the in-degree of node \(u\) and \(\pi_{uv}^{(l)}\) is the predicted probability for edge \((u,v)\) at the \(l\)-th layer. Eqn. 10 is a maximum likelihood estimation for the edges in \(\mathcal{E}\), with the data distribution defined as
\[p_{0}(v|u)=\left\{\begin{array}{ll}\frac{1}{d_{u}},&a_{uv}=1\\ 0,&\text{otherwise}.\end{array}\right. \tag{11}\]
We next show how to efficiently obtain \(\pi_{uv}^{(l)}\). Although the feed-forward NodeFormer computation defined by Eqn. 7 does not explicitly produce the value for each \(\pi_{uv}^{(l)}\), we can query their values by
\[\pi_{uv}^{(l)}=\frac{\phi(W_{Q}^{(l)}\mathbf{z}_{u}^{(l)})^{\top}\phi(W_{K}^{ (l)}\mathbf{z}_{v}^{(l)})}{\phi(W_{Q}^{(l)}\mathbf{z}_{u}^{(l)})^{\top}\sum_{ w=1}^{N}\phi(W_{K}^{(l)}\mathbf{z}_{w}^{(l)})}, \tag{12}\]
Figure 1: Illustration for the data flow of NodeFormer which takes node embedding matrix \(\mathbf{X}\) and (optional) graph adjacency matrix \(\mathbf{A}\) as input. There are three components in NodeFormer. The first one is the all-pair message passing (MP) module (colored red) which adopts our proposed kernelized Gumbel-Softmax operator to update node embeddings in each layer with \(\mathcal{O}(N)\) complexity. The other two components are optional based on the availability of input graphs: 1) relational bias (colored green) that reinforces the propagation weight on observed edges; 2) edge regularization loss (colored blue) that aims to maximize the probability for observed edges. These two components require \(\mathcal{O}(E)\) complexity. The final training loss \(\mathcal{L}\) is the weighted sum of the standard supervised classification loss and the edge regularization loss.
where the summation term can be computed once and re-used, as is done for Eqn. 5 and Eqn. 7. Therefore, after a single \(\mathcal{O}(N)\) computation of the summation, each \(\pi_{uv}^{(l)}\) requires only \(\mathcal{O}(1)\) complexity, keeping the total complexity within \(\mathcal{O}(E)\) (since we only need to query the observed edges). The final objective is the combination of the two: \(\mathcal{L}=\mathcal{L}_{s}+\lambda\mathcal{L}_{e}\), where \(\lambda\) controls how much emphasis is put on the input topology. We depict the whole data flow of NodeFormer's training in Fig. 1.
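As a concrete illustration of the \(\mathcal{O}(N+E)\) bookkeeping described above, the hypothetical helper below queries \(\pi_{uv}^{(l)}\) via Eqn. 12 for the observed edges only, reusing the \(\mathcal{O}(N)\) summation (our own sketch, for one layer):

```python
import numpy as np

def edge_regularization_loss(phi_q, phi_k, edges, deg, eps=1e-9):
    """One-layer version of Eqn. 10. phi_q, phi_k: (N, m) feature maps of the
    queries/keys; edges: observed edge list; deg[u]: in-degree of node u."""
    denom = phi_q @ phi_k.sum(axis=0)            # O(N m), computed once, reused
    loss = 0.0
    for u, v in edges:                           # O(m) per edge -> O(E m) total
        pi_uv = (phi_q[u] @ phi_k[v]) / denom[u]
        loss -= np.log(pi_uv + eps) / deg[u]
    return loss / len(deg)                       # averaged over N (and over L layers in training)
```

Because \(\pi_{uv}^{(l)}\) shares its denominator with the attention normalizer, the regularizer adds essentially no cost beyond the edge queries.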
## 4 Evaluation
We consider a diverse set of datasets for experiments and present detailed dataset information in Appendix D. For implementation, we set \(\sigma\) to the sigmoid function and \(\tau\) to 0.25 for all datasets. The output prediction layer is a one-layer MLP. More implementation details are presented in Appendix C. All experiments are conducted on an NVIDIA V100 with 16 GB memory.
As baseline models, we consider GCN [19] and GAT [36]. Besides, we compare with some advanced GNN models, including JKNet [44] and MixHop [1]. These GNN models all rely on input graphs. We further consider DropEdge [28] and two SOTA graph structure learning methods, LDS-GNN [11] and IDGL [4], for comparison. For large-scale datasets, we additionally compare with two scalable GNNs, a linear model SGC [39] and a graph-sampling model GraphSAINT [48]. More detailed information about these models is presented in Appendix C. All the experiments are repeated five times with different initializations.
### Experiments on Transductive Node Classification
We study supervised node classification in the transductive setting on common graph datasets: Cora, Citeseer, Deezer and Actor. The first two have high homophily ratios, while the last two are identified as heterophilic graphs [53; 21]. These datasets are of small or medium size (2K\(\sim\)20K nodes). We use random splits with train/valid/test ratios of 50%/25%/25%. For evaluation metrics, we use ROC-AUC for binary classification on Deezer and Accuracy for the other datasets with more than 2 classes. Results are plotted in Fig. 2: NodeFormer achieves the best mean Accuracy/ROC-AUC across the four datasets and, in particular, outperforms other models by a large margin on the two heterophilic graphs. The results indicate that NodeFormer can handle both homophilious and non-homophilious graphs. Compared with the two structure learning models LDS and IDGL, NodeFormer yields significantly better performance, which shows its superiority. Also, on Deezer, LDS and IDGL suffer from out-of-memory (OOM) errors. In fact, the major difficulty for Deezer is the large
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Method** & **ROC-AUC (\%)** & **Train Mem** \\ \hline MLP & 72.04 \(\pm\) 0.48 & 2.0 GB \\ GCN & 72.51 \(\pm\) 0.35 & 2.5 GB \\ SGC & 70.31 \(\pm\) 0.23 & 1.2 GB \\ GraphSAINT-GCN & 73.51 \(\pm\) 1.31 & 2.3 GB \\ GraphSAINT-GAT & 74.63 \(\pm\) 1.24 & 5.2 GB \\ \hline NodeFormer & **77.45**\(\pm\) 0.15 & 3.2 GB \\ NodeFormer-dt & 75.50 \(\pm\) 0.64 & 3.1 GB \\ NodeFormer-tp & 76.18 \(\pm\) 0.09 & 3.2 GB \\ \hline \hline \end{tabular}
\end{table}
Table 2: Testing ROC-AUC and training memory cost on OGB-Proteins with batch size 10K.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Method** & **Accuracy (\%)** & **Train Mem** \\ \hline MLP & 63.46 \(\pm\) 0.10 & 1.4 GB \\ GCN & 83.90 \(\pm\) 0.10 & 5.7 GB \\ SGC & 81.21 \(\pm\) 0.12 & 1.7 GB \\ GraphSAINT-GCN & 83.84 \(\pm\) 0.42 & 2.1 GB \\ GraphSAINT-GAT & 85.17 \(\pm\) 0.32 & 2.2 GB \\ \hline NodeFormer & **87.85**\(\pm\) 0.24 & 4.0 GB \\ NodeFormer-dt & 87.02 \(\pm\) 0.75 & 2.9 GB \\ NodeFormer-tp & 87.55 \(\pm\) 0.11 & 4.0 GB \\ \hline \hline \end{tabular}
\end{table}
Table 3: Testing Accuracy and training memory cost on Amazon2M with batch size 100K.
Figure 2: Experimental results for node classification in the transductive setting on four common datasets. The missing results on Deezer are caused by out-of-memory (OOM).
dimension of the input node features (nearly 30K), which causes OOM for IDGL even with the anchor approximation. In contrast, NodeFormer manages to scale and produces desirable accuracy.
### Experiments on Larger Graph Datasets
To further test the scalability, we consider two large-sized networks, OGB-Proteins and Amazon2M, with over 0.1 million and 2 million nodes, respectively. OGB-Proteins is a multi-task dataset with 112 output dimensions, while Amazon2M is extracted from the Amazon Co-Purchasing network and entails long-range dependence [13]. For OGB-Proteins, we use the protocol of [15] and ROC-AUC for evaluation. For Amazon2M, we adopt random splitting with 50%/25%/25% of the nodes for training, validation and testing, respectively. Due to the large dataset size, we adopt mini-batch partition for training, in which case, for NodeFormer, we only consider structure learning among nodes within a random mini-batch. We use batch sizes of 10,000 and 100,000 for Proteins and Amazon2M, respectively. While the mini-batch partition may sacrifice exposure to all instances, we found that a large batch size yields decent performance, which is affordable thanks to the \(\mathcal{O}(N)\) complexity of our model. For example, even with a batch size of 100,000, NodeFormer costs only 4 GB of GPU memory for training on Amazon2M. Table 2 presents the results on OGB-Proteins where, for fair comparison, mini-batch training is also used for the other models except GraphSAINT. We found that NodeFormer yields much better ROC-AUC and requires only memory comparable to simple GNN models. Table 3 reports the results on Amazon2M, which show that NodeFormer outperforms the baselines by a large margin while its memory cost is even lower than GCN's. This shows its practical efficacy and scalability on large-scale datasets and also its capability to address long-range dependence with shallow layers (we use \(L=3\)).
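A minimal sketch of the random mini-batch partition used here (our own helper, not from the paper's code): structure learning is restricted to node pairs within the same batch, so memory grows with the batch size rather than with \(N\).

```python
import numpy as np

def node_minibatches(num_nodes, batch_size, seed=0):
    """Yield random node batches; all-pair message passing runs within a batch."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    for i in range(0, num_nodes, batch_size):
        yield perm[i:i + batch_size]
# e.g., with ~2M nodes and a batch size of 100,000 an epoch is ~20 batches
```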
### Experiments on Graph-Enhanced Applications
We apply our model to semi-supervised image and text classification on Mini-ImageNet and 20News-Groups datasets, without input graphs. The instances of Mini-ImageNet [37] are 84x84 RGB images and we randomly choose 30 classes each of which contains 600 samples for experiments. 20News-Groups [25] consists of nearly 10K texts whose features are extracted by TF-IDF. More details for preprocessing are presented in Appendix D. Also, for each dataset, we randomly split instances into 50%/25%/25% for train/valid/test. Since there is no input graph, we use \(k\)-NN (over input node features) for artificially constructing a graph for enabling GNN's message passing and the graph-based components (edge regularization and relational bias) of NodeFormer. Table 4 presents the comparison results under different \(k\)'s. We can see that NodeFormer achieves the best performance in seven cases out of eight. The performance of GNN competitors varies significantly with different \(k\) values, and NodeFormer is much less sensitive. Intriguingly, when we do not use
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{Mini-ImageNet} & \multicolumn{4}{c}{20News-Group} \\ & \(k=5\) & \(k=10\) & \(k=15\) & \(k=20\) & \(k=5\) & \(k=10\) & \(k=15\) & \(k=20\) \\ \hline GCN & 84.86 \(\pm\) 0.42 & 85.61 \(\pm\) 0.42 & 85.93 \(\pm\) 0.48 & 85.96 \(\pm\) 0.46 & 65.98 \(\pm\) 0.46 & 64.13 \(\pm\) 0.48 & 62.95 \(\pm\) 0.49 & 62.95 \(\pm\) 0.42 \\ GAT & 84.70 \(\pm\) 0.48 & 85.24 \(\pm\) 0.42 & 85.41 \(\pm\) 0.45 & 85.37 \(\pm\) 0.41 & 64.06 \(\pm\) 0.44 & 62.51 \(\pm\) 0.71 & 61.38 \(\pm\) 0.48 & 60.80 \(\pm\) 0.49 \\ DropEdge & 83.91 \(\pm\) 0.44 & 85.35 \(\pm\) 0.44 & 85.25 \(\pm\) 0.43 & 85.81 \(\pm\) 0.45 & 64.46 \(\pm\) 0.43 & 64.01 \(\pm\) 0.42 & 62.46 \(\pm\) 0.51 & 62.68 \(\pm\) 0.71 \\ IDGL & 83.63 \(\pm\) 0.32 & 84.41 \(\pm\) 0.35 & 85.02 \(\pm\) 0.42 & 85.60 \(\pm\) 0.42 & 65.09 \(\pm\) 1.23 & 63.41 \(\pm\) 1.26 & 63.57 \(\pm\) 0.43 & 62.21 \(\pm\) 0.49 \\ LDS & OOM & OOM & OOM & 66.12 \(\pm\) 0.47 & 64.07 \(\pm\) 1.07 & 63.51 \(\pm\) 0.46 & 63.51 \(\pm\) 1.75 \\ \hline NodeFormer & 84.72 \(\pm\) 0.66 & 86.74 \(\pm\) 0.31 & 86.37 \(\pm\) 0.41 & 86.64 \(\pm\) 0.41 & 66.01 \(\pm\) 1.18 & 65.21 \(\pm\) 0.56 & **64.60** & 64.55 \(\pm\) 0.11 \\ \hline NodeFormer w/o graph & \multicolumn{4}{c}{**87.46**} & \multicolumn{4}{c}{**64.71**} & \multicolumn{4}{c}{} \\ \hline \hline \end{tabular}
\end{table}
Table 4: Experimental results on semi-supervised classification on Mini-ImageNet and 20News-Groups where we use \(k\)-NN (with different \(k\)’s) for artificially constructing an input graph.
Figure 3: Comparison of training/inference time and GPU memory cost w.r.t. different instance numbers (by removing a certain portion of nodes) on 20News-Groups.
the input graph, i.e., removing both the edge regularization and the relational bias, NodeFormer can still yield competitive or even superior results on Mini-ImageNet. This suggests that the \(k\)-NN graphs are not necessarily informative and, besides, that our model learns useful latent graph structures from the data.
### Further Discussions
**Comparison of Time/Space Consumption.** Fig. 3 plots the training/inference time and GPU memory costs of NodeFormer and two SOTA structure learning models. Compared with LDS, NodeFormer reduces the training time, inference time, and memory cost by up to 93.1%, 97.9%, and 75.6%, respectively; compared with IDGL (using anchor-based approximation for speedup), NodeFormer reduces them by up to 61.8%, 80.8%, and 80.6%, respectively.
**Ablation on Stochastic Components.** Tables 2 and 3 also include two variants of NodeFormer for an ablation study: 1) NodeFormer-dt, which replaces Gumbel-Softmax by the original Softmax (with temperature 1.0) for deterministic propagation; 2) NodeFormer-tp, which uses the original Softmax with the temperature set to 0.25 (the same as NodeFormer). There is a performance drop when the Gumbel components are removed, which may be due to over-normalization or over-fitting amplified in large datasets, as discussed in Section 3.1; this shows the effectiveness of the kernelized Gumbel-Softmax operator.
**Ablation on Edge Loss and Relational Bias.** We study the effects of the edge-level regularization and the relational bias in the ablation study shown in Table 6 in Appendix E. The results consistently show that both components contribute positive effects and suggest that our edge-level loss and relational bias both help to leverage useful information from input graphs.
**Impact of Temperature and Feature Map Dimension.** We study the effects of \(\tau\) and \(m\) in Fig. 6 in Appendix E, and the trends accord with our theoretical analysis in Section 3.2. Specifically, the test accuracy first increases and then falls as the temperature goes from low to high values (the peak accuracy is usually reached around a temperature of 0.4). Besides, when the temperature is relatively small, the test accuracy improves as the dimension of the random features increases. However, when the temperature is large, the accuracy drops even with a large feature dimension \(m\). This phenomenon accords with the theoretical result of Section 3.2: at low temperatures, which enable a good Gumbel-Softmax approximation, a larger random-feature dimension yields a better approximation of the original exponentiate-then-dot operator; in contrast, a high temperature cannot guarantee a precise approximation of the original categorical distribution, which deteriorates the performance.
**Visualization and Implications.** Fig. 4 visualizes node embeddings and edge connections (keeping only edges with weights above a threshold) on 20News-Groups and Mini-ImageNet, which show that NodeFormer tends to assign larger weights to node pairs of the same class and sparse edges to nodes of different classes. This helps to interpret why NodeFormer improves downstream node-level prediction: the latent structures propagate useful information that helps the model learn node representations that are easily distinguished by the classifier. We also compare the learned structures with the original graphs in Fig. 7 in Appendix E. The latent structures learned by NodeFormer show different patterns from the observed ones, especially for heterophilic graphs. Another interesting phenomenon is that there exist some dominant nodes which are assigned large weights by other nodes, forming vertical 'lines' in the heatmap. This suggests that these nodes could contain critical information for the learning tasks and act as pivots that improve the connectivity of the whole system.
## 5 Why Does NodeFormer Improve Downstream Prediction?
There remains a natural question concerning our learning process: how effective can the learned latent topology be for downstream tasks? We next dissect the rationale from a Bayesian perspective. In fact, our model induces a predictive distribution \(p(\mathbf{Y},\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})=p(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})p(\mathbf{Y}|\tilde{\mathbf{A}},\mathbf{X},\mathbf{A})\), where we can treat the estimated graph \(\tilde{\mathbf{A}}\) as a latent variable.\({}^{2}\) Specifically, \(p(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\) is instantiated with the structure estimation module and \(p(\mathbf{Y}|\tilde{\mathbf{A}},\mathbf{X},\mathbf{A})\) is instantiated with the feature propagation module. In principle, ideal latent graphs should account for downstream tasks and maximize the potentials
of message passing for producing informative node representations. Thus, optimal latent graphs presumably come from the posterior \(p(\tilde{\mathbf{A}}|\mathbf{Y},\mathbf{X},\mathbf{A})=\frac{p(\mathbf{Y}|\mathbf{X},\mathbf{A},\tilde{\mathbf{A}})\,p(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})}{\int_{\tilde{\mathbf{A}}}p(\mathbf{Y}|\mathbf{X},\mathbf{A},\tilde{\mathbf{A}})\,p(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\,d\tilde{\mathbf{A}}}\), given by Bayes' theorem. Unfortunately, such a posterior is unknown and the integration is intractable.
**A Variational Perspective.** An intriguing conclusion stems from another view into the learning process: we can treat the structure estimation as a variational distribution \(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\) and our learning objective in Section 3.4 can be viewed as the embodiment of a minimization problem over the predictive and variational distributions via
\[p^{*},q^{*}=\arg\min_{p,q}\underbrace{-\mathbb{E}_{q}[\log p(\mathbf{Y}| \tilde{\mathbf{A}},\mathbf{X},\mathbf{A})]}_{\mathcal{L}_{s}}+\underbrace{ \mathcal{D}(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\|p_{0}(\tilde{ \mathbf{A}}|\mathbf{X},\mathbf{A}))}_{\mathcal{L}_{e}}, \tag{13}\]
where \(\mathcal{D}\) denotes the Kullback-Leibler divergence. Specifically, the _predictive_ term is equivalent to minimizing the supervised loss (with Gumbel-Softmax as a surrogate for sampling-based estimates over \(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\)), and the KL _regularization_ term is embodied by the edge-level MLE loss (Eqn. 10) (if we define the prior distribution \(p_{0}(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\) following Eqn. 11). One may notice that Eqn. 13 is essentially the negative Evidence Lower Bound (ELBO) for the log-likelihood \(\log p(\mathbf{Y}|\mathbf{X},\mathbf{A})\).
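For completeness, the standard identity behind this observation, written in the paper's notation under the assumption that \(p_{0}\) of Eqn. 11 plays the role of the model prior, is

\[\log p(\mathbf{Y}|\mathbf{X},\mathbf{A})=\underbrace{\mathbb{E}_{q}[\log p(\mathbf{Y}|\tilde{\mathbf{A}},\mathbf{X},\mathbf{A})]-\mathcal{D}\big(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\,\|\,p_{0}(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\big)}_{\text{ELBO}\,=\,-(\mathcal{L}_{s}+\mathcal{L}_{e})}+\mathcal{D}\big(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\,\|\,p(\tilde{\mathbf{A}}|\mathbf{Y},\mathbf{X},\mathbf{A})\big).\]

Since the left-hand side does not depend on \(q\) and the last KL term is non-negative, minimizing Eqn. 13 simultaneously maximizes the ELBO and drives \(q\) towards the posterior; this is the content of the following proposition.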
**Proposition 1**.: _Assume \(q\) can exploit arbitrary distributions over \(\tilde{\mathbf{A}}\). When Eqn. 13 achieves the optimum, we have 1) \(\mathcal{D}(q(\tilde{\mathbf{A}}|\mathbf{X},\mathbf{A})\|p(\tilde{\mathbf{A}}| \mathbf{Y},\mathbf{X},\mathbf{A}))=0\) and 2) \(\log p(\mathbf{Y}|\mathbf{X},\mathbf{A})\) is maximized._
The proposition indicates that our adopted learning objective intrinsically minimizes the divergence between latent graphs generated by the model and the samples from the posterior \(p(\tilde{\mathbf{A}}|\mathbf{Y},\mathbf{X},\mathbf{A})\) that ideally helps to propagate useful adjacent information w.r.t. downstream tasks. Therefore, a well-trained network of NodeFormer on labeled data could produce effective latent topology that contributes to boosting the downstream performance.
## 6 Conclusion
This paper proposes a scalable and efficient graph Transformer (especially for node-level tasks) that can propagate layer-wise node signals between arbitrary node pairs beyond the input topology. The key module, a kernelized Gumbel-Softmax operator, enables us to learn layer-specific latent graphs with linear algorithmic complexity without compromising precision. The results on diverse graph datasets and settings verify the effectiveness, scalability, and stability of the approach. We provide more discussions on the limitations and potential impacts in Appendix F.
## Acknowledgement
This work was partly supported by National Key Research and Development Program of China (2020AAA0107600), National Natural Science Foundation of China (61972250, 72061127003), and Shanghai Municipal Science and Technology (Major) Project (22511105100, 2021SHZDZX0102).
Figure 4: Visualization of node embeddings and edge connections produced by NodeFormer on graph-enhanced application datasets. We mark the nodes with a particular class with one color. More comparison between the learned structures and original input graphs is presented in Appendix E. |
2302.02439 | Thin flexible multi-octave metamaterial absorber for millimetre
wavelengths | Development of novel radiation-absorbent materials and devices for millimetre
and submillimetre astronomy instruments is a research area of high interest,
and with substantial engineering challenges. Alongside low-profile structure
and ultra-wideband performance in a wide range of angles of incidence, advanced
absorbers in CMB instruments are aimed at reducing optical systematics, notably
instrument polarisation, far beyond previous specifications. This paper
presents an innovative design of flat thin flexible absorber operating in a
wide frequency range of 80-400 GHz. The structure comprises a combination of
sub-wavelength metal-mesh capacitive and inductive grids and dielectric layers,
making use of the magnetic mirror concept for large bandwidth. The overall
stack thickness is a quarter of the longest operating wavelength and is close
to the theoretical limit stipulated by Rozanov criterion. The test device is
designed to operate at 22.5deg. incidence. The iterative numerical-experimental
design procedure of the new absorber is discussed in detail, as well as the
practical challenges of its manufacture. A well-established mesh-filter
fabrication process has been successfully employed for prototype fabrication,
which ensures cryogenic operation of the hot-pressed quasi-optical devices. The
final prototype, extensively tested in quasi-optical testbeds using a
Fourier-transform spectrometer and a vector network analyser, demonstrated
performance closely matching the finite-element analysis simulations, viz.,
greater than 99% absorbance for both polarisations, with only 0.2% difference,
across the frequency band of 80-400 GHz. The angular stability for up to
+/-10deg. has been confirmed by simulations. To the best of the authors'
knowledge, this is the first successful implementation of a low-profile
ultrawideband metamaterial absorber for this frequency range and operating
conditions. | Giampaolo Pisano, Christopher Dunscombe, Peter Hargrave, Alexey Shitvov, Carole Tucker | 2023-02-05T17:34:58Z | http://arxiv.org/abs/2302.02439v2 | # Thin flexible multi-octave metamaterial absorber for millimetre wavelengths
###### Abstract
Development of novel radiation-absorbent materials and devices for millimetre and submillimetre astronomy instruments is a research area of significant interest, and with substantial engineering challenges. Alongside low-profile structure and ultra-wideband performance in a wide range of angles of incidence, advanced absorbers in Cosmic Microwave Background (CMB) instruments are aimed at reducing optical systematics, notably instrument polarisation, far beyond previously achievable specifications. This paper presents an innovative design of metamaterial-inspired flat conformable absorber operating in a wide frequency range of 80-400 GHz. The structure comprises a combination of sub-wavelength metal-mesh capacitive and inductive grids and dielectric layers, making use of the magnetic mirror concept for large bandwidth. The overall stack thickness is a quarter of the longest operating wavelength and is close to the theoretical limit stipulated by Rozanov's criterion. The test device is designed to operate at 22.5\({}^{\circ}\) incidence. The iterative numerical-experimental design procedure of the new metamaterial absorber is discussed in detail, as well as the practical challenges of its manufacture. A well-established mesh-filter fabrication process has been successfully employed for prototype fabrication, which ensures cryogenic operation of the hot-pressed quasi-optical devices. The final prototype, extensively tested in quasi-optical testbeds using a Fourier-transform spectrometer and a vector network analyser, demonstrated performance closely matching the finite-element analysis simulations, viz., greater than 99% absorbance for both polarisations, with only 0.2% difference, across the frequency band of 80-400 GHz. The angular stability for up to \(\pm\)10\({}^{\circ}\) has been confirmed by simulations. To the best of the authors' knowledge, this is the first successful implementation of a low-profile ultrawideband metamaterial absorber for this frequency range and operating conditions.
## 1 Introduction
The rapid development of millimetre-wave astronomy instrumentation is partly driven by the scientific opportunity provided by Cosmic Microwave Background (CMB) observations. In particular, ongoing CMB polarisation measurements demand unprecedented instrument sensitivity and bandwidth. Proposed millimetre-wave astronomy telescopes feature large multichroic detector arrays operated at sub-Kelvin temperatures. High polarisation sensitivity can be achieved with the aid of a rotating half-wave plate. The corresponding cold optics are designed for maximum optical throughput, wide field of view, and multiple frequency bands. As the cold detector sensitivity approaches theoretical limits and atmospheric systematics become less of an issue for space-borne instruments, low-level instrument systematics have emerged as a crucial requirement for CMB telescopes. These include instrument polarisation effects, notably those associated with the rotating half-wave plate, beam fidelity, and wavefront impairments caused by stray light channelled by spurious reflections along the optical chain. In relation to stray light, there is an ongoing effort to design new absorber material for the optical cavities of the LiteBIRD telescopes [1], which will be deployed in space at the end of the 2020s to carry out an ambitious mission to detect primordial gravitational waves through observations of the CMB B-mode pattern.
Millimetre-wave and infrared absorbers are routinely used in CMB instrument design, e.g. [2], to prevent reflection from the walls and surfaces of the cryostat, pupils, and baffles, as well as other structural elements along the optical path, thus terminating the stray light at the higher temperatures before it reaches the focal plane. Absorbers are designed to meet a range of interrelated, and often conflicting optical, structural, and thermal requirements, including low reflectivity, wide bandwidth, angular and polarisation independence, small thickness, high mechanical strength, high thermal conductivity at cryogenic temperatures, practical conformability, low weight, and low cost.
Conventional millimetre-wave telescope absorbers include open foam materials loaded with carbon or stainless-steel particles [3]. Tessellated terahertz radiation absorptive material (RAM), developed by Thomas Keating Ltd. [4], is a carbon-loaded polypropylene compound manufactured by injection moulding. It features a pyramidal anti-reflection structure on the exterior face and provides low reflectance in the 50-1000 GHz frequency range over a wide range of angles of incidence. However, this material is stiff, thick, and heavy, which makes it inconvenient for covering large conformal surfaces inside cylindrical optical cryostats. A modification of such RAM has been reported in [2].
Two-component conductively loaded epoxies, such as the thermally conductive epoxy encapsulant 2850FT by Henkel Corp. [5], are also commonly used as submillimetre-wave absorbers in low-temperature applications, cf. [6]. A coat of Stycast 2850FT usually exhibits \(\sim\)15% reflectivity, depending on the surface roughness. Epoxy coating can be moulded to reduce reflectivity at lower frequencies, but the process becomes costly. A graphite-loaded epoxy-based moulded pyramidal absorber, constituting a cryogenic thermal source and demonstrating <0.1% reflectance in the 75-330 GHz spectral range, was reported in [7], alongside an accurate design model based on geometrical-optics analysis. Considerable effort has been dedicated to adopting additive manufacturing techniques for millimetre-wave RAM structures, including 3D-printed pyramidal absorber moulds [8] and Hilbert-curve impedance-matched structures [9]. Pyramidal tapers of various shapes represent the mainstream technology of broadband absorbers in the microwave through infrared optical ranges, cf. [10], [11].
Conventional open-foam and epoxy-based broadband millimetre-wave RAMs suffer from drawbacks such as large volume and weight and low conformability, which affect the thermal budget of the cryogenic optical tubes and complicate absorber installation. An alternative electromagnetic absorber design approach is based on the use of thin multi-layered engineered surfaces, conventionally referred to as planar metamaterials or metasurfaces. The use of patterned conductive and resistive surfaces with controlled surface impedance, as well as a judicious choice of the dielectric interfaces, enables robust and cost-effective solutions for millimetre-wave absorbers, as detailed in the following section.
The paper is organized as follows. In Section 2, a review of planar electromagnetic absorbers is carried out, while the working principle of the new design is detailed in Section 3. Section 4 presents the device's modelling and its initial design. Preliminary fabrication and measurements of the absorber breadboards, aimed at evaluating the actual parameters of the materials and processes, are reported in Section 5. The refined recipe is used to fabricate the final device, and the results of its experimental characterization are presented in Section 6. The final conclusions are drawn in Section 7.
## 2 Absorbing metasurfaces at microwave and millimetre-wave frequencies
A classic example of absorbing surface design is the Salisbury screen [12], described in detail in Section 3, where a resistive sheet is placed at a quarter-wavelength distance from a metallic reflector. Thinner structures can be realized using a high-impedance ground plane (HIGP) instead of the metallic reflector [13], [14]. Another type of device, the Dallenbach absorber, comprises a grounded quarter-wave slab of a lossy dielectric material [15]. The dielectric thickness specified in terms of the wavelength is the major cause of the narrowband performance of the conventional Salisbury and Dallenbach absorbers. The Jaumann absorber [13], comprising multiple alternating layers of resistive sheets and low-density spacers, as well as multi-layer Dallenbach structures, allows a significant increase of the bandwidth, yet at the expense of increased volume. Some hybrid broadband structures, combining the features of three-dimensional and planar multi-layer absorbers, have also been reported in the microwave [16] and optical [17] frequency bands. HIGP-based structures appear to be inherently broadband.
Using patterned resistive grids instead of uniform sheets offers additional geometrical degrees of freedom in the design of thin broadband electromagnetic absorbers [13], [18]. Planar frequency selective surfaces (FSSs) with engineered surface impedance provide an effective means of reducing the thickness of absorbers. The respective design concept is known as the circuit-analogue absorber (CAA) [19]. The resonant FSS layers can be either resistive, or conductive and stacked with uniform resistive sheets. Multi-octave bandwidth enhancement can be achieved either with a single-layer FSS comprising a double-periodic array of multi-resonant dipoles or nested resonators [20], or by vertically stacking single-resonant FSS sheets [21], or by combining both approaches [22]. However, the design of wideband resonant FSS absorbers has proved to be very involved, particularly with respect to the choice of the resonant elements and harmonic frequencies. Also, such absorbers are inherently limited to a narrow angular range, due to their polarisation-dependent response at oblique incidence. Implementing intricate designs of broadband FSS absorbers in the terahertz range is highly challenging technologically.
The use of metasurfaces made of capacitive grids of sub-wavelength square patches, emulating a low-pass filter response, allows one to overcome the limitations of the resonant FSS structures in terms of bandwidth and angle of incidence. The CAA design paradigm still applies. It has been shown theoretically that a capacitive CAA device, designed using four free-standing capacitive grids of resistive patches, could exhibit 20 dB absorption within a \(\sim\)10:1 frequency range at normal incidence [19]. A large bandwidth with good angular coverage was demonstrated with a structure comprising two lossy capacitive grids, a uniform resistive sheet and 5 dielectric layers with low and high refractive indices. The simulated structure exhibited 20 dB absorption over a 7.5:1 bandwidth for up to a 45\({}^{\circ}\) angle of incidence for both TE and TM polarisations. Both structures reported in [19] feature nearly optimum thickness for the given bandwidth, according to Rozanov's criterion derived in [23], although their performance was verified by simulations only. A recent microwave metamaterial absorber structure in [24], designed with circuit modelling, comprises three layers of square resistive patches interspersed with dielectric layers and demonstrates an ultra-wideband and wide-angle absorption response from 4.73 to 39.04 GHz (i.e., an 8.25:1 band ratio) in simulations, which was partly confirmed by measurements within a reduced frequency band. The device thickness appeared to be a factor of 0.15 of the longest operating wavelength. A great variety of circuit-model designs of microwave absorbers, aimed at achieving optimum performance in terms of bandwidth, thickness, and reflectivity, have been reported in the literature, predominantly by simulations only, but to the best of our knowledge there has been only a rudimentary attempt to implement such structures in the millimetre-wave through terahertz range, not to mention their experimental demonstration. It must be noted that such an implementation does not reduce to a simple scaling of the geometry, because of the specific behaviour of materials in the millimetre-wave range under specific operation conditions, as further discussed below.
The use of exotic metamaterial phenomena arising from concerted electric and magnetic polarisabilities of sub-wavelength scatterers allows even greater freedom in the design of millimetre through optical range absorbers. Although widely viewed as inherently narrowband and lossy, advanced dispersion-engineered metamaterial and metasurface structures feature increasingly broader performance. A three-dimensional honeycomb-like metamaterial absorber reported in [25] demonstrates in simulation a 90%-absorption bandwidth from 50 to 460 GHz (a 9.2:1 bandwidth ratio), as well as polarisation and angle independence. The concept of Huygens' metasurfaces, featuring inherent unidirectional scattering, has been employed to design a perfect metamaterial absorber [26], although at a single frequency. A conceptual multi-band absorber for infrared and optical bands proposed in [27] exploits interference phenomena in stacked layers of
epsilon-near-zero metamaterial and high-permittivity dielectric. It has been shown that bandwidth enhancement could be observed in plasmonic metamaterial absorbers [28].
In our opinion, despite the high volume of research on dispersion-engineered metamaterial absorbers, such devices still remain relatively narrowband and lossy, see e.g. [29] for relevant discussion, and less practical from the points of view of design and manufacture, as compared to the FSS and capacitive-grid absorbers. In addition, most of the research is based solely on simulations, with very few frequency-downscaled prototypes having ever been manufactured and tested.
This paper presents the theory, design, fabrication, and experimental characterization of a novel millimetre-wave absorber based on a very simple working principle: the use of a broadband magnetic mirror, rather than a metallic mirror, which can be built with dielectric layers and just one resistive sheet [30]. The higher the number of layers of the absorber, the larger the bandwidth. For the sake of demonstration, we set our requirements to those we use for CMB instruments, meaning reflectivity at the -20 dB level (i.e., 99% absorption) over a large frequency bandwidth (\(\sim\)80 to 400 GHz, i.e., a 5:1 ratio), independence between the S and P polarisations (perpendicular and parallel to the plane of incidence of the radiation), and high performance within a wide range of incidence angles (e.g., 0\({}^{\circ}\)-50\({}^{\circ}\)). We also aimed to develop a thin and flexible profile, compatible in terms of materials and manufacturing processes with the metal-mesh filters [31], flat lenses [32], and metamaterial half-wave plates [33] that we have developed in the past. The new absorber comprises four dielectric layers and one resistive mesh. The refractive indices of two layers were artificially synthesised by embedding sub-wavelength capacitive meshes within polypropylene. The device was designed using the same formalism we used to develop an artificial magnetic conductor [30] based on the mesh-filter technology. The new device, which has never been demonstrated before, successfully achieved these performance requirements.
## 3 Mesh-absorber Surface Working Principle
Here we describe the working principle of the new absorber, which implies a transition from the conventional means of absorbing radiation using a perfect electric conductor as a backshort to the idea of using a perfect magnetic conductor in its place.
### Salisbury Screen
The simplest design for an effective absorbing surface is the Salisbury screen [12, 13]. It consists of an absorbing sheet located at a quarter-wavelength distance \(\lambda_{0}/4\) from a metallic mirror (see Fig. 1a). The wavelength \(\lambda_{0}\) defines the frequency \(\nu_{0}\) where the absorption is maximum. The radiation passing through the sheet will be in phase with that bouncing back from the mirror. This happens because the latter gains a phase factor \(\pi\) in the metal reflection and another factor \(\pi\) for the half-wavelength extra path back and forth, for a total of \(2\pi\). The standing wave so created has its electric-field maximum exactly where the absorbing sheet is. The surface impedance of the sheet is equal to the free-space impedance and provides unitary absorption and minimum reflection at the central frequency \(\nu_{0}=c/\lambda_{0}\), where \(c\) is the speed of light. These structures can be easily modelled using transmission-line or propagation-matrix codes. In this work we have chosen our central frequency to be \(\nu_{0}=240\) GHz. The frequency-dependent absorption and reflection coefficients for a Salisbury screen are shown in Fig. 2, curves labelled a). The averaged absorption and reflection coefficients across a 5:1 bandwidth around \(\nu_{0}\), i.e., from 80 GHz to 400 GHz, are \(A=87.89\%\) and \(R=-9.1\) dB, respectively. These are obtained by setting the absorber surface impedance equal to the free-space impedance, \(Z_{abs}=377\,\Omega/\square\).
We note that, if the absorbing sheet has a frequency-independent impedance, the equiphase condition will also be satisfied at higher frequencies, those for which the absorber-backshort distance is an odd multiple of the quarter wavelength, i.e., when \(\lambda_{0}/4=(2m+1)\,\lambda_{m}/4\) with \(m=0,1,\ldots\), where \(\lambda_{m}\) is the wavelength at the corresponding harmonic. The result is a periodic absorption (and reflection) behaviour with maxima (and minima) at the harmonic frequencies \(\nu_{1}=3\,\nu_{0}\), \(\nu_{2}=5\,\nu_{0}\), etc.
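To make the transmission-line picture above concrete, the following short script (a minimal sketch, assuming an ideal frequency-independent resistive sheet, a lossless air gap and normal incidence) reproduces the behaviour of curves a) in Fig. 2:

```python
import numpy as np

Z0  = 377.0           # free-space impedance [ohm]
c   = 2.998e8         # speed of light [m/s]
nu0 = 240e9           # design frequency [Hz]
d   = c / (4 * nu0)   # quarter-wave air gap [m]

nu = np.linspace(80e9, 400e9, 1001)
beta = 2 * np.pi * nu / c
# shorted air line of length d behind the sheet: Y_short = -j cot(beta d) / Z0
Y_in = 1.0 / Z0 - 1j / (Z0 * np.tan(beta * d))   # sheet admittance + stub admittance
gamma = (1.0 / Z0 - Y_in) / (1.0 / Z0 + Y_in)    # reflection coefficient
A = 1.0 - np.abs(gamma) ** 2                     # PEC backshort -> no transmission
print(f"band-averaged absorption: {A.mean():.3f}")  # ~0.88, cf. A = 87.89% above
```

At \(\nu_{0}\) the shorted stub presents an open circuit, leaving only the matched sheet, so the reflection vanishes; away from \(\nu_{0}\) the stub susceptance detunes the match, which is the origin of the narrow bandwidth.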
Figure 1: Sketch of different types of absorbers based on an absorbing film: a) Salisbury screen (air-gap); b) Salisbury screen on substrate; c) Salisbury screen on substrate and air-gap; d) realization of a magnetic mirror absorber; e) ideal magnetic mirror based absorber.
Figure 2: Absorption and reflection coefficients vs frequency for the different absorbers a) to d) sketched in Fig. 1.
### Salisbury Screens on substrates
The goal of our work is twofold: i) to design a robust absorbing device supported by dielectric substrates that could be manufactured using the mesh-filter technology; ii) to obtain high absorption over very large bandwidths.
The first attempt would be to study an embedded version of the Salisbury screen. It could be made by replacing the \(\lambda_{0}/4\) air-gap with a quarter-wavelength dielectric layer (\(\lambda_{0}/4n\)) and by depositing the absorber on its external surface (Fig. 1b). This would lead to a device much more robust than the previous one. However, by using for example polypropylene as a substrate (refractive index \(n\simeq 1.5\)), the optimized design (with \(Z_{\text{abs}}=273\,\Omega/\square\)) would decrease the average absorption within the 5:1 band down to \(A=80.9\%\) (\(R=-7.2\) dB), as shown in Fig. 2, curves labelled b). This is because some additional out-of-phase reflection is now added at the free-space to dielectric boundary, where the absorber is. Indeed, a mismatch from a low to a high index implies a negative reflection coefficient.
We can look at a different configuration where the substrate and the absorbing film are now reversed with respect to the incoming radiation direction, i.e., we keep the absorbing film at the same distance from the metal backshort but we reverse the position of its substrate, as shown in Fig. 1c. The optimized absorption curve (obtained with \(Z_{\text{abs}}\simeq 213\,\Omega/\square\)) changes dramatically, and its 5:1 band average increases to \(A=96.8\%\) (\(R=-15.0\) dB); see curves labelled c) in Fig. 2. This is because some additional in-phase reflection is now added at the dielectric to free-space boundary (high to low index), where the absorber is. The effect that we are describing is at the root of the main idea of this work, i.e., forcing most of the radiation to be reflected in phase where the absorbing surface is. This can be achieved by using a magnetic mirror, as described in the next section.
We note that the latter configuration, although showing good performance, would be difficult to manufacture because it would require a structure able to maintain the air-gap distance constant over large surfaces. It would be even more challenging to develop it if the absorbing surface is not meant to be flat.
### AMC-based absorber
Magnetic mirrors, or Artificial Magnetic Conductors (AMCs), are surfaces designed to mimic the behaviour of ideal Perfect Magnetic Conductors (PMCs). These ideal materials reflect 100% of the incident radiation with a null phase-shift and can be modelled as surfaces exhibiting infinite impedance. This contrasts with normal Perfect Electric Conductors (PECs), which provide a \(\pi\) phase-shift and can be modelled with a null surface impedance.
As we have seen in the previous section, the quarter-wavelength air or dielectric gap is the main factor affecting the bandwidth, and a natural way to improve absorption is to have in-phase reflection right at the plane of the absorber. By definition, the radiation reflected off a magnetic mirror is in phase with the incident one, and an absorber located right on its surface should greatly increase its efficiency. This means that the maximum of the electric field is no longer localized at a \(\lambda_{0}/4\) distance from the mirror-backshort but lies directly on its surface. This does not impose any geometrical constraint or frequency dependence on the system. Such an ideal system, sketched in Fig. 1e, would absorb radiation at all frequencies, i.e., it would have an infinite bandwidth.
There are many ways to design magnetic mirrors (or AMCs). Here we base our work on a device developed using the mesh-filter technology that led to multi-octave bandwidth operation [30]. The working principle of this magnetic mirror, part of the device sketched in Fig. 1d, is based on the null phase-shift obtained in the reflection occurring at a high-to-low index boundary. The higher the index difference, the higher the reflection coefficient. For this reason, a graded index section is used at the input of the device to drive the radiation adiabatically into a high index medium. A sudden jump into a lower index medium provides a high reflection coefficient with null phase shift, over large bandwidths. A backshort located at a quarter wavelength in that medium defines the central frequency of operation.
The broadband mesh-absorber presented here is made with a magnetic mirror similar to the one described above with an embedded absorber film located at the high-to-low index interface (Fig. 1d).
## 4 Modelling and initial design
The absorber presented in this work is based on the mesh-filter technology. In this section we briefly describe this technology, how we model the device and its initial design, which will not include the details of the absorbing film, discussed in Sec. 5.
### Mesh-filter technology
The mesh absorber was manufactured using the processes developed for mesh-filters [31]. This technology has been employed not only to realise filters, used in many astronomical instruments operating at millimetre and sub-millimetre waves, but also to develop novel devices such as flat mesh lenses [32] and mesh half-wave plates [33]. The devices are manufactured by embedding metal grids within polypropylene layers and by hot-pressing them all into a single homogeneous polypropylene matrix in which the metal grids remain suspended. In this specific application the grids are not designed to interfere, as in the above-mentioned devices; they are instead close-packed to create the artificial dielectrics [34] required for the graded-index medium of the absorber.
A sketch of the ideal mesh-absorber made with homogeneous materials and its actual realization employing mesh-filter technology are reported in Fig. 3a and Fig. 3b, respectively. The ideal device requires four quarter-wavelength layers: the first three with increasing refractive indices (\(n_{1}\) = 1.25, \(n_{2}\) = 1.77 and \(n_{3}\) = 2.93) and the last one with a lower intermediate index (\(n_{4}\) = 1.48). The absorbing film, with optimized surface impedance \(Z_{\text{abs}}\simeq 103\,\Omega/\square\), is located between the third and the fourth layers. The metal backshort is at the end of the stack. The absorption curve is now very close to 1 across the 5:1 bandwidth around \(\nu_{0}\), see curves labelled d) in Fig. 2, and its average value is boosted up to \(A\) = 99.5% (\(R=-23\) dB).
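The ideal stack of Fig. 3a can be checked with a few lines of propagation-matrix (ABCD) code. The sketch below (normal incidence, ideal lossless materials, our own implementation rather than the authors' code) cascades the four quarter-wave layers, the shunt absorbing film and the PEC backshort:

```python
import numpy as np

Z0, nu0 = 377.0, 240e9
layers = [1.25, 1.77, 2.93, 1.48]   # quarter-wave indices n1..n4
Zs = 103.0                          # absorber surface impedance [ohm/sq]

def layer_matrix(n, nu):
    delta = (np.pi / 2) * (nu / nu0)    # each layer is a quarter-wave at nu0
    Z = Z0 / n
    return np.array([[np.cos(delta), 1j * Z * np.sin(delta)],
                     [1j * np.sin(delta) / Z, np.cos(delta)]])

shunt = np.array([[1, 0], [1 / Zs, 1]])  # absorbing film as a shunt admittance

def reflection(nu):
    M = (layer_matrix(layers[0], nu) @ layer_matrix(layers[1], nu)
         @ layer_matrix(layers[2], nu) @ shunt @ layer_matrix(layers[3], nu))
    Zin = M[0, 1] / M[1, 1]              # input impedance with a shorted (PEC) load
    return (Zin - Z0) / (Zin + Z0)

nu = np.linspace(80e9, 400e9, 501)
A = 1 - np.abs([reflection(v) for v in nu]) ** 2
print(f"band-averaged absorption: {A.mean():.4f}")  # ~0.995, cf. curves d) in Fig. 2
```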
The materials generally employed to build mesh-devices are: i) porous PTFE (\(n_{\text{pPTFE}}\simeq 1.25\)), used as anti-reflection (AR) coating; ii) polypropylene (\(n_{\text{PP}}\simeq 1.48\)), used as the substrate supporting the embedded metal grids. These two materials can be used respectively for the first and the fourth layers of the mesh-absorber, so that \(n_{1}=n_{\text{pPTFE}}\) and \(n_{4}=n_{\text{PP}}\). However, the indices \(n_{2}\) and \(n_{3}\) can be artificially synthesized by embedding close-packed copper grids into polypropylene, following a method described elsewhere [34]. The two quarter-wavelength sections require pairs of embedded capacitive grids, i.e., periodic square patches, with different filling factors. The capacitive grids have a period of \(g=100\,\mu\)m; this size allows the artificial media to be 'seen' by the radiation as homogeneous dielectrics, with indices very close to \(n_{2}\) and \(n_{3}\), across a large bandwidth (0-500 GHz) [30].
The material used for the absorbing film is bismuth. The film will be required to have a specific pattern and surface impedance; this will be discussed in detail in Sec. 5. The metal backshort is realized with a 400 nm copper deposition.
The final device, apart from the AR-coating layer, is completely made with polypropylene with the four capacitive grids, the absorber and the backshort either embedded or deposited on it (Fig. 3b). The total
thickness of the device is \(\sim\)750 \(\mu\)m, equivalent to \(\sim\)0.6 \(\lambda\) and \(\sim\)0.2 \(\lambda\) respectively at the central and lowest operational frequencies.
### Finite-element modelling
As mentioned in Sec. 3, the performance of the devices sketched in Fig. 1 has been computed using a propagation-matrix code. In this case the absorbing films were modelled as shunt admittances, the various layers as homogeneous media, and the radiation was assumed to be at normal incidence. The actual mesh-absorber device is made not only with homogeneous layers but also with metamaterials, which can be accurately simulated using finite-element analysis (FEA). We have used the commercial software Ansys HFSS [35] for our preliminary modelling, detailed design and parameter retrieval of the final manufactured device.
The HFSS models consist of unit cells with periodic boundaries that mimic infinite arrays (Fig. 4). The boundaries are of the 'master & slave' type to allow the simulation of radiation at any angle of incidence and for both types of polarisation (S or P). The porous PTFE and the polypropylene substrates are modelled as homogeneous dielectric materials (the latter with a frequency-dependent loss tangent), and the metal grids as thin copper patches with finite conductivity. The absorbers are modelled in two different ways: a) as homogeneous surfaces with an associated impedance; b) as patterned surfaces with an associated surface impedance. The reason for using these two types of models will be clarified later.
In designing an absorbing device, in addition to low reflectivity and large bandwidth, there could be also requirements in terms of incidence angles. Depending on the application, the device might be required to work off-axis, or to maintain high performance over a wide range of angles. In these cases, we expect the absorption to vary because the radiation will travel different path lengths through the layers of the absorber. However, the absorber can be designed to have maximum performance at a specific angle \(\theta\) and this could correspond to the average angle within the above range. In our case, to prove the device working principle we have chosen \(\theta=22.5^{\circ}\), which corresponds to the reflection angle of our testing setups (see Sec. 6). We note that working off-axis implies the distinction between S and P polarisations, and the absorber will then need to be efficient in both configurations.
The initial design was simulated using a model of the type shown in Fig. 4a, i.e., with a homogeneous absorbing surface between the third and the fourth layers. The surface impedance was varied in the range 95-115 \(\Omega\)/\(\square\), the incidence angle was set to \(\theta=22.5^{\circ}\), and both S and P polarisation absorption coefficients were evaluated. The results of these simulations are reported in Fig. 5 and Fig. 6 for the S and P polarisations, respectively. The best averaged off-axis absorption across the 5:1 bandwidth (80-400 GHz) is achieved by choosing surface impedance values of \(Z_{\text{abs},22.5^{\circ}}\approx 105\,\Omega/\square\) and \(Z_{\text{abs},22.5^{\circ}}\approx 100\,\Omega/\square\) for the S and P polarisations, respectively. The average of these values is close to the one obtained in the on-axis case, i.e., \(Z_{\text{abs}}\simeq 103\,\Omega/\square\). By choosing the latter as an off-axis trade-off value, the averaged reflection becomes almost polarisation independent, at the -21.6 dB level for both S and P polarisations. We note that the off-axis operation slightly shifts the operating band to higher frequencies (Fig. 5 and Fig. 6).
Fig. 4: Mesh-absorber HFSS models: a) homogeneous absorbing surface; b) patterned (inductive) absorbing surface.
Fig. 3: a) Sketch of the magnetic mirror absorber made with homogeneous materials; b) Sketch of the mesh-absorber realization using copper grids and the bismuth layer embedded into polypropylene.
Fig. 6: P polarisation reflection coefficient vs. frequency for 22.5\({}^{\circ}\) incidence angle and different impedances for the absorber.
## 5 Manufacture and Design Fine-Tuning
The manufacture of the mesh-absorber followed several steps: a) manufacture and testing of the graded-index section (Part 1); b) R&D on uniform resistive films; c) R&D on patterned resistive films; d) heat-bonding tests with a dummy absorber device; e) manufacture of the backshort and the final assembly.
### Part 1 - Graded index
The graded-index part of the absorber, Part 1 in Fig. 3, was the first to be manufactured using the standard mesh-filter processes. It consisted of three quarter-wavelength layers: the first made with pPTFE (\(n_{1}\)), the second and the third with polypropylene-embedded capacitive meshes whose geometry and spacing were designed to achieve the effective refractive indices \(n_{2}\) and \(n_{3}\). The measured thickness of the assembly was close to the nominal value within the measurement error (\(\sim\)2 \(\mu\)m). Transmission measurements were performed on-axis using the Fourier-transform spectrometer (FTS) to check the performance of this part of the absorber (see Fig. 7). A finite-element model of Part 1 was built to compare its predictions with the measured data. The shape of the transmission curve of Part 1 is not in itself relevant, but it is useful for extracting more accurate values of the various parameters. The measured data, up to 600 GHz, extend beyond the frequency range of the device. The transmission peak around 480 GHz is used to fit the data with higher accuracy. A four-parameter optimisation of the finite-element model was run across a discrete number of frequency points to fit the measured data. The parameters were the pPTFE and polypropylene refractive indices and the \(a/g\) parameters of the two pairs of capacitive grids (see Fig. 4). The overall thickness of Part 1 was set to the nominal one and the grid period (\(g=100\,\mu\)m) was not varied. The copper conductivity was assumed to have the standard value we use for these grids, \(\sigma_{0}=4\times 10^{7}\,\)S/m. The fit procedure led to the following values: \(n_{\text{pPTFE}}=1.23\), \(n_{\text{PP}}=1.48\), \((a/g)_{1}=0.186\) and \((a/g)_{2}=0.045\). The refractive indices fall within the expected ranges of variability related to the bonding processes. The slightly higher fitted values of \(a/g\), as compared to the design values, imply over-etching of the capacitive grids: the sides of the square patches turned out to be smaller by \(\sim\)1 \(\mu\)m in both pairs. The values of all the above parameters, either measured or estimated, will be used from now on to model this part of the device and to optimise the other parts, as discussed in the following sections.
### Homogeneous absorbing films
One way to realise a resistive sheet consists in evaporating a thin film of metal on a polymer substrate. If the film thickness is well below the skin depth \(\delta_{s}\) at all the frequencies of operation, the radiation going through the film will interact with the resistive layer and some of its power will be dissipated across it. For a given resistivity \(\rho\) and thickness \(t\), the film surface impedance equals \(Z_{S}=\rho/t\). If \(t\ll\delta_{s}\), the surface impedance can be considered constant with frequency.
In the case of an ideal free-standing resistive film, the transmission, absorption and reflection coefficients as a function of the surface impedance can be easily computed using a transmission-line circuit model. From these curves, reported in Fig. 8, we note that by measuring the transmission coefficient of an absorbing film we can immediately infer its surface impedance.
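A minimal sketch of that transmission-line calculation (an infinitely thin sheet in free space, substrate effects neglected) reproduces the curves of Fig. 8 and the inversion from a measured transmission to the sheet impedance:

```python
import numpy as np

Z0 = 377.0   # free-space impedance [ohm/sq]

def film_coefficients(Zs):
    """Free-standing resistive sheet as a shunt element on a matched line."""
    t = 2 * Zs / (2 * Zs + Z0)       # amplitude transmission
    r = -Z0 / (2 * Zs + Z0)          # amplitude reflection
    T, R = t ** 2, r ** 2
    return T, R, 1.0 - T - R         # absorption peaks at 50% for Zs = Z0/2

def impedance_from_transmission(T):
    """Invert T = (2 Zs / (2 Zs + Z0))^2 to recover the sheet impedance."""
    s = np.sqrt(T)
    return Z0 * s / (2 * (1 - s))

# measured transmissions of the Bi-1 and Bi-2 samples discussed below
for T in (0.252, 0.003):
    print(f"T = {T:.3f} -> Zs ~ {impedance_from_transmission(T):.1f} ohm/sq")
```

The same inversion applied to the patterned samples further below returns their effective impedances to within the quoted accuracy.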
One of the outcomes of the preliminary design is that the absorbing film of our device needs to have a surface impedance of the order of \(Z_{\mathrm{S}}\approx 103\,\Omega/\square\). Materials such as copper or gold cannot be used, because their high electrical conductivity would imply film thicknesses of less than 1 nm. What is required is a metal with much lower conductivity, at least two orders of magnitude lower, that can also be evaporated on a polymer substrate and processed using the mesh-filter techniques. The conductivity of a thin film of evaporated bismuth depends on its thickness and typically varies in the range \(\sigma\sim(0.35\)-\(1.92)\times 10^{5}\,\)S/m for thicknesses \(t=14\)-\(220\) nm [36]. Using these values as a starting point, the targeted \(Z_{\mathrm{S}}\sim 103\,\Omega/\square\) could be achieved with a bismuth film roughly 70 nm thick.
Before processing a specific absorbing film, it was necessary to carry out some bismuth evaporation tests on the polymer substrates normally used in mesh devices: Mylar and polypropylene. The bismuth deposition was processed via thermal evaporation in a \(10^{-6}\) mbar vacuum. Achieving reproducibility in the evaporation processes was one of the most critical factors. The first samples were tested in transmission using an FTS test-bed covering the frequency range of 150-600 GHz. Examples of these measurements are shown in Fig. 9. Having verified the very flat frequency response of the films, the samples manufactured subsequently were tested using a vector network analyser (VNA) test-bed across a narrower frequency range, i.e., 160-260 GHz.
The first sample, Bi-1, was a 45 nm Bi film on a 0.9 \(\mu\)m thick Mylar substrate, whereas the second, Bi-2, was a 450 nm Bi film on a 4 \(\mu\)m thick polypropylene substrate. The Bi thickness was measured using a quartz crystal monitor which was calibrated using a surface profilometer (error of \(\pm\)59\(\%\)). These samples were useful to validate both the modelling tools and the processing.
The Bi-1 and Bi-2 samples showed constant transmissions of the order of \(T_{\mathrm{Bi\text{-}1}}\simeq 0.252\) and \(T_{\mathrm{Bi\text{-}2}}\simeq 0.003\), respectively. The curves in Fig. 8, although not including the small effects of the thin substrates, can be used to infer the associated surface impedances with a good degree of accuracy (at a
Figure 8: Ideal free-space absorber transmission, absorption and reflection coefficients as a function of its surface impedance. The absorber is modelled as an infinite surface with no thickness.
Figure 7: Part 1 on-axis transmission measured and finite-element best fit model obtained by varying the pPTFE and PP refractive indices and the grid geometries.
1% level): \(Z_{\mathrm{Bi\text{-}1}}\sim 189\,\Omega/\square\) and \(Z_{\mathrm{Bi\text{-}2}}\sim 10\,\Omega/\square\). Simplified versions of the HFSS model shown in Fig. 4a have been used to infer the conductivity of bismuth. These models consisted of a dielectric substrate and a thin metal layer with variable conductivity. Optimisation fits were run to match the transmission coefficients of the model to the measured data, yielding the following conductivity values: \(\sigma_{\mathrm{Bi\text{-}1}}=1.2\times 10^{5}\) S/m and \(\sigma_{\mathrm{Bi\text{-}2}}=2.2\times 10^{5}\) S/m. These values are broadly consistent with those reported in [36] (although slightly outside the measured range).
### Patterned absorbing films
A homogeneous absorbing film with the target impedance \(Z_{\mathrm{S}}\sim 103\,\Omega/\square\) could, in principle, be processed. However, the final device, as sketched in Fig. 3b, requires the Bi-film to be sandwiched between polypropylene layers, and the associated heat-bonding process does not work in the presence of uniform metal layers. For this reason, the Bi-film needs to be patterned as an inductive grid, e.g., with square holes in it. This allows the polypropylene layers on either side of the film to penetrate the sheet and bond during the hot-pressing process. We used an inductive pattern with a period \(g=25\,\mu\)m and the standard \(a/g=0.14\) (see Fig. 4b).
For a given thickness, the surface impedance of a patterned layer of bismuth will be higher than that of the uniform film, due to the dilution of the metal across the unit area. This means that the target surface impedance will be achieved with a thickness larger than the one estimated earlier (70 nm). The required thickness was estimated using another HFSS model (similar to the previous one but including a patterned bismuth layer) and turned out to be \(t\simeq 180\) nm.
Again, before proceeding to the final film evaporation, the above process required some development. The inductive pattern was obtained using a lift-off process, rather than the standard etching process on a homogeneous layer of bismuth. This was more suitable given the materials, dimensions and thicknesses involved. However, the lift-off process directly provided the final inductive pattern, without the possibility of first testing the uniform layer.
Two patterned film samples were processed, Bi-3 and Bi-4, respectively with a 90 nm and a 270 nm bismuth layer on 9 \(\mu\)m thick PP substrates. FTS and VNA transmission measurements provided averaged transmissions of \(T_{\mathrm{Bi\text{-}3}}\simeq 0.340\) and \(T_{\mathrm{Bi\text{-}4}}\simeq 0.050\), corresponding to effective surface impedances of \(Z_{\mathrm{Bi\text{-}3}}\sim 263\,\Omega/\square\) and \(Z_{\mathrm{Bi\text{-}4}}\sim 54\,\Omega/\square\). Using the HFSS models and running optimisations to fit the data, it was possible to infer the thin-film conductivities, which turned out to be \(\sigma_{\mathrm{Bi\text{-}3}}\simeq 1.4\times 10^{5}\) S/m and \(\sigma_{\mathrm{Bi\text{-}4}}=2.3\times 10^{5}\) S/m.
Given the success of the previous processing, the final absorbing film with the targeted \(Z_{\mathrm{S}}\sim 103\,\Omega/\square\) could now be manufactured. The patterned sample Bi-5 had a 175 nm thick layer of bismuth on a 9 \(\mu\)m thick PP substrate. Its averaged transmission \(T_{\mathrm{Bi\text{-}5}}\simeq 0.122\) implied an equivalent surface impedance \(Z_{\mathrm{Bi\text{-}5}}\sim 101\,\Omega/\square\), not far from the goal, and a film conductivity \(\sigma_{\mathrm{Bi\text{-}5}}=1.9\times 10^{5}\) S/m, close to what could be extrapolated from [36], i.e., \(\sigma_{\mathrm{Bi}}(175\text{ nm})\simeq 1.7\times 10^{5}\) S/m.
### Bonding process test with dummy PP layers and absorbing film
All the constituent parts of the mesh-absorber were ready to be assembled. The final process still to be verified was the bonding of the patterned bismuth film between PP layers, which had never been done before. For this purpose, the patterned Bi-3 sample was sandwiched and successfully hot-pressed between two layers of PP with thicknesses equal to those of the final device, i.e., 284 \(\mu\)m and 213 \(\mu\)m.
VNA transmission measurements of this 'inefficient' dummy absorber were conducted to check for any potential variation of the surface impedance during the bonding process (Fig. 10). Two HFSS models were built to simulate the dummy sandwich, with either a uniform or a patterned resistive film. Both were used to fit the experimental data by varying the substrate refractive index and either the equivalent surface impedance (first model) or the conductivity of the patterned bismuth layer (second model). The optimisations yielded almost indistinguishable transmission curves and the following values of the film parameters: \(Z_{\mathrm{Bi\text{-}3}}\sim 233\,\Omega/\square\) and \(\sigma_{\mathrm{Bi\text{-}3}}\sim 1.6\times 10^{5}\) S/m. These results imply a reduction of \(\sim\)11% of the original impedance as a consequence of the bonding process.
### Part 2 and the final assembly
The results of the dummy bonding implied a potential reduction of the surface impedance of the sample Bi-5 when bonded within the final device. HFSS models of the complete absorber, including the patterned Bi-5 film with either the nominal (\(Z_{\mathrm{Bi\text{-}5}}\sim 103\,\Omega/\square\)) or the reduced (\(Z_{\mathrm{Bi\text{-}5}}\sim 91\,\Omega/\square\)) value of surface impedance, were run for
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline
**Film** & \(t\) **[nm]** & **Pattern** & \(T\) & \(Z_{S}\) [\(\Omega/\square\)] & \(\sigma_{\text{HFSS}}\) [S/m] \\ \hline
Bi-1 & 45 & Full & 0.252 & 189 & 1.2E+5 \\
Bi-2 & 450 & Full & 0.003 & 10 & 2.2E+5 \\
Bi-3 & 90 & 125/0.14 & 0.340 & 263 & 1.4E+5 \\
Bi-4 & 270 & 125/0.14 & 0.050 & 54 & 2.3E+5 \\
Bi-5 & 175 & 125/0.14 & 0.122 & 101 & 1.9E+5 \\ \hline
\end{tabular}
\end{table}
Table 1: **Characteristics of the bismuth absorbing films**
Figure 10: Transmission measurements of the dummy absorber together with its best model fit obtained by varying the substrate refractive index and bismuth conductivity.
Figure 9: Transmission measurements of the bismuth film samples.
both polarisations. The overall reflection averaged over the 80-400 GHz band remained below \(-20\) dB for both S and P polarisations, thus allowing us to proceed with the assembly of the final mesh-absorber.
The manufacture of Part 2 was straightforward, because this part consisted of bonded layers of PP with a uniform copper evaporation on one side (Fig. 3b). The whole device was eventually manufactured by stacking and hot-pressing Part 1, Bi-5, and Part 2. Photographs of the front and back sides of the final mesh-absorber are shown in Fig. 11. An example of the finite-element simulations of the mesh-absorber, run at 250 GHz, is shown in Fig. 12.
## 6 Experimental characterisation
Here we briefly describe the experimental setups used for the various tests and the detailed characterisation of the final mesh-absorber.
### VNA and FTS experimental setups
The breadboard samples and the final device were tested using two different testbeds: a Fourier transform spectrometer of the Martin-Puplett type, operating in the 50-600 GHz frequency range by means of a cryogenically cooled bolometer detector, and a Rohde & Schwarz ZVA67 vector network analyser operating from 75 to 330 GHz by means of standalone frequency extenders. As mentioned earlier, the FTS was initially used to measure the transmission coefficients of the first Bi samples and verify their broadband flat response. The VNA was then used to extract the surface impedances of the other Bi samples over narrower frequency ranges and to quantify the Bi conductivity changes in the dummy absorber. The FTS was used for the transmission measurements of Part 1 and for the full-band reflection measurements of the final absorber. The relevant experimental setups have been described in detail elsewhere: the VNA transmission measurements in [37], the FTS transmission measurements in [33], and the FTS reflection measurements in [38].
We note that sample flatness proved to be a crucial factor in the quality of the reflection measurements. In the tests, the absorber device was clamped in a holder and placed inside a mount behind an aperture; the holder was rotated about the optical axis, and the small deviations of the received signal provided evidence of suitable flatness.
### Absorber tests and results
The mesh absorber was tested with the FTS at a 22.5\({}^{\circ}\) incidence angle with both S and P polarisations. The measured data are reported in Fig. 13 and Fig. 14. A very good performance of the device across a wide frequency band is immediately evident for both polarisations. Quantitatively, the measured absorption coefficients averaged over the 80-400 GHz frequency range (a 5:1 bandwidth) were \(A_{22.5,s}\) = 99.2% (\(R_{22.5,s}\) = \(-21.2\) dB) and \(A_{22.5,p}\) = 99.4% (\(R_{22.5,p}\) = \(-22.0\) dB), respectively. These results meet the requirement set at the beginning, i.e., absorption \(\geq\) 99% (reflection \(<-20\) dB). The differential absorption between S and P polarisations is also very small, at the 0.2% level.
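The quoted band-averaged absorptions follow directly from the measured reflections, since the copper backshort suppresses transmission; a minimal conversion, assuming \(A=1-R\):

```python
def absorption_from_reflection_dB(R_dB):
    """Absorption fraction from a power reflection coefficient in dB,
    assuming a reflective backshort (negligible transmission)."""
    return 1.0 - 10.0 ** (R_dB / 10.0)

for pol, R_dB in (("S", -21.2), ("P", -22.0)):
    print(f"{pol}: A = {100 * absorption_from_reflection_dB(R_dB):.1f} %")
# S: A = 99.2 %, P: A = 99.4 %, reproducing the quoted figures.
```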
Although the design and development procedure successfully led to the desired performance, we have also tried to fit the final data with an HFSS model of the type shown in Fig. 4b. Among the many parameters required to
Figure 11: Photographs of the mesh absorber taken from the AR-coating side (white) and from its copper backshort side. The device has a diameter of 100mm, it is \(\sim\)750\(\upmu\)m thick, and it is mechanically flexible.
Figure 14: Mesh-absorber simulated and measured reflection coefficient versus frequency for the P polarisation at 22.5\({}^{\circ}\) incidence angle.
Figure 12: Finite-element simulation showing the absorption of the electromagnetic field at 250 GHz, with light incident at 22.5\({}^{\circ}\) and both S and P polarisations present.
Figure 13: Mesh-absorber simulated and measured reflection coefficient versus frequency for the S polarisation at 22.5\({}^{\circ}\) incidence angle.
run the model, some were identified and fixed previously: the copper conductivity, the device thickness (directly measured) and the capacitive grid geometries \((a/g)_{1,2}\) (extracted by testing Part 1). The remaining parameters, the PP and pPTFE refractive indices as well as the bismuth conductivity, could still change during the final bonding, and we let them vary in the final fit. Changes in \(n_{P}\) |
2307.10185 | BigDipper: A hyperscale BFT system with short term censorship resistance | Byzantine-fault-tolerant (BFT) protocols underlie a variety of decentralized
applications including payments, auctions, data feed oracles, and decentralized
social networks\cite{chainlink,lens}. In most leader-based BFT protocols, an
important property that has been missing is the censorship resistance of
transactions in the short term. The protocol should provide inclusion guarantees
in the next block height even if the current and future leaders have the intent
of censoring. In this paper, we present a BFT system, BigDipper, that achieves
censorship resistance while providing fast confirmation for clients and
hyperscale throughput. The core idea is to decentralize inclusion of
transactions by allowing every BFT replica to create its own mini-block, and
then enforcing the leader on their inclusions. To achieve this, BigDipper
creates a modular system made of three components. First, clients use a
transaction broadcast protocol to send transactions to multiple replicas. As a
distribution of replicas receiving the client's transactions, they prepare
mini-blocks to send to the data availability (DA) component, which
characterizes the censorship resistant properties of the whole system. We
design three censorship resistant DA (DA-CR) protocols whose properties are
captured by three parameters. The third component interleaves the second DA-CR
protocol into the leader based BFT protocol, enforcing the leader to include
all the data from the DA-CR in the final block. Finally, we demonstrate an
integration with a two-phase Hotstuff-2. | Bowen Xue, Soubhik Deb, Sreeram Kannan | 2023-07-03T22:41:27Z | http://arxiv.org/abs/2307.10185v3 | # BigDipper: A hyperscale BFT system with short term censorship resistance
###### Abstract
Byzantine-fault-tolerant (BFT) protocols underlie a variety of decentralized applications including payments, auctions, data feed oracles, and decentralized social networks [15, 29]. In most leader-based BFT protocols, an important property that has been missing is the censorship resistance of transactions in the short term. The protocol should provide inclusion guarantees in the next block height even if the current and future leaders have the intent of censoring. In this paper, we present a BFT system, BigDipper, that achieves censorship resistance while providing fast confirmation for clients and hyperscale throughput. The core idea is to decentralize the inclusion of transactions by allowing every BFT replica to create its own mini-block, and then enforcing the leader to include them. To achieve this, BigDipper creates a modular system made of three components. First, clients use a transaction broadcast protocol to send transactions to multiple replicas. As the replicas receive the clients' transactions, they prepare mini-blocks to send to the data availability (DA) component, which characterizes the censorship resistant properties of the whole system. We design three censorship resistant DA (DA-CR) protocols whose properties are captured by three parameters. The third component interleaves the second DA-CR protocol into the leader based BFT protocol, enforcing the leader to include all the data from the DA-CR in the final block. Finally, we demonstrate an integration with a two-phase Hotstuff-2.
## 1 Introduction
Short term censorship resistance is an unusual property that differs from the conventional liveness property of BFT protocols. It guarantees transaction inclusion in the next block even if the current and future leaders have the intent of censoring. This property is especially useful for transactions whose value is sensitive to the block height at which they are included; for example, an auction bid trying to enter at the last permissible block height. The benefits of this property generalize further, to use cases such as periodic inclusion at precise block heights and real-time interactions like high-frequency trading, collateral liquidation and web3 social networks. In a partially synchronous network, conventional leader based BFT protocols [42, 9, 30, 14] cannot provide this property, because the leader unilaterally decides which transactions appear and in what order. The middleman nature of the leader creates a principal-agent problem between
the BFT protocol and the leader, where the leader can extract benefits from the protocol through arbitrage, sandwich attacks and bribery for censorship. There are prior works [27, 28] that use a notion of fair ordering to constrain the leader. However, those protocols require every transaction to be received by a majority of replicas, so that a fair-order consensus can be reached based on the relative receiving times, producing a first-come-first-serve (FCFS) ordering. Both fairness and censorship resistance are addressed in one shot, but this comes with the downsides of limited scalability and high protocol complexity. Moreover, the rationale and implications of FCFS are still under debate. BigDipper treats the problem differently by focusing primarily on the property of short term censorship resistance while leaving an interface to support any ordering mechanism. The benefits are twofold. First, the property of short-term censorship resistance is itself useful for many applications, and by removing the fair-ordering constraint the throughput of the protocol can scale linearly as the number of replicas in the system increases. Second, by decomposing the protocol stack with modular interfaces, new protocols can work directly on the fair ordering problem by stacking on top of existing protocols. A concurrent submission called Travelers makes use of this modular approach to develop a scalable fair ordering protocol; BigDipper can be directly plugged into its protocol stack as its censorship resistant BFT consensus.
Transactions differ in their urgency for censorship resistance: an expiring auction bid is more important than a transfer of dinner fees among friends. But this information is known only by the clients, who are themselves dynamically changing over time. So a flexible protocol is unlikely to be one that offers a rigid inclusion guarantee for everything; the Shard protocol in Appendix 8 fails on this count. Essentially, between the protocol and the clients there are two information asymmetries: one is about the time sensitivity of a transaction's value, the other is about which replicas are malicious. Although clients can use transaction value or tips to signal urgency, that information alone is not enough for censorship resistance, because determined malicious replicas can censor regardless of the tips.
In BigDipper, we offer robust and flexible censorship resistance. It is achieved through an architectural change to the conventional BFT protocol: all transactions are submitted to a new component called the **Data Availability and Censorship Resistant (DA-CR)** layer, which is maintained among the BFT replicas; a valid block can then be created by a leader only by including all the transactions available within the DA-CR. If the leader fails to include some transactions from it, the leader will fail to reach consensus and will be replaced for violating liveness. But leader enforcement alone is insufficient to achieve censorship resistance: we also need all replicas to be able to permissionlessly add transactions to the DA-CR. Because the BFT replicas are decentralized, they are used as the source of censorship resistance. From the perspective of a consensus protocol, the DA-CR is an intermediate layer that restores decentralization of transaction inclusion from the leader. To this end, replicas collect and batch transactions into a new data structure called a mini-block, which is the basic unit sent to the DA-CR.
The final BFT block is made of those mini-blocks. A client uses a transaction broadcast protocol to have its transactions batched into mini-blocks by replicas.
We make two contributions to the DA-CR component. First, we delineate its properties by three parameters, \((\rho,t,\eta)\), and we produce three designs that demonstrate different trade-offs with regard to message complexity, trust assumptions and degree of censorship resistance. The parameterization of each property allows us to evaluate the effectiveness of the protocol: a protocol is more useful if it can achieve a desired property at low cost. Specifically, \(\rho\) measures the data-tampering resistance to adversarial attacks, \(t\) measures the amount of censorship resistance, i.e., how many honest mini-blocks have to be included in the final block, and \(\eta\) measures how much influence a leader can impose on the inclusion of mini-blocks. We elaborate the exact definitions in Section 3. Each of the three protocol designs has distinct \((\rho,t,\eta)\) properties. The three DA-CR designs can be categorized into two general design patterns depending on whether they support accountability. In an accountable protocol, every replica knows whose mini-blocks are included in the final block. In Sections 3 and 4, we delineate their differences and the implications for the derived protocols.
The second technical contribution is a transaction broadcast protocol that enables clients to adjust the amount of resources spent on censorship resistance. We use the probability of inclusion as a quantifiable measure of censorship resistance, from which a client can choose the desired quantity. Once a probability is selected, it translates directly into the number of transaction copies that should be sent to distinct replicas. The more copies of a transaction a client sends, the higher the chance the transaction gets included, up to probability 1. Allowing clients to express inclusion urgency through probability simplifies the protocol complexity. We discuss spamming and transaction de-duplication in Appendix 2.2; the central idea is to correlate the spent resources with transaction fees.
A final technical contribution is an enforcement rule inserted into the BFT consensus path, so that no leader can append a BFT block to the ledger unless all the data from the DA-CR is included. We provide an integration with a two-phase leader based BFT protocol, Hotstuff-2 [30], in Section 6.2, and prove both safety and liveness. The integration fits all DA-CR parts inside the two-phase BFT protocol and requires no change to the pacemaker during leader rotation.
#### 2.0.1 Why we prefer leader based protocols
It is an empirical observation that resource distributions are highly asymmetrical. For example, the top three mining pools on Bitcoin hold 65.9% of the total hash power [5]. In Ethereum post-merge, the Flashbots relay propagated 80% of blocks [41]. We expect a similar pattern among the parties that run the hardware (replicas). Because BFT protocols assume that at most 1/3 of replicas are malicious, collusion among decentralized parties becomes less likely when the total number of replicas is large. To achieve that, we need the communication complexity on each node to be small. But since resourceful parties still exist and participate in the system, they can play the role of performing heavy
computation, as long as they are rewarded accordingly. Leader based protocols fit this model well. More discussion is given in Appendix 9.
### Comparison with Other BFT protocols
Many asynchronous BFT protocols provide censorship resistance by nature, including HoneyBadger [31] and DispersedLedger [40]. In HoneyBadger, every replica creates its own mini-block and runs a binary agreement protocol in parallel to commit at least \(n-f\) mini-blocks. There is no centralized leader in the system, and because of that, the asynchronous system enjoys censorship resistance inherently. Many DAG-based BFT protocols like Bullshark [39], Tusk [16], Dumbo [21] and VABA [1] are also leaderless and therefore censorship resistant. But without coordination, the leaderless protocols have an all-to-all communication pattern which hurts communication complexity.
To understand the protocols' trade-offs, we use three metrics to systematically compare their performance: **worst case inclusion distance**, **system throughput**, and **block interval**. The worst case inclusion distance measures the largest number of blocks for which malicious replicas can prevent a transaction from entering the blockchain in the worst scenario.
Second, a performant blockchain should support many clients and satisfy the throughput requirements of a wide range of applications. Assuming each replica has a constant bandwidth \(C_{band}\), ideally we want the maximum system throughput to increase linearly as more replicas join the network; as more replicas join, this reinforces both the decentralization of the blockchain and the system performance. To capture the scalability of the system, we measure it as the ratio between the total system throughput and the individual bandwidth.
The third metric focuses on the latency of the block interval, because the shorter the interval, the faster a client can receive transaction confirmations. The latency can be decomposed into two parts: the number of round trips and the communication latency, which is the ratio between the communicated bits and the bandwidth.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline
BFT protocol & Coord. & DA & Download & Consensus & Sys./replica throughput & Rounds & Total comm. & Worst inclusion distance & Max adv. \\ \hline \hline
Hotstuff [42] & ✓ & \(O(nb)\) & ✓ & \(O(1)\) & \(O(1)\) & 4 & \(O(nb)\) & \(f+1\) & 0.33 \\
HoneyBadger [31] & \(\times\) & \(O(nb+n^{2}\log n)\) & ✓ & \(O(n^{2}\log n)\) & \(O(1)\) & \(3+\log n\) & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
DispersedLedger [40] & \(\times\) & \(O(b+n^{2})\) & \(\times\) & \(O(n^{2}\log n)\) & \(O(n)\) & \(3+\log n\) & \(O(b+n^{2}\log n)\) & 1 & 0.33 \\
Prime [33] & ✓ & \(O(nb+n^{2})\) & ✓ & \(O(n^{2})\) & \(O(1)\) & 6 & \(O(nb+n^{2})\) & 1 & 0.33 \\
Dumbo-2 [22] & \(\times\) & \(O(nb+n^{2}\log n)\) & ✓ & \(O(n^{2})\) & \(O(1)\) & 20 & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
Dumbo-NG [21] & \(\times\) & \(O(nb)\) & ✓ & \(O(n^{2}\log n)\) & \(O(1)\) & 9 & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
VABA [1] & \(\times\) & \(O(nb)\) & ✓ & \(O(n)\) & \(O(1)\) & 13 & \(O(nb)\) & 1 & 0.33 \\
Dag-Rider+AVID [26] & \(\times\) & \(O(nb+n^{2})\) & ✓ & \(O(n^{2}\log n)\) & \(O(1)\) & 12 & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
Narwhal-Hotstuff [16] & ✓ & \(O(nb+n^{2})\) & ✓ & \(O(n)\) & \(O(1)\) & 6 & \(O(nb+n^{2})\) & 1 & 0.33 \\
Tusk [16] & \(\times\) & \(O(nb+n^{2})\) & ✓ & \(O(n^{2}\log n)\) & \(O(1)\) & 9 & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
Bullshark [39] & \(\times\) & \(O(nb)\) & ✓ & \(O(n^{2}\log n)\) & \(O(1)\) & 4 & \(O(nb+n^{2}\log n)\) & 1 & 0.33 \\
BigDipper-1/4 & ✓ & \(O(b)\) & \(\times\) & \(O(1)\) & \(O(n)\) & 4 & \(O(b)\) & 1 & 0.25 \\
BigDipper-7 & ✓ & \(O(b+n)\) & \(\times\) & \(O(1)\) & \(O(n)\) & 4 & \(O(b+n)\) & 1 & 0.33 \\
BigDipper-Lite & ✓ & \(O(b+\log n)\) & \(\times\) & \(O(1)\) & \(O(n)\) & 4 & \(O(b+\log n)\) & 1 & 0.33 \\ \hline
\end{tabular}
\end{table}
Table 1: Comparison of BFT protocols in a system of \(n\) replicas
Table 1 summarizes these metrics for the most relevant recent BFT protocols; the analysis of the table is provided in Appendix 11. At a high level, we can decompose any consensus protocol into data availability and consensus. Data availability (DA) ensures that everyone has access to common data in units of blocks, whereas consensus restricts the ordering among blocks. Recent DAG-based consensus protocols decouple into consensus and a transport layer which satisfies DA. DA can either be achieved by requiring every replica to hold the full data while reaching agreement, or by applying an erasure code to the data so that replicas do not hold the full data after finishing the agreement. We add a sub-column, **Download**, to signify this for each protocol, and we separately display the per-replica complexity for DA and consensus.
If a protocol requires every replica to have the entire BFT block while reaching agreement, then the maximal system throughput is upper bounded by each replica's bandwidth, so the system-to-replica throughput ratio is always \(O(1)\). In contrast, with an erasure code it is possible to reach agreement on the data without downloading it: every replica's bandwidth is spent on a distinct set of transactions, so the overall system throughput scales linearly as \(n\) increases. We note that there are types of applications that do not require the full data for execution. For example, a sequencer [18] in the rollup architecture does not need to execute the transactions. More generally, as roles diversify in infrastructure software, like the separation of consensus and execution in Ethereum, it is possible that only a handful of heavy-duty nodes provide the execution API. As long as results are provided with valid proofs, regular replicas do not need to have all transaction data to re-execute everything.
In Hotstuff, the worst inclusion distance is \(f+1\), because there can be a sequence of \(f\) malicious leaders. Narwhal achieves probabilistic censorship resistance with its DAG. Prime [3] is an early leader based protocol with censorship resistance, but it is designed as a defense against performance degradation. All leaderless BFT protocols have a distance of 1, given that sufficiently many replicas have the transaction.
When computing the confirmation latency, the number of communication steps is fixed. The variable part comes from the communication bit complexity, which is the sum of the DA and consensus complexities. Most leader based protocols have a small consensus complexity, because there is a central role coordinating communication among replicas to reach consensus. Most leaderless asynchronous BFT protocols have high complexity for both DA and consensus; these include DAG-based protocols like Dag-Rider [26], Bullshark [39] and Dumbo [23, 21].
BigDipper-1/4 achieves constant overhead while achieving linear system capacity, but it requires the stronger security assumption \(n\geq 4f+1\). BigDipper-7 restores the 1/3 assumption, but each replica incurs a linear overhead on the download path. Finally, BigDipper-Lite is a modification of BigDipper-7 that achieves all desired properties with an overhead of \(O(\log n)\).
## 2 Background and Related Works
### DA, VID and Erasure code
Data availability is implicit in all BFT protocols, in the sense that all replicas have access to common ordered data at any time. Most leader based BFT protocols use a simple form of DA in which the leader broadcasts the data to everyone. There are more sophisticated designs of DA, including AVID [13], AVID-FP [24], AVID-M [40] and SEMI-AVID-PR [33]; we compare our censorship resistant DA with them in Appendix 13.2. Verifiable information dispersal (VID) is a technique that uses a Reed-Solomon code to split data into \(O(n)\) chunks, with each replica storing only a constant number of chunks, while ensuring the data is reconstructable. The verifiability ensures that any two clients always retrieve identical data.
A Reed-Solomon (RS) erasure code is specified by a pair \((k,h)\), where \(k\) is the number of systematic chunks created from the input data, and \(h\) is the coding redundancy, i.e., the number of newly generated parity chunks. The ratio between \(k\) and \(k+h\) is called the coding ratio. The encoding process of an RS code is based on the idea of polynomial interpolation [35], which is a linear operation. Given an input array, the encoding can be implemented so that the output is the concatenation of the input data and the parity chunks. We use \(rsEncode\), \(rsDecode\) to refer to the encoding and decoding operations. Appendix 12.3 contains a more detailed introduction.
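A toy sketch of this polynomial view of RS coding, over a small prime field (parameters and field choice are illustrative only, not those of the actual protocol):

```python
P = 2**31 - 1  # prime field modulus (toy choice, for illustration only)

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def rs_encode(data, h):
    """Systematic (k, h) RS code: the data, then h parity evaluations."""
    pts = list(enumerate(data))          # data symbols sit at x = 0..k-1
    k = len(data)
    return data + [lagrange_eval(pts, x) for x in range(k, k + h)]

def rs_decode(chunks, k):
    """Recover the k data symbols from any k (index, value) pairs."""
    return [lagrange_eval(chunks[:k], x) for x in range(k)]

data = [7, 21, 1987, 42]
coded = rs_encode(data, h=8)             # coding ratio 1/3: 4 of 12 chunks
subset = list(enumerate(coded))[5:9]     # any 4 chunks suffice
assert rs_decode(subset, k=4) == data
```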
### Polynomial Commitment and KZG
A polynomial commitment scheme [25] treats data as a polynomial and provides primitives including _commit_, _createWitness_, and _verifyEval_. The _commit_ primitive provides a concise and unique representation of a polynomial. It has a polynomial binding property, such that it is computationally infeasible to find two polynomials that commit to the same commitment. With the primitives _createWitness_ and _verifyEval_, the scheme allows any party to reveal (open) evaluations at specific indices along with witnesses (proofs) without disclosing the entire polynomial. When presented with a correct witness, any verifier can use _verifyEval_ to check that the evaluations are indeed correct against the commitment. The scheme has an evaluation binding property, such that it is computationally infeasible to find two witnesses for two different polynomial evaluations at the same evaluation index that pass _verifyEval_. Kate-Zaverucha-Goldberg (KZG) [25] is a polynomial commitment scheme that is linear in commitments and witnesses.
KZG can be applied to a two-dimensional matrix. Suppose we have \(n\) polynomials, each of degree \(d-1\), and we want to encode them with a coding ratio of \(\frac{1}{3}\). A matrix can be created where every column contains one polynomial. The RS encoding is applied to each of the \(d\) rows by extending the evaluations to \(2n\) more points. The commitments of the \(2n\) newly generated column polynomials are the polynomial extension of the first \(n\) column commitments, due to the linearity of KZG and RS [11]. A good reference is available at [33]. Figure 2 provides a visualization of this 2D encoding. More discussion can be found in Appendices 12.3 and 12.5.
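The linearity argument can be checked with a toy linear "commitment" (an insecure stand-in for KZG, used here only to show the algebra): RS-extending the \(n\) column commitments yields exactly the commitments of the RS-extended columns.

```python
import random

P = 2**31 - 1          # prime field modulus (toy choice)
random.seed(0)
d, n = 3, 4            # d rows, n systematic columns
G = [random.randrange(1, P) for _ in range(d)]   # fixed commitment bases

def commit(col):
    """Insecure linear 'commitment': a fixed linear map of the column."""
    return sum(g * y for g, y in zip(G, col)) % P

def rs_extend(values, extra):
    """RS-extend `values` (evaluations at x = 0..len-1) by `extra` points."""
    pts = list(enumerate(values))
    def ev(x):
        tot = 0
        for i, (xi, yi) in enumerate(pts):
            num = den = 1
            for j, (xj, _) in enumerate(pts):
                if j != i:
                    num, den = num * (x - xj) % P, den * (xi - xj) % P
            tot = (tot + yi * num * pow(den, P - 2, P)) % P
        return tot
    return values + [ev(x) for x in range(len(values), len(values) + extra)]

cols = [[random.randrange(P) for _ in range(d)] for _ in range(n)]
rows = [rs_extend([c[r] for c in cols], 2 * n) for r in range(d)]  # ratio 1/3
ext_cols = [[rows[r][j] for r in range(d)] for j in range(3 * n)]
# Committing the extended columns == extending the column commitments:
assert [commit(c) for c in ext_cols] == rs_extend([commit(c) for c in cols], 2 * n)
```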
### Combined Signature Scheme and CRHF
We use combined signature to refer to a signature generated from either a multi-signature or a threshold signature scheme; we use the exact name when a specific scheme is meant. On pairing-friendly curves, a practical way to aggregate multiple signatures on a common message into one signature is the BLS multi-signature [6]. The signature scheme provides three secure primitives: \(\sigma_{i}\leftarrow\textit{ms-sign}(sk_{i},m)\); \(\sigma,I_{\sigma}\leftarrow\textit{ms-agg}(pk_{1},\sigma_{1},\cdots,pk_{t},\sigma_{t})\); \(\textit{bool}\leftarrow\textit{ms-verify}(I_{\sigma},m,\sigma)\), where \(I_{\sigma}\) is an indicator vector for the signers, of \(O(n)\) size. In contrast to a multi-signature, a threshold signature shows that at least a threshold fraction of the \(n\) signers have signed a common message, and it requires only constant size as opposed to \(O(n)\). To convert a list of data into a single message, we assume a collision resistant hash function (_CRHF_) with negligible probability of two different data hashing to the same digest. A combined signature is valid if it contains \(n-f\) signatures.
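A minimal sketch of the three primitives, assuming the third-party py_ecc library's BLS implementation; the indicator vector \(I_{\sigma}\) is modelled as a bit list, and the key and message values are toy placeholders:

```python
from py_ecc.bls import G2ProofOfPossession as bls

n, f = 4, 1
sks = [i + 1 for i in range(n)]                 # toy secret keys
pks = [bls.SkToPk(sk) for sk in sks]
C = b"final commitment of the DA-CR instance"

signers = [0, 1, 2]                             # n - f = 3 approvals
sigs = [bls.Sign(sks[i], C) for i in signers]   # ms-sign on the common C

agg_sig = bls.Aggregate(sigs)                   # ms-agg by the leader
I_sigma = [1 if i in signers else 0 for i in range(n)]

# ms-verify: any replica checks the aggregate against the signer bitmap.
signer_pks = [pk for pk, bit in zip(pks, I_sigma) if bit]
assert sum(I_sigma) >= n - f                    # quorum of approvals
assert bls.FastAggregateVerify(signer_pks, C, agg_sig)
```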
### Security Assumptions for BigDipper
The network is assumed to be partially synchronous: there is a notion of an unknown Global Stabilization Time (GST), after which all messages arrive within a known duration \(\Delta\). The security assumption is \(n\geq 3f+1\). All malicious replicas can have arbitrary network latency, and they can act separately or be coordinated by a single adversary. Transaction censoring occurs when malicious replicas (including the leader) exclude some transactions on purpose (this differs from the situation where an honest replica stops waiting and misses transactions, which is not deliberate).
## 3 Censorship Resistant DA
### Primitives
BigDipper is built on three protocol components for providing short term censorship resistance in leader based BFT protocols. The first component delivers clients' transactions to the BFT replicas; the DA-CR ensures censorship resistance; the last component integrates the DA-CR with a standard BFT protocol, like Hotstuff. The DA-CR is the critical component of the BigDipper BFT system, as its properties determine the degree of censorship resistance. In this section, we outline the interfaces and their properties.
The DA-CR provides a Disperse invocation that distributes transaction data among replicas, and a Retrieve invocation to safely retrieve the data from honest replicas. For censorship resistance, we require the Disperse invocation to be initiated in a distributed manner from at least \(n-f\) replicas, and the final dispersal data \(B\) assembled by the leader must also contain at least \(n-f\) mini-blocks. For the Retrieve invocation, either replicas or clients can invoke the procedure to retrieve partial or full dispersed data.
Every invocation is initiated against a DA-CR instance with a unique ID, which is used by replicas and clients to identify the correct context for the intended invocations. The Disperse invocation is associated with two events, a Start event and a Finish event. The Finish event evaluates to one of two types: Complete or Incomplete. Since a malicious leader can stall the protocol by inactivity, the Incomplete type serves as a trigger for the leader's replacement. If a Disperse invocation finishes with the type Complete, it returns a commitment \(C\) and its associated combined signature from \(n-f\) out of the \(n\) replicas. It is possible that a Completed Disperse invocation does not contain mini-blocks from all \(n\) replicas, because either some honest replicas are late for the leader's collection time, or some malicious replicas refuse to participate. For the missing parts, a protocol designer can weigh the trade-offs and decide whether to allow the leader to fill in its own data. We elaborate on this trade-off in Appendix 1.1.
### Properties of DA-CR protocols
A leader based DA-CR protocol that offers censorship resistance must provide the following properties for all DA-CR instances. The DA-CR protocol relies on the partial synchrony assumption, which differs from asynchronous protocols like AVID [13] and AVID-M [40].
* **Termination**: If some replicas invoke \(\mathsf{Disperse}(B)\), then those replicas eventually Finish the dispersal. If at least \(n-f\) replicas invoke \(\mathsf{Disperse}(B)\) and the leader is honest, with the network past GST, then the dispersal Finishes with type Complete.
* **Availability**: If an honest replica Finished a dispersal with type Complete, then any honest retriever can invoke Retrieve and eventually reconstruct a block \(B^{\prime}\).
* **Correctness**: If an honest replica Finished a dispersal with a type Complete, then any honest retriever always retrieves the same block \(B^{\prime}\) by invoking Retrieve. If the leader is honest, then \(B=B^{\prime}\), where \(B\) contains all the mini-blocks available to the honest leader at dispersal.
* **Commitment Binding**: If an honest replica Finished a \(\mathsf{Disperse}(B)\) invocation with a type Complete and a valid combined signature on its commitment, no adversary can find another different \(B^{\prime}\) that verifies correctly against the combined signature.
* **Inclusion-\(\mathbf{t}\)**: If an honest replica Finished a \(\mathsf{Disperse}(B)\) invocation with type Complete, the dispersed data must include at least \(t\) mini-blocks from honest replicas.
* **Space-Capturing-\(\eta\)**: If an honest replica Finished a \(\mathsf{Disperse}(B)\) invocation with type Complete, the dispersed data contains at most \(\eta\) mini-blocks captured by the leader.
* **Data-tampering Resistance-\(\rho\)**: In an accountable inclusion mechanism, if an honest replica Finished a \(\mathsf{Disperse}(B)\) invocation with type Complete, the dispersed data \(B\) must ensure that at least \(\rho\) mini-blocks are not tampered with in the DA-CR instance. If \(\rho=n\), the protocol guarantees that no data can be tampered with.
The last three properties are new, and delineate censorship resistance. As mentioned in Section 1, \((t,\eta,\rho)\) characterize the properties of censorship resistance. We will later use an important concept called **Attestation**: in short, it is a signature on a mini-block by a replica, declaring that the replica intends the mini-block to be included. We refer the readers to Appendix 13.1 for formal definitions of the **Space-capturing** and **Accountable mechanism** notions used in the properties.
Next, we describe three DA-CR protocols; Table 2 summarizes their properties. For ease of reference, we use the suffix of each protocol combined with Card1 as the name of the DA-CR component. Vanilla is a protocol that makes minimal changes to turn a conventional leader based BFT protocol into a censorship resistant one; it is presented in Appendix 16.1.
Footnote 1: Card stands for **c**ensorship **r**esistance **d**ata **a**vailability with flexible ordering.
\(\lambda\) is a security parameter with fixed size, which uniquely represents the data block. \(\kappa\) is a small constant. \(\dagger\) means the parameter \(\eta\) can increase up to \(\eta=f\), see Appendix 18.
## 4 DA-CR Protocol Designs
We begin by presenting a DA-CR design called Card-7 which has properties of \((\rho=n,t=n-2f,\eta=0)\). We present its variant Card-\(\nicefrac{{1}}{{4}}\) in Appendix 15. We provide an outline for Card-Lite that achieves \(O(\log n)\) complexity at the end.
### Card-7 Protocol
At a high level, the Card-7 protocol relies on a Reed-Solomon code, a 2D KZG commitment and an aggregate multi-signature scheme to support the invocations with the above properties.
It is designed as an accountable inclusion mechanism, so that every replica knows whose mini-blocks get included. The formal protocol is given in Algorithm 2 in Appendix 14.1. In the following sections, we present an outline of the Disperse and Retrieve invocations in sequence; the proof is given in Appendix 14.5.
#### 4.1.1 Disperse Invocation
The Disperse invocation for a DA-CR instance involves two round trips between replicas and the leader; its workflow is shown in Figure 1. Upon receiving the leader's _commitment list_, each replica verifies that the dispersal includes at
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
BigDipper protocol & DA-CR protocol & BFT assumption (\(n\geq\)) & \(\rho\) & \(t\) & \(\eta\) & Message complexity & Hyperscale throughput \\ \hline \hline
BigDipper-7 & Card-7 & \(3f+1\) & \(n\) & \(n-2f\) & \(0\dagger\) & \(O(b+\lambda n)\) & ✓ \\
BigDipper-1/4 & Card-1/4 & \(4f+1\) & - & \(n-3f\) & \(n-f-1\) & \(O(b+\lambda)\) & ✓ \\
BigDipper-Lite & Card-Lite & \(3f+1\) & \(n\) & \(n-2f-o(f/\log f)\) & \(o(f/\log f)\dagger\) & \(O(b+\lambda\kappa\log n)\) & ✓ \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison of the DA-CR components of the BFT protocols
Figure 1: Flow diagram for a DA-CR protocol
Figure 2: The 2D matrix data structure used by the leader to produce the \(3n\) commitments. Replica 1 is excluded, as its data is 0. \(b\) denotes a mini-block (systematic chunk); orange cells are parity chunks.
least \(n-2f\) honest mini-blocks, that the parity chunks are generated correctly, and that the leader has not tampered with any data. If all verifications pass, a replica computes a final commitment \(C\) by applying a collision resistant hash function (_CRHF_) to the _commitment list_. The final commitment \(C\) is used as the signing message for approving the dispersal under the multi-signature scheme (Section 2.3). After that, a replica sends an **approval message** containing the signature to the leader. An honest replica signs only once for every unique DA-CR instance, whose ID is strictly increasing with the BFT view number.
In Figure 1, after receiving the approval messages, the leader waits to accumulate \(n-f\) signatures before proceeding. Although \(f\) of them might come from malicious replicas, the remaining \(f+1\) approvals from honest replicas are sufficient to ensure data availability, since \(3(f+1)>n\). With sufficient approvals, the leader generates an aggregate signature and a signer bitmap (Section 2.3), and sends an identical **agreement message** to all \(n\) replicas. When a replica receives a valid aggregate signature with sufficient signers, the corresponding DA-CR instance is considered Finished with the Complete type. Otherwise, after a timeout, a replica concludes that the DA-CR instance Finishes with the Incomplete type.
#### 4.1.2 Retrieve Invocation
The retrieval algorithm is formally defined in Algorithm 4 in Appendix 14.3. The Retrieve invocation guarantees that any part of the dispersed data in any DA-CR instance is retrievable by anyone. If the data of interest is a single mini-block, a Retrieve invocation is first sent to the leader and to the replica possessing the data; if either responds with data, the retriever checks correctness by computing the polynomial commitment on the data and comparing it with the _commitment list_. If both deny the request, the retriever queries and finds \(f+1\) other honest replicas which respond with correct chunks consistent with the commitments. Because all replicas hold distinct chunks, the retriever can perform _rsDecode_ to recover the original dispersed data, which contains all mini-blocks. For partial retrieval from a mini-block \(b_{p}\), the invocation is similar to that for a full mini-block. More importantly, if both deny, the retriever can query partial data and perform a partial reconstruction without needing to reconstruct the entire dispersed data. Because each replica possesses three chunks, a retriever need only contact roughly \(\frac{4f}{5}\) random replicas to reconstruct the data with high probability. Appendix 14.4 presents the analysis and figures.
### Card-Lite Protocol
Card-Lite is an improved protocol based on Card-7 that requires only \(O(\log n)\) communication overhead, at the cost of slightly weaker censorship resistance, as shown in Table 2. We outline the intuition behind the protocol and provide the complete protocol and proofs in Appendix 17.
How to differentiate the two types of leaders is the key problem for any honest replica asked to approve a dispersal. In the context of censorship resistance, an honest leader is required to include at least \(2f+1\) mini-blocks, whereas a
censoring leader includes fewer than \(2f+1\) mini-blocks. It is easy to detect the malicious leader in Card-7, because everyone receives the entire _commitment list_. Here, however, we want each replica to receive only \(O(\log n)\) commitments and attestations.
The core idea is to distinguish the two types of leaders through random sampling. If a leader does not include sufficient mini-blocks in its commitment, there will be some random samples which the leader is unable to answer. If the leader fails to answer too many of them, it will not be able to collect \(2f+1\) signatures to make progress.
The idea assumes that the leader cannot lie about the relation between its responses and its commitment. To achieve that while keeping the complexity low, we require the leader to use a doubly homomorphic commitment scheme [10] on the _commitment list_ (see Appendix 17.1 for background). In the protocol, a sample for chunk \(i\) contains: the attestation from replica \(i\), the commitment \(c_{i}\), the final commitment \(C\), and a KZG binding proof for \(c_{i}\) with respect to \(C\). Because the proof is aggregatable and has size \(O(\log n)\), the overall complexity per sample is just \(O(\log n)\). A random query succeeds if the leader can provide a valid response, and fails otherwise.
Suppose a leader has constructed a 2D KZG matrix and receives a random query; clearly the honest leader has a higher chance of being able to respond than the malicious one. If a batch of \(k\) random queries is requested, the probability that all \(k\) queries succeed decreases exponentially for both types. But if we independently retry (boost) \(L\) batches, eventually the probability of having at least one successful batch becomes sufficiently high. A key observation is that the honest type requires much less boosting; the exact difference depends on the number of mini-blocks \(s\) which the malicious type censors. As shown in Appendix 17, if the leader is censoring \(s=\Omega(f/\log f)\) mini-blocks, and all honest replicas sample \(L=o(n)\) batches each, then with extremely high probability the malicious type is exposed, while the honest type can always pass the queries.
If done naively, every replica would need to send \(kL\) random queries, which is much larger than \(O(\log n)\). But with a Fiat-Shamir transform on the final commitment, we can make the process non-interactive: the leader deterministically generates the random queries and sends back only one successful batch to each replica to prove non-censorship. Upon receiving a batch, a replica can independently verify the tuples and the correctness of the randomness generation. The full analysis is presented in Appendix 17.
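A simplified numerical model of the sampling argument (assuming every query landing on a censored mini-block fails, and ignoring the boosting an honest leader needs when some attestations are missing) shows how quickly a censoring leader is exposed as the total number of verified queries grows:

```python
import math

def p_undetected(s, n, Q):
    """Probability that a leader censoring s of n mini-blocks answers Q
    independent uniform queries without a single failure."""
    return ((n - s) / n) ** Q

n = 3001
for s in (10, 50, 200):
    Q = math.ceil(math.log(1e-9) / math.log((n - s) / n))
    print(f"s = {s}: ~{Q} queries push the escape probability below 1e-9")
# The more mini-blocks a leader censors, the fewer queries are needed.
```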
## 5 Transaction Broadcast
In the new architecture, where all replicas can add transactions to the final block, the client has more options for choosing which replicas to send to. We provide a transaction broadcast mechanism which clients can use to achieve a desired amount of censorship resistance.
### Navigator Protocol
The interface to the protocol is a submit function call, shown in Algorithm 5 in Appendix 19.2. To send a transaction, a client provides two inputs: the transaction and the number of copies \(x\). The algorithm sends the transaction to \(x\) random replicas available for the view number. Replicas continuously wait for transactions to arrive and include them on a best-effort basis.
The difficulty with this interface is deciding how many copies to send to achieve a desired probability of inclusion in the next block. To that end, we create Table 3 based on the worst case assumption that all \(f\) malicious replicas censor transactions all the time. Depending on the type of leader and the number of included honest mini-blocks \(q\), we can compute the probability of inclusion as a function of \(x\). However, since both the leader type and the number \(q\) are unknown to the client, we model the leader with a Bernoulli variable with \(p=2/3\) for each possible leader, and a uniform variable for all possible \(q\). Later, as a client gathers more data, it is possible to update those priors with Bayesian methods. The lower bound for \(q\) is determined by the Inclusion-t property offered by the DA-CR. When a leader is honest, \(q\) ranges from \(t\) to \(n-f\). At the other extreme, when the leader is malicious, there can be at most \(t\) collected honest mini-blocks. At runtime, the actual value of \(q\) falls anywhere between \(t\) and \(n-f\). As the table shows, if a client sends \(x\ll f\) copies, determined malicious replicas can censor it with certainty. However, in Appendix 19.6 we show that although a malicious leader can censor some clients who send only a few copies, the probability of censoring \(u\) different transactions altogether decreases exponentially as \(u\) increases. The rest of the section delineates how the table is computed.
The cases with probability 1 can be proved by the pigeonhole principle. For instance, suppose \(x=n-t\) and \(q=t+1\); then there must be at least one mini-block from an honest replica which includes the transaction, and similarly for any pair with \(x+q\geq n+1\). At the other extreme, when a client sends \(x\leq f\) copies to replicas and a malicious leader has the intent of censoring the transaction, the adversary can selectively include those \(f+1\) honest mini-blocks without the transaction.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Leader type & Malicious & Honest & Honest \\
Number of honest mini-blocks & \(q=t\) & \(q=t\) & \(q=n-f\) \\ \hline \hline
\(n-t+1\leq x\leq n\) & \(\Pr(IN)=1\) & \(\Pr(IN)=1\) & \(\Pr(IN)=1\) \\
\(f+1\leq x<n-t+1\) & see 5.2 & see 5.2 & \(\Pr(IN)=1\) \\
\(0<x<f+1\) & \(\Pr(IN)=0\) & see 5.2 & see 5.2 \\ \hline
\end{tabular}
\end{table}
Table 3: Probability of inclusion as a function of the number of copies and the number of honest mini-blocks included
### Probabilistic Transaction Inclusion
#### 5.2.1 Probability of Inclusion with an Honest Leader
Given that the leader is honest and the client chooses \(x\) replicas randomly without replacement, let \(X\) be a random variable indicating the number of honest replicas that hold the client's transaction. For \(0<i\leq\min{(x,n-f)}\), the probability that \(X=i\) honest replicas hold the transaction is \(\Pr(X=i;x)=\binom{n-f}{i}\binom{f}{x-i}/\binom{n}{x}\). Supposing \(f\) malicious mini-blocks are collected by the honest leader, the probability of inclusion is
\[\Pr(\textit{IN; x,q})=1-\sum_{i=1}^{x}\Pr(\textit{None}\ |X=i;q)\Pr{(X=i;x)} \tag{1}\]
where \(\Pr(\textit{None}\ |X;q)\) is the probability that none of the \(q\) collected honest mini-blocks comes from the \(X\) replicas holding the client's transaction. Supposing all honest replicas have independent and identically distributed network latency, \(\Pr(\textit{None}\ |X=i;q)=\prod_{j=0}^{q-1}\frac{n-f-i-j}{n-f-j}\), and the probability of exclusion decreases exponentially as \(q\) increases. We provide an empirical evaluation in Appendix 19.4 and a table of probabilities in Figure 7.
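Equation (1) is straightforward to evaluate numerically; a sketch (we also include the \(X=0\) term, where no contacted replica is honest, which the sum leaves implicit):

```python
from math import comb

def pr_inclusion_honest(n, f, x, q):
    """Eq. (1): inclusion probability with an honest leader, for x copies
    sent and q honest mini-blocks collected into the block."""
    p_none = 0.0
    for i in range(0, x + 1):            # i = 0 contributes Pr(X=0) * 1
        pX = comb(n - f, i) * comb(f, x - i) / comb(n, x)
        pNone_i = 1.0
        for j in range(q):
            pNone_i *= (n - f - i - j) / (n - f - j)
        p_none += pNone_i * pX
    return 1.0 - p_none

n, f = 100, 33
print(pr_inclusion_honest(n, f, x=5, q=n - 2 * f))  # q = t (lower bound)
print(pr_inclusion_honest(n, f, x=5, q=n - f))      # all honest collected
```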
#### 5.2.2 Probability of Inclusion with a Malicious Leader
When the leader is malicious but a client sends \(x\geq f+1\) copies of a transaction, the probability of inclusion equals \(\Pr(IN)=\sum_{i=f+1}^{x}\binom{n-f}{i}\binom{f}{x-i}/\binom{n}{x}\), for \(f+1\leq x<n-t+1\). Further analysis is provided in Appendix 19.5.
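The malicious-leader bound is the upper tail of the same hypergeometric distribution:

```python
from math import comb

def pr_inclusion_malicious(n, f, x):
    """Inclusion probability under a censoring leader: guaranteed only when
    more than f of the x copies land on honest replicas (Section 5.2.2)."""
    return sum(comb(n - f, i) * comb(f, x - i)
               for i in range(f + 1, x + 1)) / comb(n, x)

n, f = 100, 33
for x in (34, 50, 66):                    # valid range: f+1 <= x < n-t+1
    print(x, pr_inclusion_malicious(n, f, x))
```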
## 6 BigDipper with BFT Integration
### Overview
BigDipper is a system that provides censorship resistance to leader based BFT protocols [14, 9, 42]. After transactions are included in the DA-CR, a leader is required to take all transactions into its next block. By Commitment binding and Correctness property of DA-CR, every replica can arrive at an identical order. In the following, we specify the Scooping algorithm which would be inserted into the consensus path in order to check correct inclusion and DA. Then we demonstrate an integration with the two phase Hotstuff-2[30] protocol which achieves all the properties in Appendix 10.1.
### Scooping Algorithm
The Scooping algorithm is simply a procedure called verifyCard, which verifies that the combined signature returned by the DA-CR **Agreement** message is correct. This procedure is inserted into the BFT consensus path so that a replica does not proceed unless it can find a valid combined signature in its local storage. The algorithm is defined in Appendix 22.1.
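A sketch of the check, with illustrative names (the actual procedure is given in the paper's Appendix 22.1; `ms_verify` stands for the combined-signature primitive of Section 2.3):

```python
def verify_card(instance_id, local_store, n, f, ms_verify):
    """Refuse to advance the consensus path unless a Complete dispersal,
    backed by n - f approvals, is known for this DA-CR instance."""
    record = local_store.get(instance_id)    # stored Agreement message
    if record is None:
        return False                         # no Complete dispersal seen
    C, agg_sig, signer_bitmap = record
    if sum(signer_bitmap) < n - f:           # quorum of approvals
        return False
    return ms_verify(signer_bitmap, C, agg_sig)
```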
### BigDipper-7 with Hotstuff-2
The DA-CR protocol can integrate with any leader based BFT protocol that uses the lock-commit paradigm [30]. When some replicas commit a BFT block, the lock-commit paradigm ensures safety by having at least \(n-f\) replicas lock the committed value, so that no other value can be committed at the same block height. HotStuff [42, 30], Tendermint [9] and PBFT [14] all use this paradigm; they differ in how the liveness property is achieved when the leader is honest. If a leader is malicious, it cannot violate safety due to the lock, and it will be replaced if it violates liveness. We provide an integration of Card-7 with Hotstuff-2, which has appealing properties including its simplicity, optimistic responsiveness and two-phase latency.
The integrated protocol is defined in Algorithm 1, which highlights the additional procedures on top of the Hotstuff-2 protocol. We briefly recall the Hotstuff-2 terminology: a certified block \(B\) at view \(v\), whose certificate is denoted \(C_{v}(B)\), is a block with a combined signature from \(n-f\) replicas; certified blocks can be ranked by height. A certificate is locked if the replica is guarding its block. A BFT block has a double certificate if it has been voted for by \(n-f\) replicas in two phases, and is then safe to commit. To help understand the protocol integration, we provide a flow diagram in Figure 10 in Appendix 23.
In the new algorithm, unlike in the original Hotstuff-2, a leader does not include the entire BFT block in the Propose(2) message. Instead, it uses information dispersal and runs the encoding parts of the Card-7 protocol (lines 22-26). Data availability is confirmed exactly one round trip later, before any certificate is locked. Card-7 does not affect Hotstuff-2 safety because data availability is checked before locking. The abstract signal mentioned in Section 14.1 can be implemented by the Hotstuff-2 pacemaker [30]. The complete proofs of **Safety** and **Liveness** are presented in Appendix 24.
## 7 Conclusion
BigDipper is a hyperscale BFT system with short term censorship resistance for leader based protocols. It is based on the idea that a DA layer can decentralize transaction inclusion, and that clients can choose the amount of censorship resistance they need.
|
2303.02076 | Graph-based Global Robot Localization Informing Situational Graphs with
Architectural Graphs | In this paper, we propose a solution for legged robot localization using
architectural plans. Our specific contributions towards this goal are several.
Firstly, we develop a method for converting the plan of a building into what we
denote as an architectural graph (A-Graph). When the robot starts moving in an
environment, we assume it has no knowledge about it, and it estimates an online
situational graph representation (S-Graph) of its surroundings. We develop a
novel graph-to-graph matching method, in order to relate the S-Graph estimated
online from the robot sensors and the A-Graph extracted from the building
plans. Note the challenge in this, as the S-Graph may show a partial view of
the full A-Graph, their nodes are heterogeneous and their reference frames are
different. After the matching, both graphs are aligned and merged, resulting in
what we denote as an informed Situational Graph (iS-Graph), with which we
achieve global robot localization and exploitation of prior knowledge from the
building plans. Our experiments show that our pipeline shows a higher
robustness and a significantly lower pose error than several LiDAR localization
baselines. | Muhammad Shaheer, Jose Andres Millan-Romera, Hriday Bavle, Jose Luis Sanchez-Lopez, Javier Civera, Holger Voos | 2023-03-03T16:48:38Z | http://arxiv.org/abs/2303.02076v1 | # Graph-based Global Robot Localization Informing Situational Graphs with Architectural Graphs
###### Abstract
In this paper, we propose a solution for legged robot localization using architectural plans. Our specific contributions towards this goal are several. Firstly, we develop a method for converting the plan of a building into what we denote as an architectural graph (A-Graph). When the robot starts moving in an environment, we assume it has no knowledge about it, and it estimates an online situational graph representation (S-Graph) of its surroundings. We develop a novel graph-to-graph matching method, in order to relate the S-Graph estimated online from the robot sensors and the A-Graph extracted from the building plans. Note the challenge in this: the S-Graph may show a partial view of the full A-Graph, their nodes are heterogeneous, and their reference frames are different. After the matching, both graphs are aligned and merged, resulting in what we denote as an informed Situational Graph (iS-Graph), with which we achieve global robot localization and exploitation of prior knowledge from the building plans. Our experiments show that our pipeline achieves higher robustness and a significantly lower pose error than several LiDAR localization baselines.
**Paper Video:**[https://youtu.be/3Pv7y8a0sUY](https://youtu.be/3Pv7y8a0sUY)
## I Introduction
Mobile robots are increasingly being deployed in the construction sector, with significant potential benefits. For example, they may significantly reduce costs through regular inspection of an ongoing site to monitor progress. However, robots at construction sites are nowadays mostly teleoperated or work semi-autonomously, due among other reasons to the perception challenges associated with the constantly changing nature of a construction site. For fully autonomous operation, it would be convenient for such robots to have comprehensive prior knowledge of the construction site geometry. Leveraging such prior knowledge together with sensor readings during real-time operation may lead to robust and accurate global localization in construction sites.
Digital architectural plans, such as Building Information Modelling (BIM) [1], provide a means of capturing and communicating information about a construction site and incorporating it as prior knowledge about the scene. Works such as [2, 3, 4] have addressed the problem of extracting relevant structural knowledge from BIM and using it for real-time robot localization. However, these methods only extract geometric information from the BIM and do not leverage the topological and relational information also available in it, which limits the robustness and accuracy in complex and changing construction sites.
To tackle this problem, we present a novel approach to localize robots leveraging not only geometry but also higher-level hierarchical information from architectural plans. We present in this paper how to model the BIM information in the form of a graph that we denote as an Architectural Graph (_A-Graph_), and then match and merge it with the online Situational Graph (_S-Graph_) [5, 6] that the robot builds as it navigates the environment. As a key aspect, translating low-level geometry into high-level features
Fig. 1: Generation of an _iS-Graph_ leveraging the information from the offline generated _A-Graph_ using an architectural plan and the online generated _S-Graph_ using robot sensors. A structure-based graph matching algorithm estimates the relationship between the two graphs as the robot navigates to provide a final connected _iS-Graph_. |
2310.15432 | A Review of Economic Incentives for Efficient Operation of Flexible
Transmission | The growing penetration of renewable energy requires upgrades to the
transmission network to ensure the deliverability of renewable generation. As
an efficient alternative to transmission expansion, flexible transmission
technologies, whose benefits have been widely studied, can alleviate
transmission system congestion and enhance renewable energy integration.
However, under the current market structure, investments for these technologies
only receive a regulated rate of return, providing little to no incentive for
efficient operation. Additionally, a regulated rate of return creates an
incentive for building more transmission lines rather than efficient
utilization of the existing system. Therefore, investments in flexible
transmission technologies remain rather limited. To facilitate the deployment
of flexible transmission, improve system efficiency, and accommodate renewable
energy integration, a proper incentive structure for flexible transmission
technologies, compatible with the current market design, is vital. This paper
reviews the current market-based mechanisms for various flexible transmission
technologies, including impedance control, dynamic line rating, and
transmission switching. This review pinpoints current challenges of the
market-based operation of flexible transmission and provides insights for
future endeavors in designing efficient price signals for flexible transmission
operation. | Xinyang Rui, Omid Mirzapour, Brittany Pruneau, Mostafa Sahraei-Ardakani | 2023-10-24T00:56:26Z | http://arxiv.org/abs/2310.15432v1 | # A Review of Economic Incentives for Efficient Operation of Flexible Transmission
###### Abstract
The growing penetration of renewable energy requires upgrades to the transmission network to ensure the deliverability of renewable generation. As an efficient alternative to transmission expansion, flexible transmission technologies, whose benefits have been widely studied, can alleviate transmission system congestion and enhance renewable energy integration. However, under the current market structure, investments for these technologies only receive a regulated rate of return, providing little to no incentive for efficient operation. Additionally, a regulated rate of return creates an incentive for building more transmission lines rather than efficient utilization of the existing system. Therefore, investments in flexible transmission technologies remain rather limited. To facilitate the deployment of flexible transmission, improve system efficiency, and accommodate renewable energy integration, a proper incentive structure for flexible transmission technologies, compatible with the current market design, is vital. This paper reviews the current market-based mechanisms for various flexible transmission technologies, including impedance control, dynamic line rating, and transmission switching. This review pinpoints current challenges of the market-based operation of flexible transmission and provides insights for future endeavors in designing efficient price signals for flexible transmission operation.
Flexible transmission, electricity markets, phase shifting transformers, power systems operation, reactance control, topology control, transmission investments.
This research was supported by the National Science Foundation under grant number 2146531.
## I Introduction
As electricity generation and consumption patterns evolve towards carbon-free generation and electrified consumption worldwide, the transmission system needs upgrades to adapt to the new environment with increased penetration of renewable energy sources (RES). Enhancing renewable integration is essential for decarbonizing the power grid and achieving a carbon-neutral economy, which has been an important objective for countries worldwide. For example, the Biden administration has announced the goal of a net-zero greenhouse gas (GHG) emission economy by 2050 [1]. Increased levels of renewable energy penetration and growing demand for electrified consumption have led to new congestion patterns in the legacy transmission grid [2]. The geographic locations of renewable resources [3, 4] have added to the congestion in the transmission grid, as they can be far away from load centers [5]. Therefore, the available transfer capability (ATC) needs enhancement to ensure the deliverability of intermittent and geographically dispersed renewable generation. Transmission expansion is an obvious approach to enhance ATC; however, building new transmission lines faces challenges such as lengthy permitting processes and lumpy investments, discouraging investors from entering this sector [6, 7].
On the other hand, flexible transmission technologies have been viewed by previous literature as an efficient alternative to transmission expansion for ATC procurement and for streamlining renewable energy deployment through congestion relief in the transmission system [8]. Flexible transmission includes a range of technologies and operational methods that allow optimal utilization of the current transmission infrastructure, instead of treating the transmission system as a fixed asset during operation. The adjustments that can provide flexibility in the transmission network include topology changes, reactance compensation, thermal rating adjustment, and nodal phase shift. Prominent flexible transmission technologies include series flexible AC transmission system (FACTS) devices, transmission switching, dynamic line rating (DLR), high-voltage direct-current (HVDC) lines, and phase-shifting transformers (PSTs). More detailed descriptions of these technologies are presented in later sections of the paper. An extensive body of literature has shown the potential benefits of implementing flexible transmission to alleviate congestion and facilitate renewable generation integration.
Despite the widely studied benefits, flexible transmission
deployment in the existing power grid is still limited due to challenges such as the conservative investments in the transmission system, the increased computational complexity of operation and planning models with flexible transmission, and the lack of economic incentives. An important challenge hindering the implementation of flexible transmission is the lack of a proper market-based incentive structure for transmission assets in current electricity market designs. The most prevalent compensation scheme for transmission investment in several markets does not provide proper incentives for deploying and optimally operating flexible transmission technologies, as the owners receive only a regulated rate of return (RoR) compensation. This is because transmission assets were historically operated as part of vertically integrated utilities (VIUs) under government regulation. Following the restructuring of power systems, the VIUs disintegrated, and competitive markets were formed for electricity generation and retail. However, the transmission system remained regulated under the umbrella of natural monopoly. Independent System Operators/Regional Transmission Organizations (ISO/RTOs) were formed to manage wholesale energy and ancillary service markets, operate the transmission network, and plan transmission expansion. Extending competition to the transmission sector has been a subject of interest since then. The merchant transmission model for compensating transmission investment through Financial Transmission Rights (FTRs) has been investigated in several studies [9, 10, 11, 12]. However, under realistic conditions, the benefits of this model are undermined by the stochastic characteristics of the transmission network and market participant behavior [9]. With the aforementioned demands for transmission system upgrades due to the steep growth of renewable generation and the need for higher grid resilience, conventional cost-of-service regulation and monopoly transmission investment projects are insufficient for the changing electricity industry environment. The Federal Energy Regulatory Commission (FERC) Order 1000, issued in 2011, intends to bring competition to US transmission investment by removing barriers, stimulating more participation in transmission investment, and promoting decentralized transmission projects [13]. Several research endeavors have since sought to find optimal investment in flexible transmission technologies in the market environment [14, 15]. A large portion of these efforts have focused on transmission expansion planning with flexible transmission technologies [16, 17].
A properly designed market structure would facilitate flexible transmission deployment in the deregulated market. Previous literature has proposed different market structures and compensation schemes for flexible transmission, based on financial transmission rights or on the marginal value of flexible transmission operation in day-ahead markets. Regulatory entities and the industry have also pushed for performance-based market structures for flexible transmission. Nevertheless, further research is still needed to implement a well-designed market mechanism that harnesses the benefits of flexible transmission. This paper critically reviews the market structure and incentive proposals for flexible transmission to facilitate further research.
The rest of this paper is organized as follows: Section II presents an overview of flexible transmission technologies. The economic valuation and impacts of flexible transmission technologies are presented in Section III, followed by an overview of the proposed market-based incentive structures for flexible transmission operation and of the efforts and incentive mechanisms adopted by the industry in various ISO/RTOs. The challenges in establishing efficient market mechanisms for flexible transmission are discussed in Section IV. Finally, conclusions are drawn in Section V, and guidelines for future research are presented.
## II Overview of Flexible Transmission Technologies
This section presents the functionalities of different flexible transmission technologies in the context of DC power flow. The basic formulation of the single-hour DC optimal power flow (DCOPF) is shown as follows:
\[\min\sum_{g\in G}c_{g}p_{g} \tag{1}\]
\[\mathrm{s.t.}\quad p_{g}^{\min}\leq p_{g}\leq p_{g}^{\max},\;g\in G; \tag{2}\]
\[-f_{k}^{\max}\leq f_{k}\leq f_{k}^{\max},\;k\in K; \tag{3}\]
\[f_{k}=b_{k}(\theta_{k,\mathrm{to}}-\theta_{k,\mathrm{fr}}),\;k\in K; \tag{4}\]
\[\theta_{1}=0; \tag{5}\]
\[\sum_{k\in\delta^{+}(n)}f_{k}-\sum_{k\in\delta^{-}(n)}f_{k}+\sum_{g\in G(n)}p_{g}=d_{n},\;n\in N, \tag{6}\]
where \(p_{g}\) is the active power output of generator \(g\), \(f_{k}\) is the active power flow through transmission line \(k\), and \(\theta_{k,\mathrm{to}}\) and \(\theta_{k,\mathrm{fr}}\) are the voltage angles at the end buses of line \(k\). (1) is the objective function that minimizes the total generation cost, with \(c_{g}\) being the linear marginal cost of generator \(g\). Generator capacity limits \(p_{g}^{\max}\) and \(p_{g}^{\min}\) are specified in (2). (3) defines the thermal limit constraint of transmission lines and \(f_{k}^{\max}\) is the thermal limit of transmission line \(k\). The DC power flow equation, with \(b_{k}\) being the susceptance of transmission line \(k\), is presented in (4). (5) specifies the voltage angle at the reference bus. Finally, (6) is the power balance constraint at each system bus \(n\).
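To make the formulation concrete, the following is a minimal sketch of (1)-(6) on a hypothetical 3-bus triangle, solved as a linear program with SciPy. All parameter values (susceptances, ratings, costs, demand) are invented for illustration; with these numbers, the 60 MW rating of line (0,2) caps the cheap generator at 80 MW, a congestion pattern that the flexible technologies discussed next can relieve.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-bus triangle (all parameter values invented for illustration):
# generators at buses 0 and 1, 100 MW of demand at bus 2.
# Decision vector x = [p0, p1, f01, f02, f12, th0, th1, th2].
lines = [(0, 1), (0, 2), (1, 2)]              # (fr, to) pairs, as in (4)
b_k, f_max = 10.0, 60.0                       # per-line susceptance and rating
cost = np.array([20.0, 50.0, 0, 0, 0, 0, 0, 0])   # (1): marginal costs c_g
d = [0.0, 0.0, 100.0]

A_eq, b_eq = [], []
for i, (fr, to) in enumerate(lines):          # (4): f_k = b_k (th_to - th_fr)
    row = np.zeros(8)
    row[2 + i], row[5 + to], row[5 + fr] = 1.0, -b_k, b_k
    A_eq.append(row); b_eq.append(0.0)
for n in range(3):                            # (6): nodal balance; under the
    row = np.zeros(8)                         # sign convention of (4), a
    for i, (fr, to) in enumerate(lines):      # positive f_k delivers power
        row[2 + i] = (fr == n) - (to == n)    # into the line's "fr" bus
    if n < 2:
        row[n] = 1.0                          # generator located at bus n
    A_eq.append(row); b_eq.append(d[n])

bounds = [(0, 120), (0, 100)]                 # (2): generator limits
bounds += [(-f_max, f_max)] * 3               # (3): thermal limits
bounds += [(0, 0), (None, None), (None, None)]   # (5): bus 0 is the reference
res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print(res.fun, res.x[:2])  # 2600.0, [80, 20]: line (0,2) caps the cheap unit
```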
Flexible transmission technologies can enhance operation efficiency and ATC by altering the constraints (3) and (4) in the DCOPF formulation presented above. There are four ways to alter these constraints mathematically: (i) controlling the phase angles in (4), (ii) adjusting the susceptance in (4), (iii) removing the constraints for a line (switching it out), and (iv) changing the limits in (3).
PSTs can provide controllability over the angle difference \(\theta_{k,\mathrm{to}}-\theta_{k,\mathrm{fr}}\) in (4), effectively enabling the power flow to be controlled [18]. This controllability can be integrated into the DCOPF formulation by introducing a new variable \(\phi_{k}\) into the line flow constraint (4), extending the feasible region to a wider area. This is shown in Fig. 1(a).
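Continuing the toy example above, the sketch below adds a PST variable \(\phi\) on line (0,1); the angle range, like everything else in this example, is illustrative. The extra degree of freedom reroutes 40 MW through bus 1 and lets the cheap unit serve the whole load.

```python
import numpy as np
from scipy.optimize import linprog

# Same toy triangle, now with a PST on line (0,1): its flow equation becomes
# f_01 = b (th_1 - th_0 + phi), with phi a new decision variable. The phi
# range below, like every other number here, is purely illustrative.
# Decision vector x = [p0, p1, f01, f02, f12, th0, th1, th2, phi].
lines = [(0, 1), (0, 2), (1, 2)]
b_k, f_max = 10.0, 60.0
cost = np.zeros(9); cost[:2] = [20.0, 50.0]
d = [0.0, 0.0, 100.0]

A_eq, b_eq = [], []
for i, (fr, to) in enumerate(lines):
    row = np.zeros(9)
    row[2 + i], row[5 + to], row[5 + fr] = 1.0, -b_k, b_k
    if i == 0:
        row[8] = -b_k          # the PST's phase shift enters line (0,1) only
    A_eq.append(row); b_eq.append(0.0)
for n in range(3):
    row = np.zeros(9)
    for i, (fr, to) in enumerate(lines):
        row[2 + i] = (fr == n) - (to == n)
    if n < 2:
        row[n] = 1.0
    A_eq.append(row); b_eq.append(d[n])

bounds = [(0, 120), (0, 100)] + [(-f_max, f_max)] * 3
bounds += [(0, 0), (None, None), (None, None), (-3, 3)]
res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print(res.fun, res.x[:2])  # 2000.0, [100, 0]: 40 MW rerouted through bus 1
```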
With the deployment of series FACTS devices, the reactance of transmission lines can be altered so that power flows can be rerouted to avoid transmission bottlenecks. Devices such as the thyristor-controlled series compensator (TCSC), the static synchronous series compensator (SSSC), and the unified power flow controller (UPFC) are widely studied in previous literature and have been deployed in actual industry applications. The TCSC directly adjusts the line susceptance, making (4) a nonlinear equation. The UPFC and the SSSC use voltage injections to emulate susceptance adjustments. Techniques and modeling to efficiently incorporate series FACTS into power system operation models are presented in [19, 20]. The impact of reactance control from TCSC-type devices can be visualized through the expanded feasible region shown in Fig. 1(b).
With transmission switching, the status of the transmission elements can be altered so that power flow control functions are provided [21]. The formulations of (3) and (4) are changed, using the big-\(M\) method, with the introduction of binary variable \(z_{k}\) representing line switching [22]:
\[-f_{k}^{\max}z_{k}\leq f_{k}\leq f_{k}^{\max}z_{k},\;k\in K; \tag{7}\]
\[f_{k}-b_{k}(\theta_{k,\mathrm{to}}-\theta_{k,\mathrm{fr}})+(z_{k}-1)M\leq 0,\;k\in K; \tag{8}\]
\[f_{k}-b_{k}(\theta_{k,\mathrm{to}}-\theta_{k,\mathrm{fr}})-(z_{k}-1)M\geq 0,\;k\in K. \tag{9}\]
Transmission switching can also be considered a discrete form of susceptance control, where the susceptance is set to zero for a line that is switched out of the system. It is also worth noting that transmission switching can be performed utilizing existing assets [23], whereas line reactance and phase angle control require the installation of additional devices, which can involve hefty investments.
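A sketch of the big-\(M\) formulation (7)-(9) on the same toy triangle, written as a mixed-integer program (here via the PuLP modeling library; all parameters are again illustrative). On this three-bus network the full topology is already optimal, so the solver keeps every \(z_{k}=1\); the point is how (7)-(9) enter the model, since the cost-saving switches documented in [22] arise on larger meshed systems.

```python
import pulp

# Big-M switching formulation (7)-(9) on the toy triangle; z_k = 1 keeps
# line k in service. On this tiny network the full topology is optimal, so
# the solver returns z = (1, 1, 1); the point is the formulation itself.
lines = [(0, 1), (0, 2), (1, 2)]
b_k, f_max, M = 10.0, 60.0, 1e4
gens = {0: (20.0, 120.0), 1: (50.0, 100.0)}   # bus -> (cost, capacity)
d = {0: 0.0, 1: 0.0, 2: 100.0}

prob = pulp.LpProblem("dcopf_switching", pulp.LpMinimize)
p = {n: pulp.LpVariable(f"p{n}", 0, cap) for n, (_, cap) in gens.items()}
f = {k: pulp.LpVariable(f"f{k}", -f_max, f_max) for k in range(3)}
th = {n: pulp.LpVariable(f"th{n}") for n in range(3)}
z = {k: pulp.LpVariable(f"z{k}", cat="Binary") for k in range(3)}

prob += pulp.lpSum(gens[n][0] * p[n] for n in gens)            # (1)
prob += th[0] == 0                                             # (5)
for k, (fr, to) in enumerate(lines):
    prob += -f_max * z[k] <= f[k]                              # (7)
    prob += f[k] <= f_max * z[k]
    prob += f[k] - b_k * (th[to] - th[fr]) + (z[k] - 1) * M <= 0   # (8)
    prob += f[k] - b_k * (th[to] - th[fr]) - (z[k] - 1) * M >= 0   # (9)
for n in range(3):
    inflow = pulp.lpSum(f[k] * ((fr == n) - (to == n))
                        for k, (fr, to) in enumerate(lines))
    prob += inflow + (p[n] if n in p else 0.0) == d[n]         # (6)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective), [z[k].value() for k in z])
```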
Under static line rating, the thermal limit \(f_{k}^{\max}\) is a fixed parameter whose value is traditionally a conservative estimate. With DLR, \(f_{k}^{\max}\) is dynamically updated based on monitoring of real-time weather conditions or communication of the actual conductor temperature. Thus, DLR enables the adoption of higher limits that increase transmission system capacity [24].
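The operational effect of DLR can be read off the same toy triangle without re-solving the LP: for that specific network, one can check from its Laplacian that, with \(\theta_{0}=0\), the magnitude of the flow on line (0,2) equals \(50+(p_{0}-50)/3\), so a rating \(R\) caps the cheap unit at \(3R-100\) MW. The short sketch below (numbers illustrative, and the closed form specific to this example) compares a conservative static rating with a higher weather-dependent one.

```python
# Effect of DLR on the same toy triangle, without re-solving the LP. For this
# particular network one can check from its Laplacian that, with th_0 = 0,
# the flow magnitude on line (0,2) equals 50 + (p0 - 50)/3 MW, so a rating R
# caps the cheap unit at 3R - 100 MW (closed form specific to this example).
for R in (60.0, 80.0):       # conservative static rating vs. favorable weather
    p0 = min(120.0, 100.0, 3 * R - 100.0)    # capacity, demand, line cap
    p1 = 100.0 - p0
    print(f"rating {R} MW: dispatch ({p0}, {p1}) MW, cost {20*p0 + 50*p1}")
# rating 60 -> cost 2600; rating 80 -> cost 2000, with no new hardware
```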
As an alternative to AC transmission systems, HVDC systems are superior in some applications, including long-distance transmission, offshore renewable integration, and regional electricity market interconnections [25]. The unique controllability features that HVDC systems provide make them suitable for managing congestion and providing flexibility on the grid level [26].
## III Economic Valuation and Market Integration of Flexible Transmission
### _Quantification of Economic Value_
The first step towards designing an efficient market-based scheme for flexible transmission technologies is quantifying the economic benefits of such technologies. This could be evaluated as social welfare enhancement or as cost savings in markets with inelastic demand for electricity. Quantifying the benefits is essential for developing market structures that provide the correct incentives for the efficient operation of flexible transmission. A well-designed incentive structure will ensure that the flexible transmission owner's benefit is aligned with social welfare improvement. In cases where the optimal direction of adjustment is not aligned with the owner's interest, the market-based scheme should provide compensation for the owner to operate the device in the optimal direction.
The most common benefit of flexible transmission in the existing literature is dispatch cost reduction. Different levels of savings have been reported in various previous studies [22, 27, 28]. With recent developments in FACTS technology, modular lightweight versions, known as distributed/modular FACTS (D-FACTS or M-FACTS), have been introduced to the flexible transmission market with enhanced controllability and congestion management capabilities. Ref. [29] evaluates the operation cost savings provided by implementing FACTS and D-FACTS. The evaluation is carried out through a linearized optimal power flow model under different loading scenarios. The results show that the benefits of both FACTS and D-FACTS exceed their break-even costs, and D-FACTS offers higher economic value than FACTS, incurring cost savings of up to 2.55%.
Other types of benefits have also been quantified by previous research. Ref. [30] evaluates the economic benefits of transmission switching, providing both congestion relief and reliability enhancement in the ISO-NE system. In [32], the authors conduct a case study to evaluate the economic benefits of implementing DLR for consumers by studying the electricity prices at both ends of a transmission bottleneck.

Fig. 1: Feasible region extension by: (a) phase shift (b) susceptance adjustment.
Overall, the literature suggests that deploying flexible transmission can provide various benefits, each of which should be quantified separately. These benefits are important in evaluating the performance of flexible transmission technologies and can be the basis for developing economic incentives and compensation mechanisms. The quantification of such benefits is summarized in Table I. This evaluation lays the foundation for efficient incentive design for market operation and further helps the investors with the right choice of technology.
### _Incentive Design and Market Integration_
Although flexible transmission assets do not possess the same characteristics as bulk transmission expansion projects, they are still regulated as part of transmission system upgrades and implemented upon ISO/RTO transmission upgrade requirements. This scheme, however, provides no incentive for investing in these technologies, and deployment has been slow so far. Several compensation mechanisms have been proposed in recent studies to accelerate the proliferation of such technologies through performance-based incentives.
Financial transmission rights (FTRs) are risk-hedging tools designed to minimize the congestion price risk for forward contracts and have been successfully implemented in various power markets [33]. In [34], a market structure in which owners of power flow controllers receive FTR allocations is proposed to solve the lack-of-incentive problem under the existing market structure. The authors argued that additional FTRs should be assigned to FACTS device owners. Revenue adequacy and performance of the proposed mechanism are demonstrated on 2-bus and 3-bus systems. However, the difficulty of identifying which particular set of FTRs corresponds to a transmission expansion project, together with the fact that the order in which projects are built affects the rights awarded, is a drawback of using FTR-style rights to compensate merchant transmission projects [35].
Besides directly using FTR allocation, several previous studies proposed marginal value or other metrics as a compensation mechanism for flexible transmission. It is argued in [36] that an important issue in using FACTS devices to manage congestion in the deregulated market is the design of a compensation scheme for the utilization of FACTS devices, together with a penalty for operating them at their limits; the pricing scheme proposed there addresses both. Under such a scheme, FACTS device owners receive a regulated portion of the total cost savings incurred by their operation. They also receive a penalty payment from loads when the device is operating at its limits, proportional to the value of the Lagrangian dual variable associated with the FACTS operating constraints. However, the proposal is revenue inadequate and incompatible with wholesale energy market structures; the modeling of FACTS devices presented in that paper is nonetheless still valuable. Ref. [37] seeks to address the positive externality problem in the transmission payment method proposed in [35]. In [35], each transmission element receives a payment equal to the active power flow multiplied by the locational marginal price (LMP) difference at the two ends of the element. This creates a positive externality problem, where the flow on a line can increase due to actions taken by another market player, yet the line owner receives the benefits of the increased flow. Ref. [37] alternatively proposes a sensitivity-based calculation of the marginal value of susceptance adjustment by variable-impedance FACTS devices. However, the mathematical proof of revenue adequacy is limited to the case in which the susceptance adjustment increases the flow on the line on which the FACTS device is installed, i.e., when the FACTS device increases the absolute value of the susceptance. An investment recovery scheme for FACTS devices is proposed in [38], based on the increase in load and generator surplus due to FACTS deployment. Such a scheme can be utilized as a performance-based incentive for FACTS deployment. In [39], similar to the proposal in [35], a metric is introduced to identify favorable candidate lines for transmission switching. Although this proposal is not intended for the market, it can be used to develop a compensation mechanism.
Besides the widely studied benefits of reducing operation cost and facilitating renewable generation integration, the value of flexible transmission in transmission planning, as an asset providing investment flexibility and risk alleviation, has also been explored in the existing literature. New transmission projects are capital-intensive and, in most cases, irreversible. Technologies such as FACTS and PSTs can provide investment flexibility to avoid unfavorable transmission expansion plans under the uncertainty introduced by future integration of renewable generation [31, 40]. Simulation studies in [40] and [31] show the option value of flexible transmission in long-term transmission expansion projects. The results in [40] show that the option value provided by FACTS devices for deferment of new transmission line investments can be 12% of the net present value (NPV) of the transmission expansion. It is shown in [31] that PSTs can bring a total value of £13.1 million in investment cost reduction (reducing the investment cost from £609 million to £596 million) while providing transmission expansion planning projects with enough flexibility to reduce uncertainty in investment decisions. However, no existing literature has incorporated such expansion strategies into a merchant transmission scheme.
### _Industry Practice and ISO/RTO Experience_
The issues of a regulated RoR and the lack of incentives are known to regulatory entities such as FERC, the ISO/RTOs, and the industry. Over the years, there have been endeavors to make changes and facilitate the deployment of flexible transmission technologies. The U.S. Department of Energy, in a study conducted in the early 2000s, highlighted the importance of performance-based regulation (PBR) and revealed that PBR in the UK led to substantial congestion cost reductions [41]. Ref. [41] also highlighted that the UK's PBR scheme showed that incentives for enhanced
transmission system operations could have an important role in enhancing transmission operation efficiency, which includes increasing investment in innovative transmission technologies such as flexible transmission.
Studies regarding the benefits of flexible transmission have been conducted by ISO/RTOs as well. In [42], ISO New England (ISO-NE) discussed the value of implementing FACTS and HVDC in their system. Ref. [42] stated that because of the controllability of HVDC, it is attractive for merchant transmission line applications, and that opportunities for merchant FACTS and HVDC are open in New England. However, no performance-based compensation mechanism regarding merchant transmission projects is mentioned. It is also highlighted by the Pennsylvania-New Jersey-Maryland Interconnection (PJM) that precise control by HVDC makes it ideal for merchant transmission projects [43], with an example being the SOO Green HVDC link [44]. However, the mechanism is still being developed to incorporate inter-RTO HVDC links into the PJM capacity market to allow customers to benefit from increased competition, greater geographic and technological generation diversity, and the additional instantaneous control offered by dispatchable HVDC facilities [44].
In September 2021, FERC held the "Workshop to Discuss Certain Performance-based Ratemaking Approaches" [45]. With a focus on shared savings, this workshop was intended to stimulate the development of transmission technologies. The transmission technologies, or grid-enhancing technologies (GETs), discussed at the workshop include flexible transmission technologies such as FACTS devices. The Shared Savings Proposal [46], made by the Working for Advanced Transmission Technologies (WATT) coalition and AEE, presented a compensation scheme in which 25% of the savings achieved by implementing transmission technologies is allocated to the technology's owner. The proposal also presented a re-evaluation scheme whereby, if the cost-benefit ratio of the project satisfies a predefined requirement, the incentive is awarded for the subsequent three years. Despite introducing a performance-based mechanism, this proposal contains no provisions for making flexible transmission assets market participants. It also does not address how the compensation should be allocated if multiple projects are planned or carried out in the same time period.
## IV Challenges and Future Research
The existing proposals regarding market structures and incentives for flexible transmission provide important references and guidance for future developments on this topic. The following challenges in this field need to be addressed in future proposals.
* The existing proposals involve a variety of ways of providing incentives/compensations for flexible transmission: shared savings, FTR allocations, as well as generation and load surpluses. Further investigation of each scheme is needed to determine the proper scheme for each technology in different scenarios, and a more general compensation scheme covering different technologies is desirable. Additionally, previous studies mainly focused on continuous resources such as FACTS; more compensation mechanism proposals for other technologies are needed [47]. Notably, the discrete changes in the network topology have unpredictable impacts on locational marginal prices (LMPs) and might create revenue inadequacy in current FTR markets [48]. Creating a market-compatible mechanism for accruing the economic benefits of optimal transmission switching remains an interesting research subject.
* Several previous studies use small systems which only have two to three buses to demonstrate the effectiveness of the proposed schemes. Numerical studies on larger systems or using real system data are preferable in future research.
* Mathematical proofs, regarding revenue adequacy and alignment of social welfare improvement with flexible transmission incentives, are key for the future development of market structures for flexible transmission.
## V Conclusions
This paper reviews the existing efforts to establish market structures and economic incentives to stimulate the adoption of flexible transmission technologies. Considering the technological advancements, the lack of performance-based compensation is a key obstacle to increasing the utilization of flexible transmission. Effective solutions to resolve this problem will be vital for enhancing the efficiency of power system operation, improving utilization of the existing transmission system, and ultimately facilitating renewable generation integration to achieve decarbonization targets. While the existing literature provides directions for future research, the problem remains unsolved. We do not yet have appropriate ways to provide incentives for flexible transmission operations.
|
2302.05430 | Oracle-Efficient Smoothed Online Learning for Piecewise Continuous
Decision Making | Smoothed online learning has emerged as a popular framework to mitigate the
substantial loss in statistical and computational complexity that arises when
one moves from classical to adversarial learning. Unfortunately, for some
spaces, it has been shown that efficient algorithms suffer an exponentially
worse regret than that which is minimax optimal, even when the learner has
access to an optimization oracle over the space. To mitigate that exponential
dependence, this work introduces a new notion of complexity, the generalized
bracketing numbers, which marries constraints on the adversary to the size of
the space, and shows that an instantiation of Follow-the-Perturbed-Leader can
attain low regret with the number of calls to the optimization oracle scaling
optimally with respect to average regret. We then instantiate our bounds in
several problems of interest, including online prediction and planning of
piecewise continuous functions, which has many applications in fields as
diverse as econometrics and robotics. | Adam Block, Alexander Rakhlin, Max Simchowitz | 2023-02-10T18:45:52Z | http://arxiv.org/abs/2302.05430v2 | # Oracle-Efficient Smoothed Online Learning for Piecewise Continuous Decision Making
###### Abstract
Smoothed online learning has emerged as a popular framework to mitigate the substantial loss in statistical and computational complexity that arises when one moves from classical to adversarial learning. Unfortunately, for some spaces, it has been shown that efficient algorithms suffer an exponentially worse regret than that which is minimax optimal, even when the learner has access to an optimization oracle over the space. To mitigate that exponential dependence, this work introduces a new notion of complexity, the generalized bracketing numbers, which marries constraints on the adversary to the size of the space, and shows that an instantiation of Follow-the-Perturbed-Leader can attain low regret with the number of calls to the optimization oracle scaling optimally with respect to average regret. We then instantiate our bounds in several problems of interest, including online prediction and planning of piecewise continuous functions, which has many applications in fields as diverse as econometrics and robotics.
###### Contents
* 1 Introduction
* 2 Formal Setting and Notation
* 3 Follow the Perturbed Leader and Generalized Brackets
* 4 Exponential Perturbations and Piecewise Continuous Functions
* 4.1 Piecewise-Continuous Prediction
* 4.2 Piecewise Continuous Prediction with Generalized Affine Boundaries
* 4.3 Piecewise Continuous Prediction with Polynomial Boundaries
* 5 Smoothed Multi-Step Planning
* A Related Work
* B Proof of Proposition 3.1
* C Proof of Proposition 3.2
* D Proof of Corollary 3.1
* E Proof of Theorem 1
* E.1 Bounding the Stability Term
* E.2 Concluding the Proof
* F Proofs related to Piecewise Continuous Functions with Generalized Affine Boundaries
* F.1 Proof of Theorem 2
* F.2 Proof of Corollary 4.1
* F.3 Replacing \(\overline{\ell}\) with \(\ell\)
* G Proofs from Section 4.3
* G.1 Polynomial Smoothness
* G.2 Proof of Theorem 3
* H Proof of Theorem 4
## 1 Introduction
The online learning setting has become the most popular regime for studying sequential decision making with dependent and potentially adversarial data. While this paradigm is attractive due to its great generality and minimal set of assumptions (Cesa-Bianchi and Lugosi, 2006), the worst-case nature of the adversary creates statistical and computational challenges (Rakhlin et al., 2015; Littlestone, 1988; Hazan and Koren, 2016). In order to mitigate these difficulties, Rakhlin et al. (2011) proposed the _smoothed_ setting, wherein the adversary is constrained to sample data from a distribution whose likelihood ratio is bounded above by \(1/\sigma\) with respect to a fixed dominating measure, which ensures that the adversary cannot choose worst-case inputs with high probability. As in other online learning settings, performance is measured via _regret_ with respect to a best-inhindsight comparator (Cesa-Bianchi and Lugosi, 2006).
Recent works have demonstrated strong computational-statistical tradeoffs in smoothed online learning: while there are statistically efficient algorithms that can enjoy regret _logarithmic_ in \(1/\sigma\), oracle-efficient algorithms necessarily suffer regret scaling _polynomially_ in \(1/\sigma\) (Haghtalab et al., 2022; Block et al., 2022; Block et al., 2022), where the learner is assumed to have access to an Empirical Risk Minimization (ERM) oracle that is able to efficiently optimize functionals on the parameter space. This gap is significant, because in many applications of interest, the natural scaling of \(\sigma\) is _exponential_ in ambient problem dimension (Block and Simchowitz, 2022).
A natural question remains: under which types of smoothing is it possible to design oracle-efficient algorithms with regret that scales _polynomially_ in problem dimension? A partial answer was provided by Block and Simchowitz (2022), who demonstrate an efficient algorithm based on the John Ellipsoid which attains \(\log(T/\sigma)\cdot\text{poly}(\text{dimension})\)-regret for _noiseless_ linear classification, and for a suitable generalization to classification with polynomial features. They also demonstrate that, under a different smoothness condition - \(\sigma_{\text{dir}}\)-directional smoothness - the perceptron algorithm automatically provides regret sublinear-in-\(T\) and polynomial in \(1/\sigma_{\text{dir}}\). Crucially, \(\sigma_{\text{dir}}\) is _dimension-free_ for many distributions of interest, circumventing the curse-of-dimension encountered in previous \(\text{poly}(1/\sigma)\)-regret bounds (Block et al., 2022; Haghtalab et al., 2022).
In this work we take oracle-efficiency as a necessary precondition and expand the set of problems that efficient smoothed online learning can address. A central example to keep in mind is that of
piecewise affine (PWA) functions, where a PWA function is defined by a finite set of regions in Euclidean space, within each of which the function is affine. Such classes naturally arise in segmented regression applications common in statistics and econometrics (Feder, 1975; Bai and Perron, 1998; Yamamoto and Perron, 2013), as well as in popular models for control systems (Borrelli, 2003; Henzinger and Sastry, 1998).
Unfortunately, because of the discontinuities that arise when crossing regions, PWA regressors are _not_ learnable in the adversarial setting even with unbounded computation time, due to the fact that they contain the class of linear thresholds, whose lack of online learnability is well-known (Littlestone, 1988); however, a smoothness assumption is natural in this setting, due to the injection of noise that empiricists already incorporate (Posa et al., 2014; Suh et al., 2022). At the same time, the nature of the injected noise is such that the smoothness parameter \(\sigma\) will be exponential in the dimension of the context space, as above, and thus previous guarantees do not suffice for applications. We are thus left with the question of designing practical algorithms that are provably (oracle-)efficient in the smoothed online learning setting.
Below, we will propose a measure of complexity based on classical bracketing numbers (Blum, 1955; Gine and Nickl, 2021) that, if bounded, leads to a practical algorithm that experiences provably small regret. In particular, we will consider instantiations of the well-known Follow-the-Perturbed-Leader (FTPL) algorithm (Kalai and Vempala, 2005), where, at each time \(1\leq t\leq T\), we sample a random path \(\omega_{t}(\theta)\) on \(\Theta\) and select \(\theta_{t}\in\arg\min_{\theta}L_{t-1}(\theta)+\omega_{t}(\theta)\), with \(L_{t-1}(\theta)\) denoting the cumulative loss up to time \(t-1\). Standard analyses of FTPL (Agarwal et al., 2019, Suggala and Netrapalli, 2020; Haghtalab et al., 2022; Block et al., 2022) require that the loss functions be Lipschitz in the parameter \(\theta\), which clearly does not hold for the central example of PWA functions. We show, however, that smoothness guarantees that many loss functions are Lipschitz _in expectation_, up to an additive constant depending on the complexity of the class as measured by our proposed generalization of bracketing numbers. Using this fact, we provide a template for proving regret guarantees for a lazy instantiation of FTPL.
While the theory described above may be of technical interest in its own right, we instantiate our results in several examples. We replace the standard notion of smoothness with the related concept of directional smoothness introduced above (Block and Simchowitz, 2022). We adapt results from Agarwal et al. (2019); Suggala and Netrapalli (2020) on FTPL with an exponentially distributed perturbation and exhibit a practical and provably low-regret algorithm for piecewise continuous loss functions with generalized affine boundaries. We then generalize this result to loss functions with polynomial boundaries, assuming a more constrained adversary, and finally instantiate our results in a setting motivated by robotic planning. In more detail:
* In Section 3, we introduce a new measure of the size of a class, the generalized bracketing number, which marries constraints on the adversary to the size of the space and thus can be small in many situations of interest. We use generalized bracketing numbers to prove Proposition 3.2, which says that if an adversary is suitably constrained and the generalized bracketing number with respect to a particular pseudo-metric is controlled, then a lazy version of FTPL experiences low regret. Along the way, we show in Proposition 3.1 that control of the generalized bracketing number leads to a concentration inequality that is uniform over both parameters and adversaries.
* In Theorem 1, we apply the general theory developed in Section 3 to the special case of finite-dimensional \(\Theta\). In particular, by adapting arguments of Agarwal et al. (2019); Suggala and
Netrapalli (2020), we show that if the generalized bracketing numbers of \(\Theta\) are controlled, then Algorithm 2 can achieve average regret at most \(\epsilon\) with the optimal \(\widetilde{\mathcal{O}}\left(\epsilon^{-2}\right)\) number of calls to the ERM oracle.
* In Theorem 2 and Corollary 4.1, we consider an even more concrete setting, where the loss function is piecewise continuous with affine boundaries. In particular, we show that if the adversary is \(\sigma_{\text{dir}}\)-directionally smooth, then Algorithm 2 attains average regret \(\epsilon\) with only \(\widetilde{\mathcal{O}}\left(\sigma_{\text{dir}}^{-1}\epsilon^{-2}\right)\) calls to the ERM oracle, removing the exponential dependence on the dimension that would come from applying Block et al. (2022) and attaining optimal dependence on \(\epsilon\).
* In Theorem 3, we generalize the results of Corollary 4.1 and show that if the adversary is further constrained to be polynomially smooth (see Definition 4.1) and the loss function is piecewise continuous with boundaries defined by polynomials of degree at most \(r\), then Algorithm 2 can achieve average regret \(\epsilon\) with at most \(\widetilde{\mathcal{O}}\left(\epsilon^{-2r}\right)\) calls to the ERM oracle.
* In Section 5, we consider a setting of piecewise Lipschitz "hybrid" dynamical systems (Henzinger and Sastry, 1998), where the region boundaries are either linear or polynomial. These can model a number of dynamical systems popular in robotics, notably piecewise affine systems (Borrelli, 2003; Marcucci and Tedrake, 2019) and piecewise-polynomial systems (Posa et al., 2015). We demonstrate in Theorem 4 that, with smoothing in the inputs and dynamics, our proposed FTPL algorithm attains low regret in an online planning setting. To our knowledge, this is the first low-regret algorithm for planning in hybrid systems that exhibit discontinuities.
We begin the paper by formally setting up the problem and introducing a number of prerequisite notions, before continuing to state and discuss our results. An extended discussion of related work is deferred to Appendix A for the sake of space.
## 2 Formal Setting and Notation
Formally, we consider the problem of online learning with a constrained adversary. Given some decision space \(\Theta\) and context space \(\mathcal{Z}\), as well as a loss function \(\ell:\Theta\times\mathcal{Z}\to[0,1]\), online learning proceeds in rounds \(1\leq t\leq T\). At each time \(t\), the adversary selects some \(z_{t}\in\mathcal{Z}\) and the learner selects some \(\theta_{t}\in\Theta\) and suffers loss \(\ell(\theta_{t},z_{t})\) with the goal of minimizing regret with respect to the best \(\theta\in\Theta\) in hindsight, \(\mathbb{E}\left[\operatorname{Reg}_{T}\right]=\mathbb{E}\left[\sum_{t=1}^{T} \ell(\theta_{t},z_{t})-\inf_{\theta\in\Theta}\sum_{t=1}^{T}\ell(\theta,z_{t})\right]\). For the purposes of measuring oracle complexity, we will be particularly interested in the normalized regret \(T^{-1}\operatorname{Reg}_{T}\). Frequently in applications, we will consider the special case of online supervised learning where \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) and \(z=(x,y)\) consists of a context \(x\) and label \(y\); in this case, we distinguish between _proper_ learning, where the learner chooses \(\theta_{t}\) before seeing \(x_{t}\), and _improper_ learning, where the learner is able to choose \(\theta_{t}\) depending on the revealed \(x_{t}\).
Due to the statistical and computational challenges of fully adversarial online learning (Rakhlin et al., 2015; Hazan and Koren, 2016), we will constrain the adversary to choose \(z_{t}\sim p_{t}\), where \(p_{t}\in\mathcal{M}\subset\Delta(\mathcal{Z})\) is a distribution on \(\mathcal{Z}\) possibly depending on the history up to time \(t\) and \(\mathcal{M}\) is some restricted class of distributions. In this work, we will mostly focus on the setting where \(\mathcal{M}\) consists of smooth distributions in some sense:
**Definition 2.1**.: Given a space \(\mathcal{Z}\), a measure \(\mu\in\Delta(\mathcal{Z})\), and some \(\sigma>0\), we say that a measure \(p_{t}\) is \(\sigma\)-smooth with respect to \(\mu\) if its likelihood ratio with respect to \(\mu\) is uniformly bounded by \(\sigma^{-1}\), i.e., \(\left|\left|\frac{dp_{t}}{d\mu}\right|\right|_{\infty}\leq\frac{1}{\sigma}\). If \(\mathcal{Z}\subset\mathbb{R}^{d}\) for some \(d\), we say that \(p_{t}\) is \(\sigma_{\text{dir}}\)-directionally smooth if, for any unit vector \(\mathbf{w}\in\mathcal{S}^{d-1}\), the distribution of \(\left\langle\mathbf{w},\mathbf{x}\right\rangle\) is \(\sigma_{\text{dir}}\)-smooth with respect to the Lebesgue measure on the real line, where \(\mathbf{x}\sim p_{t}\).
As discussed further in the related work section, smoothness has recently become a popular assumption for smoothed online learning. Directional smoothness, introduced in Block and Simchowitz (2022) and used in Block et al. (2023), has provided a natural way to mitigate the dimensional dependence of standard smoothness in some commonly used systems.
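A quick numerical illustration of why directional smoothness can be dimension-free: projecting a standard Gaussian in \(\mathbb{R}^{d}\) onto any unit direction gives a \(\mathcal{N}(0,1)\) variable, whose density is bounded by \(1/\sqrt{2\pi}\approx 0.40\), so \(\sigma_{\text{dir}}\approx 2.5\) for every \(d\). The sketch below (an illustration, not part of the paper's development) estimates the peak projected density empirically.

```python
import numpy as np

# Empirical look at Definition 2.1: for a standard Gaussian in R^d, the
# density of <w, x> along any unit direction w is N(0,1), whose peak is
# 1/sqrt(2*pi) ~ 0.40, so sigma_dir ~ 2.5 *independently of d* -- whereas
# likelihood-ratio smoothness against, e.g., a uniform base measure degrades
# with the ambient dimension.
rng = np.random.default_rng(0)
for d in (2, 20, 200):
    x = rng.standard_normal((100_000, d))
    w = rng.standard_normal(d); w /= np.linalg.norm(w)
    proj = x @ w                                  # 1-D projection <w, x>
    hist, edges = np.histogram(proj, bins=100, density=True)
    print(d, hist.max())   # ~0.40 for every d: dimension-free smoothness
```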
Our algorithms will employ the computational primitive of an Empirical Risk Minimization (ERM) oracle:
**Definition 2.2**.: Given a space \(\Theta\), and functionals \(\ell_{i}:\Theta\to\mathbb{R}\) for \(1\leq i\leq m\), define an Empirical Risk Minimization (ERM) oracle as any oracle that optimizes over \(\Theta\), i.e., \(\widetilde{\theta}=\mathsf{ERMOracle}\left(\sum_{i=1}^{m}\ell_{i}(\theta)\right)\) if \(\widetilde{\theta}\in\arg\min_{\theta\in\Theta}\sum_{i=1}^{m}\ell_{i}(\theta)\).
Definition 2.2 is a common assumption in the study of computationally efficient online learning (Hazan and Koren, 2016; Block et al., 2022; Haghtalab et al., 2022), with many heuristics for popular function classes available for practical application (LeCun et al., 2015; Garulli et al., 2012). In the sequel, we will always suppose that the learner has access to an ERM Oracle and measure the computational complexity of the algorithm by the number of calls to \(\mathsf{ERMOracle}\). In particular, we are interested in the oracle complexity of achieving average regret \(\epsilon\), i.e., the number of oracle calls that suffice to ensure that \(T^{-1}\cdot\mathbb{E}\left[\operatorname{Reg}_{T}\right]\leq\epsilon\). While in the main body we assume that \(\mathsf{ERMOracle}\) is exact for the sake of clean presentation, in the appendix we provide statements and proofs requiring only an approximate oracle, with a possibly perturbation-dependent error contributing additively to our final regret guarantees.
In the following section, we will introduce a new notion of complexity, the generalized bracketing number of a space \(\Theta\). Here, we will recall the classical notion of bracketing entropy, both for the sake of comparison and for future reference with respect to one of our results:
**Definition 2.3** (From Section 3.5.2 in Gine and Nickl (2021)).: For a function class \(\mathcal{F}:\mathcal{Z}\to\mathbb{R}\) and a measure \(\mu\in\Delta(\mathcal{Z})\), we say that a partition \(\mathcal{N}=\{\mathcal{B}_{i}\}\) of \(\mathcal{F}\) is an \(\epsilon\)-bracket with respect to \(\mu\) if for all \(\mathcal{B}_{i}\), it holds that \(\mathbb{E}_{\mu}\left[\sup_{f,g\in\mathcal{B}_{i}}\left|f(z)-g(z)\right|\right]\leq\epsilon\). The bracketing number, \(\mathcal{N}_{\|}\left(\mathcal{F},\mu,\epsilon\right)\), is the minimal size of such a partition.
Control of the bracketing numbers of a function class classically lead to uniform laws of large numbers and uniform central limit theorems, with many common function classes having well-behaved such numbers; for more detail, see (Gine and Nickl, 2021).
NotationIn the sequel, we will reserve \(z\) for contexts and \(\theta\) for parameters. We will always denote the horizon by \(T\), loss functions by \(\ell\), and will make vectors bold. We will use \(\mathcal{O}\left(\cdot\right)\) notation to suppress universal constants and \(\widetilde{\mathcal{O}}\left(\cdot\right)\) to suppress polylogarithmic factors. We will let \(\left|\left|\cdot\right|\right|_{1}\) denote the \(\ell_{1}\) norm in Euclidean space and the unadorned \(\left|\left|\cdot\right|\right|\) denote the Euclidean norm.
## 3 Follow the Perturbed Leader and Generalized Brackets
In this section, we propose our algorithm and define the complexity parameters that ensure we experience low expected regret. In the following section, we will provide examples. We will consider an instantiation of the Follow-the-Perturbed-Leader (FTPL) class of algorithms (Kalai and Vempala, 2005), where, at each time \(1\leq t\leq T\), we construct a sample path \(\omega_{t}(\theta)\) drawn independently and identically across \(t\) from some stochastic process on \(\Theta\) and select
\[\theta_{t}=\operatorname*{arg\,min}_{\theta\in\Theta}L_{t-1}( \theta)+\omega_{t}(\theta), \tag{3.1}\]
where \(L_{t-1}(\theta)=\sum_{s=1}^{t-1}\ell(\theta,z_{s})\). The classical analysis of FTPL uses the so-called 'Be-The-Leader' lemma (Kalai and Vempala, 2005, Lemma 3.1) to decompose regret into the size of the perturbation and the stability of the predictions, i.e., if the learner plays \(\theta_{t}\) from (3.1), then regret is bounded as follows:
\[\mathbb{E}\left[\operatorname{Reg}_{T}\right]\leq 2 \cdot\mathbb{E}\left[\sup_{\theta\in\Theta}\omega_{1}(\theta)\right]+\sum_{t =1}^{T}\mathbb{E}\left[\ell(\theta_{t},z_{t})-\ell(\theta_{t+1},z_{t})\right]. \tag{3.2}\]
Typically, the challenge in analysing the regret incurred by FTPL is in bounding the second term in (3.2), the stability term. A common assumption involved in this analysis is that the loss \(\ell\) is Lipschitz in \(\theta\)(Agarwal et al., 2019; Suggala and Netrapalli, 2020; Block et al., 2022); unfortunately, for many classes of interest, this assumption does not hold.
To motivate our approach, consider the simple setting of learning linear thresholds, where \(\theta\in[0,1]\) and \(\ell(\theta,z)=\mathbb{I}\left[y\neq\operatorname{sign}(x-\theta)\right]\) for \(z=(x,y)\in\mathcal{Z}=[0,1]\times\{\pm 1\}\). In this case, it is clear that \(\theta\mapsto\ell(\theta,z)\) is not Lipschitz (or even continuous) and so the results of Agarwal et al. (2019); Suggala and Netrapalli (2020) do not apply; however, a simple computation tells us that if the adversary is \(\sigma\)-smooth with respect to the Lebesgue measure, then \(\theta\mapsto\mathbb{E}_{z}\left[\ell(\theta,z)\right]\)_is_ Lipschitz. Naively, we might then hope that the stability term \(\mathbb{E}\left[\ell(\theta_{t},z_{t})-\ell(\theta_{t+1},z_{t})\right]\) can be controlled by \(|\theta_{t}-\theta_{t+1}|\) and a similar argument as in Agarwal et al. (2019); Suggala and Netrapalli (2020) could be applied. This idea does not work because, while it is true that for any fixed \(\theta\in\Theta\), smoothness of \(z_{t}\) conditioned on the history implies that \(\mathbb{E}\left[\ell(\theta_{t},z_{t})-\ell(\theta,z_{t})\right]\lesssim| \theta_{t}-\theta|\), in fact \(\theta_{t+1}\) depends on \(z_{t}\) and so it is _not_ true that the distribution of \(z_{t}\) conditioned on \(\theta_{t+1}\) is necessarily smooth. We will not wholly discard the approach, however; instead, we will show that if the class of functions \(\theta\mapsto\ell(\theta,z)\) is small with respect to a particular notion of complexity, then a similar argument holds. To make this precise, consider the following definition:
**Definition 3.1**.: Let \(\mathcal{M}\) be a class of distributions on some space \(\mathcal{Z}\) and suppose that \(\rho:\Theta\times\Theta\times\mathcal{Z}\to\mathbb{R}\) is a pseudo-metric on the space \(\Theta\), parameterized by elements of \(\mathcal{Z}\). We say that a set \(\{(\theta_{i},\mathcal{B}_{i})\}\subset\Theta\times 2^{\Theta}\) is a generalized \(\epsilon\)-bracket if \(\Theta\subset\bigcup_{i}\mathcal{B}_{i}\) and for all \(i\), it holds that
\[\sup_{\nu\in\mathcal{M}}\mathbb{E}_{z\sim\nu}\left[\sup_{\theta\in \mathcal{B}_{i}}\rho(\theta,\theta_{i},z)\right]\leq\epsilon.\]
We denote by \(\mathcal{N}_{\mathcal{M},\left[\right]}\left(\Theta,\rho,\epsilon\right)\) the minimal size of a generalized \(\epsilon\)-bracket.
Note the similarity of Definition 3.1 to the classical notion from Definition 2.3: generalized brackets require that the expected diameter of a given partition element \(\mathcal{B}_{i}\) be small _uniformly over measures_ in some class \(\mathcal{M}\); in fact, if \(\mathcal{M}\) is a singleton, we recover the classical notion. The utility of generalized \(\epsilon\)-brackets over other notions of complexity, like standard covering numbers, is as follows:
**Proposition 3.1**.: _Let \(\mathcal{M}\) and \(\rho\) be as in Definition 3.1 and suppose that \(z_{1},\ldots,z_{n}\) are generated such that the law \(p_{i}\) of \(z_{i}\) conditioned on the \(\sigma\)-algebra \(\mathcal{F}_{i}\) generated by the \(z_{j}\) for \(j<i\) satisfies \(p_{i}\in\mathcal{M}\) for all \(1\leq i\leq n\). Suppose further that for all \(z\in\mathcal{Z}\), it holds that \(\sup_{\theta,\theta^{\prime}\in\Theta}\rho(\theta,\theta^{\prime},z)\leq D\). Then for any \(\epsilon,\delta>0\), with probability at least \(1-\delta\), it holds simultaneously for all \(\theta,\theta^{\prime}\in\Theta\) that:_
\[\left|\sum_{i=1}^{n}\rho(\theta,\theta^{\prime},z_{i})\right|\leq 4n\cdot \sup_{\nu\in\mathcal{M}}\mathbb{E}_{\nu}\left[\rho(\theta,\theta^{\prime},z) \right]+8\epsilon n+6D^{2}\log\left(\frac{2\mathcal{N}_{\mathcal{M},\|}\left( \Theta,\rho,\epsilon\right)}{\delta}\right). \tag{3.3}\]
The proof of Proposition 3.1 can be found in Appendix B and proceeds by applying Freedman's inequality and controlling the supremum of a sum by the sum of suprema. It is somewhat surprising that, despite this seemingly very loose bound, we are able to achieve the expected \(\widetilde{\mathcal{O}}\left(\epsilon^{-2}\right)\) oracle complexity guarantees below in a wide variety of settings.
Critically, because (3.3) holds uniformly over \(\theta^{\prime}\in\Theta\), we may apply Proposition 3.1 to \(\theta^{\prime}=\theta_{t+1}\) and escape the challenge presented by \(z_{t}\) not being smooth when conditioned on \(\theta_{t+1}\). There are two remaining problems before we can present our algorithm. First, due to the additive statistical error in (3.3), if \(n\) is too small, then Proposition 3.1 is vacuous. To mitigate this problem, we will run FTPL in epochs. For some fixed \(n\in\mathbb{N}\), and for all \(\tau\geq 1\), let \(\widetilde{L}_{\tau}(\theta)=L_{\tau n}(\theta)\), and define \(\mathcal{I}_{\tau}=\left\{i|(\tau-1)n+1\leq i\leq\tau n\right\}\) as well as \(\widetilde{\ell}_{\tau}(\theta)=\sum_{t\in\mathcal{I}_{\tau}}\ell(\theta,z_{t})\). We will run a lazy version of FTPL, where we update \(\theta_{t}=\widetilde{\theta}_{\tau}\) at the beginning of each \(\mathcal{I}_{\tau}\) and let \(\theta_{t}=\widetilde{\theta}_{\tau}\) until the next change of epoch. The laziness allows the first term in (3.3) to dominate when we apply Proposition 3.1. The full algorithm is summarized in Algorithm 1.
```
1: Initialize ERM oracle \(\mathsf{ERMOracle}\), epoch length \(n\), perturbation distribution \(\Omega\)
2: for \(\tau=1,2,\ldots,T/n\) do
3:   Sample \(\omega_{\tau}:\Theta\rightarrow\mathbb{R}\) from \(\Omega\)  (% Sample perturbation)
4:   \(\widetilde{\theta}_{\tau}\leftarrow\mathsf{ERMOracle}\left(\widetilde{L}_{\tau}(\theta)+\omega_{\tau}(\theta)\right)\)  (% Call ERM oracle on perturbed losses)
5:   for \(t=(\tau-1)n+1,\ldots,\tau n\) do
6:     Observe \(z_{t}\), predict \(\widetilde{\theta}_{\tau}\), receive \(\ell(\widetilde{\theta}_{\tau},z_{t})\)
```
**Algorithm 1** Lazy FTPL
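For concreteness, a minimal Python sketch of the lazy epoch structure follows. The callables `erm_oracle`, `sample_perturbation`, and `loss` are hypothetical placeholders for the ERM oracle, a draw from \(\Omega\), and the loss \(\ell\); none of these interfaces are specified by the text beyond their roles, and the cumulative loss here is taken over the data observed through the previous epoch.

```
# Sketch of Lazy FTPL (Algorithm 1). Assumed interfaces (hypothetical):
#   erm_oracle(objective)  -> a (near-)minimizer of objective over Theta
#   sample_perturbation()  -> a function omega: Theta -> R drawn from Omega
#   loss(theta, z)         -> the loss ell(theta, z)
def lazy_ftpl(T, n, erm_oracle, sample_perturbation, loss, stream):
    history = []        # all z's observed so far (through the previous epoch)
    total_loss = 0.0
    for tau in range(T // n):
        omega = sample_perturbation()                      # sample omega_tau
        # One oracle call per epoch: cumulative observed loss + perturbation.
        objective = lambda theta, hist=tuple(history), om=omega: (
            sum(loss(theta, z) for z in hist) + om(theta))
        theta_tau = erm_oracle(objective)
        for _ in range(n):                                 # lazy phase: theta frozen
            z = next(stream)                               # observe z_t
            total_loss += loss(theta_tau, z)               # predict theta_tau
            history.append(z)
    return total_loss
```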
The second challenge is to relate the stability terms in (3.2) to the pseudo-metric \(\rho\) evaluated on successive \(\widetilde{\theta}_{\tau}\). Thus, we will require that the losses satisfy the following structural condition:
**Definition 3.2**.: Suppose that \(\Theta\) is a subset of some normed space equipped with norm \(||\cdot||\). We say that the pseudo-metric \(\rho:\Theta\times\Theta\times\mathcal{Z}\rightarrow\mathbb{R}\) satisfies the pseudo-isometry property with parameters \((\alpha,\beta)\) with respect to the class of distributions \(\mathcal{M}\) and the norm \(||\cdot||\) if for all \(\theta,\theta^{\prime}\in\Theta\), it holds that
\[\sup_{\nu\in\mathcal{M}}\mathbb{E}_{z\sim\nu}\left[\rho(\theta,\theta^{\prime},z)\right]\leq\alpha\cdot\left|\left|\theta-\theta^{\prime}\right|\right|^{ \beta}.\]
We are now prepared to state our first result bounding the regret of an instance of Algorithm 1:
**Proposition 3.2**.: _Suppose that we are in the constrained online learning setting, where the adversary is constrained to sample \(z_{t}\) from some distribution in the class \(\mathcal{M}\). Suppose further that there is a pseudo-metric \(\rho\) on \(\Theta\) parameterized by \(\mathcal{Z}\) satisfying the pseudo-isometry property of Definition
3.2, and for all \(\theta,\theta^{\prime}\in\Theta\) it holds that \(\sup_{\nu\in\mathcal{M}}\mathbb{E}_{\nu}\left[\ell(\theta,z)-\ell(\theta^{\prime},z)\right]\leq\sup_{\nu\in\mathcal{M}}\mathbb{E}_{\nu}\left[\rho(\theta,\theta^{ \prime},z)\right]\). If the learner plays Algorithm 1 and \(\sup_{\theta,\theta^{\prime}\in\Theta}\rho(\theta,\theta^{\prime},z)\leq D\), then for any \(\epsilon>0\), the expected regret is upper bounded by:_
\[\mathcal{O}\left(\mathbb{E}\left[\sup_{\theta\in\Theta}\omega_{1}(\theta) \right]+\epsilon T+\frac{TD^{2}}{n}\cdot\log\left(T\cdot\mathcal{N}_{\mathcal{ M},[]}\left(\Theta,\rho,\epsilon\right)\right)+2n\alpha\cdot\sum_{\tau=1}^{T/n} \mathbb{E}\left[\left|\left|\widetilde{\theta}_{\tau}-\widetilde{\theta}_{\tau+ 1}\right|\right|^{\beta}\right]\right).\]
We provide a complete proof in Appendix C. We first prove a variant of the Be-the-Leader lemma from Kalai and Vempala (2005) that allows for lazy updates, before applying Proposition 3.1 along with the pseudo-isometry property to control the stability term of the lazy updates with respect to the evaluated loss functions by the stability of the learner's predictions with respect to the relevant norm. Putting everything together concludes the proof. We remark that, as presented, it might appear that there is no disadvantage to setting \(n\) as large as possible; indeed the \(n\) dependence in the final sum appears to cancel out and increasing \(n\) decreases the third term. Unsurprisingly, this is not the case as increasing \(n\) reduces the stability of the learner's predictions and thus implicitly increases the final term, as is clear in the applications of this result.
Proposition 3.2 provides a template for proving regret bounds for different instantiations of Algorithm 1. In particular, for a given loss function \(\ell(\cdot,\cdot)\), it suffices to find a pseudo-metric \(\rho\), norm \(||\cdot||\), and noise distribution \(\Omega\) such that (a) \(\rho\) is a pseudo-isometry with respect to the norm \(||\cdot||\), (b) the generalized bracketing numbers of \(\Theta\) are small with respect to \(\rho\), and (c) the perturbation causes the lazy updates to be stable in the sense that \(\mathbb{E}\left[\left|\left|\widetilde{\theta}_{\tau}-\widetilde{\theta}_{ \tau+1}\right|\right|\right]\) is small. As an easy warmup for the results in the next section, we show that we can recover a weak version of the oracle-complexity upper bound of proper, smoothed online learning with the Gaussian process perturbation from Block et al. (2022), using a substantially simpler proof when the relevant function class has small bracketing entropy in the classical sense.
In this motivating example, we suppose that \(\Theta=\mathcal{F}\) denotes a function class and that we are in the online supervised learning setting, i.e., \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) with \(\ell(\theta,z)=\widetilde{\ell}(f(x),y)\). We further suppose that the adversary is \(\sigma\)-smooth with respect to a known base measure \(\mu\) (recall Definition 2.1). As in Block et al. (2022, Theorem 10), we consider a Gaussian process perturbation, where we draw \(x_{1},\ldots,x_{m}\sim\mu\) independently, \(\gamma_{1},\ldots,\gamma_{m}\) standard Gaussians, and let \(\omega(f)=\eta\cdot\sum_{i=1}^{m}\gamma_{i}f(x_{i})\).
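As an aside, this Gaussian process perturbation is straightforward to sample; the sketch below assumes a callable `mu_sampler` drawing from the base measure \(\mu\) (an interface of our choosing, not specified in the text).

```
import random

# Sketch of the Gaussian process perturbation omega(f) = eta * sum_i gamma_i f(x_i):
# draw x_1, ..., x_m from the base measure mu and gamma_i as standard Gaussians,
# then return omega as a functional of f.
def gaussian_perturbation(m, eta, mu_sampler):
    xs = [mu_sampler() for _ in range(m)]
    gammas = [random.gauss(0.0, 1.0) for _ in range(m)]
    return lambda f: eta * sum(g * f(x) for g, x in zip(gammas, xs))
```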
**Corollary 3.1**.: _Suppose that we are in the smoothed online learning setting with a function class \(\mathcal{F}:\mathcal{X}\to\{\pm 1\}\) and with \(\widetilde{\ell}\) in the unit interval and Lipschitz with respect to the first argument for all choices of the second argument. If the learner plays Algorithm 1 with the Gaussian perturbation described above, then with the correct choice of hyperparameters, given in Appendix D, the learner can achieve average regret \(\epsilon\) with \(\widetilde{\mathcal{O}}\left(\frac{\epsilon^{-4}L^{3/5}}{\sigma^{2/5}}\cdot\log^{3/5}\left(\mathcal{N}_{[]}\left(\mathcal{F},\mu,\frac{\sigma}{LT}\right)\right)\right)\) calls to the ERM oracle._
Note that the oracle complexity guarantee is weaker than that of Block et al. (2022); we include Corollary 3.1, and its proof in Appendix D, merely as a simple demonstration of our techniques and how they relate to more classical notions of function class complexity. We now proceed to examples where our machinery provides novel regret bounds in fundamental settings.
## 4 Exponential Perturbations and Piecewise Continuous Functions
In the previous section, we observed that Proposition 3.2 provided a template for proving regret bounds for different instantiations of FTPL and applied this technique to recover earlier results from
smoothed online learning. In this section, we provide new results for an important setting: piecewise continuous functions. Before we formally define piecewise continuous functions, we consider the more general case where the set \(\Theta\subset\mathbb{R}^{d}\) for some dimension \(d\). The template provided by Proposition 3.2 requires that we specify a perturbation distribution; whereas before we used a Gaussian process, here we adopt the approach of Agarwal et al. (2019), Suggala and Netrapalli (2020) and use an exponential perturbation. Summarized in Algorithm 2, we keep the lazy updating from Algorithm 1 but specify \(\omega(\theta)=-\eta\cdot\left\langle\xi,\theta\right\rangle\) for some scale parameter \(\eta>0\) and \(\xi=(\xi_{1},\ldots,\xi_{d})\) for \(\xi_{i}\sim\mathrm{Exp}(1)\) independently. With the exponential perturbation, we have the following regret bound:
**Theorem 1**.: _Suppose that we are in the constrained online learning setting of Proposition 3.2 with \(\Theta\subset\mathbb{R}^{d}\) such that \(\sup_{\theta,\theta^{\prime}\in\Theta}\left|\left|\theta-\theta^{\prime} \right|\right|_{1}=D<\infty\). Suppose further that the \(\mathcal{Z}\)-parameterized pseudo-metric \(\rho\) satisfies the pseudo-isometry property of Definition 3.2 with respect to \(\ell_{1}\) on \(\mathbb{R}^{d}\) and that \(\sup_{\nu\in\mathcal{M}}\mathbb{E}_{\nu}\left[\ell(\theta,z)-\ell(\theta^{ \prime},z)\right]\leq\sup_{\nu\in\mathcal{M}}\mathbb{E}_{\nu}\left[\rho(\theta,\theta^{\prime},z)\right]\). If the learner plays Algorithm 2 and \(\eta=\Omega(n^{2})\), then the expected regret is bounded:_
\[\mathbb{E}\left[\mathrm{Reg}_{T}\right]\leq\mathcal{O}\left(\eta+\frac{T}{n}\cdot\log\left(\mathcal{N}_{\mathcal{M},[]}(\Theta,\rho,1/T)\right)+T\alpha\left(\frac{\log\mathcal{N}_{\mathcal{M},[]}(\Theta,\rho,1/T)}{\eta}\right)^{\frac{\beta}{4-2\beta}}\right).\]
Tuning \(\eta\) and \(n\), regret scales as \(\widetilde{\mathcal{O}}\left(T^{\frac{4-2\beta}{4-\beta}}\right)\) with \(\widetilde{\mathcal{O}}\left(T^{\frac{2}{4-\beta}}\right)\) calls to the optimization oracle and thus \(\widetilde{\mathcal{O}}\left(\epsilon^{-2/\beta}\right)\) calls to ERMOracle suffice to attain average regret \(\epsilon\). In particular, in the best case, when \(\beta=1\), we recover the optimal \(\widetilde{\mathcal{O}}\left(\epsilon^{-2}\right)\) oracle-complexity of attaining average regret bounded by \(\epsilon\) that would arise if we called the oracle once per round and achieved regret \(\widetilde{\mathcal{O}}\left(\sqrt{T}\right)\).
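For concreteness, the tuning arithmetic behind this claim is elementary (a sketch, suppressing logarithmic factors and constants): imposing average regret at most \(\epsilon\) gives
\[\frac{\mathrm{Reg}_{T}}{T}=\widetilde{\mathcal{O}}\left(T^{-\frac{\beta}{4-\beta}}\right)\leq\epsilon\implies T=\widetilde{\Theta}\left(\epsilon^{-\frac{4-\beta}{\beta}}\right),\qquad\frac{T}{n}=\widetilde{\mathcal{O}}\left(T^{\frac{2}{4-\beta}}\right)=\widetilde{\mathcal{O}}\left(\epsilon^{-\frac{2}{\beta}}\right),\]
and the number of oracle calls is exactly the number of epochs \(T/n\).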
While a complete proof of Theorem 1 can be found in Appendix E, we provide a brief sketch here. Though we follow the general template of Proposition 3.2, we do not directly apply the result in order to get a slightly improved rate. As in the proof of the more general proposition, we appeal to the Be-the-Leader lemma to reduce the analysis to bounding the stability of the learner's predictions with respect to the evaluated loss functions. We then apply techniques from Agarwal et al. (2019), Suggala and Netrapalli (2020) to show that if the stability term is small, then the learner's predictions are stable with respect to \(\left|\left|\cdot\right|\right|_{1}\) in \(\mathbb{R}^{d}\). Finally, we use the pseudo-isometry assumption and control of the generalized bracketing numbers along with Proposition 3.1 to conclude with a self-bounding argument.
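The exponential perturbation itself is simple to implement; the following is a minimal NumPy sketch (representing \(\Theta\subset\mathbb{R}^{d}\) as arrays is our assumption), with the returned function added to the empirical loss before the ERM call, exactly as in the lazy scheme of Algorithm 1. Algorithm 2's full listing is not reproduced here; this covers only the perturbation draw.

```
import numpy as np

# Sketch of the exponential perturbation omega(theta) = -eta * <xi, theta>
# with xi_1, ..., xi_d ~ Exp(1) independently, as used by Algorithm 2.
def exponential_perturbation(d, eta, rng=None):
    rng = rng or np.random.default_rng()
    xi = rng.exponential(scale=1.0, size=d)          # xi_i ~ Exp(1)
    return lambda theta: -eta * float(np.dot(xi, theta))
```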
### Piecewise-Continuous Prediction
We now instantiate the previous result on several problems of interest. For the rest of this section, we show that piecewise continuous functions with well-behaved boundaries allow for both small
bracketing numbers and the pseudo-isometry property for properly chosen \(\rho\), assuming only directional smoothness of the adversary. Formally, we suppose that \(\Theta=\Theta_{\mathrm{c}}\times\Theta_{\mathrm{d}}\) can be decomposed into continuous and discrete parts with \(\Theta\subset\mathbb{R}^{m}\) for some dimension \(m\). We construct the loss function as follows. First, consider functions \(g_{k}:\Theta_{\mathrm{c}}\times\mathcal{Z}\to\mathbb{R}\) for \(1\leq k\leq K\) such that for all \(z\in\mathcal{Z}\), \(g_{k}(\cdot,z)\) is Lipschitz as a function of \(\theta_{\mathrm{c}}\) with respect to the \(\ell_{1}\) norm on \(\Theta\). Now, for a fixed \(\phi:\Theta_{\mathrm{d}}\times[K]\times\mathcal{Z}\to\mathbb{R}\), we define
\[k_{\phi}(\theta_{\mathrm{d}},z)=\operatorname*{arg\,max}_{k\in[K]}\phi(\theta_{\mathrm{d}},k,z),\qquad\ell(\theta,z)=g_{k_{\phi}(\theta_{\mathrm{d}},z)}(\theta_{\mathrm{c}},z). \tag{4.1}\]
While the formulation of (4.1) combines versatility and simplicity, a related construction turns out to be easier to analyze: let \(\overline{\phi}:\Theta_{\mathrm{d}}\times[K]^{\times 2}\times\mathcal{Z}\to \mathbb{R}\) such that \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)=-\overline{\phi}(\theta_{ \mathrm{d}},k^{\prime},k,z)\) for all \(\theta_{\mathrm{d}}\in\Theta_{\mathrm{d}}\), \(k,k^{\prime}\in[K]\), and \(z\in\mathcal{Z}\). Further, let
\[\overline{k}_{\overline{\phi}}(\theta_{\mathrm{d}},z)=\operatorname*{arg\,max}_{k\in[K]}\sum_{k^{\prime}\neq k}\mathbb{I}\left[\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)\geq\overline{\phi}(\theta_{\mathrm{d}},k^{\prime},k,z)\right],\]
with ties broken lexicographically, i.e., \(\overline{k}_{\overline{\phi}}\) is the smallest index \(k\) that wins the most matches of a tournament, where victory is determined by the sign of \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)\). We then define
\[\overline{\ell}(\theta,z) =g_{\overline{k}_{\overline{\phi}}(\theta_{\mathrm{d}},z)}( \theta_{\mathrm{c}},z). \tag{4.2}\]
In this section, we will focus on the tournament formulation of (4.2) for the sake of simplicity. In Appendix F.1, we will extend our results to the case of (4.1) with an additional margin assumption. We further remark that (4.2) can be regarded as an improper relaxation of the natural function class in (4.1) and thus suffices for improper online learning1. Finally, we note that, while we have described a tournament-style aggregation system for the sake of simplicity, as can be seen from our proof, any aggregation of the \(\binom{K}{2}\) events \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)\geq 0\) will result in a similar statement, yielding much greater generality. This generalization allows us, for example, to efficiently represent polytopic regions with \(K\) proportional to the number of faces.
Footnote 1: See Block et al. (2022) for a discussion on the difference between proper and improper online learning.
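To fix ideas, the tournament winner \(\overline{k}_{\overline{\phi}}\) with lexicographic tie-breaking can be computed as in the sketch below, where the callable `phi_bar(k, kp)` stands in for \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)\) (an assumed interface); by antisymmetry, \(k\) beats \(k^{\prime}\) exactly when \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},z)\geq 0\).

```
# Sketch of the tournament selection rule: count each index's wins and return
# the smallest index attaining the maximum number of wins (lexicographic ties).
def tournament_winner(K, phi_bar):
    wins = [sum(1 for kp in range(K) if kp != k and phi_bar(k, kp) >= 0.0)
            for k in range(K)]
    best = max(wins)
    return min(k for k in range(K) if wins[k] == best)
```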
### Piecewise Continuous Prediction with Generalized Affine Boundaries
We begin our study with the important special case of affine decision boundaries, and note that the setting described by (4.1) encompasses the central example of PWA functions: by letting \(\Theta_{\mathrm{c}}=\left(\mathbb{R}^{m\times d}\right)^{\times K}\), \(\Theta_{\mathrm{d}}=\left(\mathbb{R}^{d+1}\right)^{\times K}\), \(\mathcal{Z}=\mathbb{R}^{d}\times\mathbb{R}^{m}\), and \(\phi(\theta_{\mathrm{d}},k,z)=\left\langle\mathbf{w}_{k},(\mathbf{x},1)\right\rangle\), we may take
\[\ell(\theta,z)=\left|\left|\mathbf{y}-\mathbf{W}_{k^{\star}}\mathbf{x}\right|\right|^{2},\qquad k^{\star}=\operatorname*{arg\,max}_{k\in[K]}\left\langle\mathbf{w}_{k},(\mathbf{x},1)\right\rangle,\]
where we add an extra coordinate of \(1\) at the end to account for a possible affine constant. We show that if \(\overline{\ell}\) is piecewise continuous as in (4.2) with affine boundaries, then the generalized bracketing numbers are small and pseudo-isometry holds with respect to the \(\ell_{1}\) norm as long as the adversary is \(\sigma_{\mathrm{dir}}\)-directionally smooth.
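The PWA loss above can be evaluated directly; below is a minimal NumPy sketch, where `ws` stacks the gating vectors \(\mathbf{w}_{k}\) and `Ws` stacks the regression matrices \(\mathbf{W}_{k}\) (the array layout is our assumption).

```
import numpy as np

# Sketch of the piecewise-affine loss: the active mode k* maximizes the affine
# gate <w_k, (x, 1)>, and the loss is the squared error of that mode's regressor.
def pwa_loss(ws, Ws, x, y):
    x1 = np.append(x, 1.0)                 # append 1 for the affine constant
    k_star = int(np.argmax(ws @ x1))       # k* = argmax_k <w_k, (x, 1)>
    residual = y - Ws[k_star] @ x          # y - W_{k*} x
    return float(residual @ residual)      # squared Euclidean norm
```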
**Theorem 2**.: _Suppose that \(\mathcal{Z}\subset\mathbb{R}^{d}\) and that \(\Theta\) is a subset of Euclidean space of \(\ell_{1}\) diameter bounded by \(D\), with \(\Theta_{\mathrm{d}}\subset(\mathcal{S}^{d})^{\times\binom{K}{2}}\); denote by \(\mathbf{w}_{kk^{\prime}}\) the coordinates of a given \(\theta_{\mathrm{d}}\in\Theta_{\mathrm{d}}\). Suppose further that \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},\mathbf{z})=\psi(\langle\mathbf{w}_{kk^{\prime}},(\mathbf{z},1)\rangle)\) for some differentiable, odd link function \(\psi:\mathbb{R}\to\mathbb{R}\) satisfying \(a\leq\left|\psi^{\prime}(x)\right|\leq A\) for all \(x\), and let \(\mathcal{M}\) consist of the class of \(\sigma_{\mathrm{dir}}\)-directionally smooth distributions such that \(\left|\left|\mathbf{z}\right|\right|_{\infty}\leq B\). Let_
\[\rho(\theta,\theta^{\prime},\mathbf{z})=2\cdot\mathbb{I}\left[\overline{k}_{ \overline{\phi}}(\theta_{\mathrm{d}},\mathbf{z})\neq\overline{k}_{\overline{ \phi}}(\theta^{\prime}_{\mathrm{d}},\mathbf{z})\right]+\max_{1\leq k\leq K} \left|\left|\theta_{\mathrm{c}}^{(k)}-\theta_{\mathrm{c}}^{\prime(k)}\right| \right|_{1}. \tag{4.3}\]
_Then \(\rho\) is a pseudo-metric satisfying the pseudo-isometry property with \(\alpha=\frac{2A(B\lor 1)}{a\sigma_{\mathrm{dir}}}\) and \(\beta=1\). Furthermore, for all \(\epsilon>0\), \(\mathcal{N}_{\mathcal{M},[]}\left(\Theta,\rho,\epsilon\right)\leq\left(\frac{9AK^{2}BD}{a\sigma_{\mathrm{dir}}\epsilon}\right)^{K^{2}(d+1)}\)._
We prove Theorem 2 in full detail in Appendix F.1. The proofs of both statements rely on the same key step, given in Lemma F.1, which demonstrates that for fixed \(\theta_{\mathrm{d}}^{0}\), even though the event \(\mathbb{I}\left[\overline{k}_{\overline{\phi}}(\theta_{\mathrm{d}},\mathbf{z})\neq\overline{k}_{\overline{\phi}}(\theta_{\mathrm{d}}^{0},\mathbf{z})\right]\) is not a continuous function of \(\theta_{\mathrm{d}}\), its expectation is Lipschitz if \(\mathbf{z}\) is \(\sigma_{\mathrm{dir}}\)-directionally smooth. Thus, Lemma F.1 is a vast generalization of the motivating argument involving one-dimensional thresholds in Section 3. This key lemma is proven by appealing to the anti-concentration of affine functions applied to directionally smooth random variables and is the only place where the analysis of \(\overline{\ell}\) differs from that of the function \(\ell\) in (4.1). We then use this result both to imply pseudo-isometry and to show that a cover of \(\Theta\) with respect to \(\ell_{1}\) gives rise to a generalized \(\epsilon\)-bracket with respect to \(\mathcal{M}\) and \(\rho\).
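To illustrate the mechanism in the simplest possible case (a one-dimensional caricature of Lemma F.1, not the general argument): if predictions are thresholds \(k(\theta,z)=\mathbb{I}\left[z\leq\theta\right]\) and \(z\sim\nu\) has density bounded by \(1/\sigma\), then
\[\mathbb{E}_{z\sim\nu}\left[\mathbb{I}\left[k(\theta,z)\neq k(\theta^{\prime},z)\right]\right]=\mathbb{P}_{\nu}\left(z\in(\theta\wedge\theta^{\prime},\theta\vee\theta^{\prime}]\right)\leq\frac{|\theta-\theta^{\prime}|}{\sigma},\]
so the expected disagreement is Lipschitz in the parameters even though the pointwise indicator is discontinuous.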
Using Theorem 2, we are able to prove a concrete regret bound for Algorithm 2 on the class of piecewise continuous functions with affine boundaries:
**Corollary 4.1**.: _Suppose that \(\overline{\ell}\) is as in (4.2) with \(\overline{\phi}\) and \(\Theta\) as in Theorem 2 with \(B\geq 1\) and \(\overline{\ell}\) uniformly bounded in magnitude by \(1\). If we set \(\eta=\widetilde{\Theta}\left(\left(TK^{2}dDBA(a\sigma_{\mathrm{dir}})^{-1}\right)^{2/3}\right)\) and \(n=\sqrt{\eta}\), then Algorithm 2 experiences \(\mathbb{E}\left[\mathrm{Reg}_{T}\right]\leq\widetilde{\mathcal{O}}\left(\left(TAK^{2}dBD(a\sigma_{\mathrm{dir}})^{-1}\right)^{2/3}\right)\). In particular, to achieve average regret \(\epsilon\), it suffices to call \(\mathsf{ERMOracle}\) only \(\widetilde{\mathcal{O}}\left(\frac{AK^{2}dDB}{a\sigma_{\mathrm{dir}}\epsilon^{2}}\right)\) times._
The proof of Corollary 4.1 can be found in Appendix F.2 and follows almost immediately from Theorems 1 and 2. The simplest example of a link function is simply to let \(\psi(x)=x\), the identity, in which case we obtain a regret bound for piecewise continuous functions with affine boundaries.
### Piecewise Continuous Prediction with Polynomial Boundaries
In order to broaden the scope of applications, we now consider more general boundaries between regions. As mentioned above, the key to proving an analogue of Theorem 2 is the anti-concentration of affine functions applied to directionally smooth random variables. While anti-concentration properties of more general functions remain an active area of research, sub-classes of polynomials, such as multi-linear functions of independent variables, are known to anti-concentrate in great generality (Mossel et al., 2010) and suffice to extend our results to loss functions with these decision boundaries using our techniques. Rather than pursuing this route, we instead focus on general polynomial boundaries and further restrict \(\mathcal{M}\):
**Definition 4.1**.: For a polynomial \(f:\mathbb{R}^{d}\to\mathbb{R}\) such that \(f(x)=\sum_{\mathcal{I}\subset[d]}\alpha_{\mathcal{I}}x^{\mathcal{I}}\), let \(r=\deg(f)=\max\left\{|\mathcal{I}|:\alpha_{\mathcal{I}}\neq 0\right\}\) denote the degree and let \(\mathrm{coeff}_{r}(f)=\sqrt{\sum_{|\mathcal{I}|=r}\alpha_{\mathcal{I}}^{2}}\) be the Euclidean norm of the vector of coefficients on the top-degree terms of the polynomial \(f\). We say that a distribution \(\nu\) is \(\sigma_{\mathrm{poly},r}\)-polynomially smooth if for all \(a\in\mathbb{R}\) and all degree-\(r\) polynomials \(f\) such that \(\mathrm{coeff}_{r}(f)=1\), it holds that \(\mathbb{P}_{x\sim\nu}\left(|f(x)-a|\leq\epsilon\right)\leq\frac{\epsilon^{\frac{1}{r}}}{\sigma_{\mathrm{poly},r}}\).
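Definition 4.1 can be probed empirically by Monte Carlo; the sketch below estimates the anti-concentration probability for a given sampler (an illustration only: a finite-sample estimate can falsify, but never certify, polynomial smoothness).

```
import numpy as np

# Monte Carlo sketch estimating P(|f(x) - a| <= eps) for a degree-r polynomial f
# with coeff_r(f) = 1; comparing the estimate against eps**(1/r) / sigma probes
# the sigma_{poly,r}-polynomial smoothness of the sampler's distribution.
def anticoncentration_estimate(f, a, eps, sampler, n_samples=100_000):
    vals = np.array([f(sampler()) for _ in range(n_samples)])
    return float(np.mean(np.abs(vals - a) <= eps))
```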
Before proceeding, a few remarks are in order. First, we note that directional smoothness is _not_ sufficient to ensure polynomial smoothness, as exhibited by Glazer and Mikulincer (2022, Example 3), and thus more constrained adversaries are indeed necessary to apply our methods. Second, we observe that Definition 4.1 extends the notion of directional smoothness, with the latter corresponding to \(\sigma_{\mathrm{poly},1}\)-smoothness. Finally, we observe that several common families of distributions are easily seen to be polynomially smooth with dimension-independent \(\sigma_{\mathrm{poly},r}\), such as Gaussians and, more generally, product measures of log-concave marginals (Glazer and Mikulincer, 2022, Corollary 4); we expand on this discussion in Appendix G.1. Assuming an adversary is polynomially smooth, we prove an analogue of Theorem 2, which then results in the following regret bound:
**Theorem 3**.: _Suppose \(\mathcal{Z}\subset\mathbb{R}^{d}\) and \(\Theta\) is a subset of Euclidean space with \(\ell_{1}\) diameter bounded by \(D\). Let \(\Theta_{\mathrm{d}}\) parameterize the set of tuples of \(\binom{K}{2}\) degree-\(r\) polynomials \((f_{\mathbf{w}_{kk^{\prime}}})\) on \(\mathbb{R}^{d}\) such that \(\mathrm{coeff}_{r}(f_{\mathbf{w}_{kk^{\prime}}})=1\) for all \(k,k^{\prime}\in[K]\). Suppose that \(\overline{\ell}\) is defined as in (4.2) with \(\overline{\phi}(\theta_{\mathrm{d}},k,k^{\prime},\mathbf{z})=f_{\mathbf{w}_{kk^{\prime}}}(\mathbf{z})\) and \(\overline{\ell}\) bounded in the unit interval. If \(\mathcal{M}\) is the class of \(\sigma_{\mathrm{poly},r}\)-polynomially smooth distributions such that \(\left|\left|\mathbf{z}\right|\right|_{\infty}\leq B\) almost surely, then with the correct choices of \(\eta,n\) given in Appendix G.2, Algorithm 2 experiences \(\mathbb{E}\left[\mathrm{Reg}_{T}\right]\leq\widetilde{\mathcal{O}}\left(\left(TK^{2}r^{2}d^{r}DB\sigma_{\mathrm{poly},r}^{-1}\right)^{\frac{4r-2}{4r-1}}\right)\). Thus, the oracle complexity of achieving average regret at most \(\epsilon\) is controlled by \(\widetilde{\mathcal{O}}\left(\frac{K^{2}r^{2}d^{r}DB}{\sigma_{\mathrm{poly},r}}\cdot\epsilon^{-2r}\right)\)._
We prove Theorem 3 similarly to Corollary 4.1: we show an analogue of Theorem 2 for polynomially smooth distributions to control the generalized bracketing numbers and pseudo-isometry constants with respect to \(\rho\) from (4.3) before applying Theorem 1. The full details are in Appendix G.2. We remark that the common thread between the proofs of Theorem 3 and Corollary 4.1 is that functions of random variables sampled from distributions in \(\mathcal{M}\) are sufficiently anti-concentrated as to smooth the non-continuous parts of the loss functions. Finally, note that we can replace \(\overline{\ell}\) with \(\ell\) from (4.1) under a similar margin assumption as in Appendix F.3.
## 5 Smoothed Multi-Step Planning
In previous sections, we were interested in online prediction; here we focus on the related problem of multi-step decision making. Specifically, we study the setting of multi-step planning, where the learner plays a sequence of dynamical inputs (in control parlance, an _open-loop plan_) to minimize a cumulative control loss over a finite planning horizon. We focus on "hybrid dynamics" (Borrelli, 2003; Henzinger and Sastry, 1998), where the state space is partitioned into regions (called modes) within which the dynamics are Lipschitz. We consider the case of affine decision boundaries between modes here and defer discussion of polynomial boundaries to Appendix H. We remark that this problem is challenging due to the introduction of possible discontinuities across modes, again limiting the applicability of previous techniques. This class is rich enough to model piecewise-affine dynamics frequently encountered in robotic planning (Hogan and Rodriguez, 2016; Anitescu and Potra, 1997; Aydinoglu et al., 2021); in the appendix, we generalize further to polynomial decision boundaries (Posa et al., 2015). See also the related work in Appendix A.
Formally, we fix a planning horizon \(H\in\mathbb{N}\) and consider a family of dynamical systems with states \(\mathbf{x}_{h}\in\mathcal{X}\subset\mathbb{R}^{m}\) and inputs \(\mathbf{u}_{h}\in\mathcal{U}\subset\mathbb{R}^{d}\). Our decision variables are _plans_ \(\theta=\bar{\mathbf{u}}_{1:H}\in\mathcal{K}\subset\mathcal{U}^{\times H}\) and our contexts are tuples \(z_{t}=(\mathbf{x}_{t,1},\boldsymbol{\eta}_{t,1:H},\boldsymbol{\xi}_{t,1:H},g_{t,1:H,1:K},\ell_{t}^{\mathbf{v}},\mathbf{W}_{t,1:H})\) consisting of an initial state \(\mathbf{x}_{t,1}\in\mathcal{X}\), noises \(\boldsymbol{\eta}_{t,h}\in\mathcal{X}\) and \(\boldsymbol{\xi}_{t,h}\in\mathcal{U}\), continuous functions \(g_{t,h,k}\) defining the dynamics for mode \(k\) at step \(h\), time-dependent continuous losses \(\ell_{t}^{\mathbf{v}}\), and matrices \(\mathbf{W}_{t,h}\in\mathbb{R}^{K\times(m+d+1)}\) determining the boundaries between modes, where \(\mathbf{W}_{t,h}\) has rows \(\mathbf{w}_{t,h,k}\in\mathcal{S}^{m+d}\). We use \(\mathbf{v}\in\mathcal{V}=\mathcal{X}\times\mathcal{U}\) to denote concatenations of state and input. We suppose piecewise-continuous dynamics, where
\[\begin{split}\mathbf{x}_{t,h+1}(\theta)&=g_{t,h,k_{t,h}(\mathbf{v}_{t,h}(\theta))}(\mathbf{v}_{t,h}(\theta))+\boldsymbol{\eta}_{t,h},\\ \mathbf{u}_{t,h}(\theta)&=\bar{\mathbf{u}}_{t,h}+\boldsymbol{\xi}_{t,h},\qquad\mathbf{v}_{t,h}(\theta)=(\mathbf{x}_{t,h}(\theta),\mathbf{u}_{t,h}(\theta)),\\ k_{t,h}(\mathbf{v})&=\operatorname*{arg\,max}_{k\in[K]}\phi_{t,h}(k,\mathbf{v}),\qquad\phi_{t,h}(k,\mathbf{v})=\left\langle\mathbf{w}_{t,h,k},(\mathbf{v},1)\right\rangle.\end{split} \tag{5.1}\]
In words, for each time \(t\), there are length-\(H\) trajectories that evolve according to piecewise continuous dynamics, where each piece (mode) is determined by affine functions of both the previous state and an input. We aim to minimize regret with the loss \(\ell(\theta,z_{t}):=\ell_{t}^{\mathbf{v}}(\mathbf{v}_{t,1:H}(\theta))\), where \(\ell_{t}^{\mathbf{v}}:\mathcal{V}^{H}\to\mathbb{R}\) are \(1\)-Lipschitz functions of both the state and input. We assume that, _for fixed mode sequences_ \(k_{1:h}\in[K]^{h}\), the \(h\in[H]\)-fold compositions of the Lipschitz dynamic maps \(g_{t,h,k_{h}}\circ g_{t,h-1,k_{h-1}}\circ\cdots\circ g_{t,1,k_{1}}\) are \(L\)-Lipschitz as functions of \(\theta\in\mathcal{K}\) in an \(\ell_{1}\to\ell_{1}\) sense (see the appendix for a precise statement). Though \(L\) may be exponential in \(H\) in the worst case, common stability conditions ensure that \(L\) is more reasonably bounded; for further elaboration, see Remark H.1. Finally, in order to incorporate smoothness, let \(\mathcal{F}_{t}\) denote the filtration generated by \((z_{1:t-1},\ell_{t}^{\mathbf{v}},g_{t,1:H,1:K},\mathbf{W}_{t,1:H})\), and for \(h\geq 0\) let \(\mathcal{F}_{t,h}\) denote the filtration generated by \(\mathcal{F}_{t}\) and \(\boldsymbol{\xi}_{t,1:h},\boldsymbol{\eta}_{t,1:h},\mathbf{x}_{t,1}\); we suppose that the tuple \((\boldsymbol{\xi}_{t,h},\boldsymbol{\eta}_{t,h})\) of dynamics and input noise, conditioned on \(\mathcal{F}_{t,h-1}\), is \(\sigma_{\mathrm{dir}}\)-directionally smooth.
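The rollout in (5.1) is straightforward to simulate; below is a minimal sketch evaluating a single plan \(\theta=\bar{\mathbf{u}}_{1:H}\) against one context tuple, with `g`, `W`, the noise arrays, and `loss_v` standing in for the components of \(z_{t}\) (the interfaces are our assumption).

```
import numpy as np

# Sketch of one rollout of the piecewise dynamics (5.1) under a plan u_bar.
# g[h][k] : mode-k dynamics map at step h;  W[h] : rows w_{h,k} of affine gates;
# eta[h], xi[h] : dynamics and input noise;  loss_v : the loss ell_t^v on v_{1:H}.
def rollout_loss(u_bar, x1, g, W, eta, xi, loss_v):
    x, vs = x1, []
    for h in range(len(u_bar)):
        u = u_bar[h] + xi[h]                          # u_{t,h} = u_bar_{t,h} + xi_{t,h}
        v = np.concatenate([x, u])                    # v = (x, u)
        k = int(np.argmax(W[h] @ np.append(v, 1.0)))  # active mode via affine gates
        x = g[h][k](v) + eta[h]                       # x_{t,h+1} = g_{t,h,k}(v) + eta_{t,h}
        vs.append(v)
    return loss_v(vs)
```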
While the restriction to open-loop plans may seem limiting, we note that the flexibility in our definition of the \(g_{t,h,k}\) allows us to incorporate a wide variety of state-dependent policies with minimal modification. For example, our framework includes the popular setting of linear controls, where the learner plays an affine function mapping the state to an input; by letting \(g_{t,h,k}\) be multilinear in the input matrix and the state and letting the loss be quadratic, both of which remain Lipschitz due to our boundedness assumptions, we naturally recover a piecewise generalization of the well-known Linear Quadratic Regulator (LQR). Our main result is the following.
**Theorem 4**.: _Suppose that we are in the situation described by (5.1), with \((\boldsymbol{\eta}_{t,h},\boldsymbol{\xi}_{t,h})\,|\,\mathcal{F}_{t,h-1}\) being \(\sigma_{\mathrm{dir}}\)-directionally smooth, \(\sup_{\mathbf{v}\in\mathcal{V}}||\mathbf{v}||_{1}\leq D\), the \(\ell_{t}^{\mathbf{v}}\) Lipschitz and bounded, and the \(g_{t,h,k}\) satisfying technical continuity assumptions found in Theorem 10. If there is some margin parameter \(\gamma>0\) such that for all \(t\in[T]\) and \(h\in[H]\) it holds that \(\min_{k\neq k^{\prime}\in[K]}\big{|}\big{|}\mathbf{w}_{t,h,k}-\mathbf{w}_{t,h,k^{\prime}}\big{|}\big{|}\geq\gamma\) and the planner plays \(\bar{\mathbf{u}}_{t,h}\) according to Algorithm 2, then the oracle complexity of achieving average regret \(\epsilon\) is at most \(\widetilde{\mathcal{O}}\left((dH^{5}K^{4}(DL/(\gamma\sigma_{\mathrm{dir}}))^{2})^{\frac{1}{3}}\epsilon^{-2}\right)\)._
The proof, elaboration of assumptions, and the extension to polynomial decision boundaries are given in Appendix H. The proof follows the template of the previous section; to handle the multi-step setup, we argue that, when \(\theta,\theta^{\prime}\in\mathcal{K}\) are sufficiently close, smooth dynamical noise ensures that the sequences of modes \(k_{t,h}(\mathbf{v}_{t,h}(\theta))\) and \(k_{t,h}(\mathbf{v}_{t,h}(\theta^{\prime}))\) coincide for all \(h\in[H]\) with high probability; this requires a telescoping argument similar in spirit to the performance-difference lemma in reinforcement learning (Kakade, 2003).
## Acknowledgments
AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374. We also acknowledge support from ONR under grant N00014-20-1-2336, DOE under grant DE-SC0022199, and NSF through award DMS-2031883. MS acknowledges support from Amazon.com Services LLC grant; PO 2D-06310236. We also acknowledge Russ Tedrake, Terry H.J. Suh, and Tao Pang for their helpful comments.
# Global assessment of university research comprehensiveness

Saulo Mendes
###### Abstract
The demand for global university league tables has been high over the past two decades. However, significant criticism of their methodologies is accumulating without being addressed. One important bias regards the unequal distribution of research output and impact across different subjects, which in turn favors institutions that have their strongest performance in medical and physical sciences. I revisit global university league tables by normalizing each field so as to create a uniform distribution of value. Then, the overall performance of an institution is interpreted as the probability of having a high score in any given academic field. I focus on the similarity of institutions across ten criteria related to academic performance in eighty subjects of all fields of knowledge. The latter does not induce a _zero-sum game_, removing one of the most prominent negative features of established league tables. The present assessment shows that the main difference between hundreds of leading global research universities is whether their coverage of all areas of human knowledge is comprehensive or specialized, as their mean performance per subject is nearly indistinguishable. I compare the results with the main league tables and find excellent agreement, suggesting that regardless of their methodologies, research-intensive institutions perform well in rankings if they are comprehensive. This comprehensiveness is ultimately dependent on institutional age, privileged funding allocation and regional academic culture. Consequently, when the size of an institution is taken out of the picture, I find no correlation between comprehensiveness and quality, and no difference can be found in the mean quality of institutions regionally or globally. Furthermore, I find the reputation and prestige of several famous institutions to far exceed their performance within the present methodology, while numerous institutions with less reputation and visibility perform better than expected.
Keywords: University Ranking · Comprehensiveness · Specialization · Research
## 1 Introduction
The modern university arose from the competition between the Jesuit, Oxbridge1 and German university frameworks (Clark, 2008; Menand et al., 2017), which significantly differed from each other. While the Oxbridge model had no faculties, mostly focused on teaching future fellows and was administered by the heads of the colleges, the German model focused on faculties of different academic subjects, had no colleges and was primarily focused on research (Kerr, 2001; Clark, 2008). Confronted by the far more impactful German university model, even the famous Oxbridge and Paris academies adopted this system (Clark, 2008). During the second half of the nineteenth century, the American model combined the Oxbridge and German systems to create the "comprehensive" university (Kerr, 2001; Zemliakova, 2018). Unlike the classical European universities, the American comprehensive university did not reject technical subjects such as engineering, agriculture or commerce from its core curriculum. Furthermore, it combined the two systems in different ways: undergraduate studies at American universities followed the Oxbridge system while graduate studies followed the German research university model (Veysey, 1965; Crow and Dabars, 2015). It was therefore the American university that first combined nearly all subjects of human knowledge into a single campus, initially embodied in the prominent examples of _Harvard_, _Johns Hopkins_ and _Cornell_
universities (Crow and Dabars, 2015). Since then, these institutions outperformed smaller counterparts in drawing visibility due to hallmark discoveries in science, engineering, medicine or economics, so that it has become natural to deem the best universities those with strong research output and impact. The association between university performance and research impact is deeply rooted, to such an extent that global rankings are built to measure institutions against this model (Marginson and van der Wende, 2007). Indeed, even teaching in graduate and undergraduate classrooms is ultimately related to an institution's research in the form of the "delivery of research-led teaching" (Taylor, 2006). Therefore, it is only logical to measure university performance by research metrics, given their public availability and transparency. In fact, a comparative analysis among established university league tables shows that although criteria vary significantly, research metrics appear in all of them and have the most prominent role among all indicators (Buela-Casal et al., 2007). These core research criteria make different league tables and methodologies thereof provide similar results (Aguillo et al., 2010).
The evaluation of the performance and prestige of universities can be traced back to the 1798 _Der Universitats-Bereiser_ report to King Wilhelm II of Prussia (Gedike, 2018), regarded as one of the first forms of regional university ranking (Menand et al., 2017). In the twentieth century, international institutional ranking did not exist in the present form, but the prestige of institutions was largely determined by awards, most prominently the Nobel prize (Zuckerman, 1967; Inhaber and Przednowek, 1976; Zuckerman, 1977, 1978). On the other hand, the lack of a central education regulator led American institutions to compete among themselves for human and financial resources (Hagstrom, 1971), ultimately leading to a growing need for national rankings for both undergraduate and graduate studies, especially in the post-war era (Babcock, 1911; Cartter, 1966; Webster, 1981, 1986; Wilbers and Brankovic, 2021). However, due to the exponential growth of universities2, the fast globalization of higher education and the demand for evaluation of cross-national tertiary educational systems (Larsen et al., 2002; Merisiotis, 2002; Schofer and Meyer, 2005; Adina-Petruta, 2015), the modern concept of global university ranking arose in 2003 with the _Shanghai Ranking_ (Shanghai, 2022) in an attempt by China to compare its institutions to the leading American and European counterparts. Especially because of the effect of university research on economic growth (Anselin et al., 1997; Mokyr, 2011; Cantoni and Yuchtman, 2014; Valero and Van Reenen, 2019; Agasisti and Bertoletti, 2020), league tables have found a significant impact in geopolitics and policymaking (Hazelkorn, 2008), and the competition stemming from them plays a major role in further enhancing the discourse on excellence of human development of nation states (Brankovic et al., 2018). Furthermore, rankings can influence the reshaping of national educational systems (Marginson and van der Wende, 2007; Hazelkorn, 2015). Most prominently, the European tradition of favoring small specialized institutions has recently been challenged in view of the comprehensive university model, so that mergers of leading European institutions have been carried out (Docampo et al., 2015; Ripoll-Soler and de Miguel-Molina, 2019). In addition, university rankings are also relevant for the decision-making of students regarding the evaluation of the cost-benefit of choosing a particular institution (Ehrenberg, 2000). Therefore, assessments of university performance are indisputably essential for many economic and social spheres, both at the national and international level.
Footnote 2: Buringh and van Zanden (2009) estimate the existence of a dozen universities in medieval Europe. By the time of the industrial revolution the count did not exceed a hundred universities, arguably too small a number of institutions across many nations to prompt a need for rankings. In comparison, current worldwide estimates are placed around 30,000 universities (Adina-Petruta, 2015).
Despite the relevance of university performance assessments, significant issues in the methodology of established league tables as well as in the benchmarking of institutional quality have surfaced (Bowden, 2000; Clarke, 2002; Dill and Soo, 2005; Fauzi et al., 2020). For instance, Ehrenberg (2003) discussed the methodology of American college rankings, finding its main criteria to be based on reputation, selectivity of the institutions, retention and graduation rates, and even alumni donations. Although this ranking still exists nationally, it is not surprising that these factors are considered to be of the least importance when ranking global universities. Although research-based rankings are more suitable than survey-oriented ones for creating a global league table (Taylor and Braddock, 2007), global league tables relying on research-based metrics as provided by the _Shanghai Ranking_ also contain issues and biases. For instance, although recent versions of the _Shanghai Ranking_ have added awards and prizes in several other fields, their subject rankings do not include many social sciences as well as art and humanities subjects.
Building on the early criticisms of the _Shanghai Ranking_, alternative league tables with new methodologies sprung up in the same decade. Mostly as a mix of national rankings in the USA and research-led global ranking, _Quacquarelli Symonds (QS)_ and _Times Higher Education Ranking (THE)3_ also relied on highly subjective parameters obtained from academic surveys, undergraduate employability, the ratio of international students as well as class size. A major issue with employability indicators is that they are only applicable on a national or regional scale. Employers from South Africa cannot compare their national institutions to those in New Zealand and so forth, as they do not compete together. Therefore, in an international ranking, employability will further skew scores towards institutions that have a disproportionately favourable view from
national employers as opposed to universities within countries where employability evaluation is uniform among the best institutions. In addition, employability favours schools that either specialize or score very high in engineering and decision sciences, such as law, economics or finance. Furthermore, it is well-known that the non-academic employability of pure and theoretical physical sciences, humanities and social sciences is very low (Garrauste and Rodrigues, 2014). As already argued in Ehrenberg (2003) and Clarke (2002), these indicators are very problematic, as they are either measuring performance of previous decades or rooted in privilege and reputation that may not actually reflect the current state of an institution. Naturally, these survey-based indicators have been found to lead to data fabrication, markedly exhibited in the recent scandals involving _Temple Business School4_ and _Columbia University5_, in addition to being found to have significant conflicts of interest (Chirikov, 2021).
Footnote 4: See “Former Temple Business-School Dean Gets Prison Term in Rankings Scandal” at the _Wall Street Journal_.
Footnote 5: See the article “Columbia is the latest university caught in a rankings scandal” at _The Economist_.
Yet a third group of global university rankings arose6: the purely bibliometric rankings such as the _NTU Ranking_, _Leiden Ranking_, and _US News & Report Global University Rankings_, among others. The main feature of these rankings is the aversion towards surveys, reputation and other non-research metrics, and almost all indicators are related to bibliometrics. Indeed, Chen and Liao (2012) showed that the _Shanghai Ranking_ correlated well with bibliometric-based rankings, while survey-based ones showed the least overlap. Not only is the overlap less significant when comparing research-based with survey- and reputation-based rankings, the latter tend to be prone to lack of transparency, gaming of data or total fabrication that cannot be assessed by independent examiners. On the other hand, research-based metrics tend to be transparent and reproducible (Docampo, 2013). Nevertheless, purely research-based metrics have also shown some setbacks: it is well-known that established university league tables favor universities that have a very high research output in natural and medical sciences, as these fields claim the journals with the highest impact factor and the highest number of published articles. Indeed, a _Matthew effect_ (Biglu, 2008) was found in the distribution of research output across different fields, with the fields of highest impact claiming the highest growth in impact factor (Althouse et al., 2009). Hence, there is little hope that social sciences, humanities and engineering will ever reach the sheer number of citations and published articles of natural and medical sciences (Hamilton, 1991). In fact, institutions can have their rank uplifted if specific scientific fields have a higher weight in the computation of the ranking, either by active methodological choice or by unconscious bias towards fields that provide higher metrics. In this context, institutions whose strongest departments are in the social sciences and humanities are severely underestimated. Conversely, having strong medical and science departments is enough to maintain or attract prestige. Therefore, a fair measurement of university rankings requires a proper subject normalization.
Footnote 6: Among those not listed, webometrics is one of the earliest. However, it is mostly based on internet transparency, not research or similar metrics. The league tables are found at NTU Ranking, US News & Report Global University Rankings, Leiden Ranking, SCImago Ranking.
Despite the long list of issues present in most league tables, a concrete comparison between a large number of universities can be made based on research impact and comprehensiveness. However, as shown in Figure 1, some subjects have a much higher research output (articles, reviews, etc.) than others. Moreover, they also vary wildly in citation metrics and journal impact factor (Althouse et al., 2009). Overall, bibliometric practices strongly vary among different subjects (Mood et al., 1985), making any global evaluation based on weighted metrics biased and unrealistic. Although articles have steadily increased their number of references over the last decades, their growth in STEM fields is bigger than in social sciences and humanities (Biglu, 2008; Dai et al., 2021), and the variability across fields further widens the unevenness of their bibliometrics (Seglen, 1992). Hence, league tables based on total weighted research are inevitably favorable to the leading institutions in medical, chemical and physical sciences.

Figure 1: List of academic subjects (see Table 2) ranked by research output (logarithm of the number of indexed documents in _Web of Science_ in the period 2011-2020) of their leading institutions. The leader in the subject of medicine (_Harvard University_) published 139,000 documents in this period, while the leader in physics (_Universite de Paris-Saclay_) published 6,000 documents and the leader in chemistry (_University of the Chinese Academy of Sciences_) published over 17,000 documents.
Table 1 displays a much less discussed feature of university league tables: the _non-extensive_ character that creates an enormous disparity between excellence in academic subjects and overall excellence. Although _UC Berkeley_ is clearly a peer of _Harvard_, _Stanford_ and _MIT_ by the measure of the number of subjects they excel in, the _QS_ overall ranking seems to contradict its own subject evaluation and puts _UC Berkeley_ far below its peers in the overall analysis. Furthermore, the _Sorbonne Universite_ seems to be at the same overall rank as the _Ecole Polytechnique_ even though the former excels in dozens of subjects and the latter in only a few. Strikingly, the two overall rankings suggest that _Caltech_ is far ahead of _Sorbonne_, while the latter actually has subject performance similar to _Caltech_'s. One cannot help but perceive this phenomenon as paradoxical to the claim of academic excellence. Other league tables such as the _Shanghai Ranking_ feature the same problem. The methodology of established rankings creates this paradox by selecting _intensive_ criteria that favour a few subjects, whether present in bibliometric or reputation criteria. To remove this conflicting issue, in this work I analyze comprehensiveness and evaluate the overall research impact of an institution as _extensive_ (henceforth in the paper I take it to mean an additive property), i.e. the ideal (or maximal) research-intensive university is axiomatically the one which excels in all subjects.
In addressing all these points, the present study creates a method to rank global institutions limited to research-based indicators extracted from publicly available data of _Web of Science_. In the spirit of Kosztyan et al. (2019), I replace unidimensional rankings by well-posed bidimensional ones: a collection of ranked leagues within the otherwise full unidimensional ranking. However, the present model defines these groups by the similarity of the institutions across ten indicators derived from each of the eighty subjects. In doing so, I address the well-known pathology of _zero-sum games_ (Lee et al., 2020) and avoid the practice of gaming or tweaking of indicators. This new method of normalized and extensive analysis, where the total score is a reflection of how many subjects an institution excels in, shows that institutions are overvalued due to either their financial privileges, reputation, prestigious awards or disproportionately high performance in subjects with the highest bibliometric impact. Remarkably, the mean ranking among five leading league tables is shown to agree well with the present model, which confirms the tangible existence of the stratification of academic excellence.
## 2 Extensive World Rankings: Subject Comprehensiveness
In view of the several caveats emerging in the process of classifying universities, I propose to amend these issues by measuring the groupings or similarity of institutions. I separate institutions into groups of similarity rather than a continuous ranking. Below I describe the step-by-step methodology (a compact numerical sketch of the scoring pipeline is given after the list):
* Delineate subareas for each of the eighty subjects according to the _Web of Science_ classification over the period 2011-2020.
\begin{table}
\begin{tabular}{l|r r r r r|l} \hline \hline
_Institution_ & _QS Rank 2020_ & _\#Top50 QS_ & _CWUR 2017_ & _\#Top10 CWUR_ & _QS+CWUR_ & _Country_ \\ \hline
Harvard & **3** & 35 & **1** & 112 & _147_ & United States \\
UC Berkeley & **28** & 38 & **7** & 50 & _88_ & United States \\
Stanford & **2** & 38 & **2** & 48 & _86_ & United States \\
MIT & **1** & 29 & **3** & 41 & _70_ & United States \\
Princeton & **13** & 26 & **9** & 9 & _35_ & United States \\
Caltech & **5** & 12 & **11** & 8 & _20_ & United States \\ \hline
Oxford & **4** & 38 & **5** & 47 & _85_ & United Kingdom \\
Cambridge & **7** & 39 & **4** & 38 & _87_ & United Kingdom \\
UCL & **8** & 33 & **31** & 37 & _70_ & United Kingdom \\ \hline
Sorbonne & **77** & 9 & **56** & 17 & _26_ & France \\
Ecole Poly. & **60** & 3 & **35** & 0 & _3_ & France \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison between world ranking and the number of subjects in which universities are world leaders from _QS Ranking 2020_ and _CWUR Ranking 2017_. The former measured university performance in all areas of human knowledge subdivided into 46 subjects, while the latter had 227 subjects due to the inclusion of subareas of all natural and medical sciences.
* Separate a measure of research impact applicable to each subject. The _Leiden Rank_ parameter \(PP_{10\%}\), henceforth denoted by \(\mathcal{L}\) and normalized to account for subject differences, measures the percentage of an institution's output at the top 10% of the most cited papers and is the most suitable measure of research impact due to its weak dependence on time, as opposed to average citations (Sangwal, 2011; Finardi, 2014) or the Hirsch h-index7(Egghe, 2007; Mannella and Rossi, 2013). The fractional counting is applied to remove the effect of large collaborations in medical and physical sciences. This parameter supersedes average citations as it weakens the citation disparity among subjects. Footnote 7: Note that the time-dependence of the h-index was already discussed in Hirsch (2005), where the author proposed to normalize the h-index by one’s net academic age.
* For multi-campus universities I only measure the performance of the main campus. This is due to the fact that leading US institutions have several campuses that are not computed altogether. For instance, were I to compute all campuses, the _University of California_ output would far exceed that of the peers of its flagship campus (_UC Berkeley_) by an order of magnitude. Therefore, I define the parameter \(\rho\) as the percentage of the research output carried out by the main campus of an institution, with \(\rho\approx 1\) in the vast majority of cases.
* For each \(i\)-th subject with \(1\leqslant i\leqslant 80\), I compute the total research output between 2011-2020 as the sum of all citable documents (articles, reviews, conference papers, etc.). I henceforth denote this number by \(\mathcal{O}_{i}\). The arbitrary chronological \(i\)-order is given in Table 2. Each subject can contain at most 200 ranked institutions, thus listing only the world's leading 1% of universities in a given subject.
* For each \(i\)-th subject I compute the maximum output and denote it with \(\mathbb{E}[\mathcal{O}_{i}]\).
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline
1 & Physics & 2 & Applied Physics & 3 & Eng. Physics & 4 & Astronomy & 5 & Mathematics \\
6 & Statistics & 7 & Geophysics & 8 & Meteorology & 9 & Phys. Oceanography & 10 & Climate Science \\
11 & Geology & 12 & Geochemistry & 13 & Chem. Oceanography & 14 & Chemistry & 15 & Chemical Eng. \\
16 & Energy Eng. & 17 & Petroleum Eng. & 18 & Material Science & 19 & Metallurgical Eng. & 20 & Mineral Eng. \\ \hline
21 & Civil Eng. & 22 & Ecological Eng. & 23 & Transport Eng. & 24 & Environmental Eng. & 25 & Hydraulic Eng. \\
26 & Coastal Eng. & 27 & Architecture & 28 & Urban Planning & 29 & Geodesy & 30 & Mechanical Eng. \\
31 & Naval Eng. & 32 & Aerospace Eng. & 33 & Industrial Eng. & 34 & Acoustic Eng. & 35 & Electrical Eng. \\
36 & Computer Sci. & 37 & Computer Eng. & 38 & Telecom. Eng. & 39 & Mechatronical Eng. & 40 & Nanoscience \\ \hline
41 & Medicine & 42 & Nursing & 43 & Neurosci. \& Psych. & 44 & Public Health & 45 & Sport \& Nutrition \\
46 & Phys. Therapy & 47 & Dentistry & 48 & Pharmacy & 49 & Biomedical Eng. & 50 & Molecular Biology \\
51 & Biochemistry & 52 & Biophysics & 53 & Biology & 54 & Ecology & 55 & Biol. Geosciences \\
56 & Agriculture & 57 & Food Eng. & 58 & Biotechnology & 59 & Vet. Medicine & 60 & Plant Science \\ \hline
61 & Economics & 62 & Finance & 63 & Business & 64 & Public Policy & 65 & Law \\
66 & Politics & 67 & Sociology & 68 & Anthropology & 69 & Education & 70 & Journalism \\
71 & Media Studies & 72 & Management & 73 & Archaeology & 74 & History & 75 & Geography \\
76 & Philosophy & 77 & Theology & 78 & Languages & 79 & Music & 80 & Arts \\ \hline
\end{tabular}
\end{table}
Table 2: The partition of eighty academic subjects for the evaluation of university performance across the fields of pure and applied physical, earth & chemical sciences (1-20), engineering sciences (21-40), biological & life sciences (41-60) and social sciences & humanities (61-80).
Figure 2: Distribution of subjects according to the highest research output, in \(\log_{10}\) scale, of the best institution for each subject, as ordered in Table 2.
* The \(i\)-th subject has a score \(0\leqslant\Lambda_{ik}\leqslant 1000\) for the \(k\)-th institution, computed with normalization \(A_{i}\): \[\Lambda_{ik}:=A_{i}\sqrt{\rho\mathcal{L}\cdot\frac{\mathcal{O}_{ik}}{\mathbb{E}[\mathcal{O}_{i}]}}\quad\therefore\quad\Lambda_{ik}\left(\mathbb{E}[\mathcal{O}_{i}]\right)\equiv 1000\quad.\] (1)
* The major variable \(0\leqslant X\leqslant 80,000\) measures the total score of all covered subjects (appearing among the top 200 institutions) by an institution, removing the output bias of each subject: \[X:=\sum_{i=1}^{80}A_{i}\sqrt{\rho\mathcal{L}\cdot\frac{\mathcal{O}_{i}}{\mathbb{E}[\mathcal{O}_{i}]}}\quad.\] (2)
* The major variable \(0\leqslant Y\leqslant 1000\) measures the total score of all covered subjects by an institution regardless of its subject rank, though keeping the output bias of each subject: \[Y:=A\sqrt{\rho\mathcal{L}\cdot\left(\sum_{i=1}^{80}\mathcal{O}_{i}\right)^{1/2}}\quad.\] (3) The root of the output appears because the weighted sum of all subject outputs has only one normalization constant \(A\) and the exponential decay in output is much faster than for \(X\).
* The variable \(0\leqslant\overline{\Lambda_{X}}:=X/\mathcal{N}_{200}\leqslant 1000\) computes the arithmetic mean among all subjects, where \(\mathcal{N}_{j}\) measures the number of subjects \(i\) in which a given institution appears among the top \(j\) universities. This parameter is the closest measure of institutional research "quality".
* The variable \(0\leqslant\overline{\Lambda_{Y}}:=(X+80Y)/(\mathcal{N}_{200}+80)\leqslant 1000\) computes the weighted mean among all subjects regardless of its subject rank.
* The variable \(0\leqslant\overline{\Lambda_{\mathcal{N}}}:=B\cdot\overline{\Lambda_{Y}}\tanh\left[(1+\mathcal{N}_{200})/30\right]\leqslant 1000\) adjusts the mean \(\overline{\Lambda_{Y}}\) to the number of subjects an institution excels at, where \(B\approx 1\) is a normalization constant. It differentiates institutions possessing the same mean \(\overline{\Lambda_{Y}}\) but that are indeed far apart in the measure of comprehensiveness.
* The variable \(0\leqslant\mathcal{C}:=1000(\mathcal{N}_{200}+\mathcal{N}_{100}+\mathcal{N}_{ 30}+\mathcal{N}_{1}+1)/(3\mathcal{N}_{200}+81)\leqslant 1000\) measures the degree of comprehensiveness concurrent with research impact of an institution.
* The auxiliary variable \(\varphi\) measures the mean score per research faculty of an institution: \[\varphi:=\frac{80\overline{\Lambda_{Y}}}{2000\tanh\left(M_{F}/2500\right)}\quad,\] (4) where \(M_{F}\) is the total number of permanent faculty. Note that if \(M_{F}\ll 10,000\) the effective number of research-intensive faculty converges to \((4/5)M_{F}\). This asymptotic limit accounts for the known fact that teaching stream positions (i.e. not involved in research and restricted to teaching duties) in research-intensive institutions do not exceed \(20\%\) of the total full-time faculty count8. In order to focus on research metrics while not upholding any bias towards socialized higher educational systems (which have an inflated number of teaching-only faculty), the above formula removes teaching stream instructors from the total number of full-time faculty. Footnote 8: Excluding medical schools, research-intensive institutions in the United States tend to restrict teaching-only faculty positions in the non-tenure track. Table 3.1 of Ehrenberg and Zhang (2005) shows that by 1999 about 15% of full-time faculty were of non-tenure track. However, this number has slowly increased in the years since (Curtis, 2014). As a remark, the research-oriented faculty does not consist of research-only duties. On the contrary, their teaching and administrative load is typically higher than of research (Schuster and Finkelstein, 2006).
* The auxiliary variable \(\varphi\) transforms the size-dependent variables \(\mathcal{V}=(X,Y,\overline{\Lambda_{\mathcal{N}}},\mathcal{C})\) into size-independent ones \(\mathcal{V}^{*}=(X^{*},Y^{*},\overline{\Lambda_{\mathcal{N}}^{*}},\mathcal{C}^ {*})\) through the change of variables \(\mathcal{V}^{*}=\mathcal{V}\cdot\sqrt{\varphi/40}\). The square root and normalization \(1/40\) appear to maintain the size-independent variables in the same range [0,1000]. Note that an institution obtaining \(\Lambda_{1}=\Lambda_{2}=\cdots=\Lambda_{i}=\cdots=\Lambda_{80}=1000\), which currently does not exist, would require a very large \(M_{F}\), and therefore \(\lim_{M_{F}\to\infty}\varphi(\Lambda_{i}=1000\,,\,\forall i)=40\).
* I combine five faculty size-independent variables \(\mathcal{V}^{*}=(X^{*},Y^{*},\overline{\Lambda_{X}},\overline{\Lambda_{\mathcal{ N}}^{*}},\mathcal{C}^{*})\) and five faculty size-dependent variables \(\mathcal{V}=(X,Y,\overline{\Lambda_{Y}},\overline{\Lambda_{\mathcal{N}}}, \mathcal{C})\) to create the set \((\mathcal{V},\mathcal{V}^{*})\) defining groupings (_peer similarity_). For each variable, I define a minimum threshold for an institution to belong to a group \(G_{n+\frac{p}{2}}\). Fractional groups exist as an amendment for institutions that do not obey the minimum criteria to belong to a group \(G_{p+1}\) but that are not peer similar to any institution in the next integer group \(G_{p+2}\). To make this groupings analysis robust, I oscillate the thresholds by \(\pm 5\%\) and \(\pm 10\%\). Therefore, each institution will be assigned a groupings number \(G_{n+\frac{p}{2}}\) for each variable of the set \((\mathcal{V}_{\pm},\mathcal{V}_{\pm}^{*})\). The thresholds without oscillations are shown in Table 3.
* I compute the mean groupiness \(\langle G\rangle\) among all fifty values of the set \((\mathcal{V}_{\pm},\mathcal{V}_{\pm}^{*})\). The original five major variables were expanded for robustness, enlarged twofold to account for faculty size and then fivefold for threshold flexibility. Let \(n,p\in\mathbb{N}\) with \(n>0\) and \(p\) odd; an institution \(U\) belongs to a group \(G_{n+\frac{p}{2}}\) provided that: \[U\in G_{n+\frac{p}{2}}\iff\langle G\rangle\leqslant\Big{(}n+\frac{p}{2}+1 \Big{)}\quad\text{and}\quad\frac{\sum_{n,p}(G_{n+\frac{p}{2}}+G_{n+\frac{p}{2}- 1})}{\sum_{n,p}G_{\forall n,p}}\geqslant 0.5\;.\] (5) On the other hand, an institution \(U\) will belong to neither group \(G_{n+\frac{p}{2}}\) nor \(G_{n+\frac{p}{2}+1}\), falling instead in the intermediate group, as long as \[U\in G_{n+\frac{p+1}{2}}\iff\langle G\rangle\leqslant\Big{(}n+\frac{p}{2}+ \frac{3}{2}\Big{)}\quad\text{and}\quad\frac{\sum_{n,p}(G_{n+\frac{p}{2}}+G_{n+ \frac{p}{2}+1})}{\sum_{n,p}G_{\forall n,p}}\geqslant 0.95\;.\] (6)
* The _peer similarity_ creates a league table with stratification of performance among different groups, but within a group all members are _peers_ with little deviation in performance among themselves. For the sake of comparison with established league tables, I will differentiate between _peers_ by the mean groupiness \(\langle G\rangle\), leading to a continuous _peer ranking_\(\mathcal{R}_{p}\) (see Table 4).
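To make the pipeline above concrete, the following Python sketch runs eqs. (1)-(4) end to end on synthetic inputs. All numerical values (\(\rho\), the Leiden parameters, the outputs, the faculty count \(M_{F}\)) are invented placeholders, \(Y\) is replaced by a crude stand-in rather than the full normalization of eq. (3), and the variable names are ours; this is a toy illustration of the flow of the computation, not a reproduction of the actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 80

# Illustrative placeholders only -- none of these are real bibliometric data:
rho = 0.9                             # main-campus contribution, assumed
L   = rng.uniform(1.0, 3.0, n_sub)    # intensive "Leiden" parameter per subject
O   = rng.lognormal(3.0, 1.0, n_sub)  # extensive output per subject
EO  = float(np.exp(3.5))              # stand-in for the reference output E[O_i]

# Eq. (1): fix A_i by the condition Lambda_i(E[O_i]) = 1000,
# i.e. A_i = 1000 / sqrt(rho * L_i); scores are then capped at 1000.
A   = 1000.0 / np.sqrt(rho * L)
Lam = np.minimum(A * np.sqrt(rho * L * O / EO), 1000.0)

# Eq. (2): de-biased total score X, summed over the covered subjects.
X = Lam.sum()

# Comprehensiveness and the subject-score means of the list above.
N200 = int((Lam > 0).sum())           # subjects placed among the top 200
LamX = X / N200                       # arithmetic mean, the "quality" proxy
Y    = 0.5 * LamX                     # crude stand-in for the eq. (3) score
LamY = (X + 80 * Y) / (N200 + 80)
LamN = LamY * np.tanh((1 + N200) / 30)    # comprehensiveness-adjusted mean (B ~ 1)

# Eq. (4): mean score per research faculty, with M_F assumed.
MF  = 1800
phi = 80 * LamY / (2000 * np.tanh(MF / 2500))
print(f"X={X:.0f}  LamX={LamX:.0f}  LamN={LamN:.0f}  phi={phi:.2f}")
```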
### Results
I have thus far assigned a methodology that is solely focused on subject rankings through the variables \((\Lambda_{i},\mathcal{N}_{j})\) and the faculty density correction \(\varphi\). As a result, the overall evaluation of all institutions is simply composed by the extensive analysis of all subjects. More specifically, intensive (the _Leiden_ parameter \(\mathcal{L}\)) and extensive (the output \(\mathcal{O}\)) parameters were assigned to every subject for all institutions, each subject carrying the same maximum score to remove bibliometric disparity among different subjects. Because the overall rank is obtained from the _peer similarity_ defined in eqs. (5-6), the final _peer ranking_\(\mathcal{R}_{p}\) does not depend on how the intensive and extensive measures are computed. In fact, one could use the \(h\)-index as an alternative to both \((\mathcal{L},\mathcal{O})\) without significant deviations in the final result. However, in addition to being time-dependent (Mannella and Rossi, 2013), it is computationally burdensome to find the main campus contribution \(\rho\in[0,1]\) with the \(h\)-index. The core criterion that delineates the final groups of similarity, and ultimately the _peer ranking_, is the stratification of classes defined in Table 3, which is clearly seen in Figure 3.
The main result is observed in Figures 4(a) and 5: the mean subject score \(\overline{\Lambda_{X}}\) curve shows a clear stabilization in the tail (\(\mathcal{R}_{p}\gtrsim 200\)), establishing that almost all research-intensive universities attain the same _quality_, i.e. the same average research impact in a number of academic subjects. The exception to such region of stability seems to be restricted to the first 100 ranked institutions. Conversely, Figure 4(b) demonstrates that the remaining variables are strongly correlated with the _peer ranking_, without considerable
Figure 3: Color stratification of group membership ordered by the _peer rank_ for \(\mathcal{R}_{p}\leqslant 426\). Each color within the mosaic follows the thresholds of Table 3. Color bands with bold boundary lines at the bottom depict the groups in Table 4 ranging from \(A\)++ to \(B\)\(-\).
\begin{table}
\begin{tabular}{c|c c c c c c c c c c} \hline \hline _Group/Color_ & \(X\) & \(X^{*}\) & \(Y\) & \(Y^{*}\) & \(\overline{\Lambda_{X}}\) & \(\overline{\Lambda_{Y}}\) & \(\overline{\Lambda_{\mathcal{N}}}\) & \(\overline{\Lambda_{\mathcal{N}}^{*}}\) & \(\mathcal{C}\) & \(\mathcal{C}^{*}\) \\ \hline
1 & 40,000 & 40,000 & 40,000 & 40,000 & 600 & 600 & 550 & 550 & 550 & 550 \\
2 & 30,000 & 30,000 & 30,000 & 30,000 & 500 & 500 & 450 & 450 & 450 & 450 \\
3 & 20,000 & 20,000 & 20,000 & 20,000 & 400 & 400 & 350 & 350 & 350 & 350 \\
4 & 10,000 & 10,000 & 10,000 & 10,000 & 350 & 350 & 250 & 250 & 250 & 250 \\
5 & 6,000 & 6,000 & 6,000 & 6,000 & 300 & 300 & 150 & 150 & 150 & 150 \\
6 & 3,000 & 3,000 & 3,000 & 3,000 & 250 & 250 & 80 & 80 & 80 & 80 \\
7 & 1,000 & 1,000 & 1,000 & 1,000 & 200 & 200 & 20 & 20 & 20 & 20 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Thresholds for the set of variables that define mean groupiness and peer similarity. The remaining group 8 does not require a minimum threshold. Integer groups are assigned with a unique color to portray group stratification as in Figure 3.
deviations among themselves. Interestingly, only a dozen countries appear in the groups of highest impact, representing 40% of all subject rank slots (see Table 4). Furthermore, we note in Figure 6(a) that there is a strong correlation between the number of subjects excelled at and the normalized total subject score. This correlation is weakened when considering the size-dependent mean score per subject in Figure 6(c), while Figure 6(b) demonstrates that no correlation exists with the size-independent mean score. Figure 6(b) also reveals that, excluding the institutions with \(\mathcal{N}_{200}\geqslant 40\), the scatter plot of the mean score is identical to a normal distribution centered at \(\overline{\Lambda_{X}}\approx 350\). This suggests that below the threshold \(\mathcal{N}_{200}=40\) the mean subject score is simply random. Therefore, research-intensive institutions seem to maintain the same _mean quality_ (the reader should understand it as the mean research impact per field, that is to say, the degree of leadership at a given subject), except above a threshold which provides the only feasible definition of a _World-Class_ institution.
Contrary to common belief, Table 5 shows that specialized institutions do not tend to lead in their respective subjects, as they are underrepresented in the group of leading institutions in their areas of expertise. These institutions are considered specialized because they do not cover more than 20 subjects in terms of departmental structure. In spite of their original names, most technical universities and institutes
\begin{table}
\begin{tabular}{l l l|r r r r r r r r} \hline \hline _Class_ & _Grade_ & _Group_ & \(\langle X\rangle\) & \(\langle Y\rangle\) & \(\langle\overline{\Lambda_{X}}\rangle\) & \(\langle\overline{\Lambda_{Y}}\rangle\) & \(\langle\mathcal{C}\rangle\) & \(\langle\mathcal{N}_{200}\rangle\) & \(\sum U\) & _Countries_ \\ \hline
**Elite** & & 1, 1 + 1/2 & 44,300 & 47,900 & 614.4 & 622.5 & 653.8 & 72 & **9** & **3** \\ & & 2 & 31,100 & 39,000 & 522.0 & 504.4 & 537.2 & 63 & **10** & **4** \\
**World-Class** & & 2 + 1/2 & 29,700 & 33,000 & 478.7 & 472.9 & 500.5 & 62 & **22** & **8** \\ & & A- & 3 & 23,300 & 27,300 & 433.4 & 423.4 & 423.8 & 54 & **39** & **13** \\ & & A- & 3 + 1/2 & 17,800 & 24,000 & 391.3 & 388.4 & 348.1 & 45 & **36** & **14** \\
**Continental** & & B- & 4 & 13,200 & 19,800 & 377.7 & 356.6 & 292.6 & 36 & **85** & **26** \\ & & B- & 4 + 1/2 & 8,700 & 16,400 & 366.6 & 332.2 & 234.6 & 24 & **58** & **30** \\ & & B & 5 & 6,300 & 14,900 & 352.3 & 317.1 & 191.9 & 18 & **104** & **32** \\
**National** & & B- & 5 + 1/2 & 4,200 & 11,600 & 361.6 & 274.9 & 146.7 & 12 & **63** & **37** \\ & & B- & 6 & 2,700 & 11,300 & 328.8 & 267.4 & 111.0 & 8 & **103** & **42** \\
**Regional** & & C+ & 6 + 1/2 & 1,400 & 9,400 & 326.4 & 233.5 & 70.6 & 4 & **160** & **47** \\ & & C & 7 & 700 & 7,400 & 302.2 & 187.8 & 42.4 & 2 & **294** & **55** \\
**Local** & & D & 7 + 1/2, 8 & 300 & 5,500 & 261.0 & 155.5 & 24.3 & 1 & **270** & **63** \\ \hline
**All** & & & 4,900 & 11,900 & 304.9 & 250.5 & 127.6 & 13 & **1,253** & **63** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Group averages \(\langle\cdot\rangle\) among different criteria, the count of universities within each group and cumulative distribution of affiliated countries. Geographical distribution of group members allows the estimate of their influence (class).
Figure 4: Characteristic decay of the variables of the _peer similarity_ analysis: (a) mean subject scores and (b) absolute subject scores adjusted to faculty size.
do not limit their scope to technology and natural sciences, and thus cannot be considered specialized institutions. For comparison, major nominal technical institutions include _MIT_ with \(\mathcal{N}_{200}=67\), _ETH Zurich_ with \(\mathcal{N}_{200}=61\), and _Technical Univ. Munich_ with \(\mathcal{N}_{200}=49\), whereas truly specialized institutions are represented by _UC San Francisco_ and the _London School of Economics (LSE)_, both with \(\mathcal{N}_{200}=17\).
Despite the _peer ranking_ being a measure of homogeneity among institutions of each group and the thresholds not being dependent on comprehensiveness (see section 2), Figure 7a conveys a clear mathematical relationship between the _peer ranking_ and comprehensiveness of the kind:
\[\mathcal{N}_{200}\approx 80\exp\left[\frac{1-\mathcal{R}_{p}}{200}\right]\quad \therefore\quad\mathcal{R}_{p}\approx 1+200\ln\left(\frac{80}{\mathcal{N}_{200}} \right)\quad. \tag{7}\]
According to eq. (7), it can be stated with a high degree of confidence that only one strong predictor for the _continuous_ league table emerges, namely the institutional comprehensiveness. Moreover, Figure 7b reveals that the mean groupiness also displays an exponential relationship with the peer ranking. Note that the groupiness requires threshold definitions and robustness thereof (see Table 3), whereas the measure of comprehensiveness \(\mathcal{N}_{200}\) is self-explanatory and independent of group thresholds. Nonetheless, the groupiness and comprehensiveness seem to be related almost exactly by a linear relationship, as shown in Figure 7c.
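As a quick illustration, eq. (7) can be inverted in a couple of lines; the function names below are ours and the sample values merely exercise the formula.

```python
import math

# Eq. (7): empirical relation between comprehensiveness N200 and peer rank R_p.
def peer_rank(n200: float) -> float:
    return 1 + 200 * math.log(80 / n200)

def comprehensiveness(rank: float) -> float:
    return 80 * math.exp((1 - rank) / 200)

print(peer_rank(80))           # -> 1.0: a fully comprehensive institution tops the table
print(round(peer_rank(17)))    # -> 311: e.g. a specialized profile with N200 = 17
print(round(comprehensiveness(200), 1))   # -> 29.6 subjects near the stability threshold
```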
Figure 5: Box-whiskers diagram of the characteristic exponential decay of the mean subject score. This fast decay halts and reaches a region of stability at \(200\lesssim\mathcal{R}_{p}\lesssim 1250\) for the remaining groups. Color stratification is the same as in Table 4.
Figure 6: Dependence on the total number of subjects among the top \(200\) (\(\mathcal{N}_{200}\)) for some key variables: (a) Total normalized subject score (circles) and the faculty size-adjusted equivalent (dots), (b) the size-independent mean subject score and (c) the size-dependent mean subject score.
## 3 The _World-Class_ Debate
Another important topic of debate regards whether it is possible to reach a definition of world-class institutions (Hazelkorn, 2008). Critics claim that strategic national programs created to help their universities reach the ill-defined level of "world-class" institutions overlook the major flaw of the _zero-sum game_ in league tables, and that it would be impossible to have all or many of these institutions defined as world-class (Douglass, 2016). In this section I address this problem. First and foremost, it has been conclusively shown in section 2.1 that the significantly higher mean subject score is what distinguishes world-class institutions from the remaining ones. Secondly, I have also shown this group to be strongly correlated with comprehensiveness of research impact. Therefore, the present analysis has the capability of adding nuance to the debate: a world-class level can be achieved by institutions currently belonging to lower groups without affecting their peers, since the groupiness is strongly correlated with comprehensiveness (see Figure 7(c)). As a result, it is possible to enlarge the number of world-class institutions without affecting the degree of comprehensiveness of the remaining universities. This is possible because of the conservation principle:
\[\int_{0}^{80}f_{s}(\mathcal{N}_{200})\,d\mathcal{N}_{200}=16,000\quad, \tag{8}\]
where \(f_{s}(\mathcal{N}_{200})\) is the dimensional equivalent of a probability density counting the total number of subject ranking slots taken by all institutions with a given \(\mathcal{N}_{200}\). As long as this integral property of the subject rankings holds, any redistribution of academic performance is possible. In fact, Figure 8(a) shows that the vast majority of institutions belong to the lowest groups of Table 4. However, in Figure 8(b) I shift the focus to the histogram of subject ranking slots as a function of comprehensiveness, observing a median value of \(\mathcal{N}_{200}\approx 37\) and thus an equal share of subject rank slots between the bottom half and top half groups. Nonetheless, the histogram is far from being uniformly distributed, displaying two maxima: the first for the lowest groups (\(\mathcal{N}_{200}\leqslant 30\)) and the global maximum at the highest groups with \(72\leqslant\mathcal{N}_{200}\leqslant 74\). Similarly, Figure 8(c) breaks down the same histogram into groupiness, revealing a normal distribution centered at the B++ group.
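A short numerical check makes the conservation principle tangible: whatever the shape of \(f_{s}\) (the bimodal form below is an invented stand-in for the measured histogram), redistributing performance only reshapes the density while eq. (8) pins its integral to 80 subjects \(\times\) 200 slots = 16,000.

```python
import numpy as np

# Toy density over comprehensiveness with two maxima, mimicking the observed
# peaks at low N200 and at 72 <= N200 <= 74 (shape invented for illustration).
N = np.linspace(0.0, 80.0, 1601)
f = np.exp(-0.5 * ((N - 15) / 8) ** 2) + 1.4 * np.exp(-0.5 * ((N - 73) / 4) ** 2)

# Trapezoidal integral, then impose the conservation constraint of eq. (8).
trap = lambda y, x: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))
f *= 16000.0 / trap(f, N)

print(round(trap(f, N)))   # -> 16000, regardless of how the slots are redistributed
```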
However, the landscape of global research universities is changing quickly due to the rise of institutions from developing nations, in particular from Asia (Veugelers, 2013, 2017; Nature-Index, 2021; Conroy and Plackett, 2022). Hence, the current bibliometric trend will likely result in the convergence of the distribution of excellence in academic subjects to a uniform one in a few decades, decreasing the peak between groups B++ and A in Figure 8(c) and reallocating those slots to neighboring lower groups. Therefore, the _peer similarity_ approach removes the _zero-sum game_ issue: although it is not possible for every developed/developing nation to claim 10 universities among the top 100, it is possible that the group of world-class institutions will grow. Consequently, it is possible to answer the questions raised by Altbach and Salmi (2011) and Oliver (2013) on world-class institutions: Firstly, yes, it is possible for most of the major 50 economies to have a few world-class universities. Secondly, the present statistical analysis extracted precise characteristics of this class.
The present methodology has shown that continuously stratified ranking tables are obsolete and should be replaced by similarity of university groups. Likewise, the Carnegie classification (Carnegie, 1976; Kosar and Scott, 2018) puts together universities of different groups of research intensity, but the new model is more precise. Unlike Carnegie (1976), the model is able to differentiate between institutions like _Caltech
Figure 7: Scatter plots between comprehensiveness (\(\mathcal{N}_{200}\)), _peer ranking_ (\(\mathcal{R}_{p}\)) and mean groupiness (\(\langle G\rangle\)).
and _MIT_: while the former excels in 28 out of the 80 subjects of Table 2 and features a mean subject score of \(\overline{\Lambda_{X}}\approx 560\), the latter excels in 67 subjects and reaches \(\overline{\Lambda_{X}}\approx 640\). Hence, the two universities cannot be considered peers and are not in the same tier. The highest Carnegie classification (group), the R1 institutions, joins together almost all groups of Table 4. The main disparity between the Carnegie (1976) classification and the present approach is that the former merges all the research output of a university, so that a disproportionate allocation of research funds and efforts into a few subjects of higher impact (see Figure 2) compensates for the lack of it in the remaining subjects. Hence, in a comparison of universities, such a methodology will not assess the uneven distribution of research output among subjects.
## 4 Comparison with other league tables
Even though the methodologies of the five major university league tables are divergent, their core criterion is research impact. Hence, in this section I shall probe the main results of eq. (7) as well as the mean subject score evolution in Figure 4(a) against the five major university league tables in an attempt to reveal hidden features of their methodologies. First, I analyze how well the curve in Figure 7(a) is reproduced in established league tables. As can be observed in Figure 9(a), the _Shanghai_ ranking reproduces the same curve quite well. Note, however, that differences as compared to the peer ranking are visible: the Shanghai counterpart shows discrete stratification for \(\mathcal{R}_{1}>200\) (see Table 6 for definitions), which is due to their methodological discrete ranking above this threshold. Moreover, the _Shanghai_ ranking also features a few dozen outliers compared to the peer ranking. Likewise, the _US News_ league table follows the same curve of the peer rank, as shown in Figure 9(b). Although the correlation between both _QS_ and _THE_ rankings (see Figures 9(c),(d)) and the number of subjects has a much stronger scatter than the peer ranking, the trend of better ranked institutions to be more comprehensive is still observed. Finally, both curves for the _NTU_ and _Mean_ rankings (see Figures 9(e),(f)) are in good agreement with eq. (7).
Furthermore, Table 6 delineates by how much each institution's rank differs between each league table and the peer rank. I found that, on average, the vast majority of institutions lie in the \(\pm 30\%\) confidence interval of the peer rank. This interval supports the view that institutions cannot be continuously ranked, as the differences between peers are so small that their positions may change depending on each parameter weight in the overall computation. Thus, Figure 9 and Table 6 confirm the validity of eq. (7), and allow us to discern between the established league tables: _Shanghai_ and _US News_ are the most accurate while _QS_ and _THE_ are the least trustworthy. Additionally, I have also checked whether the distribution of the mean score \(\overline{\Lambda_{X}}\) as a function of the peer ranking also applies to other league tables. It could be that other league tables assign such different rank positions compared to \(\mathcal{R}_{p}\) that the distribution of \(\overline{\Lambda_{X}}\) would not saturate. As it turns out, Figure 10 corroborates the main result that the mean score of universities across the eighty academic subjects tends to saturate rather quickly outside the Elite and World-Class groups (see Table 4). Remarkably, even the curve for the _QS_ ranking shows no significant deviation from Figure 4(a), despite the former being the least accurate league table.
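The \(\pm 30\%\) criterion is straightforward to apply; the snippet below uses made-up rank pairs (not values from Table 6) purely to show the check.

```python
# Hypothetical peer ranks R_p versus positions in some league table:
peer  = {"U1": 12, "U2": 140, "U3": 480}
table = {"U1": 15, "U2": 101, "U3": 700}

for u, rp in peer.items():
    dev = (table[u] - rp) / rp
    print(u, f"{100 * dev:+.0f}%", "within" if abs(dev) <= 0.30 else "outside")
# -> U1 +25% within, U2 -28% within, U3 +46% outside the +-30% interval
```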
This assessment of the correlation between the _peer rank_ and these major league tables has shown surprisingly good fidelity of the measure of comprehensiveness. As such, it allows me to draw the conclusion that, regardless of the exact criteria employed, the continuous league position of research-intensive
Figure 8: Histograms and fitting curves (solid) for (a) the frequency of institutions with a given \(\mathcal{N}_{200}\), (b) the total number of slots in subject rankings distributed over \(\mathcal{N}_{200}\) and (c) the mean groupiness \(\langle G\rangle\).
institutions is strongly correlated with academic comprehensiveness. More precisely, the success of institutions in university league tables is strongly influenced by research comprehensiveness and secondarily by a high density of score per faculty. The very few institutions, such as _Caltech_, that are not fully comprehensive (but also not specialized) can appear among the best when their score density per faculty is very high. Conversely, a high score per faculty density cannot uplift specialized institutions into groups of comprehensive ones, as in the case of _Scuola Normale Superiore di Pisa_.
The best ranked institution, _Harvard_, is still far from this upper bound. In addition, the similarity between all USA institutions belonging to group A++ is evident. Strikingly, the mean curve of this group seems to be well represented by the _MIT_ one. As such, in all remaining panels of Figures 11-12 I compare institutional performance against that of _MIT_. Figure 11b demonstrates that many private USA institutions believed to be peers of _MIT_ are indeed at a lower level (A+). _Caltech_'s subject performance is significantly lower than that of its peers, but this is compensated by a much higher score per faculty. Conversely, Figure 11c shows that the leading public USA institutions perform better than their reputation suggests, being at the same level as the private institutions of Figure 11b. In the United Kingdom, Figure 11d depicts the indiscernible performance of _Oxford_ and _Cambridge_ on par with _MIT_, both belonging to the A++ group. Moreover, _Imperial College_ and _UCL_ are not too far behind. For the remaining panels (Figures 11e-i and 12), no institution will be found to be performing similarly to _MIT_ except for _ETH Zurich_ in Switzerland and the _University of Toronto_ in Canada. Among the leading countries in academic research, Germany seems to be underperforming in the subject rankings (see Figure 11f), but like _Caltech_ this is compensated by a much higher score per faculty density than its peers. In other words, German institutions perform a bit lower than their peers in leading European countries despite having a much smaller number of faculty per institution. For the remaining major regions of the world, a much lower performance is observed, except for the leading institutions in Denmark, Singapore, China, and Japan. The institutional performance amounts to the likelihood of being among the very best in any randomly chosen academic field. As shown in Figures 5 and 10, very few similarity groups have a typical size-independent likelihood exceeding 40%. It is the size of the research faculty that ultimately strengthens the stratification of the likelihood of performing well in any randomly chosen subject.
In Figure 13 the regional performance of the mean score across all academic subjects is compared. The overall trend of saturation of the mean score for \(\mathcal{R}_{p}\gtrsim 200\) is found in all regions of the globe and their differences are negligible. Therefore, I conclude that quality is nearly homogeneously distributed not only across the leading thousand institutions but also regionally and even nationally. Surprisingly, academic quality is found to be worldwide homogeneous even despite economic disparities9, and the only difference between countries/regions is the total number of their institutions and the comprehensiveness thereof.
Footnote 9: The definitions for developed and developing economies can be found in the IMF report.
## 6 Conclusions
The academic world saw in the past two decades an explosion of university league tables and their impact. The question of who bears the responsibility for the assessment of universities through league tables arose. As Weingart (2005) argues, it is the duty and responsibility of peer review alone to evaluate institutions and individuals in the research realm. However, as far as university tables are concerned, scientists rarely offer solutions or alternatives to the established frameworks. Addressing this summons, I have attempted to answer the most pressing questions on the biases and negative features of league tables, overcoming them. Tangible features of the institutions deemed to be "world-class" and the relationship of their metrics to the remaining research-intensive institutions were found.
Figure 10: Equivalent (black curve) of Figure 4a (blue curve) for the _Mean_ ranking among the five established league tables shown in Figure 9.
To the author's knowledge, the present work on university ranking is the first to demonstrate that status development without concurrently affecting the position of peer institutions is possible. Strong evidence supporting the stratification of excellence in academic performance has been found. Moreover, I solve the problem of the _zero-sum game_ through a parameterization of research quality and output, implementing the idea of how broadly an institution covers the main subjects of human knowledge. Although most rankings are based on relative or comparative quality, the present methodology clearly classifies excellence by absolute values. The heart of the present rationale lies in the fact that, once normalized, the subject-average \(h\)-index, the research output, or any similar measure of the leading institution in a given subject has the same influence and impact as that of the leading institution in another subject. Thus, the methodology points to the existence of groups of universities in which performance within each group is almost indistinguishable.
Figure 11: Distribution of subject score order from highest to lowest of major institutions of leading countries in research output in the western hemisphere.
Universities in the same similarity group can be considered equal, and stratification within groups is hardly found.
Although the stratification of academic excellence is real between groups, the present results discredit the typical league table view of scarcity of reputation. The analysis carried out in this study has shown that nearly every research-intensive university excels in a wide range of subjects. Furthermore, it has been unveiled that reputation and prestige, often attached to awards and prizes in these fields, are not a proxy for quality. Unsurprisingly, I find that some of the _Ivies_ and similar private institutions in the United States do not measure up to their reputation and prestige. The discrepancy between actual performance and reputation is particularly substantial for _Princeton_ and _Caltech_. On the other hand, the opposite seems to be true for large public institutions in North America, as their reputation does not measure up to their excellent
Figure 12: Distribution of subject scores (as in Figure 11) of major institutions of less developed regions in regard to research impact, ordered from highest to lowest.
performance and broad coverage, such as the _University of Michigan_ and the _University of Wisconsin_ flagship campuses. Remarkably, the strong correlation between the _peer ranking_ and the comprehensiveness of universities is somehow also applicable to the five established league tables. As such, I conclude that regardless of the methodology, research-intensive institutions tend to be classified as a function of their comprehensiveness.
The results of the present analysis may be perceived as a paradox: On one hand, the quality of an institution is not correlated with comprehensiveness. On the other hand, there is a strong correlation between comprehensiveness and the mean ranking of the major established league tables. Nevertheless, it should by no means induce the reader to conclude that comprehensiveness is necessary to make a good university. Rather, university league tables will inherently and even unconsciously favor comprehensive institutions through their several differing methods, while the mean quality of all these institutions is nearly homogeneous. Hence, comprehensiveness is not a goal to be sought, but the root of the stratification of academic impact and unrelated to academic quality. In fact, comprehensiveness is a proxy for privilege, because it requires a large and steady endowment over decades in order to be achieved. Century-old institutions have a
Figure 13: Mean score across all academic subjects as a function of the peer rank for a variety of countries and regions. The last panel shows the break-down between developed and developing economies and the percentage of the total number of institutions belonging to each group.
clear advantage in achieving a high level of comprehensiveness, in particular those in countries with generous science funding bodies.
## Acknowledgements
The author thanks Jerome Kasparian for reading the manuscript and adding to its readability.
## Conflict of interest
The author declares no conflict of interest.
## Data Availability
Core data can be found in the author's personal storage. This includes links to search queries for each of the eighty subjects, league tables of the overall and subject rankings and a Google Earth interactive map with 800 universities in the same color stratification as within the manuscript.
|
2307.00396 | Modified gauge unfixing formalism and gauge symmetries in the
non-commutative chiral bosons theory | We use the gauge unfixing (GU) formalism framework in a two dimensional
noncommutative chiral bosons (NCCB) model to disclose new hidden symmetries.
That amounts to converting a second-class system to a first-class one without
adding any extra degrees of freedom in phase space. The NCCB model has two
second-class constraints -- one of them turns out as a gauge symmetry generator
while the other one, considered as a gauge-fixing condition, is disregarded in
the converted gauge-invariant system. We show that it is possible to apply a
conversion technique based on the GU formalism direct to the second-class
variables present in the NCCB model, constructing deformed gauge-invariant GU
variables, a procedure which we name here as modified GU formalism. For the
canonical analysis in noncommutative phase space, we compute the deformed Dirac
brackets between all original phase space variables. We obtain two different
gauge invariant versions for the NCCB system and, in each case, a GU
Hamiltonian is derived satisfying a corresponding first-class algebra. Finally,
the phase space partition function is presented for each case allowing for a
consistent functional quantization for the obtained gauge-invariant NCCB. | Cleber N. Costa, Gabriella V. Ambrósio, Paulo R. F. Alves, Jorge Ananias Neto, Ronaldo Thibes | 2023-07-01T18:00:35Z | http://arxiv.org/abs/2307.00396v1 | # Modified gauge unfixing formalism and gauge symmetries in the non-commutative chiral bosons theory
###### Abstract
We use the gauge unfixing (GU) formalism framework in a two dimensional noncommutative chiral bosons (NCCB) model to disclose new hidden symmetries. That amounts to converting a second-class system to a first-class one without adding any extra degrees of freedom in phase space. The NCCB model has two second-class constraints - one of them turns out as a gauge symmetry generator while the other one, considered as a gauge-fixing condition, is disregarded in the converted gauge-invariant system. We show that it is possible to apply a conversion technique based on the GU formalism direct to the second-class variables present in the NCCB model, constructing deformed gauge-invariant GU variables, a procedure which we name here as modified GU formalism. For the canonical analysis in noncommutative phase space, we compute the deformed Dirac brackets between all original phase space variables. We obtain two different gauge invariant versions for the NCCB system and, in each case, a GU Hamiltonian is derived satisfying a corresponding first-class algebra. Finally, the phase space partition function is presented for each case allowing for a consistent functional quantization for the obtained gauge-invariant NCCB.
Gauge invariance, noncommutative chiral bosons, modified gauge unfixing formalism
## 1 Introduction
In recent decades, chiral bosons (CB) field theory has drawn a lot of interest in the physics community [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. In addition to its own relevance, CB have been useful for understanding strings, superstrings and supersymmetry [4; 5; 6; 7], gravity and supergravity theories [8; 9; 10; 11], black holes [12], the fractional quantum Hall effect [13], Hodge theory [14] and general aspects of field theories on the light cone [15; 16]. After the pioneering work by Siegel [4], in which the self-dual condition first appeared as a quadratic constraint, Floreanini and Jackiw introduced a local Lagrangian density for chiral bosons in which a rich canonical structure was revealed [17]. Despite its initial straightforward simplicity, the description of CB presents some subtleties [16; 17; 18; 19; 20; 21]. Among several issues, we may mention the presence of a single second-class primary constraint with a non-trivial commutation relation with itself. Therefore CB field theory is not naturally gauge invariant in its original inception. As is well known, gauge invariant theories hold significant importance regarding their quantum aspects [22]; in particular, the quantization of first-class systems is much easier than that of second-class theories [23; 24]. In the case of CB field theory, obtaining gauge invariance in a consistent form is not a simple task and should be handled with extra care. Due to the presence of an odd number of second-class constraints in the Floreanini-Jackiw (FJ) description, usual conversion methods such as standard Batalin-Fradkin-Tyutin (BFT) or direct gauge-unfixing do not work. An alternative successful route can be found in reference [25] where, in order to restore CB gauge invariance, a particular method employing both BFT and gauge unfixing ideas was used. It is also worth mentioning the constraint Fourier modes expansion approach used in [26] to allow for the introduction of BFT modes.
Over and above that, since the beginning of the current century, there have been various arguments regarding the possibility of noncommutative effects in high energy physics [27; 28; 29; 30]. Relating those two subjects, two extensions of CB field theory including noncommutative features have been proposed in the literature [31; 32]. The first one allows for noncommutativity in space-time coordinates [31] while the second one introduces noncommutativity directly into the field space itself [32] and has been further investigated in reference [33]. In the present work, we shall be concerned with the latter idea, named here for short as Noncommutative Chiral Bosons (NCCB), in which the quantum operators algebra is deformed in terms of a controlling noncommutative parameter. The NCCB model, introduced in [32], connects two FJ chiral bosons through that noncommutativity parameter, giving rise to nontrivial commutation relations among the fields in phase space and, similarly to [17], is characterized as a constrained second-class system lacking parent gauge freedom. Due to the interaction between left and right propagation modes, the original number of constraints doubles, making it more feasible to look for gauge symmetries generated by constraint abelianization methods. In fact, a couple of recent articles [34; 35] analyzed the canonical structure of the NCCB in the framework of the BFT embedding method, aiming to promote the NCCB constraints to first-class. The BFT formalism [36; 37; 38] converts second-class constraints into first-class ones by enlarging the phase space with the introduction of auxiliary fields and has found many important applications in the literature from which we mention a short representative sample [39;
46]. In [34], it is shown that the direct application of the BFT embedding formalism to the NCCB model may lead to an infinite number of auxiliary fields in phase space. With an alternative choice for the BFT fields symplectic structure, Majid, Vahid and Mehran have shown in [35] that it is possible to abelianize the NCCB model in finite order, reducing the number of auxiliary fields to only two. Nonetheless, we claim that a consistent NCCB abelianization can be done without the need of any auxiliary fields whatsoever. That is one of the main advantages of the gauge-unfixing (GU) method [47; 48; 49; 50; 51; 52; 53; 54; 55]. Building on the original work of Mitra and Rajaraman [47], which first conjectured the interpretation of second-class constraints in phase space as resulting from gauge-fixing conditions within a larger gauge invariant theory, Anishetty and Vytheeswaran constructed a Lie projection operator whose action on the second-class functions was able to reveal hidden symmetries [48; 49; 50]. Those ideas were further elaborated to produce the improved or modified gauge-unfixing formalism [51; 52], centered on the construction of the invariant GU variables, and have found recent appeal in quantum field theory [53; 54; 55]. In this way, the main goal of the present letter is to develop a gauge invariant theory for the NCCB model, without auxiliary fields, by directly applying the modified GU formalism [51; 52; 53; 54; 55] to the noncommutative field space.
For the reader's convenience, we have organized our presentation as follows: In the next section, we discuss the NCCB model and analyse its constraints' canonical structure with the use of the Dirac-Bergmann formalism [56; 57; 58; 59], computing the Dirac brackets algebra in phase space. In Section **3**, we take the opportunity to briefly review the modified GU formalism, preparing its application to the NCCB and making the article self-contained. In Section **4**, the modified GU formalism is applied to the NCCB model and we show that it is possible to obtain gauge invariance without the introduction of auxiliary fields. We close in the last section with our conclusions and final remarks.
## 2 The noncommutative chiral bosons model
The noncommutative chiral bosons model (NCCB) in \((1+1)\) space-time dimensions is defined by the first-order action [32]
\[S[\phi_{a}]=\int d^{2}x\left[-\frac{2}{1+\theta^{2}}\dot{\phi}_{a}\Delta_{ab} \phi^{\prime}_{b}-\phi^{\prime}_{a}\phi^{\prime}_{a}\right]\,, \tag{1}\]
where \(\theta\) denotes a noncommutative parameter and \(\Delta_{ab}\) is an invertible symmetric \(2\times 2\) matrix given by
\[\Delta_{ab}=\frac{1}{2}\begin{pmatrix}-1&\theta\\ \theta&1\end{pmatrix}\,.\]
The Latin indexes \(a,b\) run through \(1,2\) and the dynamics resulting from (1) describes two chiral bosons \(\phi_{1}\) and \(\phi_{2}\) coupled by the noncommutative parameter \(\theta\). In fact, the field equations directly derived from the minimum action principle applied to \(S[\phi_{a}]\) read
\[2\frac{\Delta_{ab}}{1+\theta^{2}}\dot{\phi}^{\prime}_{b}+\phi^{ \prime\prime}_{a}=0\,, \tag{2}\]
and, alternatively, can be cast into the form
\[\dot{\phi}^{\prime}_{a}+2\Delta_{ab}\phi^{\prime\prime}_{b}=0\,. \tag{3}\]
In the limit \(\theta\to 0\), the left and right modes decouple and we recover the usual commutative case consisting of two independent chiral bosons [17; 18]. It is interesting to notice that the non-commutative parameter \(\theta\) leads to an enhancement of the speed of light governing the propagation of the two chiral bosons. This can be seen from the fact that the equations of motion (3) are equivalent to the pair
\[\begin{split}\dot{\phi}^{\prime}_{1}&=\ \phi^{\prime\prime}_{1}-\theta\phi^{\prime\prime}_{2}\,,\\ \dot{\phi}^{\prime}_{2}&=-\phi^{\prime\prime}_{2}- \theta\phi^{\prime\prime}_{1}\,,\end{split} \tag{4}\]
which in turn, by space integration and time derivation, result in
\[\square_{\theta}\phi_{1}=\square_{\theta}\phi_{2}=0\,, \tag{5}\]
where \(\square_{\theta}\) denotes the \(\theta\)-dependent noncommutative D'Alembertian operator defined as
\[\square_{\theta}\equiv\frac{1}{1+\theta^{2}}\partial_{t}^{2}-\partial_{x}^{2 }\,. \tag{6}\]
Thus, we see that Lorentz invariance is preserved in noncommutative space, as long as we redefine the speed of light \(c\) as
\[c\to c_{\theta}\equiv c\sqrt{1+\theta^{2}}\,. \tag{7}\]
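The dispersion relation behind eqs. (5)-(7) can also be verified symbolically: substituting plane waves \(\phi_{a}\propto e^{i(kx-\omega t)}\) into eq. (3) gives \(\omega A_{a}=2k\Delta_{ab}A_{b}\), so the propagation speeds follow from the eigenvalues of \(2k\Delta\). The sympy sketch below is our own consistency check, not part of the original derivation.

```python
import sympy as sp

theta, k = sp.symbols('theta k', real=True, positive=True)
Delta = sp.Rational(1, 2) * sp.Matrix([[-1, theta], [theta, 1]])

# Plane waves in eq. (3) satisfy w*A = 2k*Delta*A, so the two branches of the
# dispersion relation are the eigenvalues of 2k*Delta.
omegas = list((2 * k * Delta).eigenvals())
print(omegas)   # -> [-k*sqrt(theta**2 + 1), k*sqrt(theta**2 + 1)]

# Hence w**2 = (1 + theta**2)*k**2: both chiral modes travel with the
# enhanced speed c_theta = sqrt(1 + theta**2) in units where c = 1.
```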
Due to the presence of constraints, the canonical quantization of the NCCB must be done carefully. The action (1) actually corresponds to a singular Dirac-Bergmann system [56; 57; 58; 59]. To see this feature, note that the canonical momenta in phase space can be written as
\[\pi_{a}=-\frac{2}{1+\theta^{2}}\Delta_{ab}\phi^{\prime}_{b}\,. \tag{8}\]
As we can see, Eq. (8) does not involve the fields time derivatives. Consequently, we have a pair of primary constraints in phase space given by
\[\Omega_{a}\equiv\pi_{a}+\frac{2}{1+\theta^{2}}\Delta_{ab}\phi^{\prime}_{b}\,, \tag{9}\]
with corresponding Poisson bracket relations
\[\{\,\Omega_{a}(x)\,,\,\Omega_{b}(y)\,\}=\frac{4}{1+\theta^{2}}\Delta_{ab} \delta^{\prime}(x-y)\,. \tag{10}\]
Concerning the dynamical evolution, the Legendre transformation from configuration to phase space leads to a well defined canonical Hamiltonian within the primary constraints hypersurface given by
\[H=\int dx\,\phi^{\prime}_{a}\phi^{\prime}_{a} \tag{11}\]
and further steps of the Dirac-Bergmann algorithm show the consistent stability of \(\Omega_{a}\) without the need of new constraints. Hence, the constraint set (9) is complete and the invertibility of Eq. (10) assures the second-class nature of the system. To obtain the Dirac brackets among the phase space variables, we note that the antisymmetric inverse of (10) can be written as
\[\left\{\,\Omega_{a}(x)\,,\,\Omega_{b}(y)\,\right\}^{-1}=\Delta_{ab}\epsilon(x- y)\,, \tag{12}\]
with \(\epsilon(x)\) denoting the antisymmetric unity step function satisfying
\[\epsilon(x)=-\epsilon(-x)\,, \tag{13}\]
and
\[\epsilon^{\prime}(x)=\delta(x)\,. \tag{14}\]
Eq. (12) represents the functional inverse of (10) in the sense of
\[\int dz\,\left\{\,\Omega_{a}(x)\,,\,\Omega_{c}(z)\,\right\}\left\{\,\Omega_{c }(z)\,,\,\Omega_{b}(y)\,\right\}^{-1}=\delta_{ab}\delta(x-y)\,, \tag{15}\]
and
\[\int dz\,\left\{\,\Omega_{a}(x)\,,\,\Omega_{c}(z)\,\right\}^{-1}\left\{\, \Omega_{c}(z)\,,\,\Omega_{b}(y)\,\right\}=\delta_{ab}\delta(x-y)\,. \tag{16}\]
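The inversion works because the matrix and distributional parts factorize: \(\Delta\) squares to \(\frac{1+\theta^{2}}{4}\,\mathbb{1}\), while \(\partial_{x}\epsilon(x-y)=\delta(x-y)\) of eq. (14) takes care of the distributional factor. A short sympy check of the matrix part (a consistency test we add here, with the distributional identity quoted from eqs. (13)-(14)):

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
Delta = sp.Rational(1, 2) * sp.Matrix([[-1, theta], [theta, 1]])

# Matrix part of eq. (15): (4/(1+theta**2)) * Delta * Delta must be the identity;
# the remaining factor, int dz delta'(x-z) eps(z-y) = delta(x-y), is eq. (14).
print(sp.simplify(4 / (1 + theta**2) * Delta * Delta))   # -> Matrix([[1, 0], [0, 1]])
```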
Inserting (12) into the general DB definition
\[\{F,G\}_{D}=\{F,G\}-\int dzd\bar{z}\,\{F,\Omega_{a}(z)\}\{\Omega_{a}(z), \Omega_{b}(\bar{z})\}^{-1}\{\Omega_{b}(\bar{z}),G\}\,, \tag{17}\]
the fundamental DBs among the phase space variables can be readily computed as
\[\{\phi_{a}(x),\phi_{b}(y)\}_{D}=\Delta_{ab}\epsilon(x-y)\,, \tag{18}\]
\[\{\phi_{a}(x),\pi_{b}(y)\}_{D}=\frac{1}{2}\delta_{ab}\delta(x-y)\,, \tag{19}\]
and
\[\{\pi_{a}(x),\pi_{b}(y)\}_{D}=-\frac{1}{1+\theta^{2}}\Delta_{ab}\partial_{x} \delta(x-y)\,. \tag{20}\]
At this point, the canonical quantization can be pursued by requiring the associated operators to satisfy commutation relations dictated by the DB algebra above. Our main goal in the present work, however, is to produce gauge symmetry for the NCCB model, write down the corresponding quantum generating functional, and proceed along the lines of the functional quantization framework. This can be done by means of the modified GU formalism. Indeed, the second-class property shown by the constraints in Eq. (9) allows us to directly apply that improved version of the GU formalism considering one of the constraints as generator of gauge transformations and calculating the GU variables. In the next section, we give a brief general review of the modified GU technique, paving the way for its application to the NCCB in the following one.
## 3 Brief review of the modified GU formalism
The modified gauge unfixing formalism developed by Neto [23; 51; 52; 55] is based on the idea of selecting one of the two second-class constraints to be the gauge symmetry generator, with the other one discarded in a broader gauge-invariant context. Consider for example a given second-class phase-space function \(T(A_{\mu},\pi_{\mu})\) with the index \(\mu\) running through all phase space variables. Our strategy is to write a first-class function \(\tilde{T}(A_{\mu},\pi_{\mu})\) obtained from the second-class function \(T\) as
\[\tilde{T}(A_{\mu},\pi_{\mu})\equiv T(\tilde{A}_{\mu},\tilde{\pi}_{\mu})\,, \tag{21}\]
by redefining the original phase space variables
\[A_{\mu} \longrightarrow\tilde{A}_{\mu}(A_{\mu},\pi_{\mu})\,, \tag{22}\] \[\pi_{\mu} \longrightarrow\tilde{\pi}_{\mu}(A_{\mu},\pi_{\mu})\,, \tag{23}\]
such that
\[\delta\tilde{A}_{\mu}=\alpha\left\{\tilde{A}_{\mu},\psi\right\}=0\,, \tag{24}\]
and
\[\delta\tilde{\pi}_{\mu}=\alpha\left\{\tilde{\pi}_{\mu},\psi\right\}=0\,, \tag{25}\]
where \(\alpha\) is an infinitesimal parameter and \(\psi\) is the second-class constraint chosen to be the gauge symmetry generator. The deformed variables \(\tilde{A}_{\mu}\), \(\tilde{\pi}_{\mu}\) are known as _GU variables_. It is clear now that functions of the GU variables, in particular \(\tilde{T}\), will be gauge invariant since
\[\left\{\tilde{T},\psi\right\}=\left\{\tilde{A},\psi\right\}\frac{\partial T}{ \partial\tilde{A}}+\frac{\partial T}{\partial\tilde{\pi}}\left\{\tilde{\pi}, \psi\right\}=0\,. \tag{26}\]
Consequently, we can obtain a gauge invariant function from the replacement of
\[T(A_{\mu},\pi_{\mu})\to T(\tilde{A}_{\mu},\tilde{\pi}_{\mu})=\tilde{T}(A_{\mu },\pi_{\mu})\,. \tag{27}\]
Now suppose the system has only two second-class constraints \(Q_{1}\) and \(Q_{2}\). So, the GU gauge invariant phase space variables, collectively denoted by \(\tilde{\Lambda}\equiv(\tilde{A}_{\mu},\tilde{\pi}_{\mu})\), can be constructed as a power series in the discarded constraint \(Q_{2}\)
\[\tilde{\Lambda}(x)=\Lambda(x)+\int dyb_{1}(x,y)Q_{2}(y)+\iint dydzb_{2}(x,y,z) Q_{2}(y)Q_{2}(z)+...\,, \tag{28}\]
satisfying, on the constraint surface \(Q_{2}=0\), the boundary condition
\[\tilde{\Lambda}\big{|}_{Q_{2}=0}=\Lambda\,. \tag{29}\]
This assures that we recover the original second-class system when \(Q_{2}=0\). The coefficients \(b_{n}\) in relation (28) are then determined by the GU invariant requirement
\[\delta\tilde{\Lambda}=\alpha\left\{\tilde{\Lambda},Q_{1}\right\}=0\,. \tag{30}\]
The general equation for \(b_{n}\) is
\[\delta\tilde{\Lambda}(x)=\delta\Lambda(x)+\delta\int dyb_{1}(x,y)Q_{2}(y)+ \delta\iint dydzb_{2}(x,y,z)Q_{2}(y)Q_{2}(z)+...=0\,, \tag{31}\]
in which we have
\[\delta\Lambda(x)=\int dy\,\alpha(y)\left\{\Lambda(x),\psi(y) \right\}, \tag{32}\] \[\delta b_{1}(x)=\int dy\,\alpha(y)\left\{b_{1}(x),\psi(y)\right\},\] (33) \[\delta Q_{2}(x)=\int dy\,\alpha(y)\left\{Q_{2}(x),\psi(y)\right\}\,. \tag{34}\]
So, for the first order correction term (\(n=1\)) we have from Eq. (31)
\[\delta\Lambda(x)+\int dyb_{1}(x,y)\delta Q_{2}(y)=0\,. \tag{35}\]
From Eq. (35) we can determine the coefficient \(b_{1}\). For the second order correction term (\(n=2\)) we have
\[\int dy\delta b_{1}(x,y)Q_{2}(y)+2\iint dydzb_{2}(x,y,z)\delta Q_{2}(y)Q_{2}(z )=0\,. \tag{36}\]
Then, from Eq. (36) we can determine the coefficient \(b_{2}\) and so on and so forth. Therefore, from the GU variables power series defined in Eq. (28) we can derive a corresponding gauge invariant theory.
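A finite-dimensional toy model (our own, not taken from the references) makes the recipe transparent. Take the second-class pair \(Q_{1}=p_{1}+q_{2}\), \(Q_{2}=q_{1}-p_{2}\) with \(\{Q_{2},Q_{1}\}=2\), keep \(Q_{1}\) as gauge generator and build the GU partner of \(q_{1}\); since \(\delta Q_{2}\) is phase-space independent, the series (28) terminates at first order, exactly as it will for the NCCB below.

```python
import sympy as sp

q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')

def pb(f, g):
    # Canonical Poisson bracket on the (q1, p1, q2, p2) phase space.
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in ((q1, p1), (q2, p2)))

Q1, Q2 = p1 + q2, q1 - p2          # toy second-class pair (our choice)
print(pb(Q2, Q1))                  # -> 2, so the pair is second class

# Analogue of eq. (35) at first order: b1 = -{q1, Q1}/{Q2, Q1} = -1/2, and the
# series stops because {Q2, Q1} carries no phase-space dependence.
b1 = -pb(q1, Q1) / pb(Q2, Q1)
q1_tilde = q1 + b1 * Q2            # GU variable, eq. (28) truncated at n = 1

print(sp.simplify(pb(q1_tilde, Q1)))   # -> 0: q1_tilde is gauge invariant
```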
## 4 Disclosing Hidden Symmetries
In this section, we apply the modified GU formalism to the NCCB. As we have seen, the NCCB has two second-class constraints given by Eq. (9). Writing them out explicitly in terms of the components of \(\Delta_{ab}\), we have
\[\Omega_{1}=\pi_{1}-\frac{\phi_{1}^{\prime}}{1+\theta^{2}}+\frac{ \theta\,\phi_{2}^{\prime}}{1+\theta^{2}}\,, \tag{37}\] \[\Omega_{2}=\pi_{2}+\frac{\theta\,\phi_{1}^{\prime}}{1+\theta^{2} }+\frac{\phi_{2}^{\prime}}{1+\theta^{2}}\,. \tag{38}\]
Then, following the ideas of the modified GU formalism, one of the two second-class constraints will be chosen to be the gauge symmetry generator and the other one will be discarded. Thus, there are two possible choices of gauge symmetry generator for the NCCB. We consider below each of these two different cases separately.
### Case A
In this first case, we consider the constraint \(\Omega_{1}\), Eq. (37), as the gauge symmetry generator and discard the other second-class constraint \(\Omega_{2}\), Eq. (38). The infinitesimal gauge transformations generated by \(\Omega_{1}\), with gauge parameter \(\alpha\), are given by
\[\delta\phi_{1} =\int dy\,\alpha(y)\left\{\phi_{1},\Omega_{1}(y)\right\}=\alpha\,, \tag{39}\] \[\delta\phi_{2} =\int dy\,\alpha(y)\left\{\phi_{2},\Omega_{1}(y)\right\}=0\,,\] (40) \[\delta\pi_{1} =\int dy\,\alpha(y)\left\{\pi_{1},\Omega_{1}(y)\right\}=-\frac{1 }{1+\theta^{2}}\,\alpha^{\prime}\,,\] (41) \[\delta\pi_{2} =\int dy\,\alpha(y)\left\{\pi_{2},\Omega_{1}(y)\right\}=\frac{ \theta}{1+\theta^{2}}\,\alpha^{\prime}\,, \tag{42}\]
while the constraint \(\Omega_{2}\) transforms under \(\Omega_{1}\) as
\[\delta\Omega_{2}=\int dy\,\alpha(y)\left\{\Omega_{2},\Omega_{1}(y)\right\}= \frac{2\theta}{1+\theta^{2}}\,\alpha^{\prime}\,. \tag{43}\]
Since the fields \(\phi_{1},\pi_{1}\) and \(\pi_{2}\) are not naturally gauge invariant under \(\Omega_{1}\), following the improved GU approach, we need to construct their GU invariant combinations in terms of a power series in \(\Omega_{2}\). For instance, the first gauge invariant GU variable \(\tilde{\phi}_{1}\) can be written as
\[\tilde{\phi}_{1}(x)=\phi_{1}(x)+\int dyb_{1}(x,y)\,\Omega_{2}(y)+\int dydzb_{2} (x,y,z)\,\Omega_{2}(y)\Omega_{2}(z)+...\,, \tag{44}\]
with the correction coefficient functions \(b_{n}\) to be calculated from the invariant condition \(\delta\tilde{\phi}_{1}=0\). For the linear correction term (\(n=1\)), we have
\[\delta\phi_{1}+\int dyb_{1}(x,y)\delta\Omega_{2}(y)=0\,, \tag{45}\]
and, plugging Eqs. (39) and (43) into (45), we obtain
\[b_{1}(x,y)=-\frac{1+\theta^{2}}{2\theta}\epsilon(x-y)\,. \tag{46}\]
For the quadratic term we have \(b_{2}=0\) and thus, for \(n\geq 2\), all the remaining correction coefficient functions \(b_{n}\) are null. Taking this into consideration and inserting Eq. (46) in (44), we obtain the final expression for the first gauge-invariant GU variable \(\tilde{\phi}_{1}\) as
\[\tilde{\phi}_{1}(x)=\phi_{1}(x)-\frac{1+\theta^{2}}{2\theta}\int dy\epsilon(x- y)\Omega_{2}(y)\,. \tag{47}\]
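The cancellation behind Eq. (45) can be checked numerically: combining (39) and (43) with the coefficient (46), the \(\theta\)-dependent factors cancel and \(\delta\tilde{\phi}_{1}(x)=\alpha(x)-\int dy\,\epsilon(x-y)\alpha^{\prime}(y)\), which vanishes for any gauge parameter decaying at infinity. A quadrature sketch with a Gaussian test parameter (our own consistency check):

```python
import numpy as np
from scipy.integrate import quad

alpha  = lambda y: np.exp(-y ** 2)       # test gauge parameter alpha(y)
dalpha = lambda y: -2 * y * np.exp(-y ** 2)
eps    = lambda s: 0.5 * np.sign(s)      # antisymmetric step of eqs. (13)-(14)

# delta phi1_tilde(x) = alpha(x) - int eps(x-y) alpha'(y) dy; split the
# integral at the discontinuity y = x for an accurate quadrature.
for x in (-1.3, 0.0, 0.7):
    left, _  = quad(lambda y: eps(x - y) * dalpha(y), -np.inf, x)
    right, _ = quad(lambda y: eps(x - y) * dalpha(y), x, np.inf)
    print(x, alpha(x) - (left + right))  # -> 0 up to quadrature error
```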
Repeating this same iterative process, in a similar fashion, we can derive the remaining gauge-invariant GU variables in phase space as
\[\tilde{\phi}_{2} = \phi_{2}\,, \tag{48}\] \[\tilde{\pi}_{1} = \pi_{1}+\frac{1}{2\theta}\,\Omega_{2}\,,\] (49) \[\tilde{\pi}_{2} = \pi_{2}-\frac{1}{2}\,\Omega_{2}\,. \tag{50}\]
Differentiating with respect to space, from Eqs. (47) and (48), we have the further useful relations
\[\tilde{\phi}^{\prime}_{1} = \phi^{\prime}_{1}-\frac{1+\theta^{2}}{2\theta}\Omega_{2}\,, \tag{51}\] \[\tilde{\phi}^{\prime}_{2} = \phi^{\prime}_{2}\,. \tag{52}\]
By construction, as seen in the previous section, any function of the GU variables is automatically gauge-invariant. In particular, the gauge-invariant GU Hamiltonian can be directly obtained from Eq. (11) as
\[\tilde{H}=\int dx\left[\left(\phi^{\prime}_{1}-\frac{1+\theta^{2}}{2\theta}\, \Omega_{2}\right)^{2}+\phi^{\prime}_{2}\phi^{\prime}_{2}\right]\,, \tag{53}\]
and satisfies the gauge-invariant condition \(\{\tilde{H},\Omega_{1}\}=0\). Due to its gauge invariance, the GU Hamiltonian can be used to obtain the phase space partition function in the functional quantization approach [60] as
\[Z=\int{\cal D}\phi_{1}{\cal D}\pi_{1}{\cal D}\phi_{2}{\cal D}\pi_{2}\,\delta( \Omega_{1})\delta(\Gamma_{1})\,det|\{\Omega_{1},\Gamma_{1}\}|e^{iS}\,, \tag{54}\]
where \(\Gamma_{1}\) denotes a suitable gauge-fixing condition and
\[S=\int d^{2}x\left(\pi_{1}\dot{\phi}_{1}+\pi_{2}\dot{\phi}_{2}-\tilde{H} \right)\,. \tag{55}\]
Concerning gauge achievability, the gauge fixing function \(\Gamma_{1}\) must be chosen in such a way that the determinant in the integration measure in (54) does not vanish. This concludes the functional quantization of the gauge-invariant description of the NCCB, with gauge transformations generated by \(\Omega_{1}\).
### Case B
Next, we consider the other possible choice for the gauge symmetry generator. Let now \(\Omega_{2}\) generate the gauge symmetries, certainly different from the ones in case A, while
the constraint \(\Omega_{1}\) is discarded. The infinitesimal gauge transformations generated by \(\Omega_{2}\) read
\[\delta\phi_{1} =0\,, \tag{56}\] \[\delta\phi_{2} =\int dy\,\alpha(y)\left\{\phi_{2},\Omega_{2}(y)\right\}=\alpha\,,\] (57) \[\delta\pi_{1} =\int dy\,\alpha(y)\left\{\pi_{1},\Omega_{2}(y)\right\}=\frac{ \theta}{1+\theta^{2}}\alpha^{\prime}\,,\] (58) \[\delta\pi_{2} =\int dy\,\alpha(y)\left\{\pi_{2},\Omega_{2}(y)\right\}=\frac{1}{ 1+\theta^{2}}\alpha^{\prime}\,,\] (59) \[\delta\Omega_{1} =\int dy\,\alpha(y)\left\{\Omega_{1},\Omega_{2}(y)\right\}=\frac{ 2\theta}{1+\theta^{2}}\,\alpha^{\prime}\,. \tag{60}\]
Eq. (56) shows that \(\phi_{1}\) is already gauge invariant under transformations generated by \(\Omega_{2}\), thus
\[\tilde{\phi}_{1}=\phi_{1}\,.\]
The gauge invariant field \(\tilde{\phi}_{2}\) is constructed as
\[\tilde{\phi}_{2}=\phi_{2}+\int dyc_{1}(x,y)\,\Omega_{1}(y)+\int dydzc_{2}(x,y,z)\,\Omega_{1}(y)\Omega_{1}(z)+...\,. \tag{61}\]
Now, imposing the gauge-invariant condition \(\delta\tilde{\phi}_{2}=0\), the correction terms \(c_{n}\) can be obtained. For the linear correction term (\(n=1\)), we have
\[\delta\phi_{2}(x)+\int dyc_{1}(x,y)\delta\Omega_{1}(y)=0\,. \tag{62}\]
Using Eqs. (57) and (60) we find
\[c_{1}(x,y)=-\frac{1+\theta^{2}}{2\theta}\epsilon(x-y)\,. \tag{63}\]
It is easy to see that the variation of \(c_{1}\) leads to \(c_{2}=0\). Then, for \(n\geq 2\), all correction terms \(c_{n}\) are null. Therefore, by putting Eq. (63) into Eq. (61), the GU variable \(\tilde{\phi}_{2}\) acquires the form
\[\tilde{\phi}_{2}=\phi_{2}-\frac{1+\theta^{2}}{2\theta}\int dy\epsilon(x-y) \Omega_{1}(y)\,.\]
The remaining GU variables can be obtained by the same iterative process. Proceeding this way and putting all GU variables together we have
\[\tilde{\phi}_{1} =\phi_{1}\,, \tag{64}\] \[\tilde{\phi}_{2} =\phi_{2}-\frac{1+\theta^{2}}{2\theta}\int dy\epsilon(x-y)\Omega _{1}(y)\,,\] (65) \[\tilde{\pi}_{1} =\pi_{1}-\frac{1}{2}\,\Omega_{1}\,,\] (66) \[\tilde{\pi}_{2} =\pi_{2}-\frac{1}{2\theta}\,\Omega_{1}\,. \tag{67}\]
From Eqs. (64) and (65), it immediately follows
\[\tilde{\phi}_{1}^{\prime} = \phi_{1}^{\prime} \tag{68}\] \[\tilde{\phi}_{2}^{\prime} = \phi_{2}^{\prime}-\frac{1+\theta^{2}}{2\theta}\,\Omega_{1}\,. \tag{69}\]
Now we are able to write the gauge invariant version of the Hamiltonian (11). Substituting Eqs. (68) and (69) into (11) we have
\[\tilde{H}=\int dx\left[\left(\phi_{2}^{\prime}-\frac{1+\theta^{2}}{2\theta}\, \Omega_{1}\right)^{2}+\phi_{1}^{\prime}\phi_{1}^{\prime}\right]\,. \tag{70}\]
By construction, the GU Hamiltonian, \(\tilde{H}\), satisfies the condition \(\{\tilde{H},\Omega_{2}\}=0\).
As was done in case A, given the gauge invariant Hamiltonian, we can derive the phase space partition function
\[Z=\int{\cal D}\phi_{1}{\cal D}\pi_{1}{\cal D}\phi_{2}{\cal D}\pi_{2}\,\delta( \Omega_{2})\delta(\Gamma_{2})\,det|\{\Omega_{2},\Gamma_{2}\}|e^{iS}\,, \tag{71}\]
where
\[S=\int d^{2}x\left(\pi_{1}\dot{\phi}_{1}+\pi_{2}\dot{\phi}_{2}-\tilde{H} \right)\,, \tag{72}\]
with the Hamiltonian density \(\tilde{H}\) given by Eq. (70). The gauge fixing condition \(\Gamma_{2}\) is chosen so that the determinant appearing in the functional measure is nonvanishing. In both cases A and B, we were able to obtain a gauge-invariant version for the NCCB model.
## 5 Conclusions
In this work, we have converted the NCCB model into a first-class constrained system using a modified GU formalism, whose convenience lies in the freedom of choice of the gauge symmetry generator and in the redefinition of the phase space itself without using any extra variables. One of the constraints becomes the generator of gauge symmetries and the other one is discarded. This ambiguity allowed us to obtain two gauge-invariant systems consistent with the original second-class one, which can be recovered in a straightforward manner by setting the discarded constraint equal to zero. In case A, the constraint \(\Omega_{1}\), Eq. (37), was chosen to be the generator of gauge symmetries, while in case B the constraint \(\Omega_{2}\), Eq. (38), was selected instead. We note that the canonical structure acquired from the modified GU formalism, in both cases, is similar to that obtained from other approaches. It is worth mentioning the non-local form derived for the fields \(\tilde{\phi}_{1}\), in case A, and \(\tilde{\phi}_{2}\), in case B, characterized by the presence of the antisymmetric step function \(\epsilon(x-y)\) in their expressions, since non-locality can also be generated by noncommutative field theories. Here, we can state that the GU variables dictate the rules for obtaining a gauge theory from a second-class constrained system. Thus, as has become clear throughout this work, once the GU variables are computed, the corresponding gauge theory is obtained consistently and in a very simple way.
## Acknowledgments
The authors sincerely thank Nikoofard Vahid for useful comments. RT kindly thanks the CERN Theoretical Physics Department (CERN-TH) for hospitality in an insightful research environment. CAPES (Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior) and FAPEMIG (Fundacao de Amparo a Pesquisa do Estado de Minas Gerais) are acknowledged for financial support. Jorge Ananias Neto thanks CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico), the Brazilian federal agency for scientific support, for partial financial support, CNPq-PQ, Grant number 307153/2020-7.
|
2303.05719 | Boosting Adversarial Attacks by Leveraging Decision Boundary Information | Due to the gap between a substitute model and a victim model, the
gradient-based noise generated from a substitute model may have low
transferability for a victim model since their gradients are different.
Inspired by the fact that the decision boundaries of different models do not
differ much, we conduct experiments and discover that the gradients of
different models are more similar on the decision boundary than in the original
position. Moreover, since the decision boundary in the vicinity of an input
image is flat along most directions, we conjecture that the boundary gradients
can help find an effective direction to cross the decision boundary of the
victim models. Based on it, we propose a Boundary Fitting Attack to improve
transferability. Specifically, we introduce a method to obtain a set of
boundary points and leverage the gradient information of these points to update
the adversarial examples. Notably, our method can be combined with existing
gradient-based methods. Extensive experiments prove the effectiveness of our
method, i.e., improving the success rate by 5.6% against normally trained CNNs
and 14.9% against defense CNNs on average compared to state-of-the-art
transfer-based attacks. Further, we compare transformers with CNNs; the results
indicate that transformers are more robust than CNNs. However, our method still
outperforms existing methods when attacking transformers. Specifically, when
using CNNs as substitute models, our method obtains an average attack success
rate of 58.2%, which is 10.8% higher than other state-of-the-art transfer-based
attacks. | Boheng Zeng, LianLi Gao, QiLong Zhang, ChaoQun Li, JingKuan Song, ShuaiQi Jing | 2023-03-10T05:54:11Z | http://arxiv.org/abs/2303.05719v1 | # Boosting Adversarial Attacks by Leveraging Decision Boundary Information
###### Abstract
Due to the gap between a substitute model and a victim model, the gradient-based noise generated from a substitute model may have low transferability for a victim model since their gradients are different. Inspired by the fact that the decision boundaries of different models do not differ much, we conduct experiments and discover that the gradients of different models are more similar on the decision boundary than at the original position. Moreover, since the decision boundary in the vicinity of an input image is flat along most directions, we conjecture that the boundary gradients can help find an effective direction to cross the decision boundary of the victim models. Based on this, we propose a Boundary Fitting Attack to improve transferability. Specifically, we introduce a method to obtain a set of boundary points and leverage the gradient information of these points to update the adversarial examples. Notably, our method can be combined with existing gradient-based methods. Extensive experiments prove the effectiveness of our method, _i.e._, improving the success rate by 5.6% against normally trained CNNs and 14.9% against defense CNNs on average compared to state-of-the-art transfer-based attacks. Further, we compare transformers with CNNs; the results indicate that transformers are more robust than CNNs. However, our method still outperforms existing methods when attacking transformers. Specifically, when using CNNs as substitute models, our method obtains an average attack success rate of 58.2%, which is 10.8% higher than other state-of-the-art transfer-based attacks.
Adversarial Attack, Transferability, Decision Boundary.
## I Introduction
Even though convolutional neural networks (CNNs) have achieved significant success, Szegedy _et al._[1] find that adding human-imperceptible noises to clean images can easily fool state-of-the-art CNNs. Worse still, the malicious adversarial examples can be applied to cause social harm in real-world applications like face recognition [2, 3, 4] and self-driving cars [5], which pose a severe threat to CNNs. To further explore the vulnerability of CNNs, various works [6, 7, 8, 9, 10] pay attention to the generation of adversarial examples.
Generally, adversarial attack settings can be broadly divided into white-box and black-box. In the white-box setting [11, 12, 13], an attacker has full knowledge of model architecture and parameters, thus having a high attack success rate. However, white-box attacks are impracticable for real-world applications since it is almost impossible for an attacker to obtain such information from the victim model. To tackle this issue, most of the black-box methods resort to the cross-model transferability [14] to perform attacks, _i.e._, adversarial example crafted by a substitute model may fool other models.
Nevertheless, the gap (_e.g._, loss surface) between models is usually large, which limits the transferability of adversarial examples. To narrow the gap, various methods [15, 16, 17, 18, 19, 20] are proposed recently. For instance, Gao _et al._[17] propose a patch-wise iterative algorithm to update adversarial examples. Xie _et al._[16] apply random sizing and padding to transform images. Wang _et al._[20] propose a method that admixes the input image with a set of images randomly sampled from other categories as input, where the mixed images retain the original label. However, these methods may not find effective data points for calculating gradients, thus limiting the transferability of adversarial examples.
However, CNNs share many commonalities, e.g., similar decision boundaries, which enable transferability. Tramer _et al._[21] find that adversarial examples span a contiguous subspace of large dimensionality and that different models share a significant fraction of this adversarial subspace, thus enabling transferability. Therefore, finding a highly overlapped adversarial subspace can boost attacks. Moreover, based on the observation [21] that the decision boundaries of different models are actually close, we further demonstrate that different models have more similar gradients on the decision boundary than at the original position. Besides, as prior work [22] shows that the decision boundary tends to be flat, we conjecture that leveraging the decision boundary gradient of the substitute model can find an effective direction across the
Fig. 1: Adversarial examples crafted via Inc-v3 by our BF-FGSM transfer towards victim model Res-152. The bottom caption of each image is its predicted label and corresponding confidence.
decision boundary of different models.
Based on the analysis above, we propose a Boundary Fitting Attack to enhance the transferability of adversarial examples. The details are shown in Algorithm 1. In conjunction with Figure 2, we introduce our method. First, we move the input image \(\mathbf{x}\) along random directions to obtain a set of boundary points. Then we average the gradients of these boundary points to get the update direction, _i.e._, the red line. Compared with the purple line, which denotes the original gradient direction of \(\mathbf{x}\), our derived direction crosses the decision boundary of the victim model at a closer distance (the experimental results shown in Table II also support this). Further, we demonstrate theoretically and experimentally the effectiveness of boundary gradient averaging and report the time consumption of this operation.
To reflect the effect of the boundary distance on transferability, we compare the proposed Boundary Fitting Attack with state-of-the-art attack methods on the ImageNet dataset. Notably, our attack achieves the highest average attack success rate against both normally trained and defense CNNs. In addition, our attack also achieves the best performance on transformers, whether transferring from CNNs to transformers (CNNs2trans), from transformers to transformers (trans2trans), or from transformers to CNNs (trans2CNNs). The experiments also show that the average attack success rate of CNNs2trans is lower than that of trans2CNNs, which indicates that transformers are, to a certain extent, more robust than CNNs. To sum up, our contributions are as follows:
* We analyze the decision boundaries of different models and find that the gradients at the boundary points are more similar than the original gradients between models. These gradients on the decision boundary can potentially form a transferable perturbation noise. Based on the analysis, we propose a boundary fitting attack, which averages the gradients of a set of boundary points to generate a direction that can effectively cross the decision boundaries of victim models.
* We propose a concept: decision boundary distance. We conduct experiments and find that robustness is positively correlated with the decision boundary distance along natural directions, which also explains why transformers are more robust than CNNs.
* Extensive experiments on the ImageNet dataset show that our proposed method significantly improves the success rates by 5.6% and 14.9% on normally trained and defense CNN models, respectively. We also compare transformers with CNNs; the results indicate that transformers are more robust than CNNs. Moreover, our method performs even better on transformers, outperforming other state-of-the-art transfer methods by 10.8%, 11.1% and 13.6% on CNNs2trans, trans2trans and trans2CNNs, respectively.
## II Related Work
### _Adversarial Attacks_
Since Goodfellow _et al._[6] proposed Fast Gradient Sign Method (FGSM) to craft adversarial examples, various works proposed novel methods based on it to improve transferability. Therefore, we follow this family to boost adversarial attacks. Dong _et al._[15] introduce the momentum term to stabilize update directions and avoid adversarial examples falling into local optimum. Xie _et al._[16] propose to improve the transferability of adversarial examples by applying diverse input transformation. Dong _et al._[23] shift the input to create diverse translated images and approximately estimate the overall gradient to mitigate the problem of over-reliance on the substitute model. Gao _et al._[17] propose patch-wise perturbations to better cover the discriminate region of images. Lin _et al._[18] discover the scale-invariant property of DNNs
Fig. 2: Overview of our Boundary Fitting Method. Background is contour of cross entropy loss. The solid black line is the decision boundary, which separate ‘gazelle’ and other categories, for the substitute model Inc-v3 while the dashed black line is of the victim model Inc-v4. We sample a batch of boundary points, _i.e._\(\mathcal{B}_{1}(x)\) and \(\mathcal{B}_{2}(x)\) (in fact, we sampled 20 boundary points in our experiment as default), and average their gradients. The generated direction by our method (red line) is easier to fool Inc-v4 than the direction of the original gradient of \(\mathbf{x}\) (purple line), as the left-hand plot shows that our generated direction has a shorter distance to the decision boundary.
and thus average the gradients of different scaled images to update adversarial examples. Wang _et al._[24] stabilize the update direction by fine-tuning the current gradient based on the gradient variance from the last iteration. Wang _et al._[19] disrupt important features which dominate model predictions and promote trivial features to boost attacks. Wang _et al._[20] average the gradients of a set of admixed images (_i.e._, an input image admixed with a small portion of other images) to boost attack.
### _Decision Boundary of DNN models_
Recently, several works have analyzed the decision boundary of DNNs. Fawzi _et al._[25] analyze the robustness of DNNs from the perspective of the curvature of the decision boundary, and suggest that DNNs are robust to random noise when the decision boundary has a small curvature. Tramer _et al._[21] propose novel methods for estimating the space of adversarial examples. They find that the decision boundaries of different models are actually close in arbitrary directions, and that the adversarial subspace shares a large fraction between different models, thus enabling transferability. Fawzi _et al._[22] experimentally found that the classification regions are connected and the decision boundary is flat in most directions, which makes it possible to obtain boundary points. Khoury _et al._[26] highlight the importance of codimension: an input image can be represented as a low-dimensional data manifold in a high-dimensional space, and there are many directions off the manifold that construct adversarial examples. Furthermore, they propose a gradient-free geometric attack to manifest the importance of the decision boundaries. Maho _et al._[27] proposed a query-based attack, SurFree, which focuses on the geometrical properties of the decision boundary and significantly reduces the query budget. However, recent works focus almost exclusively on the decision boundary of a single model; the decision boundary relationships between models have rarely been studied. In this paper, we focus on using the decision boundary of the substitute model to fit other models and thus improve the transferability of adversarial examples.
### _Adversarial Defenses_
As a counterpart of adversarial attacks, adversarial defenses aim to mitigate the threat of adversarial examples. To that end, various adversarial defense methods have been proposed in recent years. Tramer _et al._[28] introduce an ensemble adversarial training method that augments training data with adversarial examples crafted by other models. Xie _et al._[29] apply random resizing and padding (RP) to the inputs at inference time to mitigate adversarial effects. Liao _et al._[30] propose high-level representation guided denoiser (HGD) to suppress the influence of adversarial perturbation. Xie _et al._[31] develop an end-to-end trained network with the aim of denoising the intermediate features to significantly improve the robustness against adversarial examples. Jia _et al._[32] leverage randomized smoothing (RS) with Gaussian noise to enhance model robustness. Naseer _et al._[33] design a Neural Representation Purifier (NRP) model that learns to clean adversarial perturbed images based on a self-supervised adversarial training mechanism. Wang _et al._[34] build a more robust classification system that can be viewed as a structural black box. After adding a buffer to the classification system, attackers can be efficiently deceived.
## III Methodology
In this section, we give an introduction to our motivation and the implementation of our approach. We first give a brief definition of adversarial attacks and the decision boundary in Sec. III-A. In Sec. III-B, we introduce our motivation and propose a conjecture. Based on it, we finally provide the detailed algorithm of our method in Sec. III-C.
### _Preliminaries_
Formally, let \(\mathbf{x}\) denote an image, and \(y\) denote the corresponding ground-truth label. We use \(f_{\theta}:\mathbf{x}\to y\) denote a classification model, where \(\theta\) indicates the parameters of the model. The goal of the adversarial attacks is to seek an adversarial example \(\mathbf{x^{\prime}}\) within a \(l_{p}\)-ball around the original image to mislead the classifier, _i.e._, \(f_{\theta}(\mathbf{x^{\prime}})\neq y\) (\(a.k.a.\) non-targeted attack). Following prior works [6, 15, 19], we use \(l_{\infty}\)-norm to constrain the size of perturbation, _i.e._, \(||\mathbf{x^{\prime}}-\mathbf{x}||_{\infty}\leq\epsilon\) and \(\epsilon\) is the maximum perturbation.
For a classifier with \(k\) classification regions \(\mathcal{R}\), _e.g._, \(\mathcal{R}_{i}\) corresponds to the classification region of class \(i\), the decision boundary \(\mathcal{B}_{i,j}\) separates \(\mathcal{R}_{i}\) and \(\mathcal{R}_{j}\), which can be expressed as follows:
\[\mathcal{B}_{i,j}=\{f_{i}(\mathbf{x})-f_{j}(\mathbf{x})=0\}. \tag{1}\]
In this paper, we focus on non-targeted attacks. Therefore, we are only concerned with the decision boundary \(\mathcal{B}_{i}\) which can be expressed as
\[\mathcal{B}_{i}=\mathop{\cup}\limits_{j=1}^{k}\mathcal{B}_{i,j},\quad j\neq i. \tag{2}\]
### _Motivation_
Motivated by prior work [21], which shows that the decision boundaries of different models are actually close, we wonder whether the gradients at the decision boundary points of different models are more similar than the gradients at the original position. To check this, we evaluate the gradient cosine similarity of several white-box\(\rightarrow\)black-box pairs at different data points.
As demonstrated in Table I, boundary gradients are less diverse, and there is more commonality between
models. Inspired by the above observation and the discovery [22] that the decision boundary tends to be flat, we have the following conjecture: _Leveraging the decision boundary gradient of the substitute model can generate a more effective direction to cross the decision boundaries of victim models._
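To make this measurement concrete, a minimal PyTorch sketch of the gradient cosine similarity between a substitute and a victim model is given below; the data point can be an original image or a boundary point, and the batch handling and model loading are assumptions rather than the paper's exact evaluation code.

```python
# Minimal sketch of the gradient-similarity measurement behind Table I.
# Models, weights and preprocessing are illustrative assumptions.
import torch
import torch.nn.functional as F

def input_gradient(model, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input batch x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    return torch.autograd.grad(loss, x)[0]

def gradient_cosine_similarity(substitute, victim, x, y):
    """Per-image cosine similarity between the two models' input gradients."""
    g_sub = input_gradient(substitute, x, y).flatten(start_dim=1)
    g_vic = input_gradient(victim, x, y).flatten(start_dim=1)
    return F.cosine_similarity(g_sub, g_vic, dim=1)
```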
### _Boundary Fitting Attack_
Based on the above conjecture, we propose a Boundary Fitting Attack that leverages the boundary gradients of the substitute model to enhance the transferability of the resulting adversarial examples, as detailed in Algorithm 1. Specifically, we first introduce a random transformation \(\mathcal{B}_{i}(\cdot)\) which moves an image \(\mathbf{x}:f(\mathbf{x})=y\) to its decision boundary \(\mathcal{B}_{f(x)}\):
\[\mathcal{B}_{i}(\mathbf{x})=\mathbf{x}+\gamma^{t}\cdot\mathbf{d_{i}},\quad\text{s.t. }f(\mathcal{B}_{i}(\mathbf{x}))=f(\mathbf{x}), \tag{3}\]
here we randomly choose a direction \(d_{i}\), sampled from a Gaussian distribution \(\mathcal{N}(0,\sigma^{2}I)\), to keep diversity. We set \(\sigma\) to a large value so that \(\mathbf{x}+\mathbf{d_{i}}\) is capable of moving out of the classification region of \(f(\mathbf{x})\). After that, we repeatedly multiply by a shrinkage factor \(\gamma\) until the point goes back to the source classification region.
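A minimal sketch of this boundary search, for a single image in PyTorch, could look as follows; the defaults follow the paper's settings (\(\sigma=20\), \(\gamma=0.6\), at most \(t=5\) shrinkage steps), while the batch-of-one convention and the fallback when no boundary point is found are our assumptions.

```python
# Sketch of the boundary-point transformation B_i(x) in Eq. (3),
# for a single image x of shape (1, C, H, W) with integer label y.
import torch

@torch.no_grad()
def boundary_point(model, x, y, sigma=20.0, gamma=0.6, max_steps=5):
    d = sigma * torch.randn_like(x)          # random Gaussian direction
    for t in range(1, max_steps + 1):
        candidate = x + (gamma ** t) * d     # shrink the step by gamma^t
        if model(candidate).argmax(dim=1).item() == y:
            return candidate                 # back inside the source region
    return x                                 # fallback: keep the input itself
```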
Since Table I demonstrates that averaging the gradients of multiple boundary points can better fit the black-box model, we obtain the update direction by a set of boundary points \(\{\mathcal{B}_{1},...,\mathcal{B}_{n}\}\) (_i.e._, apply \(\mathcal{B}_{i}(\cdot)\) multiple times) with the aim of enhancing the transferability of adversarial examples. Formally, it can be expressed as:
\[\mathcal{G}=\frac{1}{N}\sum_{i=1}^{N}\nabla_{\mathcal{B}_{i}(\mathbf{x})}J( \mathcal{B}_{i}(\mathbf{x}),y;\phi). \tag{4}\]
In combination with I-FGSM, the update function of our proposed Boundary Fitting Fast Gradient Sign Method (BF-FGSM) can be written as:
\[\mathbf{x}_{t+1}^{\prime}=\mathrm{clip}_{\mathbf{x}_{t}\in\mathcal{X}}\{\mathbf{x}_{t}^{\prime}+\alpha\cdot\mathrm{sign}(\mathcal{G})\}, \tag{5}\]
where \(\mathrm{clip}_{\mathbf{x}_{t}}(\cdot)\) denotes an element-wise clipping operation to ensure \(\mathbf{x}^{\prime}\in[\mathbf{x}-\epsilon,\mathbf{x}+\epsilon]\), and \(\alpha\) is the step size. The adversarial examples are shown in Figure 1. Compared with I-FGSM, our method yields more transferable adversarial examples, which can fool black-box models with high confidence.
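Putting Eqs. (3)-(5) together, the full iterative loop might be sketched as follows; it reuses the `boundary_point` routine above, and the single-image convention and pixel scale (\(\epsilon=16\) on \([0,255]\) images) are assumptions rather than the authors' exact implementation.

```python
# Sketch of BF-FGSM (Eqs. (4)-(5)) for a single image x with integer label y.
# eps and alpha are in the same pixel scale as x (e.g., eps=16 for [0, 255]).
import torch
import torch.nn.functional as F

def bf_fgsm(model, x, y, eps=16.0, iters=10, n_points=20):
    alpha = eps / iters
    x_adv = x.clone().detach()
    y_t = torch.tensor([y], device=x.device)
    for _ in range(iters):
        grad_sum = torch.zeros_like(x)
        for _ in range(n_points):
            b = boundary_point(model, x_adv, y).clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(b), y_t)
            grad_sum += torch.autograd.grad(loss, b)[0]
        g = grad_sum / n_points                          # Eq. (4)
        x_adv = x_adv + alpha * g.sign()                 # Eq. (5)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).detach()
    return x_adv
```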
### _Boundary Analysis_
In Section III-B, we demonstrated that the gradients of different models are closer on the decision boundary. Based on this observation, we craft adversarial examples using boundary gradients with the aim of narrowing the gap between the substitute model and the victim model. Considering that the perturbations are added to the original input, it is interesting to see whether this strategy is effective in moving adversarial examples out of the boundary of the victim model. Therefore, we compared the \(\mathcal{L}_{\infty}\)-distance to the decision boundary (decision boundary distance) along the direction of the sign gradient generated by different approaches to further support the rationality of our method design.
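One simple way to estimate this decision boundary distance is a bisection search along the chosen direction; since the direction is a sign vector, the \(\mathcal{L}_{\infty}\) norm of a step \(t\cdot d\) is just \(t\). The search range and tolerance below are illustrative assumptions.

```python
# Sketch: L_inf distance to a model's decision boundary along a direction.
import torch

@torch.no_grad()
def boundary_distance(model, x, y, direction, t_max=64.0, tol=0.5):
    d = direction.sign()                     # unit L_inf step
    lo, hi = 0.0, t_max
    if model(x + hi * d).argmax(dim=1).item() == y:
        return float("inf")                  # boundary not crossed within t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model(x + mid * d).argmax(dim=1).item() == y:
            lo = mid                         # still in the source region
        else:
            hi = mid                         # already crossed the boundary
    return hi
```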
The results are shown in Table II. Remarkably, our BF-FGSM can effectively reduce the decision boundary distance of the victim model compared to state-of-the-art attacks. For example, in the Inc-v3\(\rightarrow\)IncRes-v2 case, the average decision boundary distance along the direction of the sign gradient derived from our BF-FGSM is only 19.23, while existing input transformation methods (_e.g._, Admix) usually have much longer distances. These results again demonstrate that taking boundary gradients into account is helpful for boosting adversarial attacks.
In addition, we observe that model robustness is positively correlated with the decision boundary distance. Therefore, we conjecture that the decision boundary distance of transformers along arbitrary directions is larger than that of CNNs, which makes transformers more robust than CNNs. To verify this conjecture, we compare the average decision boundary distance of transformers and CNNs along natural directions (sampled from a Gaussian distribution) and adversarial directions (the gradient of the source image). The results are shown in Table V: the average decision boundary distance of transformers is larger than that of CNNs along natural directions, while it is similar along adversarial directions (_i.e._, a one-iteration white-box attack). A larger decision boundary distance means a smaller adversarial subspace that intersects with other models; thus, adversarial examples are harder to transfer to transformers compared to CNNs, so transformers exhibit stronger robustness than CNNs. Therefore we claim that _model robustness is positively correlated with the average decision boundary distance along natural directions._
## IV Experiment
In this section, we display the experimental results to demonstrate the effectiveness of our proposed method. In Sec. IV-A, we first define the experimental setup. Then we conduct experiments to verify that the proposed method is effective for both normally trained models and defense models in Sec. IV-B and Sec. IV-C. In Sec. IV-D, we further conduct experiments on transformers and compare them with CNNs. Finally, we conduct a series of ablation experiments to study the impact of different parameters in Sec. IV-E.
### _Experiment Setup_
**Dataset.** Following prior works [15, 16, 20], we use the ImageNet-compatible dataset1 comprised of 1000 images to conduct experiments.
Footnote 1: [https://github.com/tensorflow/cleverhans/tree/master/examples/nips17_adversarial_competition/dataset](https://github.com/tensorflow/cleverhans/tree/master/examples/nips17_adversarial_competition/dataset)
**Models.** For evaluation of transferability, we adopt six normally trained models: Inception-v3 (Inc-v3) [35], Inception-v4 (Inc-v4), Inception-Resnet-v2 (IncRes-v2) [36], Resnet-v2-50 (Res-50), Resnet-v2-101 (Res-101), Resnet-v2-152 (Res-152) [37] and eleven defense models: Inc-v3\({}_{ens3}\), Inc-v3\({}_{ens4}\), IncRes-v2\({}_{ens4}\), IncRes-v2\({}_{ens}\)[28], HGD [30], R&P [29], BS [23], NRP [33], NIPS-r3\({}^{2}\), ResNeXt\({}_{DA}\), Res152\({}_{B}\), Res152\({}_{D}\)[31]. For evaluating transformers, we adopt four models: Vision Transformer
(ViT) [38], Data-efficient Image Transformers (DeiT) [39], Transformer in Transformer (TNT) [40] and Swin Transformer (Swin) [41].
**Competitor.** To manifest the effectiveness of our proposed approach, we compare it with state-of-the-art attacks including MI-FGSM [15], DI-FGSM [16], PI-FGSM [17], SI-FGSM [18], VT-FGSM [24], FI-FGSM [19] and Admix [20]. We also compare the combined versions of these attacks, _e.g._, DIM (_i.e._, the combined version of MI-FGSM and DI-FGSM) and TI-DIM.
**Parameter Settings.** In all experiments, the maximum perturbation \(\epsilon=16\), the number of iterations \(T=10\), and the step size \(\alpha=\epsilon/T=1.6\). For MI-FGSM, we set the decay factor \(\mu\) = 1.0. For DI-FGSM, we set the transformation probability \(p\) = 0.5. For TI-FGSM, we set the kernel length \(k\) = 7. For PI-FGSM, we set the amplification factor \(\beta\) = 10, the projection factor \(\gamma\) = 16, and the kernel length \(k_{w}\) = 3 for normally trained models and \(k_{w}\) = 7 for defense models. For SI-FGSM, we set the number of copies \(m\) = 5. For VT-FGSM, we set the hyper-parameter \(\beta\) = 1.5 and the number of sampled examples \(N=20\). For FI-FGSM, the drop probability \(p_{d}\) = 0.3 for normally trained models and \(p_{d}\) = 0.1 for defense models, and the ensemble number \(N=30\). For Admix, we set the sample number \(m_{2}\) = 3 and the admix ratio \(\eta\) = 0.2. For our proposed BF-FGSM, we set the maximum number of moves \(t=5\) to guarantee efficiency, the shrinkage factor \(\gamma\) = 0.6, the initial direction standard deviation \(\sigma=20\), and the number of sampled boundary points \(N=20\). Note that the parameter settings for the combined versions are the same.
### _Attack Normally trained CNN Models_
In this section, we investigate the vulnerability of normally trained models. We first compare MI-FGSM, DI-FGSM, PI-FGSM with our BF-FGSM to verify the effectiveness of our method, and the results are shown in Table III. Notably, our proposed BF-FGSM consistently surpasses all well-known
methods in the black-box setting. For example, when attacking Inc-v4, BF-FGSM can outperform MI-FGSM, DI-FGSM and PI-FGSM by 7.9%, 17.1% and 12.6% on average, respectively.
Furthermore, we compare with three other state-of-the-art attacks that are equipped with the momentum term [15], _i.e._, SI-MI-FGSM, VT-MI-FGSM and FI-MI-FGSM. From Table III, we observe that the momentum term can significantly boost our method. Specifically, it raises the average success rate from 68.3% to 89.0%. Remarkably, our BF-MI-FGSM can obtain an average success rate of 92.8% when the substitute model is Res-152, which outperforms SI-MI-FGSM, VT-MI-FGSM and FI-MI-FGSM by 10.0%, 10.2% and 3.9%, respectively. These results also convincingly demonstrate the superiority of our approach.
### _Attack Defense CNN Models_
Although many adversarial attack methods can successfully fool normally trained models, they usually fail at attacking models with defense mechanisms.
To further validate the superiority of our method, we conduct a series of experiments against defense models. Specifically, we compare our BF-TI-DIM and BF-SI-TI-DIM with TI-DIM, PI-TI-DIM, SI-TI-DIM, FI-TI-DIM and Admix-TI-DIM. Note that Admix is equipped with SI-FGSM by default.
**Single-Model Attacks.** We first craft adversarial examples via a single model, and the results are shown in Table IV. Due to space limitations, here we only report the results of attacking Inc-v3; other results can be found in Supplementary Sec. A. From Table IV, we observe that the transferability of adversarial examples crafted by our methods is far superior to that of existing state-of-the-art approaches. In particular, when transferring adversarial examples to NRP, SI-TI-DIM obtains the highest attack success rate (40.1%) among the other methods, while our BF-TI-DIM can achieve a success rate of 69.9%.
In addition, our BF-TI-DIM can be further enhanced when combined with the SI-FGSM. Compared with Admix-TI-DIM, whose average success rate is 47.9%, our BF-SI-TI-DIM can significantly enhance the transferability up to 62.8%.
These results also demonstrate that exploring the update direction at the decision boundary can help the adversarial example find an effective path to bypass the defense mechanisms.
**Ensemble-Model Attacks.** To further improve the transferability, we craft adversarial examples via an ensemble of models. Specifically, we adopt the strategy proposed in [15], which fuses the logit activations of different models. As demonstrated in Table VI, our adversarial examples crafted via an ensemble of Inc-v3, Inc-v4, IncRes-v2 and Res-152 can evade most defense models with a high success rate. Notably, 95.0% of the adversarial examples crafted by our BF-TI-DIM can fool Inc-v3\({}_{ens3}\). On average, our proposed attack can fool state-of-the-art defense models at a 71.5% success rate, which is 18.5%,
16.8%, 4.6%, 11.1%, 4.1% and 13.4% higher than TI-DIM, PI-TI-DIM, SI-TI-DIM, VT-TI-DIM, Admix-TI-DIM and FI-TI-DIM, respectively. Admittedly, feature denoising defenses [31] are very robust to current attacks. Nonetheless, our method still outperforms state-of-the-art PI-TI-DIM and Admix-TI-DIM by 5.8% and 5.2%. These experimental results also show that most current defenses are still vulnerable to malicious adversarial examples.
### _Attack Transformers_
Most current adversarial attack methods only attack CNN models; however, transformers are now known to have higher accuracy and robustness. Therefore, we compare our algorithm with recent state-of-the-art attacks on transformers to disclose the vulnerability of transformers and show the effectiveness of our algorithm. The experiments have three parts: CNNs to transformers, transformers to transformers, and transformers to CNNs. For CNNs, we choose Inc-v3, Inc-v4, IncRes-v2, Res-50, Res-101 and Res-152. For transformers, we choose several versions of ViT, DeiT, TNT and Swin (_i.e._, ViT-S, ViT-B, ViT-L, DeiT-T, DeiT-S, DeiT-B, TNT-S, Swin-T, Swin-S, Swin-B).
**CNNs to Transformers.** We first craft adversarial examples via CNN models and leverage them to attack transformers. The results are shown in Table VII. Compared with Table III, when using our BF-MI-FGSM, the average success rate from Inc-v3 to transformers is 32.7% lower than that from Inc-v3 to CNNs. The results reveal that transformers are more robust than CNNs, as demonstrated in [42]. However, our method still achieves a considerable success rate and outperforms other attacks by a large margin. Specifically, our BF-MI-FGSM fools transformers at a 58.8% success rate on average when the substitute model is Inc-v4, which is 30.4%, 25.3%, 25.5%, 14.7%, 17.4% and 9.5% higher than MI-FGSM, DI-MI-FGSM, TI-MI-FGSM, SI-MI-FGSM, VMI and Admix, respectively. Combined with SIM, our BF-SIM achieves an even higher success rate of 67.9% when the substitute model is Inc-v4.
**Transformers to Transformers.** To further demonstrate our method, we also craft adversarial examples via transformers and leverage them to attack transformers. The results are shown in Table VIII. Compared to Table VII, the attack success rate improves by a large margin, which shows that transformers can generate more transferable adversarial examples. In this case, our method achieves a higher attack success rate compared with other state-of-the-art methods. Our BF-MI-FGSM obtains an 88.7% average attack success rate, which outperforms MI-FGSM, DI-MI-FGSM, TI-MI-FGSM, SI-MI-FGSM, VMI and Admix by 22.6%, 33.2%, 24.4%, 12.3%, 9.8% and 20.4%, respectively.
**Transformers to CNNs.** In this section, we craft adversarial examples via transformers and leverage them to attack CNNs. The results are shown in Table IX, which shows that adversarial examples crafted by our method are more transferable than those crafted by other methods. Notably, our BF-MI-FGSM obtains a 78.1% average attack success rate, which outperforms other state-of-the-art methods by 13.6%. When combined with SIM, our BF-SIM achieves an 82.6% average attack success rate.
### _Ablation Study_
In this section, we conduct ablation experiments to analyze the impact of our parameters on transferability.
**Effect of shrinkage factor \(\gamma\).** To guarantee the efficiency of generating adversarial examples, we fix the maximum move steps \(t\) to 5. In this case, a small value of \(\gamma\) may pull
Fig. 4: The attack success rates (%) of adversarial examples crafted by BF-MI-FGSM w.r.t. sampled number \(N\). The substitute model is Inc-v3. **Left**: The transferability towards normally trained models. **Right**: The transferability towards defense models.
Fig. 3: Average attack success rate (%) of adversarial examples crafted by BF-MI-FGSM w.r.t. shrinkage factor \(\gamma\). The substitute model is Inc-v3. **Left**: The transferability towards normally trained models. **Right**: The transferability towards defense models.
the obtained data points too close to the inputs; conversely, a large value of \(\gamma\) may cause the data points to be far away from the decision boundary. Therefore, it is necessary to select a suitable shrinkage factor \(\gamma\) for the boundary search. As shown in Figure 3, the average attack success rate rises as \(\gamma\) increases and then drops after \(\gamma\) exceeds 0.6. Accordingly, we set the shrinkage factor \(\gamma\) to 0.6 in all experiments.
**Effect of sampled number \(N\).** In this section, we investigate the effect of the number of sampled boundary points for our BF-MI-FGSM, and adversarial examples are crafted via Inc-v3 with \(\gamma=0.6\). As illustrated in Figure 4, when transferring to the normally trained model, the increment in transferability is very limited after \(N\) exceeds 20. In contrast, for the defense model, the transferability of the adversarial examples continues to be enhanced even after \(N\) exceeds 20. This suggests that conducting extensive exploration on the decision boundary of a normally trained model can simulate defense models to some extent. However, the computational cost is proportional to \(N\). Therefore, we set \(N\) = 20 in our paper for a trade-off.
Fig. 5: Visualization of randomly selected benign images and corresponding adversarial examples.
Fig. 6: Visualization for attention shift. We use Grad-CAM [43] to visualize attention maps of clean (1st row) and adversarial images (2nd row). Adversarial examples are crafted via Inc-v3 by our BF-FGSM. The comparison demonstrates that our method is capable of shifting model’s attention on images.
## V Visualization
### _Visualization of Adversarial Examples_
We randomly select twelve benign images and their corresponding adversarial examples crafted by our BF-FGSM in Figure 5. It can be observed that these crafted adversarial examples are human-imperceptible.
### _Visualization of Attention shift_
In this section, we investigate the effectiveness of our attack from a perspective of attention shift on adversarial examples. As illustrated in Figure 6, our proposed method effectively narrows the original attention region and enhances the irrelevant region. Consequently, the victim model will capture other irrelevant features, thus leading to misclassification.
## VI Conclusion
This paper gives a new insight into the boundary gradients of different models and shows that they are more similar than the gradients at the original position. Furthermore, as the decision boundary tends to be flat, we conjecture that the input image reaches the decision boundary at a shorter distance along the direction of the boundary gradients. Based on this, we propose a Boundary Fitting Attack, which enhances the transferability of adversarial examples by leveraging the decision boundary information of the substitute model. We also introduce a concept, the decision boundary distance, and propose that model robustness is positively related to the decision boundary distance along natural directions. Extensive experiments demonstrate the effectiveness of our method and support the rationality of our method design. Moreover, we also find that transformers are more robust than CNNs and that our method is able to craft more transferable adversarial examples on transformers than existing methods. Finally, our method achieves the highest average attack success rate on both CNNs and transformers.
|
2306.09925 | Query-Free Evasion Attacks Against Machine Learning-Based Malware
Detectors with Generative Adversarial Networks | Malware detectors based on machine learning (ML) have been shown to be
susceptible to adversarial malware examples. However, current methods to
generate adversarial malware examples still have their limits. They either rely
on detailed model information (gradient-based attacks), or on detailed outputs
of the model - such as class probabilities (score-based attacks), neither of
which are available in real-world scenarios. Alternatively, adversarial
examples might be crafted using only the label assigned by the detector
(label-based attack) to train a substitute network or an agent using
reinforcement learning. Nonetheless, label-based attacks might require querying
a black-box system from a small number to thousands of times, depending on the
approach, which might not be feasible against malware detectors. This work
presents a novel query-free approach to craft adversarial malware examples to
evade ML-based malware detectors. To this end, we have devised a GAN-based
framework to generate adversarial malware examples that look similar to benign
executables in the feature space. To demonstrate the suitability of our
approach we have applied the GAN-based attack to three common types of features
usually employed by static ML-based malware detectors: (1) Byte histogram
features, (2) API-based features, and (3) String-based features. Results show
that our model-agnostic approach performs on par with MalGAN, while generating
more realistic adversarial malware examples without requiring any query to the
malware detectors. Furthermore, we have tested the generated adversarial
examples against state-of-the-art multimodal and deep learning malware
detectors, showing a decrease in detection performance, as well as a decrease
in the average number of detections by the anti-malware engines in VirusTotal. | Daniel Gibert, Jordi Planes, Quan Le, Giulio Zizzo | 2023-06-16T15:48:40Z | http://arxiv.org/abs/2306.09925v1 | Query-Free Evasion Attacks Against Machine Learning-Based Malware Detectors with Generative Adversarial Networks
###### Abstract
Malware detectors based on machine learning (ML) have been shown to be susceptible to adversarial malware examples. However, current methods to generate adversarial malware examples still have their limits. They either rely on detailed model information (gradient-based attacks), or on detailed outputs of the model - such as class probabilities (score-based attacks), neither of which are available in real-world scenarios. Alternatively, adversarial examples might be crafted using only the label assigned by the detector (label-based attack) to train a substitute network or an agent using reinforcement learning. Nonetheless, label-based attacks might require querying a black-box system from a small number to thousands of times, depending on the approach, which might not be feasible against malware detectors.
This work presents a novel query-free approach to craft adversarial malware examples to evade ML-based malware detectors. To this end, we have devised a GAN-based framework to generate adversarial malware examples that look similar to benign executables in the feature space. To demonstrate the suitability of our approach we have applied the GAN-based attack to three common types of features usually employed by static ML-based malware detectors: (1) Byte histogram features, (2) API-based features, and (3) String-based features. Results show that our model-agnostic approach performs on par with MalGAN, while generating more realistic adversarial malware examples without requiring any query to the malware detectors. Furthermore, we have tested the generated adversarial examples against state-of-the-art multimodal and deep learning malware detectors, showing a decrease in detection performance, as well as a decrease in the average number of detections by the anti-malware engines in VirusTotal.
Adversarial Malware Examples, Generative Adversarial Networks, Machine Learning, Malware Detection, Evasion Attack
## 1 Introduction
The rate of cybercrimes is increasing every year, and their cost to the world is estimated to be $8 trillion annually in 2023, representing the greatest transfer of economic wealth in history.1 There are many types of cyberattacks, including Denial-of-Service (DoS) attacks, phishing, malicious software or malware, SQL injection, and zero-day exploits. Among the aforementioned attacks, malware, and more specifically ransomware, has reached epidemic proportions globally, with an estimated cost of $20 billion in 2021.2
Footnote 1: [https://cybersecurityventures.com/cybercrime-to-cost-the-world-8-trillion-annually-in-2023/](https://cybersecurityventures.com/cybercrime-to-cost-the-world-8-trillion-annually-in-2023/)
Footnote 2: [https://cybersecurityventures.com/global-ransomware-damage-costs-predicted-to-reach-20-billion-usd-by-2021/](https://cybersecurityventures.com/global-ransomware-damage-costs-predicted-to-reach-20-billion-usd-by-2021/)
To defend against malware, a layered defense is typically employed, with various layered security elements working in conjunction with each other to keep computer devices safe.
One of the most important components is endpoint protection. Traditionally, endpoint protection has relied on signature-based antivirus solutions to detect malware, consisting of a large database of malicious software signatures and definitions. These solutions detect malware by scanning files and looking for patterns that match the signatures and definitions from the database. As a result, they can only recognize known threats. To mitigate new, unknown threats, endpoint protection solutions started adopting machine learning as it has proven capable of discovering hidden patterns from huge amounts of data without human intervention [7]. Nowadays, most modern anti-virus solutions, also known as Next-Generation Antivirus (NGAV), use a combination of machine learning and behavioral detection so that known and unknown threats can be mitigated and immediately prevented.
### Motivation
Unfortunately, machine learning-based malware detectors can be fooled by evasion attacks, where the goal of the attacker is to modify a given executable in order to evade detection. These carefully crafted executables that evade detection are referred to as adversarial malware examples. Various approaches [1, 3, 10, 11, 6, 12, 19] to generate adversarial malware examples have been presented in the literature. The majority of these attacks rely either on complete knowledge of the model [1, 11, 12], i.e., gradient-based attacks, or on confidence scores such as the probability of the executable being malicious [6, 19], i.e., score-based attacks. However, in a real-world scenario only the decision of the detector is available [3, 10], i.e., whether the executable is malicious. One approach to attack a black-box detector in this setting is to use the labels to train a substitute detector
that emulates the black-box detector and then attack the substitute detector [10]. Another approach in this setting is to train an agent using reinforcement learning to select which set of actions to perform on a Portable Executable (PE) file in order to evade detection [3]. Nonetheless, the aforementioned methods require from a few to an unlimited number of queries to attack the black-box detectors, which might raise suspicion concerning the submitted samples, as submitting a high number of similar queries, or any query, to a cloud security provider might result in a close and thorough inspection of the files. To make things worse, the aforementioned attacks assume detailed knowledge of the model's input features, which, given the secrecy and confidentiality of cybersecurity actors, is not available.
### _Contributions_
Given the aforementioned limitations of evasion attacks against malware detectors, this paper presents a query-free end-to-end evasion attack that generates adversarial malware executables by exploiting the distinct characteristics of benign and malicious executables. The main contributions of this paper are the following:
* We propose a general framework using Generative Adversarial Networks to generate adversarial malware executables. Our Conditional Wasserstein GAN generates malware examples that resemble benign examples in the feature space, thus fooling malware detection systems. The GAN architecture consists of two networks, the generator and the critic, that allow us to automatically generate fake malicious examples that look similar to real benign examples.
* The generalization ability of our approach has been tested on three different type of features commonly employed by ML-based malware detectors to discern between goodware and malware: (1) the executables' byte distribution, (2) the libraries and functions imported, and (3) the strings found in the executables' content. For instance, the GAN-based framework will transform the malware's byte distribution into a more "benign" byte distribution according to the generator's output.
* We show how the attack performed in the feature space can be converted to an end-to-end attack. For example, to modify the executables' byte distribution one can append the corresponding bytes necessary to move from the original to the target byte distribution at the end of the Portable Executable files. This process is known as overlay append.
* We formulate the problem of determining the number of byte values to be appended at the end of executables as an integer linear programming problem with soft constraints on the byte frequencies, so that the size of the resulting adversarial malware executables is minimized and unmanageable growth in file size is avoided (a sketch of this program is given right after this list).
* We demonstrate on a public benchmark, the BODMAS dataset [18], that the proposed model-agnostic attack performs on par with the MalGAN black-box evasion attack [10] without requiring any queries to the target malware detectors in order to craft the adversarial malware executables.
* We further analyze the evasion performance on state-of-the-art malware detectors, showing the transferability of our attacks.
* We upload the generated adversarial malware executables to VirusTotal to demonstrate the suitability of our attack to evade some commercial anti-virus solutions.
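To make the byte-append formulation above concrete, the following sketch solves a continuous relaxation of that integer program with `scipy.optimize.linprog` and rounds the result; the trade-off weight `lam`, the slack formulation, and the rounding step are our assumptions rather than the exact program solved in the paper.

```python
# Sketch: choose appended byte counts a >= 0 so that the padded file's byte
# distribution approaches a target t while minimizing the number of appended
# bytes. Continuous LP relaxation of an ILP with soft constraints.
import numpy as np
from scipy.optimize import linprog

def bytes_to_append(counts, target, lam=10.0):
    c = np.asarray(counts, dtype=float)    # original byte counts, length 256
    t = np.asarray(target, dtype=float)    # target distribution, sums to 1
    n = c.sum()
    I = np.eye(256)
    T = np.tile(t[:, None], (1, 256))      # row i holds t_i in every column
    # Slack s_i bounds |(c_i + a_i) - t_i * (n + sum(a))| on both sides:
    #   a_i - t_i * sum(a) - s_i <= t_i * n - c_i
    #  -a_i + t_i * sum(a) - s_i <= c_i - t_i * n
    A_ub = np.block([[I - T, -I],
                     [T - I, -I]])
    b_ub = np.concatenate([t * n - c, c - t * n])
    obj = np.concatenate([np.ones(256), lam * np.ones(256)])  # sum(a)+lam*sum(s)
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 512, method="highs")
    a = np.maximum(np.round(res.x[:256]), 0).astype(int)
    return a          # append a[i] copies of byte value i to the overlay
```

The selected bytes can then simply be concatenated to the end of the PE file (overlay append), which leaves the executable's functionality untouched.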
## 2 Related Work
Machine learning-based malware detectors have proven to be vulnerable to evasion attacks, adversarial attacks that consist of carefully perturbing the malicious executables at test time to have them misclassified as benign software. Evasion attacks in the literature can be broadly categorized into two groups, depending on the attacker's access to the model: (1) white-box attacks [11, 12, 1], where an attacker has access to detailed information about the model such as the learning algorithm and its parameters, and (2) black-box attacks, where the attacker only has access to the output assigned to a given executable. Moreover, black-box attacks can be further divided into score-based [6, 19] and label-based [3, 10] attacks depending on whether the output of the model is a confidence classification score or a label indicating whether the executable is malicious or benign.
B. Kolosnjaji et al. [11] introduced a gradient-based attack to generate adversarial malware executables by manipulating certain bytes in each executable to maximally increase the probability of the executables being classified as benign. The attack aims at minimizing the confidence associated with the malicious class under the constraint that \(q_{max}\) bytes can be injected using gradient descent. The attack was conceived against MalConv [15], a shallow convolutional neural network trained on raw bytes.
Kreuk et al. [12] and O. Suciu et al. [17] adapted the Fast Gradient Sign Method originally described in Biggio et al. [5] to generate adversarial malware executables. This was done by generating a small-sized adversarial payload and iteratively updating its bytes until the adversarial executables evade being detected by MalConv. Similarly, Al-Dujaili et al. [1], adapted a well-known gradient-based inner maximization methods for continuous feature spaces, the Fast Gradient Sign Method (FGSM), to binary feature spaces. As a result, the adapted version of FGSM could be used to modify a binary indicator feature vector. Each index of the feature vector represents a unique API function, where a "1" indicates the presence of the corresponding API function in the Import Address Table (IAT) of the PE executable.
L. Demetrio et al. [6] proposed a functionality-preserving black-box attack. It injects benign content, extracted from benign software, at the end of a malicious file or within newly-created sections. Afterwards, a genetic algorithm is used to modify the injected bytes until the resulting malicious file evades detection. Similarly, J. Yuste et al. [19] presented a score-based black-box attack based on dynamically introducing unused blocks, or section caves, within malware binaries. Afterwards,
the content of the newly-introduced blocks of bytes is optimized using a genetic algorithm.
H. Anderson et al. [3] proposed a general framework for attacking static ML-based malware detectors via Reinforcement Learning (RL). In their work, they trained an RL agent using the Deep Q-Network algorithm to select the actions to perform on a PE file among a set of functionality-preserving operations including, but not limited to, adding a new function to the Import Address Table, manipulating section names, creating new sections, modifying the slack space between sections, packing or unpacking the file.
W. Hu et al. [10] introduced a GAN-based algorithm named MalGAN to generate adversarial API-based feature vectors to attack simple unimodal static API-based malware detection models. MalGAN consists of two feed-forward neural networks, (1) a generator and (2) a substitute detector. The generator network is trained to minimize the generated adversarial malware feature vectors' maliciousness probabilities predicted by the substitute detector. The substitute detector is trained to fit the API-based malware detection system. By training both networks together, the generator will learn what changes have to be performed to the malware's feature vector in order to evade the target API-based malware detection system.
### _Limitations of Existing Adversarial Evasion Attacks_
Despite the aforementioned research, current methods used to generate adversarial malware examples are limited and not practical in the real world. On the one hand, white-box or gradient-based attacks, although they successfully generate adversarial malware examples, are not feasible in a real-world scenario as the algorithm and parameters of the machine learning malware detectors are not available to attackers. In addition, the maliciousness score predicted by the ML-based detectors is not available to attackers either, and thus, score-based attacks are also not realistic. On the other hand, the only information that is provided by malware detectors is the label associated with a given executable, that is, whether or not the executable is malicious. However, current methods require from a small number of queries [3] to an unlimited number of queries [10] to attack the black-box detectors, which might raise suspicion on the submitted samples. For instance, VirusTotal 3, a popular aggregator scanner, shares the submitted files between the examining partners, who use the results to improve their own systems. Furthermore, submitting similar queries, or any query, to a cloud security provider might generate suspicion and raise the alarm, resulting in a close and thorough inspection. Given the aforementioned constraints, we propose a _query-free_ evasion attack to craft adversarial malware examples based on the assumption that the more similar the malicious executables are to benign executables in terms of structure and behavior, the harder it is for the ML-based detector to classify them correctly.
Footnote 3: [https://www.virustotal.com](https://www.virustotal.com)
## 3 Towards Generating Adversarial Malware Examples with GANs
This paper proposes a query-free approach to generating adversarial malware examples without assuming any knowledge of the target malware detection system we want to evade, including the input features, the machine learning algorithm used, the parameters of the model, or the output of the model. Instead, this work is based on the assumption that malicious and benign pieces of software are inherently different, in terms of structure and behavior, which machine learning algorithms such as boosted decision trees or neural networks take advantage of to learn a function mapping the input variables to a target label. For malware detection, the input variables are the features extracted either statically or dynamically from the executables and the output is the probability that an executable is malicious. Thus, by altering the malicious executables in such a way that they resemble benign software, we might evade detection.
Recently, a class of machine learning algorithms, named Generative Adversarial Networks or GANs [8], has been proposed to generate fake data that are similar to real data. GANs are an approach to generative modelling using deep learning methods. A GAN consists of two neural networks, namely the generator and the discriminator, which are in competition with each other in order to discover the patterns or regularities in the given real data in such a way that the generator learns to generate fake data or new examples that plausibly could have been drawn from the original dataset. In our work, we use a Conditional Wasserstein GAN [4] to generate adversarial malware examples similar to benign executables. To this end, the GAN will be used to transform a malicious feature vector in a way that it resembles a benign feature vector without altering the original malware's behavior. See Figure 1 for a complete description of the GAN architecture.
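For context, one training step of such a Wasserstein GAN could be sketched in PyTorch as follows. The exact training schedule (number of critic iterations per generator step, optimizers, and whether weight clipping or a gradient penalty enforces the Lipschitz constraint) is not specified in this section, so the choices below are illustrative assumptions.

```python
# Sketch of one Conditional Wasserstein GAN training step: the critic scores
# real benign feature vectors against generated adversarial malware vectors.
# Weight clipping follows the original WGAN recipe; hyper-parameters are
# illustrative assumptions.
import torch

def wgan_step(generator, critic, opt_g, opt_c, malware_m, benign_x, clip=0.01):
    # Critic update: minimize E[f(G(malware))] - E[f(benign)].
    opt_c.zero_grad()
    fake = generator(malware_m).detach()
    loss_c = critic(fake).mean() - critic(benign_x).mean()
    loss_c.backward()
    opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-clip, clip)          # enforce the Lipschitz constraint
    # Generator update: make generated vectors score like benign ones.
    opt_g.zero_grad()
    loss_g = -critic(generator(malware_m)).mean()
    loss_g.backward()
    opt_g.step()
    return loss_c.item(), loss_g.item()
```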
### _Feature Types_
In this work, we apply GANs to three types of features commonly used by ML-based malware detectors:
* The byte frequency distribution of the executables, referred to as _byte unigram features_.
* The libraries and functions imported by the executables, referred to as _API-based features_.
* The ASCII strings found in the executables' content, referred to as _string-based features_.
Figure 1: The architecture of the GAN.
#### 3.1.1 Byte Unigram Features
The simplest and most common type of features usually extracted from executables to detect malware is the byte frequency distribution of the executables, also known as byte unigram features. Byte unigram features represent the frequency of each byte in the executable, and thus, are described with a 256-dimensional vector. Mathematically, the byte unigram features of a given executable \(x\) can be described as follows:
\[\mathbf{x}=\begin{bmatrix}x_{0}\\ x_{1}\\ \vdots\\ x_{255}\end{bmatrix}\text{ where }\sum_{i=0}^{255}x_{i}=1.0\]
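As a concrete illustration, such a vector can be computed directly from the raw bytes of a file. The sketch below uses NumPy; the helper name is ours, not part of any detector implementation:

```python
import numpy as np

def byte_unigram_features(path: str) -> np.ndarray:
    """Byte frequency distribution of an executable: a 256-dim vector summing to 1."""
    data = np.fromfile(path, dtype=np.uint8)
    counts = np.bincount(data, minlength=256).astype(np.float64)
    return counts / max(counts.sum(), 1.0)
```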
#### 3.1.2 API-based Features
An Application Programming Interface (API) provides services that allow pieces of software to communicate with each other and with the hardware of a computer system. Although the use of the operating system (OS) API is not illegitimate by itself, malware writers make use of these API functions to interact with the OS and perform nefarious tasks. The libraries and functions imported by executables are usually mapped to a binary feature vector, \(x\in\{0,1\}^{M}\). More specifically, if \(M\) API functions are used as features, an \(M\)-dimensional feature vector is constructed to represent a given executable. If the executable imports the \(d\)-th API function, the \(d\)-th feature value is set to 1; otherwise it is set to 0.
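For illustration, a minimal sketch of this vectorization is shown below; the function name and the way the set of imported names is obtained (e.g., by parsing the Import Address Table) are our assumptions:

```python
import numpy as np

def api_feature_vector(imported: set, vocab: list) -> np.ndarray:
    """Binary indicator vector over a fixed vocabulary of M API functions.
    `imported` is the set of function names found in the executable's IAT."""
    return np.array([1.0 if name in imported else 0.0 for name in vocab])
```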
#### 3.1.3 String-based Features
Strings are ASCII and Unicode-printable sequences of characters embedded within a file. Strings can give us information about the program functionality and indicators associated with malicious or suspicious behavior. For instance, strings extracted from a binary executable might contain references to filenames, URLs, domain names, IP addresses, registry keys, attack commands, etcetera. The strings extracted from binary executables are also typically mapped to a binary feature vector, \(x\in\{0,1\}^{N}\), where \(N\) is the number of strings used as features.
### _Generator Network_
The generator receives as input the concatenation \(c\) of a feature vector \(m\) and a noise vector \(z\). The sizes of \(m\) and \(z\), as well as the network architecture, depend on the type of features that we want the generator to generate. The idea behind feeding the original features to the generator is to condition it to craft a specialized adversarial feature vector [13]. \(z\) is a \(Z\)-dimensional vector (\(Z\) is different for each feature type), where each element of \(z\) is a random number sampled from a uniform distribution in the range \([0,1)\). \(z\) allows the generator to produce a wide variety of adversarial examples from a single malicious feature vector by sampling from different places in the input distribution. The input vector \(c\) is fed into the generator, a multi-layer feed-forward neural network, which generates an output vector denoted by \(m^{\prime}\). Depending on the input features, the architecture of the generator might vary; see Table IV for a complete description of the architecture. Below, we describe the main differences between the generator networks devised for each feature type.
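The sketch below illustrates such a conditional generator in PyTorch; the hidden sizes and the noise dimension are placeholders rather than the exact configuration from Table IV:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of the conditional generator; hidden sizes are hypothetical."""
    def __init__(self, feat_dim: int, noise_dim: int, hidden: int = 256,
                 out_act: nn.Module = None):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
            out_act if out_act is not None else nn.Softmax(dim=-1),
        )

    def forward(self, m: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        c = torch.cat([m, z], dim=-1)  # condition on the original features
        return self.net(c)

# Example: byte-unigram generator (softmax output so the features sum to 1).
# g = ConditionalGenerator(feat_dim=256, noise_dim=64)
# m_adv = g(m, torch.rand(m.size(0), 64))  # z ~ U[0, 1)
```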
#### 3.2.1 Byte Unigram Generator Network
To generate a target byte frequency distribution that resembles those found in benign executables, the generator receives as input a 256-dimensional vector \(m\), where each element of \(m\) corresponds to the frequency of a particular byte in the executable, concatenated with the noise vector \(z\). For example, \(m_{0}\) corresponds to the frequency of the byte 0x00, \(m_{1}\) corresponds to the frequency of the byte 0x01, and so on. The output of the generator, whose architecture is specified in Table IV, is also a 256-dimensional vector (the same size as the input feature vector). That is, the output layer of the generator has 256 neurons, and the activation function used in the last layer is the softmax function, which forces the generated features to sum to 1.
#### 3.2.2 API-based and String-based Generator Networks
The main difference between the API-based and string-based generator networks and the byte unigram generator is the output layer. The input binary feature vector \(m\) has size \(M\), which is the number of API functions or strings, respectively, used as input. The output layer of the generator network, denoted by \(o\), uses the sigmoid function instead of the softmax function. Furthermore, a binarization transformation is applied to \(o\) according to whether or not each element is greater than \(0.5\), which produces a binary vector. In this case, we cannot freely modify all binary features, as removing a feature from the original executable might break it. For this reason, we only allow new features to be added. The resulting adversarial feature vector can be expressed as \(m^{\prime}=m|o\), where \(|\) is the element-wise binary OR operation.
To back-propagate the gradients, we use the smooth function \(G\), defined by W. Hu et al. [10], shown in Equation 1:

\[G_{\theta}(m,z)=\max(m,o) \tag{1}\]

The idea behind \(G\) is to use the network's real-valued output for the elements of \(m\) that have value 0; the elements that are already 1 in \(m\) remain 1. For more information about the smooth function \(G\) we refer readers to the work of W. Hu et al. [10].
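A minimal PyTorch realisation of this idea is sketched below; the straight-through formulation is one possible way to combine the hard OR in the forward pass with the smooth surrogate \(G\) in the backward pass, and may differ in detail from the original implementation of [10]:

```python
import torch

def smooth_or(m: torch.Tensor, o: torch.Tensor) -> torch.Tensor:
    """One way to realise m' = m | binarize(o) with usable gradients: the
    forward pass uses the hard OR, while gradients flow through the smooth
    surrogate max(m, o) (straight-through style)."""
    hard = torch.maximum(m, (o > 0.5).float())  # m' = m | (o > 0.5)
    smooth = torch.maximum(m, o)                # G(m, z) = max(m, o)
    return smooth + (hard - smooth).detach()
```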
### _Critic Network_
The critic network receives as input a feature vector \(x\), whose size depends on the feature type, and outputs a "benignness" score for the given sample. Vanilla GANs use a sigmoid activation function in the output layer of the discriminator to predict the likelihood of a given sample being real. Instead, the critic network replaces the sigmoid function with a linear function to predict the "realness" of a given sample; in our case, this is the "benignness". The critic network is a multi-layer feed-forward neural network. Cf. Table V for the details of the critic architecture.
### _Training the GAN_
Training the Conditional Wasserstein GAN with Gradient Penalty [9] to generate "benign" feature vectors requires collecting both benign and malicious executables; the more representative the malware and benign software collections are, the better.
The loss function of the critic \(L_{D}\) is defined as:
\[L_{D}=\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[f(\tilde{x})]-\mathbb{E}_{x\sim\mathbb{P}_{r}}[f(x)]+\lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}[(\|\nabla_{\hat{x}}f(\hat{x})\|_{2}-1)^{2}]\]
where the first two terms are the original critic loss and the last term is the gradient penalty. \(\mathbb{P}_{\hat{x}}\) is the distribution obtained by uniformly sampling along straight lines between pairs of points from the benign and generated distributions, \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\), respectively. \(\lambda\) is the penalty coefficient used to weight the gradient penalty term. In our experiments, we set \(\lambda=10\).
To train the critic network, \(L_{D}\) should be minimized with respect to the weights of the critic network. Instead of predicting the probability of a generated sample being "benign", the critic in a Wasserstein GAN scores the "benignness" or "maliciousness" of a given feature vector. Unlike the vanilla GAN discriminator model that, once trained, may fail to provide useful gradient information for updating the generator model, the critic's loss does not saturate and hence always yields useful gradient information.
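For reference, a minimal PyTorch sketch of this critic loss, including the gradient penalty, could look as follows (tensor shapes are assumed to be (batch, feature_dim)):

```python
import torch

def critic_loss(critic, real, fake, lam: float = 10.0) -> torch.Tensor:
    """WGAN-GP critic loss L_D as defined above (a minimal sketch)."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    gp = ((grad.norm(2, dim=1) - 1) ** 2).mean()  # gradient penalty term
    return critic(fake).mean() - critic(real).mean() + lam * gp
```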
The loss of the generator is defined as:
\[L_{G}=-\mathbb{E}_{\tilde{x}\sim\mathbb{P}_{g}}[f(\tilde{x})]\]

where \(\mathbb{P}_{g}\) is the generated distribution. Minimizing \(L_{G}\) with respect to the generator weights pushes the critic to assign higher "benignness" scores to the generated samples.
The whole process of training the GAN is shown in Algorithm 1. At each step of the training process, \(\Theta_{D}\) is updated by descending the gradient of \(L_{D}\), and every \(n\_generator\) steps \(\Theta_{G}\) is updated by descending the gradient of \(L_{G}\).
```
Require: B: set of goodware samples, M: set of malware samples,
         Θ_D: critic weights, Θ_G: generator weights
n_generator ← 5
MAX_STEPS ← |B| × NUM_EPOCHS
for step ← 1 to MAX_STEPS do
    if converged enough then break endif
    b ← sample_minibatch(B)
    m ← sample_minibatch(M)
    z ← noise_vector()
    m' ← generator(m, z)
    Θ_D ← Θ_D − η ∇_{Θ_D} L_D        ▷ descend the critic loss
    if step mod n_generator = 0 then
        m' ← generator(m, z)
        Θ_G ← Θ_G − η ∇_{Θ_G} L_G    ▷ descend the generator loss
    endif
endfor
```
**Algorithm 1** Conditional Wasserstein GAN Training Process
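A corresponding PyTorch training step might look as follows. It reuses the `critic_loss` and `ConditionalGenerator` sketches above; the optimizer choice, learning rate, noise dimension, and data loaders are assumptions rather than the paper's exact settings:

```python
import torch

# Assumed to exist: `generator`, `critic`, `critic_loss` (sketched above), and
# loaders yielding benign (b) and malware (m) feature batches.
noise_dim = 64                                        # hypothetical
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.9))
n_generator = 5

for step, (b, m) in enumerate(zip(benign_loader, malware_loader), start=1):
    z = torch.rand(m.size(0), noise_dim)              # z ~ U[0, 1)
    fake = generator(m, z)
    opt_d.zero_grad()
    critic_loss(critic, b, fake.detach()).backward()  # descend L_D
    opt_d.step()
    if step % n_generator == 0:                       # every n_generator steps
        fake = generator(m, torch.rand(m.size(0), noise_dim))
        opt_g.zero_grad()
        (-critic(fake).mean()).backward()             # descend L_G = -E[f(x~)]
        opt_g.step()
```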
## 4 From Feature-based to End-to-End
So far, we have described the process followed to generate adversarial feature vectors with GANs. However, modifying the malware's feature vector representation is not an end-to-end attack. To convert the aforementioned feature-based attack into an end-to-end attack we need to modify the executables so that they have the generated adversarial features. Accordingly, the modifications that need to be performed to the executables depend on the type of features that we want to modify: (1) to modify the executables so as they have the target byte frequency distribution we will append the corresponding bytes at the end of the executables; (2) to add new libraries and import functions we will modify the Import Address Table of the executables; (3) to insert new strings we will create a new section and add the corresponding strings to it. All the aforementioned modifications have been performed using LIEF4, a Python library specifically designed to parse and modify executables [2, 3, 6].
Footnote 4: [https://lief-project.github.io/](https://lief-project.github.io/)
### Determining the Bytes to be Appended at the End of Executables
The problem of determining the number of bytes of each value to be appended at the end of the executable, so that it exhibits a target byte frequency distribution given its original one, can be codified as an integer linear programming problem:
\[\begin{split}&\mathrm{minimize}\quad\sum_{i=0}^{255}p_{i}\\ &\mathrm{subject\ to}\\ & b_{i}+p_{i}=r_{i}\times\left(\sum_{j=0}^{255}b_{j}+p_{j} \right)\quad i=0,\ldots,255\end{split}\]
where \(p_{i}\) is an integer variable indicating the number of bytes with value \(i\) that must be padded at the end of the executable, \(r_{i}\) is a real variable holding the target byte distribution (ratio) we want to achieve for byte \(i\), and \(b_{i}\) is the number of bytes with value \(i\) originally found in the executable. However, due to the equality constraint, this model produces huge padding values when a solution is found at all. The computation can be accelerated by relaxing the integer variables to real variables, which gives a good approximate solution (in the current experimentation, the difference between the solutions is \(\approx 1\) byte).
However, appending bytes at the end of the executables to exactly map a target byte distribution from their original byte distribution generates large, unrealistic executables. To this end, we propose to map the original byte distribution to an approximated version of the target byte distribution, allowing for some error among the resulting byte unigram values. This can be done by allowing the solution to be near the required distribution in order to obtain lower (and practical) values. To map the original byte distribution to an approximated version of the target distribution, the constraint is changed by adding an upper bound and a lower bound both with a gap, the allowed error interval, as follows:
\[\text{minimize}\quad\sum_{i=0}^{255}p_{i}\] subject to \[r_{i}\times\left(\sum_{j=0}^{255}b_{j}+p_{j}\right)-g\leq b_{i}+p_ {i}\quad i=0,\ldots,255,\] \[b_{i}+p_{i}\leq r_{i}\times\left(\sum_{j=0}^{255}b_{j}+p_{j} \right)+g\quad i=0,\ldots,255\]
where \(p_{i},r_{i}\) and \(b_{i}\) are as defined before, and \(g\) is the gap, which is set to \(0.001\) in the current experimentation.
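Beyond the Zimpl/SoPlex implementation described next, the relaxed model can also be sketched with SciPy's `linprog`. Here we interpret the gap on the ratio scale, i.e., \((r_{i}-g)(\sum_{j}b_{j}+p_{j})\leq b_{i}+p_{i}\leq(r_{i}+g)(\sum_{j}b_{j}+p_{j})\), which is one reading of the formulation above:

```python
import numpy as np
from scipy.optimize import linprog

def padding_counts(b: np.ndarray, r: np.ndarray, g: float = 0.001) -> np.ndarray:
    """LP relaxation: find p >= 0 minimising sum(p) so that the padded byte
    ratios land within gap g of the target distribution r."""
    T, ones, I = b.sum(), np.ones((256, 256)), np.eye(256)
    lo, hi = r - g, r + g
    A_ub = np.vstack([lo[:, None] * ones - I,   # lower-bound constraints
                      I - hi[:, None] * ones])  # upper-bound constraints
    b_ub = np.concatenate([b - lo * T, hi * T - b])
    res = linprog(np.ones(256), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 256)
    return np.rint(res.x).astype(int)           # round the relaxed solution
```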
The models have been implemented in Zimpl,5 and solved with SoPlex,6 an optimization package for solving linear programming problems. The authors are aware that methods exist to detect whether or not the overlay of executables has been modified. However, as the goal of our work is to evade ML-based malware detectors, we did not consider stealthier mechanisms to insert the new content, i.e., the byte values that must be inserted into the executable in order to obtain a specific target byte frequency distribution.
Footnote 5: [https://zimpl.zib.de/](https://zimpl.zib.de/)
## 5 Experiments
### _Experimental Setup_
The dataset used in this paper is the BODMAS dataset [18]. It consists of 57,293 malware and 77,142 benign Windows PE files. The dataset has been divided into training, validation, and testing sets, consisting of 80%, 10% and 10% of the data, respectively. The same training, validation and testing splits have been used to train our query-free GAN to generate adversarial examples by modifying the byte distribution, the Import Address Table and the Strings of malicious executables.
The experiments were run on a machine with an Intel Core i7-7700k CPU, one GeForce GTX 1080 Ti GPU and 64 GB of RAM. The code has been implemented with PyTorch [14] and is publicly available in our GitHub repository 7
Footnote 6: [https://soplex.zib.de/](https://soplex.zib.de/)
### _Attack Evaluation_
This section presents the evaluation of the proposed query-free attack against various unimodal detectors, state-of-the-art detectors from the literature, and the VirusTotal service.
#### 5.2.1 Attack Evaluation against Unimodal Detectors.
Our query-free attack has been evaluated against various unimodal malware detectors, i.e. they take as input a single type of features, to show the effect of our attacks in various scenarios. For each type of features we have trained one detector, using the samples from the EMBER dataset [2]. Using only a single type of features will allow us to measure the evasion capability of our GAN-based approach. Below are listed the unimodal malware detectors evaluated against our GAN-based generated adversarial examples:
* Byte Unigrams Detector. This refers to the malware detector trained using as features the byte unigrams or byte frequency distribution of the samples from the EMBER dataset.
* Top-K API Detector. This refers to the malware detector trained using as features the API features of the samples from the EMBER dataset.
* Hashed API Detector. This refers to the malware detector trained using as features the hashed version of the API features of the samples from the EMBER dataset.
* Hashed Strings Detector. This refers to the malware detector trained using as features the hashed version of the Strings features from the EMBER dataset.
The difference between the Top-K API and the Hashed API models is that the Top-K API models take as input a vector of 1s and 0s, indicating whether or not a particular API function has been imported. As there are millions of API functions that an executable can import, we limited the set of API functions to a subset of the \(K\) functions most commonly found in benign executables, where \(K\in\{150,300,500,1000,2000\}\). On the other hand, the Hashed API models use the hashing trick to vectorize the information about imported libraries and functions that can be found in the Import Address Table (IAT) of PE files into a fixed low-dimensional vector of size \(1280\). Similarly, the Hashed Strings models take as input the vectorized string-based features defined in EMBER 8. These features include statistics about the strings, their average entropy, the number of paths, URLs, and registry keys found, etcetera. For the raw string-based features, we limited the set of strings to a subset of the \(K\) strings most commonly found in benign executables, where \(K\in\{2000,5000,10000\}\).
Footnote 8: [https://github.com/elastic/ember/blob/master/ember/features.py](https://github.com/elastic/ember/blob/master/ember/features.py)
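As a rough illustration of the hashing trick (not EMBER's exact vectorization, which differs in its details), imported library:function pairs can be hashed into a fixed-size vector:

```python
from sklearn.feature_extraction import FeatureHasher

# Hypothetical sketch: hash IAT entries into a 1280-dim feature vector.
hasher = FeatureHasher(n_features=1280, input_type="string")
imports = [["kernel32.dll:CreateFileA", "ws2_32.dll:connect"]]  # one sample
x_hashed = hasher.transform(imports).toarray()[0]               # shape (1280,)
```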
**Exploring the Effects of the Gap Size on the Generated Adversarial Examples.** Results in Figure 2 and Table 1 are obtained from generating adversarial malware examples on a subset of 200 samples randomly selected from the test set of the BODMAS dataset. Appending bytes at the end of the PE executables to exactly have the target byte frequency distribution generated by the generator gives rise to large, unrealistic executables. For this reason, we have proposed to map the original byte frequency distribution to an approximated version of the target byte frequency distribution by allowing a small error. Depending on the error, the size of the adversarial examples will vary, from a few megabytes to tens of megabytes as shown in Figure 2.
In addition, it can be observed in Table 1 that the larger the error between the target byte frequency distribution and its approximated version, the higher the accuracy of the byte-based unimodal detector on the resulting adversarial examples. This is because a larger error increases
the differences between the generated byte frequency distribution and its approximated version. Table 1 shows that the accuracy of the models trained on the byte unigram features drops to 0% for the exact solutions. However, the resulting malicious executables are non-viable as they have an average size of \(\approx\)56MB, a 20918.5% increase with respect to the original executables. A good trade-off between evasion rate and the size of the adversarial examples is observed using 0.001 as the gap value or error. A higher gap generates less evasive adversarial examples while a lower gap generates adversarial examples that are too big compared to the original size of the executables. Thus, for the remaining experiments, the approximated target byte frequency distributions will be generated using a gap value equal to 0.001.
**Comparison with MalGAN and the Benign Code Injection Attack.** This section presents a comparison of our query-free attacks against the benign code injection attack and MalGAN [10].
On the one hand, the Benign Code Injection Attack is a well-known attack against ML-based malware detectors that consists of injecting benign content within the malicious content to try to disguise the malicious code and make it look more like benign code. This attack serves as a baseline to evaluate the feasibility of our query-free attack based on GANs, as it is the only attack presented in the literature that can be implemented without querying the target malware detectors. To this end, we generate adversarial malware examples by injecting the code of a randomly selected benign example into the malware's overlay. Different variations of the benign code injection attack exist, e.g., creating one or more new sections with the benign content, but for simplicity we decided to simply append the benign content at the end of the file.
On the other hand, MalGAN is a state-of-the-art GAN-based approach to generate adversarial malware examples. MalGAN consists of two feed-forward neural networks: (1) a generator and (2) a substitute detector. The generator network is trained to minimize the generated adversarial malware examples' maliciousness probabilities predicted by the substitute detector, whereas the substitute detector is trained to fit the black-box malware detection system. MalGAN was originally trained on a 160-dimensional binary feature vector for each program, based on 160 system-level APIs, and evaluated on various black-box detectors, e.g., random forest, logistic regression, and decision trees, trained on the same feature set. However, MalGAN relies on having unrestricted access to the black-box detection system to be able to train a good substitute detector. In contrast, our attack does not require querying the black-box detection system at all. Notice that MalGAN was proposed for evading API-based detectors; in this work, we adapted and extended MalGAN to also attack byte-based and string-based malware detectors.
Figure 3 presents the detection rate of the byte-based ML model on the adversarial examples generated by the benign code injection attack, MalGAN and our approach. It can be observed that all approaches reduced the detection rate of the target classifier from approximately 90.70% to 52.46%, 5.72% and 0.37%, respectively. It is important to note that among all three methods, the adversarial samples produced by the benign code injection attack are the least evasive. This is due to the fact that even though adding benign content changes the malicious executable's byte frequency distribution, doing so requires adding a significant amount of content compared to the executable's original size in order to flip the classifier's prediction from a "malicious" to a "benign" byte frequency distribution. Notice that both MalGAN and our query-free approach successfully reduce the detection rate of the ML-based detector to almost zero. On the one hand, by non-restrictively interacting with the target malware classifier, MalGAN learns to which target byte frequency distribution the executables must be mapped in order to flip the classifier's prediction. On the other hand, our query-free approach discovers which byte frequency distribution corresponds to "genuine" or "benign" executables. Results suggest that if the features can be freely modified without restrictions, altering the features in a way that they look "benign" is a plausible way to evade detection. However, having access to the same feature set as the ML detection model is a best case scenario.
Figure 2: Size comparison of the resulting adversarial malware examples for different gap values.

Figure 3: Detection rate of the byte-based detectors on the original and the adversarial byte histogram features.

A more common scenario is to only have access to the raw features used by the ML model instead of the vectorized features or the final representation of those features. For instance, let us consider the list of API libraries and functions imported by Portable Executable files. This information can be obtained from the Import Address Table. In this case, we have access to the raw features, or the list of libraries and functions imported, but we may not be aware of the exact process that has been used to produce the final feature vector representation. As there are millions of API libraries and functions, it is common to employ feature selection and dimensionality reduction techniques to map the high-dimensional vectors to a low-dimensional representation that is later used to train the ML models. In the present case, we are still able to modify the raw features by importing new libraries and functions, but the effect of such modifications may be constrained in some way by the feature selection and dimensionality reduction techniques employed to map the high-dimensional API-based features to a more tractable low-dimensional representation, even though some transferability will apply. Furthermore, not all features can be adjusted without restraint. For instance, existing API features, i.e. the presence of a particular API function in the program, cannot be removed, as that could break the malware. Thus, only new features can be added.
Figure 4 presents the detection rate of various ML models trained on the raw API-based features and their hashed representation, respectively. On the one hand, the raw API-based feature vector is a vector \(x\) of size \(M\), where each element in \(x\) indicates whether or not a particular API function has been imported. On the other hand, the hashed API feature vector \(x^{{}^{\prime}}\) maps the information about the imported libraries and functions from the Import Address Table to a low-dimensional feature vector using the hashing trick. For detailed information on the mapping between the raw and hashed features we refer the reader to the work of H. Anderson et al. [2].
In Figure 4, it can be observed that the percentage of detected adversarial examples generated by our approach is greater than that of the MalGAN approach. Our intuition is that because "malicious" features cannot be removed and only new features can be added, it is more difficult to disguise the feature vectors as "benign". Another complementary explanation is that there is a great deal of overlap between the API libraries and functions used by benign and malicious software, and thus importing "benign" functions is not the best choice, as those functions are also found in malicious software. It should be noted that MalGAN achieves 100% evasion by generating adversarial examples that could be considered outliers, as they import an unrealistic number of libraries and functions from the Windows Application Programming Interface. In contrast to the 47.76 functions imported on average by the original malicious executables, MalGAN's adversarial feature vectors import an average of 1751.91 and 1734.97 functions. This represents a 3568.15% and 3532.68% increase, respectively. In comparison, our approach generates more "real" feature vectors while importing only 82.66 functions on average per sample, less than twice the number of functions imported in the original samples. Furthermore, it can be observed that the detection rate of the ML models trained on the hashed API features is higher than the one reported by the model trained on the raw API features, suggesting that even though the modifications performed on the raw features are transferred to the hashed features, the hashed representation reduces the effectiveness of the alterations.

| Detector | Orig. | Exact | 0.01 | 0.008 | 0.005 | 0.003 | 0.001 | 0.0008 | 0.0005 | 0.0003 | 0.0001 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Byte Unigram | 0.895 | 0.0 | 0.48 | 0.385 | 0.3 | 0.1 | 0.005 | 0.015 | 0.0 | 0.0 | 0.0 |
| MB | 0.27 | 56.75 | 0.89 | 0.96 | 1.15 | 1.47 | 2.99 | 3.53 | 4.99 | 7.19 | 14.45 |

TABLE I: Detection rate of the Byte-based malware detector against the adversarial malware examples generated by the GAN using various gap values (columns Exact–0.0001). The size of the executables in Megabytes is the average over the set.

Figure 4: Detection rate of the Top-K and Hashed API-based detectors on the original and the adversarial API-based features.
Lastly, we present the detection rate of the hashed string-based ML models in Figure 5. In this case, neither approach is able to reduce the detection rate of the String-based model below 60%. Notice that MalGAN is not able to successfully generate adversarial examples, similarly to when the target detector was trained on the hashed API-based features. The reason is that the string-based vectorized feature vector contains statistics about strings, their entropy, the number of paths, URLs, etcetera, and thus it is difficult for MalGAN to learn how injecting strings affects the prediction of the target classifier. In contrast, our query-free approach injects strings so that the raw feature vector representation looks "benign" rather than relying on the feedback of the target classifier. However, injecting strings is not enough to evade the String-based model, as it uses not only features related to the ASCII strings found in the executable's content but also the number of paths, URLs, registry keys, the number of printables, etcetera, which are not altered by injecting strings.
Notice that the evasion rates of the proposed query-free approach are greater when we allow the generator to select among the 2000 most used APIs and Strings found in benign samples. For this reason, in the next Section we use the corresponding generators to generate the adversarial examples.
#### 5.2.2 Attack Evaluation on State-of-the-Art Malware Detectors.
Next, the quality of the adversarial examples generated by our query-free, model-agnostic GAN will be assessed against the following state-of-the-art detectors: (1) the EMBER LightGBM model and (2) the MalConv model.
The EMBER model refers to a gradient-boosted trees model (feature-based detector) that receives as input a feature vector consisting of the following types of features:
* Byte unigram features.
* 2D byte/entropy histogram features [16].
* Information about the section names, their sizes and entropy.
* Information about the imported libraries and functions from the Import Address Table (IAT).
* Information about the exported functions.
* General information about the file such as its size, the virtual size, the number of imported and exported functions, whether it has a signature, etcetera.
* Information extracted from the header such as the targeted machine, its architecture, OS, the major and minor linker versions, etcetera.
* Information about the strings extracted from the raw byte stream.
* Information about the size and virtual address of the first 15 data directories.
The resulting feature vector has size 2351, where 256, 1280, and 104 of the features correspond to the byte unigram features, the API features, and the string features, respectively. In contrast, the MalConv model refers to a shallow convolutional neural network (deep learning-based detector) which receives as input the raw byte stream, up to 1,048,576 bytes (\(\sim\)1 MB).
In addition to the aforementioned state-of-the-art detectors, we trained two multimodal detectors containing (1) the byte and API-based features and (2) the byte, API, and String-based features. These models are referred to as EMBER v1 and EMBER v2 from now on. Notice that all models have been trained using the data from the EMBER dataset [2].

Figure 5: Detection rate of the Top-K Hashed String-based detectors on the original and the adversarial String-based features.
The proposed attack is model-agnostic, i.e. does not require knowing anything about the target malware detectors. However, the attack requires that the target malware detectors are influenced directly or indirectly by the features modified. For instance, by appending bytes at the end of the executable in order to move from the original to the target byte frequency distribution we will indirectly modify other features from the EMBER set such as the 2D byte/entropy histogram features, the size of the file, and the size of the overlay. Similarly, if we modify the Import Address Table to inject new API libraries and functions, the size of the file, the number of imported functions and other features from the EMBER set will be indirectly modified. The same occurs when adding a new section containing the benign strings we want to inject. In this case, the Section Table will be modified with a new entry, new content will be added to the newly-created section, and thus, some features will be indirectly modified in addition to the strings features of the EMBER set. Note that the aforementioned modifications to the executables manipulate their byte contents and thus, the feature-based attacks might also transfer to deep learning models that take as input the raw byte sequence of executables, i.e. MalConv.
The detection accuracy of the ML-based detectors on the test set's original and generated adversarial examples is shown in Table 2. It can be observed that the adversarial examples generated by the byte-based GAN decrease the detection accuracy of the deep learning model, i.e. MalConv, from 91.34% to 53.74%. We conclude that this interesting drop in performance is because MalConv learned that large chunks of a given byte value, such as the perturbations we perform, are indicative of benign samples. In addition, the results show that when combining the three types of modifications, the detection accuracy on the generated adversarial examples drops even more, showing that the more modifications that are stacked together, the lower the detection accuracy, as more features will be modified. This applies to any feature-based detector, i.e. EMBER, EMBER v1, and EMBER v2. Furthermore, the findings in Table 2 indicate that our query-free approach generates more evasive adversarial examples than the benign code injection attack, decreasing the detection accuracy of the EMBER v1, EMBER v2 and EMBER models from 97.02%, 96.03% and 98.64% to 28.48%, 8.15%, and 82.84%, respectively. In contrast, the benign code injection attack fails to reduce the performance of the EMBER models below 50%. We would like to point out that we cannot provide a comparison of MalGAN and our approach against the SOTA ML detectors in Table 2. The reason is that MalGAN generates very anomalous API-based feature vectors, and when we tried to map those feature vectors back to the executables, the server ran out of resources.
#### 5.2.3 Attack Evaluation on VirusTotal Service
The adversarial malware executables generated by our attack were uploaded to VirusTotal to check whether or not the number of detections decreased in comparison to the detections on the original executables. We would like to point out that the adversarial examples have been specifically generated to evade feature-based ML detectors by modifying the feature vectors so that they look "benign", and thus it is unrealistic to expect the adversarial examples to evade real-world malware detectors. Nevertheless, results show that the generated adversarial examples are able to evade various anti-malware engines.
Table 3 presents the average number of VirusTotal detections on a subset of samples randomly selected from the test set. It can be observed that the average number of detections decreases from \(59.05/74\) to \(52.95/74\), \(47.4/74\) and \(50.05/74\) for the adversarial examples generated by appending bytes at the overlay, importing new API functions into the Import Address Table, and injecting strings into a new section, respectively. Furthermore, by stacking various types of modifications, the average number of detections on the adversarial examples decreases to \(46.57/74\). Results suggest that the more modifications applied to the original samples, the higher the evasion rate.
## 6 Conclusions
Recent research on evasion attacks against ML-based malware detectors is limited and impractical in a real-world scenario where no knowledge about the detection system is available and the attackers do not have unlimited queries to the detection system. This paper presents the first model-agnostic attack that generates adversarial malware examples without querying the detection system and without assuming partial or complete knowledge of the system. The proposed attack modifies the malicious executables in a way that makes them look benign, and thus harder to detect by malware detection systems. This represents a novel and unexplored direction in automatic evasion research.
### Discussions
ML-based malware detectors have been proven to be susceptible to evasion attacks. However, existing white-box and black-box attacks require access to some sort of information about the ML detector in order to succeed, i.e. the algorithm used to train the ML detector, its parameters, its output, etcetera. This limits the applicability of the aforementioned attacks in the real-world as information about the ML models or the scores associated with a submitted executable might not be available. To circumvent the limitations of existing approaches, we designed a GAN-based framework that generates adversarial examples by modifying the malware's features so they resemble those found in benign executables.
This work provides an alternative approach to generate adversarial malware examples when restrictions are imposed on access to the model's algorithm, parameters and number of queries. In the hypothetical scenario where malware authors have access to the model's training algorithm, parameters, scores and have unlimited queries, any existing evasion attack might perform better than our approach. In general, independently of the domain application, the evasion rate of the adversarial examples generated with white-box attacks is greater than that of those generated by black-box attacks as they can use the
model's information and gradients to tweak the adversarial examples at their convenience. Furthermore, adversarial examples generated with score-based attacks are usually more evasive than label-based attacks, as the scores can be used to numerically estimate the gradient. Lastly, label-based attacks will generate more deceptive examples than those that do not use any kind of output from the model to generate the adversarial examples. This is true for all domain applications and not only for the task of malware detection. Thus, it is unrealistic to expect a query-free attack to generate more elusive adversarial examples than those evasion attacks that use any kind of information or output from the ML detector.
Nevertheless, experiments have shown that the adversarial examples generated by our approach achieve similar evasion rates to MalGAN, but without needing to query the target ML detector in the process. In addition, the adversarial examples generated by our approach look more _real_ than the ones generated by MalGAN, as the latter could well be labelled as outliers. Moreover, results show that the query-free approach decreases the detection accuracy of unimodal detectors as well as the accuracy of multimodal and deep learning detectors, and suggest that stacking one or more modifications together leads to better evasion rates.
Furthermore, we have made every effort to evaluate the generated adversarial examples against the widest possible range of state-of-the-art malware detectors, including multimodal and deep learning detectors. We believe our approach to generating adversarial examples opens a new methodology, one that is more challenging than the approaches experimentally compared in this work. To help further research, we have open-sourced our code under the MIT License and used a public benchmark to evaluate our approach, allowing researchers to reproduce our work and build upon it.
### Future Work
The proposed query-free attack has been applied to modify three types of features commonly employed to detect malware: (1) byte unigram features, (2) API-based features, and (3) string-based features. Apart from the aforementioned features, static ML-based malware detectors employ a wide range of feature types, e.g., information extracted from the PE header, the list of exported functions, the properties of each section, and byte and opcode n-gram features. Thus, a natural next step would be to extend our approach to deal with these features. In addition, our approach could be used in conjunction with the compression and encryption techniques typically employed to obfuscate malware examples, and with newer adversarial evasion techniques developed to bypass ML-based detection.
## Acknowledgements
This project has received funding from the Spanish Science and Innovation Ministry funded project PID2019-111544GB-C22, Enterprise Ireland and the European Union's Horizon 2020 Research and Innovation Programme under Marie Sklodowska-Curie grant agreement No 847402. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of CeADAR, University College Dublin, IBM Ireland Limited, and the University of Lleida. We would like to thank Cormac Doherty and UCD's Centre for Cybersecurity and Cybercrime Investigation for their support.
## Data and Code Availability
The BODMAS dataset is available to the public 9 and the source code 10 of our approach will be made available under an MIT License after the paper is accepted.
Footnote 9: [https://whyisyoung.github.io/BODMAS/](https://whyisyoung.github.io/BODMAS/)
Footnote 10: [https://github.com/code_repository](https://github.com/code_repository)
|
2305.10924 | Structural Pruning for Diffusion Models | Generative modeling has recently undergone remarkable advancements, primarily
propelled by the transformative implications of Diffusion Probabilistic Models
(DPMs). The impressive capability of these models, however, often entails
significant computational overhead during both training and inference. To
tackle this challenge, we present Diff-Pruning, an efficient compression method
tailored for learning lightweight diffusion models from pre-existing ones,
without the need for extensive re-training. The essence of Diff-Pruning is
encapsulated in a Taylor expansion over pruned timesteps, a process that
disregards non-contributory diffusion steps and ensembles informative gradients
to identify important weights. Our empirical assessment, undertaken across
several datasets highlights two primary benefits of our proposed method: 1)
Efficiency: it enables approximately a 50\% reduction in FLOPs at a mere 10\%
to 20\% of the original training expenditure; 2) Consistency: the pruned
diffusion models inherently preserve generative behavior congruent with their
pre-trained models. Code is available at
\url{https://github.com/VainF/Diff-Pruning}. | Gongfan Fang, Xinyin Ma, Xinchao Wang | 2023-05-18T12:38:21Z | http://arxiv.org/abs/2305.10924v3 | # Structural Pruning for Diffusion Models
###### Abstract
Generative modeling has recently undergone remarkable advancements, primarily propelled by the transformative implications of Diffusion Probabilistic Models (DPMs). The impressive capability of these models, however, often entails significant computational overhead during both training and inference. To tackle this challenge, we present _Diff-Pruning_, an efficient compression method tailored for learning lightweight diffusion models from pre-existing ones, without the need for extensive re-training. The essence of Diff-Pruning is encapsulated in a Taylor expansion over _pruned timesteps_, a process that disregards non-contributory diffusion steps and ensembles informative gradients to identify important weights. Our empirical assessment, undertaken across four diverse datasets highlights two primary benefits of our proposed method: 1) _Efficiency:_ it enables approximately a 50% reduction in FLOPs at a mere 10% to 20% of the original training expenditure; 2) _Consistency_: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained progenitors. Code is available at [https://github.com/VainF/Diff-Pruning](https://github.com/VainF/Diff-Pruning).
## 1 Introduction
Generative modeling has undergone significant advancements in the past few years, largely propelled by the advent of Diffusion Probabilistic Models (DPMs) [16; 33; 29]. These models have enabled numerous applications, ranging from text-to-image generation [32], image editing [47], and image translation [36] to discriminative tasks [2; 1]. The incredible power of DPMs, however, often comes at the expense of considerable computational overhead during both training [40] and inference [34]. This dichotomy between performance and efficiency presents a critical challenge in the broader application of these models, particularly in resource-constrained environments.
In the literature, huge efforts have been made to improve diffusion models, which primarily revolved around three broad themes: improving model architectures [33; 31; 42], optimizing training methods [40; 10] and accelerating sampling [37; 34]. As a result, a multitude of well-trained diffusion models have been created in these valuable works, showcasing their potential for various applications [39]. However, the notable challenge still remains: the absence of a general compression method that enables the efficient reuse and customization of these pre-existing models without heavy re-training. Overcoming this gap is of paramount importance to fully harness the power of pre-trained diffusion models and facilitate their widespread adoption across different domains and tasks.
In this work, we demonstrate the remarkable effectiveness of structural pruning [20; 7; 22; 3] as a method for compressing diffusion models, which offers a flexible trade-off between efficiency and quality. Structural pruning is a classic technique that effectively reduces model sizes by eliminating redundant parameters and sub-structures from networks. While it has been extensively studied in discriminative tasks such as classification [14], detection [43], and segmentation [11], applying structural pruning techniques to Diffusion Probabilistic Models poses unique challenges that necessitate
a rethinking of traditional pruning strategies. For example, the iterative nature of the generative process in DPMs, the models' sensitivity to small perturbations in different timesteps, and the intricate interplay in the diffusion process collectively create a landscape where conventional pruning strategies often fall short.
To this end, we introduce a novel approach called _Diff-Pruning_, explicitly tailored for the compression of diffusion models. Our method is motivated by the observation in previous works [33; 42] that different stages in the diffusion process contribute variably to the generated samples. At the heart of our method lies a Taylor expansion over _pruned timesteps_, which deftly balances the image content, details, and the negative impact of noisy diffusion steps during pruning. Initially, we show that the objective of diffusion models at late timesteps (\(t\to T\)) prioritizes the high-level content of the generated images during pruning, while the early ones (\(t\to 0\)) refine the images with finer details. However, it is also observed that, when using Taylor expansion for pruning, the noisy stages with large \(t\) cannot provide informative gradients for importance estimation and can even harm the performance of the compressed model. Therefore, we propose to model the trade-off between content, details, and noise as a pruning problem over the diffusion timesteps, which leads to an efficient and flexible pruning algorithm for diffusion models.
Through extensive empirical evaluations across diverse datasets, we demonstrate that our method achieves substantial compression rates while preserving and in some cases even improving the generative quality of the models. Our experiments also highlight two significant features of Diff-Pruning: efficiency and consistency. For example, when applying our method to an off-the-shelf diffusion model pre-trained on LSUN Church, we achieve an impressive compression rate of 50% FLOPs. Remarkably, this compression is attained with only 10% of the training cost required by the original models, equating to 0.5 million steps compared to the 4.4 million steps of the pre-existing models. Furthermore, we have thoroughly assessed the generative behavior of the compressed models both qualitatively and quantitatively. Our evaluations demonstrate that the compressed model can effectively preserve a similar generation behavior as the pre-trained model, meaning that when provided with the same inputs, both models yield consistent outputs. Such consistency further reveals the practicality and reliability of Diff-Pruning as a compression method for diffusion models.
In summary, this paper introduces Diff-Pruning as an efficient method for compressing Diffusion Probabilistic Models, which is able to achieve compression with only 10% to 20% of the training costs compared to pre-training. This work may serve as an initial baseline and provides a foundation for future research aiming to enhance the quality and consistency of compressed diffusion models.
## 2 Related Works
Efficient Diffusion Models. The existing methodologies principally address the efficiency issues associated with diffusion models via three primary strategies: the refinement of network architectures [33; 42; 29], the enhancement of training procedures [10; 40], and the acceleration of sampling [16; 23]. Diffusion models frequently employ U-Net models as denoisers, whose efficiency can be augmented through the introduction of hierarchical designs [32] or by executing the training within a novel latent space [33; 17]. Recent studies suggest integrating more efficient layers or structures into the denoiser to bolster the performance of the U-Net model [42; 31], thereby facilitating superior image quality learning during the training phase. Moreover, a considerable number of studies concentrate on amplifying the training efficiency of diffusion models, with some demonstrating that the diffusion training can be expedited by modulating the weights allocated to distinct timesteps [34; 10]. The training efficiency can also be advanced by learning diffusion models at the patch level [40]. The final approach underscores sampling efficiency, which typically does not necessitate the retraining of diffusion models [23]. Numerous studies aim to diminish the required steps through methods such as early stopping [28] or distillation [34].
Network Pruning. In recent years, the field of network acceleration has seen notable progress through the deployment of network pruning techniques [25; 14; 27; 20; 12; 4; 13]. The taxonomy of pruning methodologies typically bifurcates into two main categories: structural pruning [20; 5; 45; 22; 45] and unstructured pruning [30; 6; 35; 19]. The distinguishing trait of structural pruning is its ability to physically eliminate parameters and substructures from networks, while unstructured pruning essentially masks parameters by zeroing them out [7; 3]. However, the preponderance of network pruning research is primarily focused on discriminative tasks, particularly classification
tasks [14]. A limited number of studies have ventured into examining the effectiveness of pruning in generative tasks, such as GAN compression [21; 38]. Moreover, the application of structural pruning techniques to Diffusion Probabilistic Models introduces unique challenges that demand a reevaluation of conventional pruning strategies. In this work, we introduce the first dedicated method explicitly designed for pruning diffusion models, which may serve as an initial baseline for future works.
## 3 Diffusion Model Objectives
Given a data distribution \(q(\mathbf{x})\), diffusion models aim to model a generative distribution \(p_{\theta}(\mathbf{x})\) to approximate \(q(\mathbf{x})\), taking the form
\[p_{\theta}(\mathbf{x})=\int p_{\theta}(\mathbf{x}_{0:T})d\mathbf{x}_{1:T},\qquad\text{ where}\quad p_{\theta}(\mathbf{x}_{0:T}):=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}| \mathbf{x}_{t}) \tag{1}\]
where \(\mathbf{x}_{1},...,\mathbf{x}_{T}\) are the latent variables, which contribute to the joint distribution \(p_{\theta}(\mathbf{x}_{0:T})\) with learned Gaussian transitions \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t))\). Diffusion models involve two opposite processes: a forward (diffusion) process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}I)\) that adds noise to \(\mathbf{x}_{t-1}\) according to a pre-defined variance schedule \(\beta_{1:T}\), and a reverse process \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) which "denoises" the observation \(\mathbf{x}_{t}\) to get \(\mathbf{x}_{t-1}\). Using the notation \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), DDPM [16] trains a noise predictor with the objective:
\[\mathcal{L}(\mathbf{\theta}):=\mathbb{E}_{t,\mathbf{x}_{0}\sim q(\mathbf{x}),\mathbf{\epsilon }\sim\mathcal{N}(0,1)}\left[\|\mathbf{\epsilon}-\mathbf{\epsilon}_{\mathbf{\theta}}(\sqrt {\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},t)\|^{2}\right] \tag{2}\]
where \(\mathbf{\epsilon}\) is a random noise drawn from a fixed Gaussian distribution and \(\mathbf{\epsilon}_{\theta}\) refers to a learned noise predictor, which is usually a U-Net autoencoder in practice. After training, synthetic images \(\mathbf{x}_{0}\) can be sampled through an iterative process from a noise \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) with the formula:
\[\mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{ \sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\right)+ \sigma_{t}\mathbf{z} \tag{3}\]
where \(\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for steps \(t>1\) and \(\mathbf{z}=\mathbf{0}\) for \(t=1\). In this work, we aim to craft a lightweight \(\mathbf{\epsilon}_{\mathbf{\theta^{\prime}}}\) by removing redundant parameters of \(\mathbf{\epsilon}_{\mathbf{\theta}}\); the pruned model is expected to produce a similar \(\mathbf{x}_{0}\) when presented with the same \(\mathbf{x}_{T}\).
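For concreteness, one reverse step of Equation 3 can be sketched in PyTorch as follows; the schedule tensors and the noise-predictor signature are assumptions consistent with the notation above:

```python
import torch

@torch.no_grad()
def ddpm_step(eps_model, x_t, t, alphas, alphas_bar, betas, sigmas):
    """One reverse step of Equation 3 (a minimal sketch; schedule tensors
    are indexed by t and the noise predictor takes (x_t, t))."""
    t_batch = torch.full((x_t.size(0),), t, device=x_t.device, dtype=torch.long)
    eps = eps_model(x_t, t_batch)
    mean = (x_t - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / alphas[t].sqrt()
    z = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)
    return mean + sigmas[t] * z
```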
## 4 Structural Pruning for Diffusion Models
Given the parameter \(\mathbf{\theta}\) of a pre-trained diffusion model, our goal is to craft a lightweight \(\mathbf{\theta^{\prime}}\) by removing sub-structures from the network following existing paradigms [7]. Without loss of generality, we assume that the parameter \(\mathbf{\theta}\) is a simple 2-D matrix, where each sub-structure \(\mathbf{\theta}_{i}=[\theta_{i0},\theta_{i1},...,\theta_{iK}]\) is a row vector that contains a group of scalar parameters. Structural pruning aims to find a sparse parameter matrix \(\mathbf{\theta^{\prime}}\) that maximally preserves the original performance. Thus, a natural choice is to optimize the loss disruption caused by pruning:
\[\min_{\mathbf{\theta^{\prime}}}|\mathcal{L}(\mathbf{\theta^{\prime}})-\mathcal{L}(\bm {\theta})|,\qquad\text{s.t. }\|\mathbf{\theta^{\prime}}\|_{0}\leq s \tag{4}\]
The term \(\|\mathbf{\theta^{\prime}}\|_{0}\) denotes the L0 norm of the parameters, which counts the non-zero row vectors, and \(s\) represents the sparsity of the pruned model. Nevertheless, due to the iterative nature intrinsic to diffusion models, the training objective, denoted by \(\mathcal{L}\), can be perceived as a composition of \(T\) interconnected tasks: \(\{\mathcal{L}_{1},\mathcal{L}_{2},...,\mathcal{L}_{T}\}\). Each task affects and depends on the others, thereby posing a new challenge distinct from traditional pruning problems, which primarily concentrate on optimizing a single objective. In light of the pruning objective as defined in Equation 4, we initially delve into the individual contributions of each loss component \(\mathcal{L}_{t}\) in pruning, and subsequently propose a tailored method, Diff-Pruning, designed for diffusion model pruning.
Taylor Expansion at \(\mathcal{L}_{t}\). Initially, we need to model the contribution of \(\mathcal{L}_{t}\) for structural pruning. This work leverages Taylor expansion on \(\mathcal{L}_{t}\) to linearly approximate the loss disruption:
\[\mathcal{L}_{t}(\mathbf{\theta^{\prime}}) =\mathcal{L}_{t}(\mathbf{\theta})+\nabla\mathcal{L}_{t}(\mathbf{\theta})( \mathbf{\theta^{\prime}}-\mathbf{\theta})+O(\|\mathbf{\theta^{\prime}}-\mathbf{\theta}\|^{2}) \tag{5}\] \[\Rightarrow\mathcal{L}_{t}(\mathbf{\theta^{\prime}})-\mathcal{L}_{t}( \mathbf{\theta}) =\nabla\mathcal{L}_{t}(\mathbf{\theta})(\mathbf{\theta^{\prime}}-\mathbf{ \theta})+O(\|\mathbf{\theta^{\prime}}-\mathbf{\theta}\|^{2})\]
Taylor expansion offers a robust framework for network pruning, as it can estimate the loss disruption using first-order gradients. To evaluate the importance of an individual weight \(\mathbf{\theta}_{ik}\), we can simply set \(\mathbf{\theta^{\prime}}_{ik}=0\), which results in the following importance criterion:
\[\mathcal{I}(\mathbf{\theta}_{ik},\mathbf{x})=|\mathbf{\theta}_{ik}\cdot\nabla_{\mathbf{\theta}_ {ik}}\mathcal{L}(\mathbf{\theta},\mathbf{x})| \tag{6}\]
In structural pruning, we aim to eliminate the entire vector \(\mathbf{\theta}_{i}\) concurrently. The standard Taylor expansion for multiple variables, as described in the literature [8], advocates using \(|\sum_{k}\mathbf{\theta}_{ik}\cdot\nabla_{\mathbf{\theta}_{ik}}\mathcal{L}_{t}(\mathbf{\theta},\mathbf{x})|\) for importance estimation. This method exclusively takes into account the loss difference between the initial state \(\mathbf{\theta}\) and the final state \(\mathbf{\theta^{\prime}}\). However, considering the iterative nature of diffusion models, even minor fluctuations in loss can influence the final generation results. As a result, we propose to conceptualize structural pruning as a sequential process, wherein each \(\mathbf{\theta}_{ik}\) is removed in turn. This modification models the cumulative loss disturbance induced by each \(\mathbf{\theta}_{ik}\)'s removal, which leads to a slightly different score function for structural pruning:
\[\mathcal{I}_{t}(\mathbf{\theta}_{i},\mathbf{x})=\sum_{k}|\mathbf{\theta}_{ik}\cdot\nabla_{\mathbf{\theta}_{ik}}\mathcal{L}_{t}(\mathbf{\theta},\mathbf{x})| \tag{7}\]
In the subsequent sections, we will utilize this importance function to identify non-critical parameters for different objectives \(\mathcal{L}_{t}\).
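A minimal PyTorch sketch of Equation 7 for a single prunable group might look as follows; how parameters are grouped into removable sub-structures is left to the pruner and is an assumption here:

```python
import torch

def taylor_importance(group_params, loss):
    """Eq. 7 for one prunable group: sum_k |theta_ik * dL/dtheta_ik|.
    `group_params` are the parameter tensors forming one removable
    sub-structure (e.g., one channel and its coupled weights)."""
    grads = torch.autograd.grad(loss, group_params, retain_graph=True)
    return sum((p.detach() * g).abs().sum().item()
               for p, g in zip(group_params, grads))
```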
The Contribution of \(\mathcal{L}_{t}\). With the Taylor expansion framework, we further explore the contribution of the different loss terms \(\{\mathcal{L}_{1},...,\mathcal{L}_{T}\}\) in pruning. For a given timestep \(t\), the loss term \(\mathcal{L}_{t}=\|\mathbf{\epsilon}-\mathbf{\epsilon_{\theta}}\|\) affects the final generated sample \(\mathbf{x}_{0}\) through the chained process detailed in Equation 3. Assuming that pruning \(\mathbf{\theta}\) incurs a prediction error \(\mathbf{\delta}_{t}\) at timestep \(t\), the reverse process allows us to approximate the final effect \(\mathbf{\delta}_{0}\) on the generated images \(\mathbf{x}_{0}\) by iteratively applying Equation 3 starting from a disturbed \(\mathbf{\epsilon_{\theta^{\prime}}}(\mathbf{x},t)=\mathbf{\epsilon_{\theta}}(\mathbf{x},t)+\mathbf{\delta}_{t}\). At step \(t-1\), this leads to the error \(\mathbf{\delta}_{t-1}\) derived as:
\[\begin{split}\delta_{t-1}&=\left[\frac{1}{\sqrt{ \alpha_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{ \epsilon_{\theta}}(\mathbf{x}_{t},t)\right)+\sigma_{t}\mathbf{z}\right]-\left[\frac{1 }{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{ t}}}(\mathbf{\epsilon_{\theta}}(\mathbf{x}_{t},t)+\delta_{t})\right)+\sigma_{t}\mathbf{z} \right]\\ &=\frac{1}{\sqrt{\alpha_{t}}}\frac{\beta_{t}}{\sqrt{1-\bar{ \alpha}_{t}}}\delta_{t}\end{split} \tag{8}\]
If no additional prediction error is introduced at the other steps, we can use the disturbed \(x_{t-1}+\delta_{t-1}\) as the initialization and continue to apply Equation 3 to estimate \(\delta_{0}\):
\[\delta_{0}=\prod_{s=0}^{t}\frac{1}{\sqrt{\alpha_{s}}}\cdot\frac{\beta_{t}}{ \sqrt{1-\bar{\alpha}_{t}}}\delta_{t}=\frac{\beta_{t}}{\sqrt{\bar{\alpha}_{t} \cdot(1-\bar{\alpha}_{t})}}\delta_{t} \tag{9}\]
It is observed that the final distortion induced by \(\delta_{t}\) is progressively magnified by a factor of \(\frac{1}{\sqrt{\alpha_{s}}}>1\) along the sampling path. Consequently, prediction errors occurring at larger \(t\) primarily impact the high-level content of generated images, while smaller \(t\) values concentrate on refining the images with relatively small modifications.
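To make this magnification concrete, the closed-form factor in Equation 9 can be evaluated numerically. The short sketch below assumes, purely for illustration, the common linear schedule with \(\beta\) from \(10^{-4}\) to \(0.02\) over \(T=1000\) steps (the actual schedule depends on the model); it shows the factor growing by orders of magnitude towards large \(t\):

```python
import numpy as np

# Assumed linear beta schedule; the exact schedule is model-dependent.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

# Equation 9: |delta_0 / delta_t| = beta_t / sqrt(alpha_bar_t * (1 - alpha_bar_t))
amplification = betas / np.sqrt(alpha_bar * (1.0 - alpha_bar))
for t in (10, 100, 500, 999):
    print(f"t = {t:4d}: amplification = {amplification[t]:.4f}")
```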
Figure 1: Diff-Pruning leverages Taylor expansion at pruned timesteps to estimate the importance of weights, where early steps focus on local details like edges and color and later ones pay more attention to contents such as object and shape. We propose a simple thresholding method to trade off these factors with a binary weight \(\alpha_{t}\in\{0,1\}\), leading to a practical algorithm for diffusion models. The generated images produced by 5%-pruned DDPMs (without post-training) are illustrated.
These findings align with our empirical examination using Taylor expansion in Figure 1, as well as the observations in previous works [16; 42], which show that diffusion models tend to generate object-level information at larger \(t\) values and fine-tune the features at smaller ones. To this end, we model the pruning problem as a weighted trade-off between contents and details by introducing \(\alpha_{t}\), which acts as a weighting variable for different timesteps \(t\). A similar re-weighting strategy has also been explored in the field of efficient training of diffusion models [34; 10]. Nevertheless, applying the Taylor expansion at all steps can prove highly inefficient, as it requires at least \(T\) forward-backward passes. This process also creates a vast sampling space, leading to inaccurate approximations. To address this issue, we simplify the re-weighting strategy by treating it as a pruning problem, where \(\alpha_{t}\) takes the value of either 0 or 1 for all steps, allowing us to leverage only partial steps for pruning:
\[\min_{\mathbf{\theta}^{\prime}}|\sum_{t}\alpha_{t}\cdot(\mathcal{L}_{t}(\mathbf{ \theta}^{\prime})-\mathcal{L}_{t}(\mathbf{\theta}))|,\qquad\text{s.t. }\|\mathbf{\theta}^{\prime}\|_{0}\leq s,\ \alpha_{t}\in\{0,1\} \tag{10}\]
Taylor Score over Pruned Timesteps. In Equation 10, we try to remove some "unimportant" timesteps in the diffusion process so as to enable an efficient and stable approximation over partial steps. Our observations lead us to two key findings. Firstly, we note that the timesteps responsible for generating content are not exclusively found towards the end of the diffusion process (\(t\to T\)). Instead, there are numerous noisy and redundant timesteps that contribute only marginally to the overall generation, which is similar to the observations in the related work [28]. Secondly, through our experiments, we discovered that employing the full-step objective can sometimes yield suboptimal results compared to using a partial objective. We attribute this negative impact to the presence of converged gradients in the noisy steps (\(t\to T\)). As mentioned earlier, the Taylor approximation in Equation 5 comprises both first-order gradients and higher-order terms. When the loss \(\mathcal{L}_{t}\) converges, the loss curve is predominantly influenced by the higher-order terms rather than the gradients we utilize. In a diffusion model, the loss term \(\mathcal{L}_{t}\) rapidly approaches 0 as \(t\to T\). For example, on a pre-trained diffusion model for CIFAR-10, the relative loss \(\frac{\mathcal{L}_{t}}{\mathcal{L}_{max}}\) decreases to 0.05 when \(t=250\). Consequently, a full Taylor expansion can accumulate a considerable amount of noisy gradients from these converged or unimportant steps, resulting in an inaccurate estimation of weight importance.
The above observations naturally lead to a simple and practical thresholding strategy for determining the weighting strategy. To achieve this, we introduce a threshold parameter \(\mathcal{T}\) based on the relative loss \(\frac{\mathcal{L}_{t}}{\mathcal{L}_{max}}\). Timesteps with a relative loss below this threshold, i.e., \(\frac{\mathcal{L}_{t}}{\mathcal{L}_{max}}<\mathcal{T}\), are considered uninformative and are disregarded by setting \(\alpha_{t}=0\), which yields the finalized importance score:
\[\mathcal{I}(\mathbf{\theta}_{i},\mathbf{x})=\sum_{k}\left|\mathbf{\theta}_{ik}\cdot \sum_{\frac{\mathcal{L}_{t}}{\mathcal{L}_{max}}>\mathcal{T}}\nabla_{\mathbf{ \theta}_{ik}}\mathcal{L}_{t}(\mathbf{\theta},\mathbf{x})\right| \tag{11}\]
In practice, we should select an appropriately large value for \(\mathcal{T}\) to strike a good balance between details and content, while also avoiding uninformative gradients from noisy loss terms. While a variety of reweighting strategies exist to address this issue, our findings indicate that this straightforward binary pruning method proves effective and efficient (owing to the zero weights \(\alpha_{t}=0\)) in most cases.
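The thresholding rule translates directly into code. Below is a minimal sketch of how Equation 11 might be evaluated; the tiny linear model and synthetic per-step losses are illustrative stand-ins for a diffusion UNet and its \(\mathcal{L}_{t}\) terms, not the authors' implementation:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 4)
x = torch.randn(8, 4)
# Stand-ins for the per-timestep losses L_1 ... L_10 on a calibration batch
losses = [((model(x) - 0.1 * t * x) ** 2).mean() for t in range(1, 11)]

T_thresh = 0.05
l_max = max(l.item() for l in losses)
kept = [l for l in losses if l.item() / l_max > T_thresh]  # alpha_t = 1
sum(kept).backward()              # grads now hold sum_t alpha_t * dL_t/dtheta

# Equation 11: per-row score sum_k |theta_ik * accumulated gradient|
score = (model.weight * model.weight.grad).abs().sum(dim=1)
print(f"kept {len(kept)}/10 steps; row scores: {score}")
```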
## 5 Experiments
### Settings
Datasets, Models, and Training Protocols. The efficacy of Diff-Pruning is empirically validated across four diverse datasets, including CIFAR-10 (32\(\times\)32) [18], CelebA-HQ (64\(\times\)64) [26], LSUN Church (256\(\times\)256), and LSUN Bedroom (256\(\times\)256) [46]. We establish an initial benchmark for evaluation, juxtaposing our proposed method with several classic methods adapted from discriminative tasks. Our study centers on standard Denoising Diffusion Probability Models (DDPMs) that employ simple UNet models trained with a noise prediction loss. For the sake of reproducibility, we utilize off-the-shelf DDPMs from [16] as pre-trained models and prune these models in a one-shot fashion [20]. Our training protocol aligns with that of [16], albeit with significantly fewer training steps.
Evaluation Metrics. In this paper, we concentrate primarily on three types of metrics: 1) efficiency metrics, which include the number of parameters (#Params) and Multiply-Add Accumulation (MACs); 2) a quality metric, namely the Frechet Inception Distance (FID) [15]; and 3) a consistency metric, represented by Structural Similarity (SSIM) [41]. Unlike previous generative tasks that lacked reference images, we employ the SSIM index to evaluate the similarity between images generated by pre-trained models and pruned models, given identical noise inputs. All images in our experiments are generated using 100-step DDIM [37].
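For reference, the consistency measurement amounts to scoring per-image structural similarity between the two models' outputs under identical noise. A hedged sketch using scikit-image follows; the random arrays are placeholders for samples drawn from the pre-trained and pruned models with the same seed:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
img_pretrained = rng.random((64, 64, 3))   # placeholder: pre-trained sample
# Placeholder: pruned model's sample from the *same* initial noise
img_pruned = np.clip(
    img_pretrained + 0.02 * rng.standard_normal((64, 64, 3)), 0, 1
)

score = ssim(img_pretrained, img_pruned, channel_axis=-1, data_range=1.0)
print(f"SSIM = {score:.3f}")               # 1.0 means identical images
```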
### An Initial Benchmark for Diffusion Pruning
Table 1 presents the parameter count, MACs, FID scores, SSIM scores, and training steps for CIFAR-10 and CelebA-HQ. Utilizing various methods, we accelerate the pre-trained models from [16] by approximately 1.8 \(\times\) in terms of MACs. In order to evaluate the similarity between network outputs, we ensure all random seeds are fixed and feed identical initial noises for sampling.
Scratch Training vs. Pruning. The first baseline method that piques our interest is scratch training. Numerous studies on network pruning [9] suggest that training a compact network from scratch can be a formidable contender. To ensure a fair comparison, we create randomly-initialized networks with the same architecture as the pruned ones for scratch training. Our results reveal that scratch training demands relatively more steps to reach convergence. This suggests that training lightweight models from scratch may not be the most efficient approach, given that its training cost is comparable to that of the pre-trained models. Conversely, we observe that all pruning methods manage to converge within approximately 100K steps and outperform scratch training in terms of FID and SSIM scores. Thus, pruning emerges as a potent technique for compressing pre-trained diffusion models.
Pruning Criteria. A significant aspect of network pruning is the formulation of pruning criteria, which serve to identify superfluous parameters within networks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{**CIFAR-10 32 \(\times\) 32 (100 DDIM steps)**} \\
**Method** & **\#Params \(\downarrow\)** & **MACs \(\downarrow\)** & **FID \(\downarrow\)** & **SSIM \(\uparrow\)** & **Train Steps \(\downarrow\)** \\ \hline Pretrained & 35.7M & 6.1G & 4.19 & 1.000 & 800K \\ \hline Scratch Training & & & 9.88 & 0.887 & 100K \\ Scratch Training & & & 5.68 & 0.905 & 500K \\ Scratch Training & & & 5.39 & 0.905 & 800K \\ Random Pruning & & & 5.62 & 0.926 & 100K \\ Magnitude Pruning & & & 5.48 & 0.929 & 100K \\ Taylor Pruning & & & 5.56 & 0.928 & 100K \\ \hline Ours (\(\mathcal{T}=0.00\)) & & & 5.49 & 0.932 & 100K \\ Ours (\(\mathcal{T}=0.02\)) & 19.8M & 3.4G & 5.44 & 0.931 & 100K \\ Ours (\(\mathcal{T}=0.05\)) & & & **5.29** & **0.932** & 100K \\ \hline \hline & \multicolumn{3}{c}{**CelebA-HQ 64 \(\times\) 64 (100 DDIM steps)**} \\
**Method** & **\#Params** & **MACs** & **FID** & **SSIM** & **Train Steps** \\ \hline Pretrained & 78.7M & 23.9G & 6.48 & 1.000 & 500K \\ \hline Scratch Training & & & 7.08 & 0.833 & 100K \\ Scratch Training & & & 6.73 & 0.867 & 300K \\ Scratch Training & & & 6.71 & 0.869 & 500K \\ Random Pruning & & & 6.70 & 0.874 & 100K \\ Magnitude Pruning & & & 7.08 & 0.870 & 100K \\ Taylor Pruning & & & 6.64 & 0.880 & 100K \\ \hline Ours (\(\mathcal{T}=0.00\)) & & & **6.24** & **0.885** & 100K \\ Ours (\(\mathcal{T}=0.02\)) & 43.7M & 13.3G & 6.45 & 0.878 & 100K \\ Ours (\(\mathcal{T}=0.05\)) & & & 6.52 & 0.878 & 100K \\ \hline \hline \end{tabular}
\end{table}
Table 1: Diffusion pruning on CIFAR-10 and CelebA. We leverage Frechet Inception Distance (FID) and Structural Similarity (SSIM) to estimate the quality and similarity of generated samples under the same random seed. A larger SSIM score means more consistent generation.
Due to the absence of dedicated prior work on diffusion model pruning, we adapted three basic pruning methods from discriminative tasks: random pruning, magnitude-based pruning, and Taylor-based pruning, which we refer to as Random, Magnitude, and Taylor respectively in subsequent sections. For a given parameter \(\mathbf{\theta}\), Random assigns importance scores derived from a uniform distribution to each \(\mathbf{\theta}_{i}\) randomly, denoted as \(\mathcal{I}(\mathbf{\theta})\sim\mathbf{U}(0,1)\). This results in a straightforward baseline devoid of any prior or bias, and has been shown to be a competitive baseline for pruning [24]. Magnitude subscribes to the "smaller-norm-less-informative" hypothesis [20, 44], modelling the weight importance as \(\mathcal{I}(\mathbf{\theta})=|\mathbf{\theta}|\). In contrast, Taylor is a data-driven criterion that measures importance as \(\mathcal{I}(\mathbf{\theta},x)=|\mathbf{\theta}\cdot\nabla_{\mathbf{\theta}}\mathcal{L}(x, \mathbf{\theta})|\), which aims to minimize the loss change, as discussed in our method. As shown in Table 1, an intriguing phenomenon is that these three baseline methods do not maintain a consistent ranking across the two datasets. For instance, while Magnitude achieves the best FID performance among the three on CIFAR-10, it performs poorly on CelebA. In contrast, our method delivers stable improvements over the baseline methods, demonstrating superior performance on both datasets. Remarkably, our method even surpasses the pre-trained model on CelebA-HQ with only 100K optimization steps. Nonetheless, performance degradation is observed on CIFAR-10, which can be attributed to its more complex scenes and larger number of categories.
\begin{table}
\begin{tabular}{l c c c c|l c c c} \hline \hline \multicolumn{6}{c|}{**LSUN-Church 256 \(\times\) 256 (DDIM 100 Steps)**} & \multicolumn{6}{c}{**LSUN-Bedroom 256 \(\times\) 256 (DDIM 100 Steps)**} \\
**Method** & **\#Params** & **MACs** & **FID** & **Steps** & **Method** & **\#Params** & **MACs** & **FID** & **Steps** \\ \hline Pretrained & 113.7M & 248.7G & 10.6 & 4.4M & Pretrained & 113.7M & 248.7G & 6.9 & 2.4M \\ Scratch Training & 46.5M & 100.7G & 40.2 & 0.5M & Scratch Training & 46.5M & 100.7G & 50.3 & 0.2M \\ Ours (\(\mathcal{T}=0.01\)) & 46.5M & 100.7G & **13.9** & 0.5M & Ours (\(\mathcal{T}=0.01\)) & 46.5M & 100.7G & **18.6** & 0.2M \\ \hline \hline \end{tabular}
\end{table}
Table 2: Pruning diffusion models on LSUN Church and LSUN Bedroom.
Figure 2: Generated images of the pre-trained models [16] (left) and the pruned models (right) on LSUN Church and LSUN Bedroom. SSIM measures the similarity between generated images.
### Pruning at Higher Resolutions
To further validate the efficiency and effectiveness of our proposed Diff-Pruning, we perform pruning experiments on two 256\(\times\)256 scene datasets, LSUN Church and LSUN Bedroom. The pre-trained models from [16] require around 2.4M and 4.4M training steps, which can be quite time-consuming in practice. We demonstrate that Diff-Pruning can compress these pre-existing models using only 10% of the standard training resources. We report the number of parameters, MACs, and FID scores in Table 2. We compare the pruned models with the pre-trained models as well as a new model trained from scratch. The pruned model converges to a passable FID score in 10% of the standard steps, while a model trained from scratch is still severely under-fitted. Nevertheless, we also discover that compressing a model trained on a large-scale dataset, such as LSUN Bedroom, which contains 300K images, proves to be quite challenging with a very limited number of training steps. We show in the supplementary materials that the FID scores can be further improved with more training steps. Moreover, we also visualize the generated images in Figure 2 and report the single-image SSIM score to measure the similarity of the generated images. By nature, the pruned model preserves similar generation capabilities, as it inherits most of its parameters from the pre-trained model. This feature is valuable in practical settings, as it does not significantly alter the user experience when transferring to the compressed diffusion models. However, some inconsistencies can still be detected, such as the watermark in the first example of Figure 2.
Figure 4: The SSIM of models pruned with different numbers of timesteps. For CIFAR-10, most of the late timesteps can be pruned safely. For CelebA-HQ, using more steps is consistently beneficial.
Figure 3: Generated images of 5%-pruned models using different important criteria. We report the SSIM of batched images without post-training.
### Ablation Study
Pruned Timesteps. First, we conduct an empirical study evaluating the partial Taylor expansion over pruned timesteps. This approach prioritizes steps with larger gradients and strives to preserve as much content and detail as possible, thereby enabling more accurate and efficient pruning. The impacts of timestep pruning are demonstrated in Figure 4. We seek to prune a pre-trained diffusion model over a range of steps, spanning from 50 to 1000, after which we utilize the SSIM metric to gauge the distortion induced by pruning. In diffusion models, earlier steps (\(t\to 0\)) usually present larger gradients compared to the later ones (\(t\to T\)) [33]. This inherently leads to gradients that have reached convergence when \(t\) is large. On the CIFAR-10 dataset, we find that the optimal SSIM score can be attained at around 250 steps, and adding more steps can slightly deteriorate the quality of the synthetic images. This primarily stems from the inaccuracy of the first-order Taylor expansion at converged points, where the gradient no longer provides useful information and can even distort informative gradients through accumulation. However, we observe that the situation differs slightly with the CelebA dataset, where more steps can be used for evaluation. In practice, we can strike a balance between efficiency and quality via the predefined threshold \(\mathcal{T}\), applying the Taylor expansion only to the losses \(\mathcal{L}_{t}\) that satisfy \(\frac{\mathcal{L}_{t}}{\mathcal{L}_{max}}>\mathcal{T}\).
Visualization of Different Importance Criteria. Figure 3 visualizes the images generated by pruned models using different pruning criteria, including the proposed method with \(\mathcal{T}=0\) (w/o timestep pruning) and \(\mathcal{T}>0\). The SSIM scores of the generated samples are reported for a quantitative comparison. The Diff-Pruning method with \(\mathcal{T}>0\) achieves superior visual quality, with an SSIM score of 0.905 after pruning. It is observed that employing more timesteps in our method can have a negative impact, leading to greater distortion in both textures and contents. Additionally, Magnitude is not a practical choice for diffusion models, as it yields even worse visual quality than Random. This may be attributed to the multi-step nature of diffusion models, where the Magnitude criterion could be biased.
Pruning Ratios. Table 3 presents the #Params, MACs, FID, and SSIM scores of models subjected to various pruning ratios based on MACs. Notably, our findings reveal that, unlike CNNs employed in discriminative models, diffusion models exhibit a significant sensitivity to changes in model size. Even a modest pruning ratio of \(16\%\) leads to a noticeable degradation in FID score (\(4.19\to 4.62\)). In classification tasks, a perturbation in loss does not necessarily impact the final accuracy; it may only undermine prediction confidence while leaving classification accuracy unaffected. However, in generative models, the FID score is a continuous measure, making it more susceptible to domain shift. Nevertheless, our proposed method consistently maintains high SSIM results across different pruning ratios, indicating its effectiveness in preserving generation consistency.
Thresholding. In addition, we conducted experiments to investigate the impact of the thresholding parameter \(\mathcal{T}\). Setting \(\mathcal{T}=0\) corresponds to a full Taylor expansion at all steps, while \(\mathcal{T}>0\) denotes pruning of certain timesteps during importance estimation. The quantitative findings presented in Table 4 align with the SSIM results depicted in Figure 4. Notably, Diff-Pruning attains optimal performance when the quality of generated images reaches its peak. For datasets such as CIFAR-10, we observed that a 200-step Taylor expansion is sufficient to achieve satisfactory results. Besides, using a full Taylor expansion in this case can be detrimental, as it accumulates noisy gradients over approximately 700 steps, which obscures the correct gradient information from the earlier steps.
## 6 Conclusion
This work introduces Diff-Pruning, a dedicated method for compressing diffusion models. It utilizes Taylor expansion over pruned timesteps to identify and remove non-critical parameters. The proposed approach is capable of crafting lightweight yet consistent models from pre-trained ones, incurring only about 10% to 20% of the cost compared to pre-training. This work may set an initial baseline for future research that aims at improving both the generation quality and the consistency of pruned diffusion models.
|
2301.05160 | Venus, Phosphine and the Possibility of Life | The search for life elsewhere in the universe is one of the central aims of
science in the 21st century. While most of this work is aimed at planets
orbiting other stars, the search for life in our own Solar System is an
important part of this endeavour. Venus is often thought to have too harsh an
environment for life, but it may have been a more hospitable place in the
distant past. If life evolved there in the past then the cloud decks of Venus
are the only remaining niche where life as we know it might survive today. The
discovery of the molecule phosphine, PH$_3$, in these clouds has reinvigorated
research looking into the possibility of life in the clouds. In this review we
examine the background to studies of the possibility of life on Venus, discuss
the discovery of phosphine, review conflicting and confirming observations and
analyses, and then look forward to future observations and space missions that
will hopefully provide definitive answers as to the origin of phosphine on
Venus and to the question of whether life might exist there. | David L. Clements | 2023-01-12T17:28:01Z | http://arxiv.org/abs/2301.05160v1 | # Venus, Phosphine and the Possibility of Life
###### Abstract
The search for life elsewhere in the universe is one of the central aims of science in the 21st century. While most of this work is aimed at planets orbiting other stars, the search for life in our own Solar System is an important part of this endeavour. Venus is often thought to have too harsh an environment for life, but it may have been a more hospitable place in the distant past. If life evolved there in the past then the cloud decks of Venus are the only remaining niche where life as we know it might survive today. The discovery of the molecule phosphine, PH\({}_{3}\), in these clouds has reinvigorated research looking into the possibility of life in the clouds. In this review we examine the background to studies of the possibility of life on Venus, discuss the discovery of phosphine, review conflicting and confirming observations and analyses, and then look forward to future observations and space missions that will hopefully provide definitive answers as to the origin of phosphine on Venus and to the question of whether life might exist there.
Venus; astrobiology; search for life
## 1 Introduction
The search for life elsewhere in the universe is one of the major driving forces for astronomy and astrophysics in the 21st century [1] with extremely large telescopes on the ground and in space being designed and built to this end. The main goal of these facilities is to study the atmospheres of rocky, terrestrial planets in orbit around other stars - so-called exo-planets - and follows the explosive development of exoplanet studies since their first discovery in 1995 [2]. We now know of over 5000 exoplanets, and more are announced all the time (for the latest information on exoplanet discoveries see www.exoplanet.eu). However, while convincing evidence for life on exoplanets may arrive in the next 20 years, there are many questions that such observations will not be able to answer, including the biochemical processes exoplanet life uses, how and when it originated, and how it evolved into whatever form it has today - a form that will also be unknowable.
If we want answers to these further questions, the only place they may be answered is in our own Solar System. While still very distant, bodies such as Mars, the Jovian moon Europa, and Saturn's moons Enceladus and Titan, which might harbour signs of life, are accessible to direct _in situ_ studies that can answer these questions. And any answers inevitably lead back to the question of our own origins on Earth, as well as the more general issue of the prevalence of life in the universe.
The search for signs of life on Mars is well underway, with orbiting spacecraft looking at its atmosphere (eg. the ExoMars Trace Gas Orbiter (TGO) [3]) and an ever-increasing number of rovers scouring its surface [4]. Some of these are already preparing samples of rock to be returned to Earth for laboratory analysis. Further out in the Solar System, the first stages of the exploration of Jupiter's moons Ganymede and Europa, in part for signs of habitable environments, are already under development with the European Space Agency's (ESA's) JUpiter ICy moons Explorer (JUICE) [5], aimed primarily at Ganymede, due for launch in 2023, and NASA's Europa Clipper, aimed squarely at Europa, due for launch in 2024. Plans are at an earlier stage for the exploration of the moons of Saturn that might harbour life, including Titan with its thick atmosphere and Enceladus with its plumes of water vapour spewing into space, but it is clear that these too will be visited sometime in the next few decades.
Until very recently this list of targets - Mars, Europa, Ganymede, Titan and Enceladus - would have been considered the most likely places to find signs of life in the Solar System. It was thus rather surprising to find Venus added to this list in late 2020 with the discovery of an unusual gas, phosphine, chemical formula PH\({}_{3}\), in its atmosphere [6]. Venus, as we will see below, has a surface that is completely hostile to life and so had been largely discounted from these considerations. But, as we will also see, there have long been thoughts that there might be niches in the atmosphere of this planet that might be more favourable to the existence of life than its deeply unpleasant surface [7, 8]. In this article we discuss how life is sought using atmospheric observations, why phosphine is a potential signature of biological activity, how it was detected on Venus, and what its discovery might mean for our understanding of the history of Venus and of life in the Solar System. We also look at the prospects for future studies of Venus in search of further possible signs of life.
## 2 What is Life?
The formal definition of life adopted by NASA for searches for life elsewhere is that 'Life is a self-sustaining chemical system capable of Darwinian evolution' [9]. This definition clearly applies to most things that we would consider alive on Earth, but it does leave some things out. Viruses, for example, cannot reproduce on their own, but instead require a host cell, whose reproductive machinery they take over. Alternative definitions are available (eg. [10]), but the NASA definition seems a useful starting point to begin any discussion of where life might exist in the Solar System or elsewhere. Given this definition we can start to examine what the requirements might be for life to exist.
From a physicist's perspective the key thing that life needs to be able to support itself is a source of energy. For most of the life we are familiar with on the Earth that source of energy is the Sun - plants photosynthesise using sunlight, while animals and other organisms consume plants in various ways, including eating things that have eaten plants. However, sunlight is not the only source of energy used by life on Earth. In the depths of Earth's oceans, where sunlight never shines, there are thriving communities of organisms surrounding and fed by hydrothermal vents [11]. These arise where ocean water can enter the Earth's crust and be heated by magma. The heated water dissolves minerals from the rocks and circulates back into the ocean through the vents. The primary producers of energy in these vents are chemosynthetic bacteria that use a variety of processes to derive energy from the chemicals emerging from the vents. These then support a diverse community of other organisms around the vents,
including giant tube worms that can be up to 3 metres long. Hydrothermal vents in the young Earth are a possible site for the first emergence of life on our planet [12]. Similar hydrothermal vent structures are thought to exist beneath the ice-covered surfaces of moons like Europa and Enceladus [13]. However, life without light on Earth is not limited to hydrothermal vents. It has even been suggested that most ecosystems on Earth exist in the dark, deriving their energy from chemical processes separate from, and independent of, photosynthesis [14]. This clearly has implications for the search for life elsewhere.
What requirements for life are there beyond a source of energy? Revisiting NASA's definition of life we see that it is defined as a _chemical_ system. The chemical basis for all the life we know on Earth is the element carbon. This element is exceptional in the Periodic Table as the lightest element in Group IV. It thus has a half-filled electron shell giving it a valence of 4 so it can donate or receive up to 4 electrons, allowing it to bind with itself to form chains, and with numerous other elements. Some of the commonest are hydrogen, oxygen and nitrogen, contributing to the wide variety of complex organic compounds found in living organisms. The next heaviest Group IV element is silicon. It has a similarly high valence and has been proposed as an alternative building block for life [15], but there are a number of issues which mean that carbon is a better choice.
Given a chemical basis for life, there must be a way for the necessary chemical reactions to take place so that the processes of life can operate. The best way to achieve this is for the reacting chemicals to be dissolved by some solvent so that they can easily combine and interact. On Earth the solvent behind all biology is water. While other solvents have been suggested, especially in environments too hot or too cold for liquid water to be available [16], water remains the only solvent which we know is associated with life. The bulk of our searches for life elsewhere have thus been focussed on places where liquid water might, or is known to, exist.
Our consideration of the nature of life thus leads us to the conclusion that we should be looking for life on an astronomical body where there is a ready supply of energy, where complex carbon-based chemistry can take place, and where liquid water can exist. In our own Solar System this leads us to focus on the planet Mars and the icy moons of gas giants, such as Europa and Ganymede orbiting Jupiter, and Titan and Enceladus orbiting Saturn. Venus does not appear on this list because, as we will see below, its current surface conditions are far too hostile for life as we know it, but, as we will also see, this may not always have been the case, and there may be loopholes that could allow any life that might have emerged on the surface of this planet to persist today in the dense clouds above its surface.
## 3 Looking for Life
Life is easy to find on Earth - we are surrounded by it - but looking for life on astronomical bodies, even our closest neighbours, is a rather more difficult task, especially since we do not expect to find something as obviously alive as a tree or a cow. In the absence of complex macroscopic life we have to look for signs of activity from microbial life. What might these be? Using Earth as an example, the clearest sign that life exists is the presence of oxygen in our atmosphere. This is generated by photosynthesising organisms. Today we would associate this with plants, but in the ancient history of our planet, the first oxygenating organisms were single-celled microbes known as blue-green algae. These were responsible for the Great Oxygenation Event which took place
about 2.5 billion years ago [17]. Before this event oxygen was not a major constituent of Earth's atmosphere. After it, the Earth had an atmosphere much closer to what we see today, with abundant free oxygen available across the planet. In principle, the presence of oxygen in the atmosphere of Earthlike exoplanets will be detectable by future instruments through the absorption of ozone, O\({}_{3}\), at mid-infrared wavelengths [18].
Molecules that potentially indicate the presence of life, such as oxygen and ozone, are known as biosignatures (see [19] and references therein). They include oxygen, ozone, methane (CH\({}_{4}\)), N\({}_{2}\)O, C\({}_{2}\)H\({}_{6}\), CH\({}_{3}\)Cl, CH\({}_{3}\)SH and more. A wider variety of biosignatures than just oxygen and ozone are needed to cope with potentially different biospheres than the one we currently inhabit. In fact, for much of the history of life on Earth, there would not have been sufficient oxygen or ozone for life to be detectable since the Earth was dominated by anaerobic life (ie. life that does not require or produce oxygen). Searches for other small molecules that might be additional biosignatures are also underway [20]. The common factor among all of these biosignature molecules is that they should not exist in the abundances seen unless biological processes are maintaining their abundance ie. their abundance is out of equilibrium with their environment. However, biological processes are not the only way of maintaining an out-of-equilibrium abundance of some of these biosignatures. For example, the splitting of water into hydrogen and oxygen by stellar ultraviolet radiation, and the subsequent escape of hydrogen from the planet's atmosphere, can produce a significant partial pressure of oxygen on an abiotic (ie. lifeless) planet given certain other conditions [21]. Care must therefore be taken in interpreting any unusual abundance of a single molecule as an unambiguous biosignature without a good understanding of the broader environment in which it is found.
Phosphine, PH\({}_{3}\), is one of the small molecules suggested as a potential biosignature by the team investigating novel biosignature molecules [22]. On Earth it is exclusively associated with anaerobic ecosystems, or with human industrial chemistry. Sources have been found associated with anaerobic environments in ponds, marshes and sludges, and specifically with piles of penguin guano and in the faeces of European badgers. While it is clearly associated with anaerobic biology, the specific biochemical pathway for phosphine production in anaerobic systems remains unclear. Its use as a biosignature was originally envisioned in the context of a significant concentration of the gas within the atmosphere of a terrestrial exoplanet with a large anaerobic biosphere. While phosphine has been detected in the vast reducing (ie. hydrogen rich) atmosphere of the gas giant Jupiter [23], where it is produced by normal chemical processes in the very dense, hot inner regions then brought to the surface by convection, its presence in the oxidised atmosphere of a terrestrial planet would be difficult to explain by equilibrium chemistry, making it a good candidate biosignature. Observational facilities able to find phosphine in the atmosphere of a distant exoplanet are some way in the future, but, as we see below, a surprise was waiting for us much closer to home.
## 4 Venus - an unlikely candidate for astrobiology
### Venus Today
Venus has often been described as Earth's evil twin since the two planets have very similar sizes and masses, with Venus having a radius about 95%, and a mass about
82% of the Earth's. It is also a little under 30% closer to the Sun than the Earth. Viewed from a telescope, Venus appears as a featureless pale disk because it has a thick atmosphere containing a permanent cloud layer, opaque to optical observations, that reflects over 70% of the sunlight that falls on it (see Figure 1). Venus' orbit closer to the Sun, combined with the effects of these reflective clouds, might naively suggest a surface temperature not too different from the Earth. Because of this, in the early 20th century it was thought that the clouds might be water vapour and that the surface of Venus could be habitable, inspiring visions of steamy tropical jungles filled with alien life. Once more detailed observations became available, and space probes started to visit in the 1960s, it became apparent that Venus was very different from these initial speculations.
Rather than having an Earthlike atmosphere, early 1960s space probes like NASA's Mariner 2 and the Soviet Union's Venera 4 found that Venus has an extremely dense atmosphere dominated by carbon dioxide, CO\({}_{2}\), with a small amount of nitrogen (3.5%) and traces of other gases such as sulphur dioxide (SO\({}_{2}\)). The surface atmospheric pressure on Venus is about 93 times higher than sea level pressure on the Earth. This huge blanket of CO\({}_{2}\) absorbs thermal radiation from the surface of the planet which would otherwise be radiated away into space. The Sun's radiation is thus trapped beneath the clouds, leading to a runaway greenhouse effect and surface temperatures of about 735 K (462 °C), making it hotter than the maximum surface temperature of Mercury [24]. To make things even more unpleasant, the thick clouds are largely made up of sulphuric acid as a result of atmospheric SO\({}_{2}\) dissolving into droplets of water that condense at altitudes of about 55 km above the surface. All of this combines to make the surface of Venus today an utterly inhospitable place for anything we might consider a biological system. The true nature of the surface of Venus in fact led leading astronomer Carl Sagan to describe the planet as hell.
Figure 1: Left: Venus as seen in the optical by the Messenger mission. In the optical the planet appears almost featureless because of the highly reflective clouds that cover the entire planet. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington Right: Venus as seen in the ultraviolet by the Japanese space mission AKATSUKI. This image is produced by observations in two ultraviolet bands, at 365 and 283 nm. The colours are the result of unexplained ultraviolet absorption by small particles in the cloud layer. Credit: JAXA/ISAS/DARTS/Kevin M. Gill
Spacecraft continued to visit Venus after the early missions, including a long series of Venera probes launched by the Soviet Union. Several of these, as well as the Pioneer Venus mission from NASA, sent probes into the atmosphere of Venus to better understand its chemistry and constituents. The Venera and Vega projects also attempted to land probes on the hostile surface of Venus. After a number of failures they were eventually successful and conducted a number of studies. None of the landers lasted very long, with the longest-lived surviving only about 2 hours. Nevertheless, some of these landers managed to send back images of the Venusian surface, one of which can be seen in Figure 2.
While the surface of Venus is undoubtedly hostile to life at the present time, there is a potential niche for biological processes in the clouds that obscure the planet since they are cool enough for liquid water to be present, and at an atmospheric pressure that matches that of the Earth at sea level [25, 26] (see Figure 3). Speculation about the possibility of life in the clouds of Venus dates back to the very earliest days of our understanding of the true conditions on the planet [27] and continues to the present day (eg. [28, 29]).
### Venus in the past
While the surface of Venus is undoubtedly hostile today, this might not have been the case in the distant past. If the surface of Venus was warm and wet - in the sense that liquid water could flow on the surface - then it is possible that life might have emerged there. When conditions later became increasingly hostile to life on the surface, it might have evolved to seek sanctuary in the clouds, the last habitable ecological niche on the planet. But is there any evidence to suggest that Venus was ever less hostile than it is today?
Sadly, we do not have direct access to information about the conditions on Venus billions of years ago, but there are hints that suggest that Venus may once have had much more water on its surface, and in its atmosphere, than it does today. Principal among this evidence is the deuterium to hydrogen ratio (D/H ratio). Venus currently has a D/H ratio that is 150\(\pm\)30 times that of the Earth [30]. This suggests that Venus has lost a substantial fraction of its water through the escape of hydrogen, which is favoured over the escape of deuterium since the latter has a higher mass. Hydrogen is driven off Venus through interactions with the solar wind which can directly interact with the upper layers of its atmosphere since, unlike the Earth, Venus lacks a protective magnetic field. However, it is possible that Venus may have retained
Figure 2: A 360 degree panorama of the surface of Venus captured by the Venera 9 probe during its 53 minutes of operation on the harsh surface of the planet. This is the first image ever obtained from the surface of Venus. Part of the probe can be seen at the bottom of the image.
a magnetic field, and thus the possibility of liquid water oceans, for several billion years after its formation [31].
Computer simulations have been used to assess whether a warm wet Venus in the first billion years of the Solar System would have had a stable climate that could be conducive to the emergence of life [32]. While the conclusions of this work are still debated - some suggest that even a wet Venus would never have been able to condense its water into oceans, leading to a so-called 'steam Earth' scenario [33] - it is intriguing to consider the possibility that Venus may in fact have been the first habitable planet in the Solar System. A warm wet Venus (see Figure 4) might have remained habitable until as recently as about 700 million years ago, allowing substantial time for life to evolve and propagate across its surface, given that life seems to have emerged on Earth somewhere between 3.7 and 4.3 billion years ago [12]. If this is correct, it may only be in the most recent 15% of the age of the Solar System that massive volcanic activity on Venus established a runaway greenhouse effect on the planet, leading to water stripping and the hot, dry, hostile surface that we see today.
## 5 Atmospheric mysteries
As well as uncertainty about the role of water in Venus' past, there are also some significant mysteries about the planet's atmosphere today even before we get to the detection of phosphine. Long-standing issues include ([34] and references therein):
* The variation of the abundance of water vapour and SO\({}_{2}\) with altitude in and above the cloud layers [35]. Water, H\({}_{2}\)O, persists throughout the atmosphere but SO\({}_{2}\) levels drop from parts per million abundances below the clouds to parts per billion above. This is not what is expected given that both gases are thought to be released by volcanism at the surface and to be well mixed throughout the atmosphere until both are destroyed by solar UV at altitudes of 70 km or higher. The apparent depletion of SO\({}_{2}\) in the clouds is currently not understood.
Figure 3: The atmospheric structure of Venus, showing how temperature and pressure vary with height. The cloud layer at around 55km altitude has temperature and atmospheric pressure levels that are comparable to those on the surface of the Earth. [29]
* Oxygen, O\({}_{2}\), is present in the clouds of Venus where it was detected by gas chromatographs on board the Pioneer Venus Probe [36] and the Venera 13 and 14 descent modules [37]. There is currently no known process by which oxygen can be formed in the cloud layers so its origin is something of a mystery.
* Observations of the clouds of Venus in the ultraviolet reveal complex spatial and temporal changes in absorption and reflection [8]. This is in contrast to observations in the optical and near-infrared where the clouds of Venus appear nearly featureless on the dayside (see eg. Figure 1). Significant cloud contrasts are only seen in reflected sunlight at wavelengths shorter than about 400 nm, and at near-infrared wavelengths (1.7 to 2.4 \(\mu\)m) in emission on the nightside. These variations in ultraviolet absorption were first observed in photographic observations in 1928 [38], but, despite all the ground-based observations, spacecraft observations from orbit, and descent probes sampling the atmosphere, the chemical and physical origin of the absorber responsible remains unknown.
* The clouds of Venus contain a variety of constituents. Based on size analysis from the Pioneer Venus Probe particle size spectrometer [39] they can be divided up into three particle sizes. These correspond to aerosols, of size \(\sim\)0.4 \(\mu\)m, droplets of size \(\sim\)2 \(\mu\)m, and larger particles of size \(\sim\)7 \(\mu\)m. The larger particles are present only in the middle and lower cloud layers, at altitudes from 47.5 to 56.5 km above the surface. The nature of these largest particles, which may have a substantial solid component, and be non-spherical in shape, is currently unclear.
Figure 4: An artist conception of what a warm, wet Venus might have looked like during earlier stages in the evolution of the Solar System. Credit: NASA
* Recent reanalysis of the chemistry of Venus' atmosphere based on measurements by mass spectrometers on descent probes suggest the possibility of chemical disequilibrium in the middle cloud layers [40]. This result is based on the presence of several species in the mass spectrometer data, including hydrogen sulphide, nitrous, nitric & hydrochloric acids, carbon monoxide, ethane, and hydrogen cyanide as well as phosphine (this reanalysis was conducted after the detection of phosphine by ground based observations, of which more later) and possibly ammonia. This chemical mix indicates that reducing chemistry is taking place in the clouds. If so, then the processes behind this activity would be out of equilibrium with the oxidising chemistry of Venus. In the context of the search for life elsewhere, chemical disequilibrium is a potential biosignature, making these results very interesting. More recently still, it seems that ammonia, NH\({}_{3}\), may have been independently detected in Venus' atmosphere by ground based observations (Greaves et al., private communication), confirming the tentative results from the Pioneer Venus Probe mass spectrometer reanalysis, and adding an extra piece of evidence in favour of chemical disequilibrium in the clouds of Venus.
These problems with our understanding of the atmosphere of Venus, and specifically its clouds, make the case for renewed interest in the planet as a possible astrobiology target. Even if biological processes do not provide the explanation for these poorly understood phenomena, it is clear that there are chemical and physical processes underway in the clouds of Venus that we do not currently understand. Further observations to explore the chemistry of Venus and its atmosphere are thus needed. It is in this context that a team of astronomers proposed to look for phosphine, PH\({}_{3}\), on Venus3.
Footnote 3: The author of the current paper was part of this team and is part of the ongoing work to study phosphine on Venus.
## 6 The Search for Phosphine
Phosphine, as we have seen above, has been proposed as a potential biosignature gas which might be present in significant quantities on inhabited planets orbiting other stars [22]. Current observational facilities, however, are not yet able to make phosphine observations of terrestrial exoplanets. While we know that phosphine is present in the Earth's atmosphere in small amounts, thanks to industrial processes and anaerobic organisms, there were no limits on the amount of phosphine present on other Solar System terrestrial planets. Mercury has essentially no atmosphere so is an inappropriate target. Mars has a very thin atmosphere so is unlikely to have much phosphine even if there is biological activity underway there, and any phosphine present would be rapidly destroyed by solar UV radiation. This leaves Venus as the only reasonable terrestrial planet in the Solar System where some test observations in search of phosphine might be conducted.
Therefore in 2016 a team of astronomers led by Prof Jane Greaves put together a proposal to the James Clerk Maxwell Telescope (JCMT) to conduct test observations of Venus' atmosphere to look for absorption from the J=1-0 rotational transition of phosphine, which would produce an absorption line at a wavelength of 1.123 mm (\(\sim\) 267 GHz). There are other transitions of phosphine at other wavelengths, notably in the far-IR and in the mid-IR, but this particular transition has some advantages. Firstly,
observations can be conducted from the ground. Observations of the next highest rotational transition, J=2-1, would require observations from the stratosphere (see below). Mid-IR observations of other transitions can be conducted from the ground, of which more later, but the Greaves team did not have easy access to the necessary mid-IR facilities.
The initial idea for the observations was to acquire a few hours of data to better understand the observational issues with looking for a weak absorption line against a very bright continuum source, Venus, with the eventual intent to propose a longer series of observations to set a stringent upper limit, since phosphine was not expected to be found. That is not, however, how things turned out.
### JCMT Observations
The JCMT is a 15 m diameter mm/submm telescope on the mountain Mauna Kea on the Big Island of Hawaii, at an altitude of about 4000 m. Mauna Kea is an ideal site for mm/submm observations since it is both high and dry, and so avoids much of the water vapour in the atmosphere of the Earth that would otherwise absorb and contaminate observations at these wavelengths. It is equipped with an array of instruments that operate both as continuum imagers and as high resolution spectrometers. For the first set of JCMT phosphine observations [6] an instrument called Receiver A3 (RxA3) was used. RxA3 was one of the early instruments used on the JCMT and was delivered to the telescope in 1998. It was retired not long after the phosphine detection observations reported in [6].
Like many mm/submm spectroscopy receivers, RxA3 uses a heterodyne approach, whereby the incoming astronomical signal, in this case at frequencies of around 267 GHz, is multiplied by a pure sine wave signal at a nearby frequency, the so-called local oscillator frequency, using a device called a mixer. This results in the production of a signal at a frequency that corresponds to the difference in frequencies between the received signal and the local oscillator frequency, and allows signals at frequencies outside the frequency range of interest to be removed. This lower frequency signal, at what is called the intermediate frequency, or IF, is then measured and dealt with by later stages of processing. The technology used in most mm/submm receivers relies on SIS mixers (superconductor-insulator-superconductor) to mix the astronomical and local oscillator frequencies. For more information on how these operate, and on much else in radio astronomy, see [41].
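The mixing step is easy to illustrate numerically. In this schematic sketch (with all frequencies scaled down enormously from the real \(\sim\)267 GHz so it runs instantly; the numbers are arbitrary stand-ins), multiplying a 'sky' tone by a local-oscillator tone produces power at the difference frequency, which is what the later IF stages process:

```python
import numpy as np

fs = 10_000.0                        # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_sky, f_lo = 2670.0, 2400.0         # "sky" and local-oscillator tones
sky = np.cos(2 * np.pi * f_sky * t)
lo = np.cos(2 * np.pi * f_lo * t)

mixed = sky * lo                     # components at f_sky - f_lo and f_sky + f_lo
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# The low-frequency bins form a prefix of `freqs`, so the indices line up:
# the strongest component below 1 kHz is the IF at 270 Hz
print(freqs[np.argmax(spectrum[freqs < 1000])])
```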
The IF signal from RxA3 is then processed by the Auto-Correlation Spectral Imaging System (ACSIS - this system is used by all spectral receivers at the JCMT, including both RxA3 and its replacement ‘Ū‘ū). This digitises the input IF signal, calculates the autocorrelation of the signal with itself - essentially multiplying the signal by a time-delayed version of itself - and then calculates the Fourier transform of the autocorrelated signal. According to the Wiener-Khinchin theorem [41], the Fourier transform of the autocorrelation of a signal is the signal's power spectral density ie. the amount of power received as a function of frequency. Fourier transforming the autocorrelation of the IF signal thus gives us what we want - the spectrum of the source in the frequency range of interest.
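The same logic can be demonstrated end-to-end in a few lines. This toy sketch (all signals synthetic) tabulates the circular autocorrelation of a noisy time stream containing a weak tone, Fourier transforms it, and recovers exactly the power spectrum that a direct transform of the data would give:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
# Stand-in for the digitised IF time stream: noise plus a weak tone
x = rng.normal(size=n) + 0.5 * np.cos(2 * np.pi * 100 * t / n)

# Circular autocorrelation at every lag (what a digital correlator tabulates)
acf = np.array([np.dot(x, np.roll(x, -k)) for k in range(n)]) / n

# Wiener-Khinchin: the Fourier transform of the autocorrelation is the
# power spectral density of the signal
psd_from_acf = np.fft.fft(acf).real
psd_direct = np.abs(np.fft.fft(x)) ** 2 / n

assert np.allclose(psd_from_acf, psd_direct)
print("tone appears at bin", np.argmax(psd_direct[1 : n // 2]) + 1)  # -> 100
```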
Venus was observed by the JCMT using RxA3 in search of phosphine on five mornings in June 2017. These dates were chosen so that Venus appeared large enough to fill the telescope beam, minimising any effects due to errors in pointing the telescope. Venus is a strong continuum emitter at millimetre wavelengths, so phosphine would
be detected as a weak absorption line against this strong continuum. This strong continuum, however, leads to a number of problems with the quality of the data. A number of effects, including reflections from the floor or roof of the telescope dome, or from within the receiver cabin itself, entering the beam, lead to strong, time-varying baselines in the output spectra. These have to be detected and removed. For the initial JCMT detection of phosphine [6] these effects were removed by the usual method of fitting polynomial functions to the data, excluding the region of the spectrum where phosphine might lie. Once this process was applied to each of the 140 spectra that made up the observations, and despite the original assumption that only an upper limit would be found, an absorption line ascribed to phosphine was detected, corresponding to an abundance of about 20 to 25 parts per billion (ppb). The JCMT spectrum of phosphine can be seen on the right hand side of Figure 6.
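The baseline-removal step can be sketched in a few lines. In this illustrative example (all numbers are invented for demonstration, not the real JCMT data or pipeline), a ripply baseline is fitted with a polynomial while the candidate line window is masked, and dividing by the fit reveals a weak absorption as a line-to-continuum ratio:

```python
import numpy as np

rng = np.random.default_rng(42)
v = np.linspace(-60, 60, 600)                       # velocity axis, km/s
continuum = 1e4 * (1 + 2e-3 * np.sin(v / 7) + 1e-5 * v**2)   # ripply baseline
line = 1 - 1e-4 * np.exp(-0.5 * (v / 4) ** 2)       # weak absorption, ~1e-4 deep
spectrum = continuum * line + rng.normal(0, 0.1, v.size)

mask = np.abs(v) > 10                               # exclude the line window
coeffs = np.polynomial.polynomial.polyfit(v[mask], spectrum[mask], deg=8)
fit = np.polynomial.polynomial.polyval(v, coeffs)

ratio = spectrum / fit - 1                          # line-to-continuum ratio
print(f"deepest point: {ratio.min():.2e} at v = {v[ratio.argmin()]:.1f} km/s")
```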
### ALMA Observations
Following the rather surprising detection of phosphine at the JCMT some further observations in search of independent confirmation of this discovery were needed. To this end, observing time was granted on the Atacama Large Millimetre/Submillimetre Array (ALMA) in March 2019. Despite operating at similar mm/submm wavelengths, ALMA is a rather different telescope to the JCMT because it is an interferometer. It is made up of 66 separate antennae, mostly 12m in diameter, the signals of which are combined together to produce the final results. 43 of the 12 m antennae were used for the Venus phosphine observations.
Figure 5: The James Clerk Maxwell Telescope, a 15 m diameter mm/submm telescope on Mauna Kea in Hawaii, currently operated by the East Asian Observatory. The 15 m primary mirror is protected from wind during observations by a large Gore-Tex screen, which is why it cannot be seen directly even when the telescope is taking observations, as in this picture. Credit: William Montgomerie/EAO/JCMT.
Figure 6: The Phosphine 1.123mm J=1-0 line as detected by ALMA (left) [42] and JCMT with RxA3 (right) [6]. The black lines indicate the level of SO\({}_{2}\) absorption derived from simultaneous (ALMA) and near-simultaneous (JCMT) observations. As can be seen the PH\({}_{3}\) detections are clear and the SO\({}_{2}\) contamination is minimal. These spectra are continuum subtracted, so zero on the y-axis represents the continuum level. We use the standard astrophysics approach for presenting high resolution spectra in this Figure, where the spectrum is centred on the line of interest at zero velocity and frequencies are indicated by the doppler velocity in km/s needed to shift from this central value.
While the signals received by each ALMA antenna are dealt with in a manner similar to RxA3 on the JCMT, using a heterodyne SIS mixer and local oscillator in the receiver, these signals are then cross-correlated pairwise with those from each of the other antennae in the array (where each pair of antennae forms a 'baseline') to produce an interferometric map of the target. Interferometry allows angular resolutions to be achieved that correspond to a telescope whose diameter equals the longest baseline separating individual antennae.
The cross-correlation of the signals detected by each pair of antennae produces a series of 'visibilities', which are measurements of the two-dimensional Fourier transform of the sky brightness distribution. The visibilities at each observed frequency are then Fourier transformed to produce a series of images at successive frequencies, i.e. a spectral cube. However, since only a finite number of antenna pairs are available, even for an array with as many antennae as ALMA, the Fourier plane is only sparsely sampled - like a mirror with lots of holes. Various methods are used, in a process called 'cleaning', to derive the actual image from the limited sampling in the Fourier plane. For more information on interferometry see [41] or the ALMA Primer4.
Footnote 4: [https://almascience.eso.org/documents-and-tools/cycle9/alma-science-primer](https://almascience.eso.org/documents-and-tools/cycle9/alma-science-primer)
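A toy numerical example may help fix ideas; this is a cartoon of the measurement principle, not the real ALMA pipeline. Sampling only part of the Fourier plane and transforming back yields the 'dirty image' - the true sky convolved with the point spread function of the incomplete coverage - which is what cleaning then tries to undo:

```python
import numpy as np

# Toy interferometric imaging: each antenna pair samples one point of the
# 2-D Fourier transform of the sky brightness.
npix = 64
sky = np.zeros((npix, npix))
sky[32, 32] = 1.0          # a point source at the phase centre
sky[20, 40] = 0.5          # a second, fainter source

ft = np.fft.fftshift(np.fft.fft2(sky))

# Sample only a subset of (u, v) points, as a finite array would
rng = np.random.default_rng(0)
mask = rng.random((npix, npix)) < 0.15   # ~15% Fourier-plane coverage
visibilities = np.where(mask, ft, 0)

# Inverse transform of the sparsely sampled plane gives the "dirty image";
# the peak should recover the brighter source's position at (32, 32)
dirty = np.fft.ifft2(np.fft.ifftshift(visibilities)).real
print("peak of dirty image:", np.unravel_index(dirty.argmax(), dirty.shape))
```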
Processing interferometric data involves different challenges to those encountered at the JCMT. For example, the angular size of Venus was so great that even the shortest ALMA baselines could not provide good images on the scale of the whole disc and the imperfect sampling led to strong ripples so the data from the affected short baselines, all less than 33 m in length, were removed. There were also strong spectral ripples on some parts of the planet, such as the poles, which had to be excluded from further analysis otherwise they would add noise to the spectra, reducing the sensitivity of the final results. Further analysis of the ALMA data processing also found some errors in
Figure 7: Some of the 64 antennae that make up the ALMA telescope. Credit: ESO/C. Malin.
the standard reduction script used, see Section 6.3.1, which improved on the initial detection. The end result of the ALMA observations, once all these various effects are taken into account, is shown in Figure 6: a good detection of phosphine absorption at a level of \(\sim\)20 ppb that matches what was seen by the JCMT but with somewhat higher signal-to-noise.
### The Detection of Phosphine from the Ground
The detection of phosphine in the atmosphere of Venus was, to say the least, a surprise. The observations from JCMT and ALMA thus prompted a considerable amount of debate and further observations using other facilities. In this section we look at these various discussions, their conclusions, and counter-arguments to the suggestion that phosphine has not been detected or that whatever has been detected was not phosphine.
#### 6.3.1 Reanalysis of the ALMA Data
One of the first responses to the Greaves et al. detection paper, [6], was a reanalysis of the ALMA data by a separate group [43]. This analysis did not reproduce the phosphine detection of [6], and instead found an upper limit to the phosphine abundance of about 1 ppb. They identified some processes used in the standard ALMA calibration scripts which were not adequate for a very bright, time-varying, beam-filling target or indeed for the correspondingly bright calibrator sources used. This led to reprocessing of the raw data by the ALMA observatory and European Southern Observatory (ESO) staff (independently of any of the research groups), who provided new scripts taking these and additional problems into account. The reprocessing simplified the basic removal of instrumental bandpass ripples using the moon Callisto as a calibrator, and also avoided the chance of spectral averaging producing sharp edges which could mimic an absorption line. The new scripts also accounted for the non-linear instrumental response to the high intensity of Venus (the brightest source in the sky after the Sun at these wavelengths) and its large angular size, although, since this exceeds the extent of accurate models of the response of individual ALMA dishes, this is thought to be a source of residual error.
Greaves et al. responded to this reanalysis [42, 44] by employing the improved observatory scripts and updating their own processing, using three different independent methods to obtain final images and spectra. The first step after observatory calibration is to remove the shortest baselines as explained above, and then to make a simple, linear spectral fit to the visibility data to remove the contribution of Venus. Next, residual spectral ripples can be corrected either in the visibility data or after Fourier transforming to make an image cube, and before or after cleaning. Spectra were extracted over different portions of the planet; small residual errors meant that only those spectra extracted from regions symmetric about the planet centre were considered reliable. A range of parameters allowed the continued recovery of a phosphine signal using all the updated methods, optimised at 7.7\(\sigma\) significance by excluding the planetary poles [42]. They attributed the non-recovery of the phosphine signal by [43] to the inclusion of baselines shorter than 33 m in most of their analyses, as well as the inclusion of parts of the image of the planet that had significant spectral artefacts that raise the noise in the final combined spectrum. They concluded that the phosphine detection in the ALMA data remained robust.
#### 6.3.2 Was it a real line?
A common feature of both the original ALMA and JCMT data analyses in [6] was the use of fairly high order polynomials to allow the removal of varying baselines. In doing this, it is necessary to mask out the region of the spectrum around a suspected line otherwise the polynomial fitting method might fit and remove a real line, mistaking it for a small scale baseline ripple. Several authors suggested that this process can instead lead to the creation of fake lines, and that this was in fact the origin of the claimed phosphine detection [43, 45, 46]. There are two counter-arguments to this suggestion that the detection is essentially a statistical false positive.
The argument that the claimed phosphine detection is a false positive is that when you take the ripple-contaminated spectrum, block out a portion of it where there might be a line, and use a sixth or higher order polynomial to fit the baseline, then some noise spikes or contaminating ripples in the blocked out section may end up looking like a line. This is in fact correct, and blind searches for line candidates at random locations in the spectrum would indeed suffer from this effect, significantly reducing confidence that any detections are real. However, the detection of phosphine in [6] did not solely rely on measuring the depth of an absorption line at a random position. Instead, it also relied on the wavelength of the line seen coinciding with that of the line being searched for, phosphine. This significantly reduces the chance of a noise spike or residual masquerading as a phosphine detection. Analysis in [47] shows that adding the additional constraint that a fake line must be at a specific frequency reduces the chance of a false positive for line detection to \(<1.5\%\).
Furthermore, if the line was in fact a false positive then there would be no reason for any such noise-generated feature to lie at exactly the same frequency in both the ALMA and JCMT data. As pointed out in [6], the only feature at matching wavelengths in both the ALMA and JCMT data lies at the expected frequency of phosphine. This further bolsters our confidence that the detected phosphine line is real, and not a statistical artefact resulting from the data processing approach.
#### 6.3.3 Is it really phosphine?
The foregoing analysis suggests that the line discovered is in fact real and not a statistical false positive. However, can we be sure that it is in fact phosphine and not some other molecular species that happens to have an absorption feature at a similar frequency? Sulphur dioxide, SO\({}_{2}\), a known constituent of Venus' atmosphere, has a (J = \(30_{9,21}-31_{8,24}\)) transition at 266.943329 GHz, a frequency shift from the PH\({}_{3}\) J = 1-0 line at 266.944513 GHz that corresponds to a velocity difference of just 1.3 km/s. The possibility that the claimed phosphine line is actually a misidentification of this SO\({}_{2}\) line was first suggested by [43] and has been further explored by others [48, 49]. While they have concluded that SO\({}_{2}\) contamination or misidentification is a possibility, a number of problems with this interpretation have been pointed out by [42]. Firstly, while the line centres of PH\({}_{3}\) J=1-0 and SO\({}_{2}\) J = \(30_{9,21}-31_{8,24}\) are close, they are still 1.3 km/s apart, leading to a \(\sim\) 3 \(\sigma\) discrepancy between the measured line centre and that expected for the SO\({}_{2}\) line. Furthermore, simultaneous (in the case of ALMA) and near-simultaneous (in the case of JCMT) observations of a different and stronger SO\({}_{2}\) line [42] provide predictions of the relative strength of the SO\({}_{2}\) transition that might contaminate the phosphine line. They find that the level of contamination of the phosphine line by SO\({}_{2}\) is \(\sim 10\%\) for the JCMT data and \(<2\%\) for the ALMA data. This level of contamination by SO\({}_{2}\) is shown as a black line in Figure 6. On this basis it seems likely that the detected line is indeed
phosphine, and that any contamination by the neighbouring SO\({}_{2}\) line is insignificant.
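As a quick check on the quoted offset, applying the non-relativistic Doppler relation to the two line frequencies reproduces the stated velocity difference:

\[\Delta v=c\,\frac{\Delta\nu}{\nu_{0}}\approx\left(2.998\times 10^{5}\ \mathrm{km\,s^{-1}}\right)\times\frac{266.944513-266.943329}{266.944513}\approx 1.3\ \mathrm{km\,s^{-1}}.\]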
#### 6.3.4 Other Observations
The phosphine J=1-0 line at 1.123 mm is not the only line of this molecule. However, many of the other transitions are at wavelengths that are more difficult to observe from the ground. Nevertheless, observations have been attempted of other lines in search of independent confirmation of the presence of phosphine.
The first of these used archival data from the TEXES (Texas Echelon Cross Echelle Spectrograph) instrument, a 5 to 25 \(\mu\)m high resolution mid-infrared spectrometer, on the NASA Infrared Telescope Facility (IRTF) on Mauna Kea in Hawaii [50]. These observations were part of a long term project to monitor SO\({}_{2}\) and H\({}_{2}\)O in the cloud tops of Venus, and involved observations at a range of frequencies. One of these datasets, obtained in March 2015, fortuitously included a range of wavelengths where there are some relatively strong phosphine transitions, at a wavelength around 10.471 \(\mu\)m (corresponding to a frequency of 28.65 THz). No phosphine absorption is detected, indicating an upper limit of about 5 ppb, which is substantially lower than the claimed millimetre wave phosphine detection.
Further infrared data, this time from the Venus Express spacecraft, were analysed, looking for absorption from phosphine lines at wavelengths around 4.125 \(\mu\)m above the cloud layers [51]. This data was taken at various times from June 2006 to December 2014, and measured absorption against the light of the Sun as it rises or sets, rather than against the emission of Venus itself. This means that only a small part of the atmosphere is studied rather than the entire planetary disk as is the case, for example, for the JCMT or TEXES observations. These Venus Express observations also failed to find any phosphine absorption, setting limits on its abundance of 0.2 to 20 ppb depending on the specific observations and the assumed altitude of the absorption, ranging from 60 to 95 km.
A third approach to confirm the detection of phosphine is to search for absorption lines in the far-infrared, at frequencies around 534 and 1067 GHz [52]. While the Earth's atmosphere is completely opaque at these frequencies at sea level and even on tall mountains like Mauna Kea, the SOFIA observatory (Stratospheric Observatory For Infrared Astronomy) - essentially a 747 Jumbo Jet with a hole cut in the fuselage through which a 2.5 m telescope points (see Figure 8) - can perform these observations since it flies at an altitude of about 13 km, above much of the water vapour that absorbs far-IR radiation in the Earth's atmosphere5.
Footnote 5: Sadly such observations can no longer be performed since the SOFIA observatory was decommissioned and retired at the end of September 2022.
SOFIA observations of Venus in search of phosphine were carried out in November 2021 [52] using the GREAT (German REceiver for Astronomy at Terahertz frequencies) instrument, a receiver similar to the JCMT receivers but operating at much higher frequencies. Data reduction and analysis by the original authors failed to find any sign of phosphine, setting an upper limit of 0.8 ppb from the J=4-3 line and \(\sim\) 2 ppb for the J=2-1 line. However, subsequent reanalysis of the SOFIA data found that the calibration stage that sets an absolute flux scale adds noise and artefacts to the resulting spectra. This calibration stage is not needed if we are only interested in the line-to-continuum ratio, as is the case when measuring an absorption line. By purely analysing the line-to-continuum ratios [53], phosphine at a level of \(\sim\)1-2 ppb is found, averaged over altitudes from 75-110 km, with 6.5\(\sigma\) significance.
These other observations in search of phosphine absorption, using different approaches, whether from the ground or from Venus Express, have produced a number of conflicting results. They need to be carefully interpreted since the different wavelengths and observational approaches are in fact probing the presence of phosphine at different altitudes and times, as we shall see below. None has yet definitively disproved the original JCMT and ALMA results of [6], and the SOFIA observations may in fact have provided some level of confirmation, depending on which analysis approach is used.
#### 6.3.5 In Situ Confirmation
The ideal way to determine the presence and amount of phosphine in the atmosphere of Venus would be to send a space probe directly into the atmosphere equipped with instrumentation that can detect and measure the presence of the gas _in situ_. This would avoid all the difficulties of observing phosphine remotely, as well as all issues of interpretation. At this point, as we will see below, we are some years away from any such future mission. However, past missions to Venus did send probes into the planet's atmosphere. Principal among these, for our current purposes, is the Pioneer Venus Multiprobe (also known as Pioneer Venus 2 or Pioneer 13) [54] (see Figure 9) which, among many other instruments, sent a mass spectrometer into the atmosphere of Venus on its largest entry probe.
Data from the Pioneer Venus Large Probe's Neutral Mass Spectrometer (LNMS) were reanalysed in 2021 [40] subsequent to the announcement of the discovery of phosphine by [6]. This reanalysis of data taken during its descent into the atmosphere of Venus on 9 December 1978 was the first to look for trace or minor constituents
Figure 8: The SOFIA observatory, which consists of a 2.5m telescope and instruments mounted inside a 747 jumbo jet, and capable of flying above much of the far-IR absorption in the Earth’s atmosphere. Credit: NASA/DLR
of the atmosphere beyond methane and water. The LNMS takes gas in from the atmosphere through inlet tubes. Molecules in the gas are then ionised by an electron source, accelerated by an electric field and then passed through a magnetic field which deflects the ions by an amount that depends on their mass. These ions are subsequently detected, allowing their mass and abundance to be determined. For more information see [55].
The data reanalysed in search of phosphine came from within the clouds, at an altitude of 51.3 km above the surface, part of the atmosphere that is largely inaccessible to ground or space based observations, but which is of critical importance to searches for possible life in the clouds, as this is where that life might actually live. The detailed analysis found evidence for phosphine at 0.1 to 2 parts per million (ppm) levels in the clouds themselves, a much higher abundance than is seen in the JCMT or ALMA observations. It also found evidence for other species such as nitrite, nitrate, nitrogen and possibly ammonia. Taken together these molecules indicate that unexpected chemical processes are underway in the clouds and suggest chemical disequilibrium. Whether this disequilibrium is due to biological or some other as-yet unknown chemical process is yet to be determined.
### Where and When is the Phosphine Seen?
The foregoing sections, describing the various observations in search of phosphine in the atmosphere of Venus, can seem confusing and mutually contradictory. This is at least partly because observational constraints mean that they sample the atmosphere of Venus at different altitudes and, because they encompass datasets that span over 40 years, at different times. We already know that some species in Venus' atmosphere
Figure 9: The Pioneer Venus Probe Spacecraft. The mission consisted of an orbiter and four probes that were sent into the atmosphere of Venus, seen here in artists conception. Credit: NASA
are highly variable - SO\({}_{2}\) levels, for example, can vary by large factors on timescales of both years and days at various altitudes [56] - so this may also apply to phosphine. Spatial variations across the disk of the planet are also possible, but this is difficult to assess for phosphine since many of the observations to date have been of the average phosphine level across the planetary disk.
Of particular importance is the amount of phosphine in the atmosphere as a function of altitude. While the _in situ_ observations of the LNMS and Venus Express have a clearly determined altitude, this is harder to extract for the observations from Earth. In principle, the effect of pressure broadening on the absorption lines can be used to determine the vertical abundance profile of an absorbing molecule. Pressure broadening of an absorption line occurs when the absorbing molecules interact collisionally with other molecules in the atmosphere. These interactions shorten the characteristic time of the absorption process, in accordance with Heisenberg's uncertainty principle, increasing the uncertainty of the absorption frequency and thus broadening the line (see e.g. [57]). The overall effect is to make the line shape a Lorentzian function, which has much broader wings than the usually assumed Gaussian shape. The exact width of the Lorentzian line depends on the pressure, temperature and nature of the molecules that are interacting. The higher the pressure, the broader the wings, so a full analysis of the shape of the phosphine absorption line can reveal its vertical abundance profile in the atmosphere of Venus.
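As a concrete form of the line shape described above, a pressure-broadened line follows a Lorentzian profile. The expression below is the standard one, with \(\nu_{0}\) the line centre and \(\gamma_{L}\) the half-width at half-maximum; the power-law scaling of \(\gamma_{L}\) with pressure \(P\) and temperature \(T\) is the commonly used empirical parameterization rather than a measured result for PH\({}_{3}\) in CO\({}_{2}\):

\[\phi(\nu)=\frac{1}{\pi}\,\frac{\gamma_{L}}{(\nu-\nu_{0})^{2}+\gamma_{L}^{2}},\qquad\gamma_{L}=\gamma_{0}\,\frac{P}{P_{0}}\left(\frac{T_{0}}{T}\right)^{n}.\]

Since higher pressure (lower altitude) produces broader wings, the wing shape encodes the vertical abundance profile, which is exactly what the analyses discussed next attempt to exploit.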
There are, however, a number of problems with a full pressure broadening analysis of the phosphine line seen in the atmosphere of Venus. Firstly, the pressure broadening coefficient for phosphine in CO\({}_{2}\), the dominant constituent of Venus' atmosphere, is not currently known. Analyses have so far used either a modification of the phosphine broadening coefficient in air [50] or the CO\({}_{2}\) pressure broadening coefficient for NH\({}_{3}\) as an analog to that of phosphine [6]. Secondly, and more significantly for the immediate understanding of phosphine in Venus, the data reduction techniques used to date to extract the absorption line remove any broad line wings as part of the process that removes baseline ripples. This just leaves narrow line cores, meaning that the observations are insensitive to any significantly broadened lines, and thus are only sensitive to phosphine at altitudes of 75 to 80 km. Most recently, an experimental data processing approach applied to new observations of Venus from the JCMT-Venus project (PI: D.L. Clements) seems to be able to recover the broad line wings of the J=1-0 phosphine line, suggesting an abundance of phosphine at the ppm level inside the clouds at an altitude of about 60 km, consistent with the high levels seen by the LNMS.
The other factor to consider is the timing of the observations. While we do not yet have enough observations to allow us to monitor any changes in the abundance of phosphine with time or in relation to other species such as HDO or SO\({}_{2}\), we can see if there are any correlations between the amounts of phosphine seen and the timing of the observations relative to the illumination of Venus' atmosphere by the Sun. This may well be an important factor since photolysis by sunlight is a significant destruction route for phosphine in the Earth's atmosphere [22]. If we combine all the phosphine observations - detections and non-detections - together with information about whether the Sun is rising or setting on the atmosphere at the time of observation we perhaps begin to see a pattern (see Figure 10).
Figure 10: The trend of phosphine abundance by altitude from the currently available data. Shading indicates cloud (orange, centred at \(\sim\) 60 km) and haze (grey, centred at \(\sim\) 80 km and 40 km) layers of Venus’ atmosphere. Superposed symbols indicate candidate detections plus upper limits for phosphine abundance. Rising arrows indicate observations made where the atmosphere was rising into sunlight and falling arrows indicate observations made when the atmosphere was descending towards the nightside. Abundances are, from top: 20, 25 ppb from J=1-0 data [6]; \(\sim\)1 ppb or \(<\) 0.8 ppb from J=4-3 data [52,53]; \(<\) 7 ppb at 62 km from 4 \(\mu\)m spectra [51]; \(<\) 5ppb at 60 km from 10 \(\mu\)m spectra [50]; \(\sim\)2 ppm at 60 km from initial JCMT-Venus processing; high ppb to 2 ppm at 51 km from Pioneer-Venus _in situ_ sampling [40]. As can be seen all the significant detections of phosphine take place as the atmosphere is moving out of night and into sunlight, while the non-detections take place as the atmosphere is moving from sunlight into night. If sunlight destroys phosphine at high altitudes during daylight, as is the case on Earth, this would explain the apparent contradictions between some of the observations. From: Greaves et al. in prep, by permission.
## 7 The (Im)Possible Origins of Phosphine on Venus
The presence of phosphine in the atmosphere of Venus is a surprise since a compound of phosphorous with hydrogen should not naturally appear in the atmosphere of a planet, such as Venus, which has an oxidised atmosphere. On Earth, phosphine does not occur through normal chemical processes and is produced only by anaerobic life or through human industrial activity. While this is the expectation, Venus is a complex environment with a wide range of chemical and physical processes underway from the surface to the top of the atmosphere. A detailed analysis is thus necessary to see if there are any possible routes through which the levels of phosphine seen might occur through normal chemical processes. Such an analysis was conducted in [58] where a wide range of chemical processes were examined to see if there is any potential source of phosphine in sufficient abundance to explain the observations with processes that we know are underway on the planet. The processes examined included gas reactions, geochemical reactions, photochemistry, volcanism (see also [59]), lightning and impactors. An example of the kind of chemical reaction network considered is shown in Figure 11, where the reaction rate and the destruction rates are compared. Only segments of this reaction network where the ratio of the production rate over the destruction rate is \(\geq 1\) can produce an accumulation of the relevant chemical. For phosphine to be produced in significant amounts the whole reaction network must have this ratio \(\geq 1\) but, as can be seen, critical segments of the network have ratios orders of magnitudes less than this. More generally, it was found that the lifetime of free phosphine at various altitudes in the atmosphere of Venus ranged from \(<1\) second to perhaps a century [59], making it highly unlikely that a significant amount of phosphine can accumulate from any hypothetical source.
The most obvious conclusion that can be drawn from this analysis is that we do not know how phosphine came to be in the atmosphere of Venus. There may be geochemical or photochemical processes that can produce it in sufficient amounts, but these are currently not known to us. The alternative, that, by analogy with Earth, phosphine is being produced by anaerobic biological processes, is another potential explanation. However, before we can make this particular leap and claim that we
Figure 11: Potential chemical pathways for the synthesis of phosphine in the atmosphere of Venus, and their derived production vs. destruction rate. There are stages where, for all possible pathways, the rate of destruction of phosphine exceeds its formation by many orders of magnitude, as shown in red/purple. As can be seen, there is no route to produce phosphine by these processes that can account for the amounts observed. From [58] where more details can be found.
have found evidence for life in the clouds of Venus, we must first exclude all other possible origins, and also explain how life is able to survive in the extremely acidic environment of Venusian cloud droplets. One possible solution to the latter problem is that ammonia, if present, is able to buffer the sulphuric acid in these droplets to some extent [34]. The possible detection of ammonia in the clouds of Venus by the LNMS [40] and in preliminary analysis of data from the Green Bank Telescope (Greaves et al., private communication), is thus rather interesting.
## 8 The Next Steps
As has become clear in the previous section, studies of phosphine, and the search for life on Venus, are very much works in progress. While the current results are intriguing, no solid conclusions can yet be drawn. Much more work needs to be done, and it will be the work of many years before we can have a definitive answer to the question of whether there is life in the clouds of Venus. This will require not only observations from Earth, but also _in situ_ probes and, ideally, missions that can return samples from Venus to Earth. In this section we look at some of the projects that are planned or already underway to improve our knowledge of the clouds of Venus.
### Earth-based Studies
Observations from Earth were responsible for the first detections of phosphine, and these are continuing to both monitor phosphine and to search for other molecules that may have a bearing on the chemistry, or biochemistry, underway in the clouds of Venus.
The largest of the projects currently underway is JCMT-Venus (PI: D.L. Clements). This uses the 'U'u receiver, the replacement for RxA3, together with the ACSIS system to obtain whole disk spectra for Venus. The new receiver has a wider bandwidth than RxA3, so we can simultaneously observe phosphine, HDO and SO\({}_{2}\), and search for other molecules such as SO and PO\({}_{2}\) which have spectral features in the band covered by 'U'u. By simultaneously monitoring phosphine, HDO and SO\({}_{2}\) we can see how these different species vary in relation to each other. This should provide indications as to the chemical processes behind the presence of phosphine. If, for example, phosphine is produced by reducing processes in the upper atmosphere, the proportions of reduced compounds, like phosphine, and oxidised compounds, like HDO and SO\({}_{2}\), will be anticorrelated. The JCMT-Venus project is a long term programme at the JCMT and has been awarded 200 hours of time over a period of three years. The visibility of Venus means that observations will be possible in three tranches, including Feb 2022, July 2023 and September 2023. The first of these observing campaigns has already taken place, with Venus observed over a period of 20 consecutive mornings. The data obtained already contains 140 times as much information as in the original JCMT observations, so is taking some time to process and analyse, especially since 'U'u has its own difficulties dealing with the brightness of Venus and thus an interesting new set of baseline drifts and ripples to be removed. Nevertheless the analysis is well underway and initial results, some of which have been briefly discussed above, have already emerged, including further confirmation of the presence of phosphine. When complete, JCMT-Venus will provide a major new database of observations of Venus in the mm band, including phosphine and other important molecules, which will provide significant new insights into the origin of phosphine.
Further ALMA observations have yet to be approved, but these hold the promise of providing further information about the distribution of phosphine across the face of the planet. The original ALMA observations provided some hints that the distribution is not uniform, but a full map of the abundance of phosphine across the planetary disk could not be made because of excess ripples affecting the signal over significant portions of the disk. Additional observations in the mid-IR from the IRTF and elsewhere will also be helpful. Sadly, further observations with SOFIA are not possible since the observatory has been decommissioned.
Studies related to the search for potential signs of life on Venus are also underway that do not directly target phosphine. These include observations with the Green Bank Telescope (GBT) at radio wavelengths to look for ammonia (NH\({}_{3}\)) in absorption. This is important since detection of ammonia would indicate the presence of another reduced molecule that should not be expected in the oxidised atmosphere of Venus. Ammonia is also important because of its buffering effect against the high acidity in the liquid droplets in Venus' clouds [34]. Analysis of archival data from the 1970s as well as an initial set of observations with the GBT suggest the presence of ammonia (Greaves et al., private communication), as do the _in situ_ measurements of the LNMS, but more data is needed to confirm this.
Laboratory studies also have a role to play since they can validate and test the various assumptions that went into the analysis of [34], and allow more accurate predictions for the formation and destruction of phosphine and other hydrogen-rich compounds in Venus-like conditions. Such studies are already being planned.
### Space-based studies
Venus is also being studied from much closer quarters by space probes. These missions take many years to prepare and so have largely not been designed to examine the possibilities of unusual chemistry, or even life, in the clouds of Venus. Nevertheless, existing missions do have useful capabilities for these purposes and future missions are being planned that can respond to recent discoveries.
The Japanese mission AKATSUKI (see e.g. Figure 1) is currently operating in orbit around Venus. While it does not have any instruments that are directly relevant to the search for phosphine, its UV imaging instruments are monitoring the unidentified UV absorber, the origin of which is one of the outstanding mysteries of the Venusian atmosphere. Comparing AKATSUKI's results with future data from the ground, especially any future imaging observations with ALMA, may be able to show whether there is a link between the presence of phosphine and the presence of the UV absorber.
The next potentially important space mission to go to Venus, from the point of view of phosphine observations, is not in fact a specific mission to Venus, but the JUICE mission to the moons of Jupiter [5]. The JUICE spacecraft is scheduled to be launched in the second quarter of 2023. It will then perform a series of flybys of planets as gravity assists on its journey to Jupiter. Of particular interest here is the flyby of Venus in August 2025, where an observational campaign is possible. Especially important in the context of phosphine is the Submillimetre Wave Instrument (SWI), which will be able to observe higher J transitions of phosphine, including those observed from the Earth by SOFIA. Whether JUICE will be able to undertake an observing campaign at Venus will be up to the JUICE mission directors, and no decision will be made until after launch.
In the 2030s three missions directly targeted at Venus are due to be launched.
These include the European Space Agency's EnVISION mission [60], and NASA's VERITAS (Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy) [61] and DAVINCI (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging) [62] missions. VERITAS and EnVISION are primarily concerned with the surface and interior of Venus, studying the history and role of volcanism on the planet. While they will doubtless reveal much that is of interest, they are unlikely to have much to say about phosphine and the processes underway in the clouds of Venus unless they uncover volcanic activity vastly in excess of our current understanding [59]. DAVINCI, however, is a much more interesting prospect.
The goal of DAVINCI is to study the atmosphere of Venus. To do this its primary set of instruments are on board a descent stage that will fly through the clouds of Venus, sampling the atmosphere as it goes. It will be the first NASA mission to enter the atmosphere of Venus since Pioneer Venus Probe in 1978. Among the instruments on the descent stage is a mass spectrometer that will be able to significantly improve on the results of the LNMS. This will be able to detect phosphine and other trace gas species and see how their abundance changes with altitude and other conditions. Other instruments include a tuneable laser spectrometer which is able to measure even small amounts of specific gases. Altogether, the four instruments on the DAVINCI probe, combined with imagers on the orbiting mothership, will provide a vast improvement in our _in situ_ knowledge of Venus' atmosphere. It will provide the ground truth against which observations of the planet from Earth can be compared.
National and international space agencies are not the only organisations looking to send probes to Venus. Private companies now have the capability to send missions to other planets independently of governments, and they are also interested in the possibility of life on Venus. One company in particular, Rocket Lab, is taking special interest in Venus and has set up a team to develop a series of Venus Life Finder (VLF) missions [63]. The first of these missions, which may launch as soon as mid-2023, is intended to look for organic molecules using an ultraviolet autofluorescence technique. Further missions are planned including a balloon borne laboratory that will be able to float in the clouds for an extended period. Amongst the planned instruments for this payload are not only mass spectrometers but also a microscope that will search cloud droplets for evidence of biological cells.
Perhaps the most ambitious mission planned by the VLF team is a sample return mission that will use a balloon to collect samples of cloud droplets and gas, and return these to Earth for detailed laboratory study. If there is in fact life in the clouds of Venus, a mission like this will be necessary to answer fundamental questions about its origin and how it operates. It is perhaps the dream mission in the search for evidence of life on Venus.
## 9 Conclusions
The discovery of phosphine in the atmosphere of Venus has caused some controversy and has renewed discussions about the possibility of life in the planet's clouds. The observational evidence for phosphine has been challenged and examined in detail. The JCMT and ALMA results have so far survived these challenges, and there has been independent _in situ_ confirmation of the presence of phosphine from the Pioneer Venus LNMS instrument. Observations from other telescopes in search of phosphine have produced rather more mixed results, with several upper limits and one possible detection. However, the apparent disagreement between these different sets of observations may
soon be understood in the context of day-night variations in the amount of phosphine above the clouds thanks to photolysis by sunlight.
While the presence of phosphine in the atmosphere of Venus is becoming more secure with the arrival of new and improved datasets such as JCMT-Venus, an understanding of its origin still eludes us. It is clear that no conventional chemical process can produce phosphine in the amounts observed, but it is still far from clear whether biological processes are involved, or if there is some as-yet unknown non-biological source.
More data is clearly necessary for us to understand what is really going on in the atmosphere of Venus, and this is being sought by a number of different ground and space-based approaches. Over the next several years our understanding of the origin of phosphine on Venus will certainly improve, and we will hopefully reach a point at which the question of life in the clouds of Venus moves from being something that we can only speculate about, to something about which we have clear and decisive knowledge. Whatever conclusion we finally reach, we will have learnt a lot more about our nearest neighbour planet, and this knowledge will help guide our search for possible biospheres on planets orbiting other stars. Confirmation that there is in fact life in the clouds of Venus would be a truly epoch making discovery, but we are still a very long way from drawing that conclusion.
#### Acknowledgements
It is a pleasure to thank Jane Greaves, Janusz Petkowski, Anita Richards, and Wei Tang for many useful comments. It is also a pleasure to thank all members of the phosphine team for their enthusiasm and expertise in what has already been quite an exciting, and unexpected, adventure, which, for me, started in a bar in Hilo.
|
2307.01088 | Empirically Validating Conformal Prediction on Modern Vision
Architectures Under Distribution Shift and Long-tailed Data | Conformal prediction has emerged as a rigorous means of providing deep
learning models with reliable uncertainty estimates and safety guarantees. Yet,
its performance is known to degrade under distribution shift and long-tailed
class distributions, which are often present in real world applications. Here,
we characterize the performance of several post-hoc and training-based
conformal prediction methods under these settings, providing the first
empirical evaluation on large-scale datasets and models. We show that across
numerous conformal methods and neural network families, performance greatly
degrades under distribution shifts violating safety guarantees. Similarly, we
show that in long-tailed settings the guarantees are frequently violated on
many classes. Understanding the limitations of these methods is necessary for
deployment in real world and safety-critical applications. | Kevin Kasa, Graham W. Taylor | 2023-07-03T15:08:28Z | http://arxiv.org/abs/2307.01088v1 | Empirically Validating Conformal Prediction on Modern Vision Architectures Under Distribution Shift and Long-tailed Data
###### Abstract
Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. Yet, its performance is known to degrade under distribution shift and long-tailed class distributions, which are often present in real world applications. Here, we characterize the performance of several post-hoc and training-based conformal prediction methods under these settings, providing the first empirical evaluation on large-scale datasets and models. We show that across numerous conformal methods and neural network families, performance greatly degrades under distribution shifts violating safety guarantees. Similarly, we show that in long-tailed settings the guarantees are frequently violated on many classes. Understanding the limitations of these methods is necessary for deployment in real world and safety-critical applications.
## 1 Introduction
Deep learning models have shown the ability to complete a diverse range of tasks with exceedingly high performance (Silver et al., 2017; Brown et al., 2020; Dosovitskiy et al., 2020). However, high performance metrics (e.g., accuracy) alone are insufficient for deployment in safety-critical applications, where uncertainty measures and safety guarantees that experts can trust are required (Ovadia et al., 2019). _Conformal prediction_ (CP) (Vovk et al., 2005) is a promising method for addressing these limitations. Conformal prediction turns heuristic notions of uncertainty into reliable ones through a post-training adjustment, which can then be used to predict _confidence sets_ that are guaranteed to contain the true class with some user specified error rate.
Various conformal prediction methods (Sadinle et al., 2019; Stutz et al., 2022; Romano et al., 2020; Angelopoulos et al., 2022; Teng et al., 2023) perform well on a number of complex tasks such as image classification and object detection (Angelopoulos et al., 2022). However, these results thus far are largely limited to in-distribution and class-balanced data regimes. This is problematic since data encountered in real-world settings is often imbalanced (Krawczyk, 2016) or subject to distribution shift (Castro et al., 2020), and robustness to these settings is necessary for the safe deployment of ML (Amodei et al., 2016).
Despite the importance of understanding performance in these real-world settings, there has thus far been no comprehensive investigation of the performance of popular
Figure 1: **Performance of threshold conformal prediction (Sadinle et al., 2019) degrades across various neural architectures when tested on distribution-shifted ImageNet datasets**. Target coverage is set to 0.90. All conformal prediction thresholds were first calibrated on a held-out portion of the original validation set. The **same** threshold was used to construct confidence sets in subsequent test sets. Target coverage is consistently violated for all distribution-shifted sets. Likewise, the average confidence set size, or “inefficiency”, is observed to increase under distribution shift. Larger markers reflect larger architectures within the family.
conformal prediction methods under distribution shift and long-tailed data. Since conformal prediction assumes identically distributed data, and the guarantees provided are based on micro- rather than macro-averages, it is unsurprising that performance would degrade under shifted and long-tailed distributions. This phenomenon has been observed in small-scale datasets (Tibshirani et al., 2020). Nonetheless, the recent adoption of conformal prediction into deep learning and safety-critical domains (Angelopoulos et al., 2022; Muthali et al., 2023; Vazquez & Facelli, 2022; Lu et al., 2022) warrants specific investigation of these methods using modern neural network architectures and large-scale datasets that are more characteristic of data found "in the wild".
In this study, we evaluate four different conformal prediction methods on numerous distribution-shifted and long-tailed datasets and thoroughly characterize their performance under these conditions. We investigate across three deep learning model families, while also controlling for model size. Our primary findings are:
* Safety guarantees in terms of coverage (Eq. 8) are violated even under small distribution shifts.
* Class-conditional coverage is frequently violated in long-tailed distributions.
* The size of the confidence sets, with smaller being more desirable, increases under both these settings.
* The above results hold across all CP methods and model architectures.
## 2 Methods
In this study, four conformal prediction methods were evaluated across five distribution-shifted datasets and one long-tailed dataset, for image classification tasks. Three neural architecture families were used as the base classifier, to determine their effect on CP performance, which was evaluated using several metrics.
### Conformal Prediction Methods
The common classification paradigm involves training a model \(\pi_{\theta}(x)\) to predict a _single label_ \(Y\in[K]:=\{1,...,K\}\). In contrast, conformal prediction is a statistical method that can be used to predict confidence _sets_ for machine learning models (Angelopoulos et al., 2022). Formally, it aims to construct a confidence set \(\mathcal{C}\subseteq[K]\) such that the true class is included with some user specified error rate \(\alpha\):
\[\mathbb{P}(Y_{\text{test}}\in\mathcal{C}(X_{\text{test}}))\geq 1-\alpha. \tag{1}\]
This is done through a two step post-processing procedure. In the calibration step, a score function \(s(x,y)\) is used on held-out data to transform a provisional uncertainty measure (e.g., softmax values) into _conformity scores_. The \(1-\alpha\) quantile of the conformity scores is then used to determine a threshold \(\tau\). In the prediction step, sets \(\mathcal{C}(X)\) are constructed on new unseen data by including all the labels whose conformity scores fall within the threshold, guaranteeing \(1-\alpha\) coverage. Importantly, this guarantee is known as _marginal_ coverage, since it holds in expectation _unconditionally_ across all data points rather than per-class. The returned confidence sets can also be used as an uncertainty estimate, with larger confidence sets \(|\mathcal{C}(X)|\) suggesting greater uncertainty in the predictions.
The **threshold conformal prediction (THR)** method (Sadinle et al., 2019) generally produces the smallest average set sizes. Here, the confidence sets are constructed as:
\[\mathcal{C}(x;\tau):=\{k\in[K]:s(x,k)>\tau\} \tag{2}\]
Here, the score function is defined as \(s(x,y)=\pi_{\theta}(x)_{y}\), and the threshold \(\tau\) is computed as the \(\alpha\left(1+\nicefrac{1}{N_{\text{cal}}}\right)\) quantile of the conformity scores on the calibration set. During calibration, the softmax value corresponding to the true class \(y\) of the input \(x\) is used in the conformity scores. At test time, this method includes in the set those classes whose softmax score is greater than the calibrated threshold. Although THR produces small set sizes, it may lead to uneven coverage, with difficult classes achieving worse coverage.
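To make the calibrate-then-predict recipe concrete, the following is a minimal NumPy sketch of THR under the definitions above; the function names `calibrate_thr` and `predict_thr` are illustrative rather than taken from the paper or any library.

```python
import numpy as np

def calibrate_thr(cal_probs, cal_labels, alpha=0.1):
    """Calibrate the THR threshold on held-out data.

    cal_probs: (N, K) softmax outputs; cal_labels: (N,) true labels.
    """
    n = len(cal_labels)
    # The conformity score of each calibration example is the softmax
    # value assigned to its true class.
    true_class_scores = cal_probs[np.arange(n), cal_labels]
    # tau is the alpha * (1 + 1/n) quantile of these scores, so the true
    # class clears the threshold on >= 1 - alpha of exchangeable test points.
    return np.quantile(true_class_scores, alpha * (1 + 1.0 / n), method="lower")

def predict_thr(test_probs, tau):
    """Boolean (N, K) mask: class k enters C(x) iff its softmax score > tau."""
    return test_probs > tau
```

A typical use would be `tau = calibrate_thr(val_probs, val_labels, alpha=0.1)` followed by `sets = predict_thr(test_probs, tau)`, where the probability arrays come from any pre-trained classifier.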
**Adaptive prediction sets (APS)** (Romano et al., 2020) were developed to improve conditional coverage, with the trade-off of larger set sizes. In the APS method, the conformity scores are calculated by accumulating softmax values:
\[s(x,y)=\sum_{j=1}^{y}\hat{\pi}_{\theta}(x)_{j}, \tag{3}\]
where \(\hat{\pi}_{\theta}(x)\) is the vector of softmax values for input \(x\), sorted from greatest to smallest. Subsequently, sets are constructed by including values _less_ than the threshold \(\tau\):
\[\mathcal{C}(x;\tau):=\{k\in[K]:s(x,k)<\tau\}. \tag{4}\]
Similarly to THR, the conformity scores with respect to the true class \(y_{i}\) are used for calibration, and the \((1-\alpha)\left(1+\nicefrac{1}{N_{\text{cal}}}\right)\) quantile is used to find the value \(\tau\) that ensures marginal coverage on test examples.
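A corresponding sketch for APS follows, with the same caveat that the function names are illustrative; the quantile level is clipped at 1 to stay well defined for small calibration sets.

```python
import numpy as np

def aps_scores(probs, labels):
    """APS conformity score (Eq. 3): cumulative sorted softmax mass down to
    and including the true class."""
    order = np.argsort(-probs, axis=1)                    # descending sort
    cumsum = np.cumsum(np.take_along_axis(probs, order, axis=1), axis=1)
    ranks = np.argmax(order == labels[:, None], axis=1)   # 0-based rank of y
    return cumsum[np.arange(len(labels)), ranks]

def calibrate_aps(cal_probs, cal_labels, alpha=0.1):
    n = len(cal_labels)
    level = min((1 - alpha) * (1 + 1.0 / n), 1.0)
    return np.quantile(aps_scores(cal_probs, cal_labels), level, method="higher")

def predict_aps(test_probs, tau):
    """Include class k while the cumulative sorted mass stays below tau (Eq. 4)."""
    order = np.argsort(-test_probs, axis=1)
    cumsum = np.cumsum(np.take_along_axis(test_probs, order, axis=1), axis=1)
    in_set_sorted = cumsum < tau
    in_set = np.zeros_like(in_set_sorted)
    # Scatter the sorted-order mask back to the original class ordering.
    np.put_along_axis(in_set, order, in_set_sorted, axis=1)
    return in_set
```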
**Regularized adaptive prediction sets (RAPS)** (Angelopoulos et al., 2022) build on APS by modifying the conformity scores to include a penalty \(\lambda\) for classes ranked beyond some specified threshold \(k_{reg}\). Specifically, the score function is defined as:
\[s(x,y):=\sum_{j=1}^{o_{x}(y)}\hat{\pi}_{\theta}(x)_{j}+\lambda\cdot(o_{x}(y)-k_{\text{reg}})^{+}, \tag{5}\]
where \(o_{x}(y)\) is the ranking of label \(y\) among the sorted probabilities, and \((\cdot)^{+}\) indicates the positive part of the expression. The confidence sets are then defined the same as in Equation 4. The regularization helps to exclude probabilities that are deep in the tail that would otherwise have been included, since labels now require a greater score to be included in the set. This helps to produce smaller prediction sets than APS (albeit not as small as THR), and has been shown to work well on large datasets like ImageNet (Angelopoulos et al., 2022). In our experiments, convolution-based networks use values of \(\lambda=0.01\) and \(k_{reg}=5\), and transformer-based networks use \(\lambda=0.1\) and \(k_{reg}=2\).
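Since RAPS changes only the score function, a sketch needs just the score itself; calibration and set construction then proceed exactly as in the APS sketch above. The rank is converted to 1-based to match \(o_{x}(y)\), and the default \(\lambda\) and \(k_{reg}\) mirror the convolutional-network values quoted in the text.

```python
import numpy as np

def raps_scores(probs, labels, lam=0.01, k_reg=5):
    """RAPS conformity score (Eq. 5): the APS score plus a rank penalty
    lam * (o_x(y) - k_reg)^+ that discourages deep-tail classes."""
    order = np.argsort(-probs, axis=1)
    cumsum = np.cumsum(np.take_along_axis(probs, order, axis=1), axis=1)
    ranks = np.argmax(order == labels[:, None], axis=1)   # 0-based rank
    aps = cumsum[np.arange(len(labels)), ranks]
    penalty = lam * np.maximum((ranks + 1) - k_reg, 0)    # (o_x(y) - k_reg)^+
    return aps + penalty
```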
The CP methods described thus far are implemented _after_ a model is trained, which does not directly optimize the underlying model to produce high performing confidence sets. **Conformal training (ConfTr)** (Stutz et al., 2022) was proposed to address this, by simulating the conformal prediction process during training. This is done by splitting each training batch \(B\) into a calibration subset \(B_{cal}\) and a prediction subset \(B_{pred}\). Just like in regular CP, \(B_{cal}\) is used to calibrate the threshold \(\tau\), and confidence sets are formed on \(B_{pred}\). To perform the thresholding step, differentiable sorting (Blondel et al., 2020) is used to find the quantiles of the conformity scores in a way that can be back-propagated during training. The size of the confidence sets is then used as the loss function to be minimized during training:
\[\mathcal{L}_{\text{size}}=\max\left(0,\sum_{k=1}^{K}E_{\theta,k}(x;\tau)- \kappa\right). \tag{6}\]
In Equation 6, \(E_{\theta,k}(x;\tau)\) is a "smooth" assignment of class \(k\) to the confidence set, calculated as \(E_{\theta,k}(x;\tau):=\sigma\left(\frac{s(x,k)-\tau}{T}\right)\), where \(\sigma(\cdot)\) is the Sigmoid function and \(T\in[0,1]\) is a temperature parameter controlling the smoothness. This penalizes the set sizes, and the hyper-parameter \(\kappa\in\{0,1\}\) determines whether or not sets of size one are penalized (i.e., \(\kappa=1\) means that singleton sets will incur no loss). An additional classification loss can be included to ensure the true label is included in the confidence sets:
\[\mathcal{L}_{\text{class}}=\sum_{k=1}^{K}\left[(1-C_{\theta,k}(x;\tau))\cdot \mathbf{1}[y=k]\right]. \tag{7}\]
A weighted combination \(\mathcal{L}=\mathcal{L}_{\text{class}}+\lambda\mathcal{L}_{\text{size}}\) can then be used to train the model.
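The two losses are straightforward to express once a threshold is available. The following is a minimal PyTorch sketch of Equations 6 and 7, assuming THR-style scores (higher means "more in-set") and a \(\tau\) already obtained from the calibration half of the batch via a differentiable sorting operator, which is omitted here; the function name `conftr_losses` and its arguments are illustrative, not from Stutz et al.

```python
import torch

def conftr_losses(scores, tau, labels, kappa=1, T=0.1):
    """Smooth ConfTr losses for one prediction batch.

    scores: (B, K) conformity scores; tau: differentiable threshold from
    the calibration split; labels: (B,) true classes (int64).
    """
    # Soft set membership E_k = sigmoid((s(x, k) - tau) / T).
    membership = torch.sigmoid((scores - tau) / T)
    # L_size (Eq. 6): penalize expected set sizes above kappa.
    size_loss = torch.clamp(membership.sum(dim=1) - kappa, min=0).mean()
    # L_class (Eq. 7): penalize excluding the true label from the set.
    true_membership = membership.gather(1, labels.view(-1, 1)).squeeze(1)
    class_loss = (1.0 - true_membership).mean()
    # Combined objective, as in the text: L = L_class + lambda * L_size.
    return class_loss, size_loss
```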
For this method, a ResNet-50 pre-trained on ImageNet (Wightman, 2019) was used as the base model. The training methodology and hyper-parameters closely follow those used by the original authors on the CIFAR-100 dataset (Stutz et al., 2022). This included re-initializing the final fully connected layer, and training one baseline model using cross-entropy loss and one with the combined \(\mathcal{L}_{\text{size}}\) and \(\mathcal{L}_{\text{class}}\) losses, defined in Equation 6 and Equation 7.
Any CP method can be used to predict the confidence sets during training; however, in practice THR has been shown to produce better results, so that is used in this study for the ConfTr experiments. Because ConfTr relies on smooth sorting / assignment operations, post-training conformal prediction is still performed to ensure the formal guarantees are maintained.
### Evaluation Metrics
The primary metrics used for evaluation are coverage and inefficiency. **Coverage** measures the fraction of true labels that are actually included in the confidence set:
\[\text{\emph{Cover}}:=\frac{1}{N_{\text{test}}}\sum_{i=1}^{N_{\text{test}}} \mathbf{1}[y_{i}\in\mathcal{C}(x_{i})]. \tag{8}\]
The conformal prediction process guarantees that \(\mathbb{P}(Y_{\text{test}}\in\mathcal{C}(X_{\text{test}}))\geq 1-\alpha\), thus the _Cover_ metric should be \(\geq 1-\alpha\) on average. However, conformal prediction does not guarantee **class conditional coverage**: \(\mathbb{P}(Y_{\text{test}}\in\mathcal{C}(X_{\text{test}})|Y_{\text{test}}=y)\geq 1-\alpha\). We can capture conditional performance using a "macro" coverage metric. First we can consider \(\text{\emph{Cover}}(k)\) to be the coverage computed only on test points from class \(k\in[K]\). The macro coverage is then:
\[\text{\emph{Macro Cover}}:=\frac{1}{K}\sum_{k=1}^{K}\text{\emph{Cover}}(k). \tag{9}\]
The non-conditional guarantees of conformal prediction mean that, although the desired coverage may be maintained across an entire dataset, there may be classes which violate the desired coverage level. This is especially pertinent for long-tailed datasets. Thus, the number of classes that violate the coverage level is found:
\[\text{\emph{Cover Violation}}:=\sum_{k=1}^{K}\mathbf{1}\left[\text{\emph{Cover }}(k)<1-\alpha\right]. \tag{10}\]
**Inefficiency** is a measure of the size of the confidence sets. The prediction sets must both provide adequate coverage (contain the right class), and be _informative_; very large prediction sets are of little use. Inefficiency is measured as:
\[\text{\emph{Ineff}}:=\frac{1}{N_{\text{test}}}\sum_{i=1}^{N_{\text{test}}} \left|\mathcal{C}(x_{i})\right|. \tag{11}\]
The macro inefficiency is also calculated, to determine if some classes tend to return particularly large sets. Similarly to Equation 9, we define \(\text{\emph{Ineff}}(k)\) as the inefficiency
on class \(k\), and the macro inefficiency as:
\[\textit{Macro\ Ineff}:=\frac{1}{K}\sum_{k=1}^{K}\textit{Ineff}(k). \tag{12}\]
The macro coverage and inefficiency metrics will be used to characterize performance on the long-tailed datasets.
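For concreteness, the following is a minimal NumPy sketch that computes Equations 8 through 12 from a boolean set-membership matrix; the function name is illustrative, and it assumes every class appears at least once in the test split (true for PlantNet-300k, as noted below), since per-class means are otherwise undefined.

```python
import numpy as np

def cp_metrics(pred_sets, labels, num_classes, alpha=0.1):
    """pred_sets: boolean (N, K) membership mask; labels: (N,) true classes."""
    covered = pred_sets[np.arange(len(labels)), labels]    # 1[y_i in C(x_i)]
    sizes = pred_sets.sum(axis=1)
    cover_k = np.array([covered[labels == k].mean() for k in range(num_classes)])
    ineff_k = np.array([sizes[labels == k].mean() for k in range(num_classes)])
    return {
        "coverage": covered.mean(),                            # Eq. 8
        "macro_coverage": cover_k.mean(),                      # Eq. 9
        "cover_violations": int(np.sum(cover_k < 1 - alpha)),  # Eq. 10
        "inefficiency": sizes.mean(),                          # Eq. 11
        "macro_inefficiency": ineff_k.mean(),                  # Eq. 12
    }
```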
### Datasets
Distribution Shift. We use the ImageNet (Deng et al., 2009) dataset to train our neural networks and calibrate the CP classifiers. Following previous works on conformal prediction (Angelopoulos et al., 2022), we reserve 50% of the ImageNet validation set to find the threshold \(\tau\). This **same threshold** is used to form prediction sets on the remaining ImageNet validation set, as well as the following distribution-shifted datasets:
1. **ImageNet-V2** (Recht et al., 2019) is a new ImageNet test set collected by closely following the same format and collection process as ImageNet, with the goal of mimicking the original data distribution.1 Footnote 1: It is difficult to conclude whether this dataset represents a true distribution shift in the absence of convincing generalization error bounds for ImageNet-scale DNNs; however, we adopt Recht et al.’s hypothesis that it indeed represents a small shift.
2. **ImageNet-C** (Hendrycks and Dietterich, 2018) applies common visual corruptions to the ImageNet validation set. In this study, the Gaussian noise, motion blur, brightness, and contrast corruptions are investigated, representative of the four main categories -- noise, blur, weather, and digital, respectively.
3. **ImageNet-A** (Hendrycks et al., 2021) contains naturally adversarial images that a ResNet-50 classifies incorrectly, but which humans can classify correctly.
4. **ImageNet-R** (Hendrycks et al., 2021) consists of rendered versions of ImageNet classes, such as drawings, cartoons, etc.
The details of these datasets are summarized in Table 1. Metrics are reported as the average across ten trials, to account for variation in the calibration split.
Long-tailed labels. Conformal prediction performance on long-tailed data distributions was evaluated on the PlantNet-300k dataset (Garcin et al., 2021). This is a highly imbalanced dataset, with 80% of classes accounting for only 11% of the total number of images. In addition to the 243,916 training examples, PlantNet-300k has defined validation and test sets, each with 31,118 examples and at least one image of each class in each set. The validation set is used to calibrate the conformal prediction methods and find the threshold, and the test set is used to form confidence sets and evaluate performance. Here, all three data splits (train, validation, and test) are long-tailed, meaning that **the conformal calibration process is conducted on highly imbalanced data**.
### Deep Learning Models
To account for differences in model architecture and training algorithms, three distinct model families were evaluated:
1. **ResNets** (He et al., 2015) are prototypical convolutional neural networks.
2. **Vision Transformers (ViT)** (Dosovitskiy et al., 2020) are transformer-based architectures that are pre-trained on ImageNet-21k (Ridnik et al., 2021), before being fine-tuned on ImageNet-1k.
3. **Data efficient image Transformers (DeiT)** (Touvron et al., 2022) are also transformer networks, however they are trained only on ImageNet-1k following a carefully designed training procedure.
## 3 Experiments and Results
### Distribution Shift
Our results on alternate ImageNet test sets are summarized in Figure 1. We can see that the desired coverage is
\begin{table}
\begin{tabular}{|l|c|c|} \hline Dataset & Number of Images & Number of Classes \\ \hline ImageNet-V2 (Recht et al., 2019) & 10,000 & 1,000 \\ ImageNet-C (Hendrycks and Dietterich, 2018) & 50,000 & 1,000 \\ ImageNet-A (Hendrycks et al., 2021) & 7,500 & 200 \\ ImageNet-R (Hendrycks et al., 2021) & 30,000 & 200 \\ \hline \end{tabular}
\end{table}
Table 1: Alternate ImageNet-based validation datasets used to evaluate performance under distribution shift. For ImageNet-C, the Gaussian noise, motion blur, brightness, and contrast corruptions are used. The conformal calibration process is **only** conducted on the original ImageNet validation set.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{Coverage} & \multicolumn{2}{c}{Inefficiency} \\ & Baseline & ConfTr & Baseline & ConfTr & Baseline & ConfTr \\ \hline ImageNet & 76.91 & 72.40 & 0.99 & 0.99 & 32.21 & 29.89 \\ ImageNet-V2 & 64.68 & 60.45 & 0.97 & 0.97 & 50.79 & 46.99 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between using baseline cross-entropy training and ConfTr, which directly optimizes set sizes during training. Although ConfTr leads to smaller sizes on the in-distribution test data, there is negligible difference in coverage between the two methods on ImageNet-V2. Coverage target is 0.99.
consistently violated across all models. Distribution shift also leads to increased inefficiency -- a proxy for the increased uncertainty of the underlying model. The coverage target is violated even on small distribution shifts, such as ImageNet-V2, which was purposefully and carefully constructed to match the original ImageNet distribution as closely as possible. The inability of these methods to maintain coverage even on minor distribution shifts highlights the risks of deploying them in real-world situations without additional safety features.
Smaller models exhibit worse inefficiency, and often lower coverage rates. The larger ViT / DeiT models perform best overall with the smallest degradation under distribution shift. These results highlight the value of combining conformal prediction with modern, high-performing deep learning models. It affirms that efforts to improve the performance of the base model may improve the performance of conformal prediction methods under distribution shift. Refer to Appendix A for detailed results on these datasets, as well as ImageNet-C results at each corruption level. Further, Appendix D shows the relationship between model accuracy and CP coverage, and Appendix E includes results on the recent ImageNet-W (Li et al., 2023) dataset.
Table 2 shows the results of the conformal training method. As expected, the ConfTr method leads to smaller sets on the in-distribution data, however, this does not translate to improved coverage on distribution-shifted data.
### Long-tailed Label Distributions
Table 3 shows the results on the long-tailed PlantNet-300k dataset. Although the target coverage of 0.90 is maintained marginally across the entire dataset, it is frequently violated on a class-conditional basis. Indeed, there are often hundreds of classes with violated coverage levels, leading to a violation of coverage on up to 70% of the classes in the worst case. This is consistent across all models and methods, and highlights the difficulty of applying conformal prediction methods to long-tailed data distributions.
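The per-class diagnostics reported in Table 3 (macro coverage and the number of violated classes) can be computed with a routine of roughly the following shape; a sketch with illustrative names:

```python
import numpy as np

def class_conditional_coverage(pred_sets, labels, n_classes, target=0.90):
    """Marginal coverage, macro (per-class) coverage, and violated-class count."""
    covered = np.array([y in s for s, y in zip(pred_sets, labels)])
    per_class = np.full(n_classes, np.nan)
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            per_class[c] = covered[mask].mean()
    observed = per_class[~np.isnan(per_class)]
    n_violated = int(np.sum(observed < target))
    return covered.mean(), observed.mean(), n_violated
```

Marginal coverage averages over all test points, so frequent head classes dominate it; macro coverage weights every class equally, which is what exposes the tail-class violations.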
The ineffectiveness of approximating class-conditional coverage on PlantNet-300k is further demonstrated in the Appendix (see Table 6). The Appendix also includes the results of experiments on the iNaturalist-2018 (iNaturalist 2018 competition dataset) and -2019 (iNaturalist 2019 competition dataset) datasets (see Table 7).
## 4 Conclusion
In this paper, we studied the performance of conformal prediction methods under distribution shift and long-tailed data, on large-scale datasets and modern neural architectures. We show that performance degrades in these regimes, and coverage guarantees are frequently violated. We also observed increased inefficiency, i.e., larger average conformal set sizes. While violation of coverage guarantees is undesirable, inefficiency itself indicates model uncertainty: a good model should exhibit heightened uncertainty on OOD examples.
There have been several recent methods developed in dealing with distribution shift (Amoukou and Brunel, 2023; Gendler et al., 2022; Gibbs and Candes, 2022; Barber et al., 2023; Dunn et al., 2022; Bhatnagar et al., 2023; Cauchois et al., 2023) and class-conditional coverage (Deng et al., 2023; Fisch et al., 2021; Jung et al., 2022). However, these have thus far been developed mostly on small-scale datasets, and it remains to be seen how they translate to the large-scale datasets studied here. This is something future works may tackle, and we hope that our results will serve as baselines upon which new conformal prediction methods and novel algorithms and architectures for deep learning can improve.
Ultimately, this work highlights the challenges that conformal prediction methods may face in real world applications, where class imbalance is common and data distributions are ever-shifting. Developing and empirically evaluating conformal prediction methods that are more robust to these admittedly difficult settings is a key requirement to their adoption in safety-critical environments.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline \hline Model & CP Method & Accuracy & Macro Acc. & Coverage & Macro Coverage & Inefficiency & Macro Inefficiency & \# (\%) Violated Classes \\ \hline \multirow{3}{*}{ResNet-152} & THR & & & 0.899 & 0.505 & 1.46 & 1.99 & 774 (72\%) \\ & APS & 80.84 & 36.82 & 0.900 & 0.648 & 3.67 & 13.75 & 617 (57\%) \\ & RAPS & & & 0.900 & 0.610 & 2.15 & 5.17 & 665 (62\%) \\ \multirow{3}{*}{DeiT-B} & THR & & & 0.898 & 0.541 & 1.30 & 1.50 & 714 (66\%) \\ & APS & 82.68 & 43.57 & 0.900 & 0.713 & 4.70 & 18.30 & 513 (47\%) \\ & RAPS & & & 0.901 & 0.603 & 1.68 & 2.64 & 654 (60\%) \\ \multirow{3}{*}{ViT-B} & THR & & & 0.899 & 0.461 & 1.56 & 2.42 & 800 (74\%) \\ & APS & 82.15 & 35.86 & 0.899 & 0.744 & 12.37 & 98.45 & 466 (43\%) \\ \multirow{3}{*}{ViT-B} & RAPS & & & 0.901 & 0.551 & 1.67 & 3.27 & 697 (64\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Conformal prediction results on PlantNet-300k. While marginal coverage of 0.90 is maintained, class-conditional coverage is frequently violated. The conformal threshold is calibrated on a (long-tailed) held-out validation set. |
2308.09251 | Catalogue of topological electrons and phonons in all allotropes of
carbon | Carbon, as one of the most common elements on earth, forms hundreds
of allotropic phases exhibiting rich physics. In this work, by combining ab
initio calculations and symmetry analyses, we systematically study a large
number of carbon allotropes (703) and discover 315 ideal topological phononic
materials and 32 topological electronic materials. The ideal topological
phononic features include single, charge-two, charge-three, and charge-four
Weyl phonons, Dirac or Weyl nodal line phonons, and nodal surface phonons.
The topological electronic features include topological insulators, (type-II)
Dirac points, triple nodal points, Dirac (Weyl) nodal lines, quadratic nodal
lines, and so on. For convenience, we take uni in SG 178 and pbg in SG 230 as
examples to describe the topological features in the main text. We find that
single-pair Weyl phonons coexist with one-nodal-surface phonons in uni in SG
178, forming a single surface arc in the (100) surface BZ and isolated
double-helix surface states (IDHSSs) in the (110) surface BZ. In the
topological semimetal pbg in SG 230, a perfect triply degenerate nodal point
lies near the Fermi level and forms clear surface states in the (001) and
(110) surface BZs. Our work not only greatly expands the topological features
of carbon allotropes but also provides many ideal platforms to study
topological electrons and phonons.
topological electrons and phonons. | Qing-Bo Liu, Xiang-Feng Yang, Zhe-Qi Wang, Ziyang Yu, Lun Xiong, Hua-Hua Fu | 2023-08-18T02:26:37Z | http://arxiv.org/abs/2308.09251v1 | # Catalogue of topological electrons and phonons in all allotropes of carbon
###### Abstract
Carbon, one of the most common elements on earth, forms hundreds of allotropic phases exhibiting rich physics. In this work, by combining ab initio calculations and symmetry analyses, we systematically study a large number of carbon allotropes (703) and discover 315 ideal topological phononic materials and 32 topological electronic materials. The ideal topological phononic features include single, charge-two, charge-three, and charge-four Weyl phonons, Dirac or Weyl nodal line phonons, and nodal surface phonons. The topological electronic features include topological insulators, (type-II) Dirac points, triple nodal points, Dirac (Weyl) nodal lines, quadratic nodal lines, and so on. For convenience, we take \(uni\) in SG 178 and \(pbg\) in SG 230 as examples to describe the topological features in the main text. We find that single-pair Weyl phonons coexist with one-nodal-surface phonons in \(uni\) in SG 178, forming a single surface arc in the (100) surface BZ and isolated double-helix surface states (IDHSSs) in the (110) surface BZ. In the topological semimetal \(pbg\) in SG 230, a perfect triply degenerate nodal point lies near the Fermi level and forms clear surface states in the (001) and (110) surface BZs. Our work not only greatly expands the topological features of carbon allotropes but also provides many ideal platforms to study topological electrons and phonons.
## 1 Introduction.
Topological materials, including topological insulators (TIs), topological semimetals (TSMs) [1, 2, 3, 4, 5, 6, 7, 8, 9], topological superconductors (TSCs) [9, 10, 11, 12] and so on, have received intense study over the past decade due to their protected boundary states and their prospects for future applications in quantum devices. Nodal point phonons, nodal line phonons, and nodal surface phonons have likewise attracted tremendous research interest, because topological phonons can enable particular phonon-based devices [13, 14], such as the phonon diode effect in a hexagonal honeycomb lattice. In recent years, single Weyl phonons with Chern number \(\pm 1\)[15, 16, 17, 18, 19, 20] as well as charge-2 [21, 22], charge-3 [23, 24], and charge-4 [3, 25] Weyl phonons with charges of \(\pm 2\), \(\pm 3\), and \(\pm 4\) have been observed in realistic materials, where they form single-, double-, triple-, and quadruple-helicoid surface states. At the same time, spin-1 Weyl phonons, charge-2 Dirac phonons, and helicoid nodal line phonons have all been observed experimentally [26, 27]. Exploring novel topological electrons and phonons has thus become another central topic in topological physics and materials.
Besides the well-known carbon allotropes graphite [28], diamond [29], carbon nanotubes [30], graphene [31], and fullerenes [32], more than 700 carbon allotropes have been theoretically predicted or experimentally synthesized. In recent years, magic-angle twisted bilayer graphene (TBG) has attracted intense interest because it hosts many interesting physical properties, such as superconductivity [33, 34, 35] and topologically nontrivial electronic states. Beyond TBG, the topological features of carbon allotropes have been widely studied in electronic and phononic systems [36, 37, 38]. For example, topological semimetal phases have been predicted in nanostructured carbon allotropes, body-centered orthorhombic C\({}_{16}\)[39], monoclinic C\({}_{16}\)[40] and C\({}_{40}\)[41], and others [42, 43], covering the topological features of nodal points [42], nodal nets [41, 42], nodal rings [36], and nodal surfaces [43]. Topological phonons have also been predicted in carbon allotropes, such as nodal ring phonons [44], straight nodal line phonons [45], single Weyl phonons [45], and charge-2 Weyl phonons [45]. However, no work so far has catalogued all the topological electrons and phonons across all allotropes of carbon.
In this work, by combining ab initio calculations and symmetry analyses, we systematically study, for the first time, a large number of carbon allotropes (703) and discover 315 ideal topological phononic materials and 32 topological electronic materials. The ideal topological phononic features include charge-one, charge-two, charge-three, and charge-four Weyl phonons, Dirac or Weyl nodal line phonons, and nodal surface phonons. The topological electronic features include topological insulators, Dirac or Weyl points, triple nodal points, Dirac or Weyl nodal lines, and quadratic nodal lines. In the main text we mainly discuss the topological phononic (electronic) nature of \(uni\) (\(pbg\)) in SG 178 (230). We find that single-pair Weyl phonons with charge \(-1\) coexist with one-nodal-surface phonons in \(uni\) in SG 178; they form a single surface arc in the (100) surface BZ and isolated double-helix surface states in the (110) surface BZ. In addition, the topological semimetal \(pbg\) in SG 230 hosts an ideal triple nodal point near the Fermi level, which forms clear surface states on the (001) and (110) surfaces. The topological electronic and phononic features of the other carbon allotropes are discussed in the Supplementary Materials [46]. Our theoretical results not only identify many ideal platforms to study topological electrons and phonons in carbon allotropes, but also greatly expand the known topological features of carbon.
**2. Materials and method**
The crystallographic data of all allotropes of carbon are taken from the international database SACADA [47]. The phononic dispersions and electronic bands of the carbon allotropes are calculated by density functional
Figure 1: The schematic procedure for discovering the topological electrons and phonons of all carbon allotropes.
theory (DFT) using the Vienna _ab initio_ Simulation Package (VASP) with the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) form for the exchange-correlation potential [48; 49; 50]. An accurate optimization of the structural parameters is performed by minimizing the interionic forces to below 0.001 eV/A, with a cutoff energy of 450 eV. The BZ is sampled with a 3\(\times\)3\(\times\)3 \(k\)-point grid. The phononic spectra are then obtained using density functional perturbation theory (DFPT), as implemented in the Phonopy package [51]. The force constants are calculated using a \(2\times 2\times 2\) supercell. To reveal the topological nature of the phonons, we construct a phononic tight-binding (TB) Hamiltonian and compute the surface local density of states (DOS) with the open-source WannierTools code [52] and the surface Green's function method [53]. The Chern numbers (topological charges) of the WPs are calculated by the Wilson loop method [54]. The electronic surface states are computed with the open-source WannierTools code based on a Wannier tight-binding model constructed using the WANNIER90 code. The irreps of the electronic and phononic bands are computed with the program \(ir2tb\) from the electronic and phononic TB Hamiltonians [55].
**3. Results and discussion**
_3.1 Search for topological electrons and phonons_
The guiding principle of our search for topological electrons and phonons in all carbon allotropes is shown in Fig. 1. First, among the 703 carbon allotrope structures in the international database SACADA [47], we exclude the space groups that cannot host topological features, according to the irreducible representations of the high-symmetry
Figure 2: (a) Crystal structure of _uni_ in SG 178 in a primitive cell. (b) The bulk BZ of _uni_ and the (100) (red square) and (110) (blue square) surface BZ. (c) The distribution of single-pair monopole WPs (green dots with Chern number -1) and one-nodal surfaces (yellow region) between the \(17^{th}\) and \(18^{th}\) phonon bands of _uni_ in the first BZ. (d) The phonon spectra of _uni_ along the high-symmetry paths and the corresponding phononic density of states (DOS); the red box marks the region shown enlarged in (e). (f) The evolution of the average position of Wannier centers for the single pair of monopole WPs.
lines (HLs) and high-symmetry points (HPs) of the space groups [56; 57]. This yields 505 carbon allotropes that can host topological electronic or phononic properties. Second, their dynamical stabilities are examined using density functional perturbation theory (DFPT) calculations, which yields 329 dynamically stable carbon allotropes. Finally, using first-principles calculations, we divide the 329 dynamically stable allotropes into 32 perfect topological electronic materials (TEMs), including topological insulators (TIs), Dirac semimetals (DSMs), Weyl semimetals (WSMs) and so on, and 321 perfect topological phononic materials (TPMs), including Weyl phonons (WPs), Dirac phonons (DPs), triple-fold phonons (TPs) and so on, as shown in the Supplementary Materials [46].
_3.2 Symmetry analysis_
To elucidate the topologically nontrivial features of the single-pair Weyl phonons and one-nodal-surface phonons in SG 178 listed in Table 1, we prove that single-pair Weyl phonons exist at the high-symmetry point K and one-nodal-surface phonons at the \(k_{z}\)= \(\pm\pi\) planes by using a two-band \(k\cdot p\) model as
\[\mathcal{H}_{kp}(k)=g_{x}(k)\sigma_{x}+g_{y}(k)\sigma_{y}+g_{z}(k)\sigma_{z}, \tag{1}\]
where \(k=(k_{x},k_{y},k_{z})\), \(\sigma_{x,y,z}\) denote the three Pauli matrices, and \(g_{x,y,z}(k)\) are functions of \(k_{x}\), \(k_{y}\) and \(k_{z}\).
Let us first consider the three-fold screw symmetry \(\{C_{31}^{+}|00\frac{1}{3}\}\) and the two-fold screw symmetry \(\{C_{21}^{{}^{\prime\prime}}|00\frac{1}{2}\}\) at the K point in SG 178. In the 2D irrep R3 [56; 57], the representation matrices of \(\{C_{31}^{+}|00\frac{1}{3}\}\) and \(\{C_{21}^{{}^{\prime\prime}}|00\frac{1}{2}\}\) can be written as
\[\{C_{31}^{+}|00\frac{1}{3}\}=\left[\begin{array}{cc}-\frac{1}{2 }&\frac{\sqrt{3}}{2}\\ -\frac{\sqrt{3}}{2}&-\frac{1}{2}\end{array}\right],\] \[\{C_{21}^{{}^{\prime\prime}}|00\frac{1}{2}\}=\left[\begin{array}[] {cc}0&1\\ 1&0\end{array}\right],\]
So the \(k\cdot p\)-invariant Hamiltonian at the K point is derived as
\[\mathcal{H}_{kp}=ak_{x}\sigma_{x}+bk_{z}\sigma_{y}+ck_{y}\sigma_{z},\]
where a, b and c are constant coefficients. According to this two-band \(k\cdot p\) Hamiltonian, the bands disperse linearly in \(k\) along every direction away from the K point, i.e., the crossing at K is a Weyl point.
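The topological charge of this crossing can be checked numerically as the degree of the unit-vector map \(\hat{d}(k)\) with \(d=(ak_{x},bk_{z},ck_{y})\) over a small sphere around K, which equals the Berry flux (Chern number) of the lower band up to the band-ordering sign convention. A sketch with placeholder coefficients:

```python
import numpy as np

a, b, c = 1.0, 1.0, 1.0  # placeholder k.p coefficients

def d_hat(theta, phi, r=0.05):
    """Unit d-vector of H = d . sigma on a small sphere of radius r around K."""
    kx = r * np.sin(theta) * np.cos(phi)
    ky = r * np.sin(theta) * np.sin(phi)
    kz = r * np.cos(theta)
    d = np.array([a * kx, b * kz, c * ky])
    return d / np.linalg.norm(d)

# Degree of the map = (1 / 4 pi) * integral of d_hat . (dd/dtheta x dd/dphi).
N, eps, W = 120, 1e-5, 0.0
thetas = np.linspace(1e-3, np.pi - 1e-3, N)
phis = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dth, dph = thetas[1] - thetas[0], phis[1] - phis[0]
for th in thetas:
    for ph in phis:
        dt = (d_hat(th + eps, ph) - d_hat(th - eps, ph)) / (2 * eps)
        dp = (d_hat(th, ph + eps) - d_hat(th, ph - eps)) / (2 * eps)
        W += np.dot(d_hat(th, ph), np.cross(dt, dp)) * dth * dph
print(round(W / (4.0 * np.pi)))  # -> -1 for these coefficients
```

For \(a=b=c>0\) the computed charge is \(-1\), consistent with the monopole WPs shown in Fig. 2(f).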
Then, we prove that one-nodal surfaces exist at the \(k_{z}\)= \(\pm\pi\) planes in the nonsymmorphic SG 178. We consider the screw axial symmetry \(S_{2z}\): \((x,y,z,t)\mapsto(-x,-y,z+\frac{1}{2},t)\), which indicates that \(S_{2z}\) inverts \(k_{x}\) and \(k_{y}\) while preserving \(k_{z}\) in \(k\)-space. One can derive that \((S_{2z})^{2}=T_{001}=e^{-ik_{z}}\), where \(T_{001}\) is the translation along the \(z\) direction by a lattice constant. Time reversal \(\mathcal{T}\) is antiunitary, inverts \(k\), and satisfies \(\mathcal{T}^{2}\) = 1 for spinless phonons. Hence the compound symmetry \(S=\mathcal{T}S_{2z}\) is antiunitary and inverts only \(k_{z}\). Since [\(\mathcal{T}\), \(S_{2z}\)] =0, \(S\) satisfies
\[S^{2}=e^{-ik_{z}}. \tag{2}\]
The degeneracy on this surface can be understood as a consequence of Kramers degeneracy. Under the operation \(S\), every point on the \(k_{z}=\pi\) plane is invariant. Moreover, from Eq. (2), the antiunitary symmetry satisfies \(S^{2}\) = -1 on the whole \(k_{z}=\pi\) plane. Thus, a two-fold Kramers degeneracy arises at every point of this plane. Away from this plane, the Kramers degeneracy is generically destroyed owing to the loss of symmetry protection. In short, a nodal surface forms at the \(k_{z}=\pi\) plane.
Next, we derive the \(k\cdot p\) model at the \(\Gamma\) point in SG 230 (Table 1). The point \(\Gamma\) belongs to the point group \(O_{h}\); we consider the five symmetries \(\{S_{61}^{-}|000\}\), \(\{\sigma_{x}|\frac{1}{2}\frac{1}{2}0\}\), \(\{\sigma_{z}|\frac{1}{2}0\frac{1}{2}\}\), \(\{C_{2c}|00\frac{1}{2}\}\) and time-reversal symmetry \(\mathcal{T}\). In the 3D irrep \(\Gamma_{4}^{+}\), the representation matrices of these symmetries are
\[\{S_{61}^{-}|000\}=\left[\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right],\] \[\{\sigma_{x}|\frac{1}{2}\frac{1}{2}0\}=\left[\begin{array}{ccc}- 1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right],\] \[\{\sigma_{z}|\frac{1}{2}0\frac{1}{2}\}=\left[\begin{array}{ccc}0 &1&0\\ 1&0&0\\ 0&0&-1\end{array}\right],\] \[\{C_{2c}|00\frac{1}{2}\}=\left[\begin{array}{ccc}-1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right],\] \[\mathcal{T}=\mathcal{K}.\]
Here \(\mathcal{K}\) is the complex-conjugation operator. The three-band \(k\cdot p\)-invariant Hamiltonian at the \(\Gamma\) point is then derived
\begin{table}
\begin{tabular}{c c c c} \hline \hline SGs & Numbers & Names & TFs \\ \hline \multirow{3}{*}{P6\({}_{1}\)22(No. 178)} & 29 & uni & \\ & 56 & unj & \\ & & & Weyl points and nodal surfaces \\ \hline \multirow{3}{*}{Ia\(\bar{3}\)d(No. 230)} & 13 & lcs & triple nodal \\ & 53 & pbg & points \\ \end{tabular}
\end{table}
Table 1: The complete list of carbon allotropes with topological properties in SGs 178 and 230. The first column indicates the space group (SG), the second column the number of the carbon allotrope in SACADA, the third column its name, and the fourth column the types of topological features (TFs). The red letters mark topological insulators or semimetals.
as
\[\mathcal{H}_{kp}=\left[\begin{array}{llll}0&mk_{x}k_{y}&nk_{x}k_{z}\\ mk_{x}k_{y}&0&qk_{y}k_{z}\\ nk_{x}k_{z}&qk_{y}k_{z}&0\end{array}\right],\]
where m, n and q are real constant coefficients. According to this Hamiltonian, a quadratic triple nodal point (QDNP) forms at the \(\Gamma\) point in SG 230.
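A quick numerical check of this statement, with placeholder coefficients: the three eigenvalues coincide at \(\Gamma\) and, since every matrix element is quadratic in \(k\), the splitting grows as \(|k|^{2}\) along a generic direction.

```python
import numpy as np

m, n, q = 1.0, 0.8, 0.6  # placeholder real coefficients

def H(k):
    kx, ky, kz = k
    return np.array([[0.0,         m * kx * ky, n * kx * kz],
                     [m * kx * ky, 0.0,         q * ky * kz],
                     [n * kx * kz, q * ky * kz, 0.0        ]])

direction = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
for t in (0.0, 0.01, 0.02, 0.04):
    print(t, np.linalg.eigvalsh(H(t * direction)))
# Triple degeneracy at t = 0; the eigenvalue spread scales as t**2 (quadratic node).
```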
_3.3 Topological phononic features of \(uni\) in SG 178_
The crystal structure of \(uni\) in SG 178 is shown in Fig. 2(a); it contains 6 carbon atoms in a unit cell. The bulk BZ, the (110) surface BZ (blue square) and the (100) surface BZ (red square) are shown in Fig. 2(b). The phononic bands and the corresponding phononic density of states (DOS) of \(uni\) are drawn in Fig. 2(d), where the absence of imaginary frequencies indicates that this material is dynamically stable. At first glance, we can
Figure 3: The surface phonon dispersions and isofrequency surface contours of \(uni\) in SG 178. (a) The surface phonon dispersion on the (100) surface along the high-symmetry path \(\overline{\text{Y}}\)-\(\overline{\text{X}}\)-\(\overline{\text{Y}}\). (b) The isofrequency surface contours of the (100) surface BZ at 36.155 THz. (c) The surface phonon dispersion on the (110) surface along the high-symmetry path \(\widetilde{\text{X}}\)-\(\widetilde{\text{Y}}\). (d) The isofrequency surface contours of the (110) surface BZ at 36.155 THz.
find that a single pair of Weyl points, formed by the \(17^{th}\) and \(18^{th}\) bands, is located at the K points; the distribution of these single-pair Weyl phonons (green dots) is shown in Figs. 2(c) and 2(e), with Chern number -1 confirmed in Fig. 2(f). In addition, two-fold degenerate bands can be observed along the high-symmetry lines A-H-L in Fig. 2(d), which form one-nodal surfaces at the \(k_{z}=\pm\pi\) planes (yellow region) in Fig. 2(c).
In order to study the topological features of \(uni\), we calculate the (100) and (110) surface states of \(uni\) in Fig. 3. As shown in Fig. 2(b), the high-symmetry points \(\Gamma\) and M (L and A) are projected to \(\overline{\Gamma}\) (\(\overline{\rm Y}\)) in the (100) surface BZ. On the (100) surface, one can see in Fig. 3(a) the surface arc states formed by the single Weyl phonons and the one-nodal surfaces, which is reported for the first time in phononic systems. We also calculate the isofrequency surface at 36.155 THz and observe the isolated surface arcs formed by a Weyl phonon with charge -1 in Fig. 3(b). In the (110) surface BZ, the high-symmetry points \(\Gamma\) and K (A) are projected to \(\widetilde{\Gamma}\) (\(\widetilde{\rm Y}\)) in Fig. 2(b). We find that two Weyl phonons with charge -1 are projected onto the same point, forming an effective charge-two Weyl phonon at \(\widetilde{\Gamma}\) and leading to the isolated double-helix surface states in Fig. 3(c). The isolated double-helix surface arc states can also be observed on the isofrequency surface at 36.155 THz, as shown in Fig. 3(d). These novel physical features confirm the topologically nontrivial nature of \(uni\).
_3.4 Topological electronic features of \(pbg\) in SG 230_
Next, we discuss the topological electronic features of \(pbg\) in SG 230 in Figs. 4 and 5. The unit cell of \(pbg\) is shown in Fig. 4(a); it contains 48 carbon atoms. The 3D BZ and the (001) (blue square) and (110) (red square) surface BZs are shown in Fig. 4(b). The electronic bands and density of states (DOS) of \(pbg\) are drawn in Fig. 4(c), where the blue lines (dashed lines) stand for the DFT result (Wannier90 fitting). We find that an ideal triple nodal point exists near the Fermi level (\(E_{f}\)) at the point \(\Gamma\).
Similar to \(uni\) in SG 178, we also calculate the (001) and (110) surface states of \(pbg\) in Fig. 5. The \(\Gamma\) and H points (P and H) are projected to \(\overline{\Gamma}\) (\(\overline{\rm M}\)) in the (001) surface BZ in Fig. 4(b). At first glance, we find that at the point \(\overline{\Gamma}\) there are three surface arc states projected from the triple nodal point in Fig. 5(a), and the triple nodal point itself is visible at the Fermi level, as shown in Fig. 5(b). On the (110) surface, we can likewise clearly observe the three surface arc states projected from the triple nodal point at \(\widetilde{\Gamma}\) in Fig. 5(c) and the triple nodal point at \(\widetilde{\Gamma}\) in Fig. 5(d). These nontrivial topological features all prove that \(pbg\) is topological.
**4. Conclusion**
In summary, by performing symmetry analyses and first-principles calculations, we systematically study 703 allotropes of carbon and discover 315 ideal topological phononic materials, hosting single, charge-two, charge-three, and charge-four Weyl phonons, Dirac or Weyl nodal line phonons, and nodal surface phonons, as well as 32 topological electronic materials, hosting topological insulators, (type-II) Dirac points, triple nodal points, Dirac (Weyl) nodal lines, quadratic nodal lines and so on. In _uni_ carbon in SG 178, single-pair Weyl phonons coexist with one-nodal-surface phonons, forming a single surface arc in the (100) surface BZ and isolated double-helix surface states in the (110) surface BZ. In another example, \(pbg\) in SG 230, a perfect triple nodal point is found near the Fermi level, forming clear triple surface arc states in the (001) and (110) surface BZs. More carbon allotropes are tabulated in the Supplementary Materials [46]. Our work not only greatly expands the known topological features of carbon allotropes, but also provides many ideal platforms to study topological electrons and phonons.
**Conflict of interest**
The authors declare that they have no conflict of interest.
**Acknowledgements.**
This work is supported by the National Natural Science Foundation of China under Grants Nos. 11774107, 12147113, 12104348 and U20A2077, by the Science and Technology Department of Hubei Province under Grant No. 2022CFD041, and partially by the National Key R&D Program of China (2021YFC2202300).
**Author contributions**
Ziyang Yu, Lun Xiong and Hua-Hua Fu proposed and supervised the project. Qing-Bo Liu carried out the high-throughput calculations. All authors contributed to writing of the manuscript.
**Supplementary materials**
Supplementary materials to this article can be found online at XXXX.
|
2304.11913 | Development of a Trust-Aware User Simulator for Statistical Proactive
Dialog Modeling in Human-AI Teams | The concept of a Human-AI team has gained increasing attention in recent
years. For effective collaboration between humans and AI teammates, proactivity
is crucial for close coordination and effective communication. However, the
design of adequate proactivity for AI-based systems to support humans is still
an open question and a challenging topic. In this paper, we present the
development of a corpus-based user simulator for training and testing proactive
dialog policies. The simulator incorporates informed knowledge about proactive
dialog and its effect on user trust and simulates user behavior and personal
information, including socio-demographic features and personality traits. Two
different simulation approaches were compared, and a task-step-based approach
yielded better overall results due to enhanced modeling of sequential
dependencies. This research presents a promising avenue for exploring and
evaluating appropriate proactive strategies in a dialog game setting for
improving Human-AI teams. | Matthias Kraus, Ron Riekenbrauck, Wolfgang Minker | 2023-04-24T08:42:51Z | http://arxiv.org/abs/2304.11913v2 | Development of a Trust-Aware User Simulator for Statistical Proactive Dialog Modeling in Human-AI Teams (Preprint)
###### Abstract.
The concept of a Human-AI team has gained increasing attention in recent years. For effective collaboration between humans and AI teammates, proactivity is crucial for close coordination and effective communication. However, the design of adequate proactivity for AI-based systems to support humans is still an open question and a challenging topic. In this paper, we present the development of a corpus-based user simulator for training and testing proactive dialog policies. The simulator incorporates informed knowledge about proactive dialog and its effect on user trust and simulates user behavior and personal information, including socio-demographic features and personality traits. Two different simulation approaches were compared, and a task-step-based approach yielded better overall results due to enhanced modeling of sequential dependencies. This research presents a promising avenue for exploring and evaluating appropriate proactive strategies in a dialog game setting for improving Human-AI teams.
user simulation, proactive dialog, corpus-based methods, human-AI team, human-AI trust
## 1. Introduction
The concept of a Human-AI team (HAIT) is intriguing, but despite the availability of sophisticated conversational assistants such as Alexa, Siri, and ChatGPT, we are still far from achieving an AI that is not only able to solve specific tasks but could also socialize and form a personal bond with its user to build an effective team. HAIT requires close coordination between humans and AI teammates to work together towards a common goal (Krishnan et al., 2017). Effective communication, prediction of teammates' actions, and high-level coordination are essential components of this collaborative effort. In this regard, the proactive behavior of AI-based systems and the communication thereof during collaboration is an important research topic concerning HAITs, e.g., see Horvitz et al. (2018). Proactivity can be defined as an AI's self-initiating, anticipatory behavior for contributing to effective and efficient task completion. It has been shown to be essential for human teamwork as it leads to higher job and team performance and is associated with leadership and innovation (Brock et al., 2018). However, the design of adequate proactivity for AI-based systems to support humans is still an open question and a challenging topic. It is essential to study the impact of proactive system actions on the human-agent trust relationship and how to use information about an AI agent's perceived trustworthiness to model appropriate proactive dialog strategies for forming effective HAITs. There are different experimental approaches to achieving this goal, such as
developing strategies and testing them with real users or training statistical-based strategies and testing them using simulated experimental approaches (Kraus et al., 2017). Experiments with real users can either take place under laboratory conditions or "in the wild," i.e., in real-life scenarios. However, there are disadvantages to both approaches, including the lack of real-life usage in laboratory conditions and the difficulty of obtaining a full-functioning system during the early stages of development for in-the-wild studies. In addition, recruiting a sufficient number of users to allow a valid interpretation of data is a challenge. To overcome this challenge, user simulation techniques have been developed (Kraus et al., 2017; Kraus et al., 2017; Kraus et al., 2017; Kraus et al., 2018). These techniques allow the testing of dialog agent prototypes with a large number of simulated "subjects" and facilitate the exploration of dialog strategies that may enhance HAIT.
In this paper, we present the development of a corpus-based user simulator for training and testing proactive dialog policies. To create the simulator, we collected a proactive dialog corpus with user trust annotations (Kraus et al., 2017), utilizing informed knowledge about proactive dialog and its effect on user trust from previous studies (Kraus et al., 2017; Kraus et al., 2018; Kraus et al., 2018; Kraus et al., 2018) to ensure high-quality data collection. Our main goal was to replicate realistic user characteristics, tasks, and trusting behaviors for exploring and evaluating appropriate strategies in a dialog game setting that we designed as a sequential decision-making task in a company management environment (Kraus et al., 2017). We simulated user behavior and personal information, including socio-demographic features and personality traits, using relevant data from the corpus collection. This enabled us to estimate the current trustworthiness of system behavior (Kraus et al., 2017) and integrate trust in the dialog state and reward function for creating trust-adaptive proactive dialog strategies (Kraus et al., 2018). To simulate the user's task behavior, we developed and compared two different simulation approaches: A complexity-based method and a task-step-based method. Both methods were found to be applicable for training statistical proactive dialog strategies, but the task-step-based approach yielded better overall results due to better modeling of sequential dependencies.
## 2. Related Work
Over the past 25 years, user simulation for training statistical and primarily RL-based dialog systems has been extensively studied (Kraus et al., 2017). The primary focus has been on task-oriented dialog systems that are designed to assist users in achieving a specific task or goal. Examples of such systems include conversational agents used for hotel room bookings or ordering food from a restaurant (Kraus et al., 2018). These systems conduct so-called slot-filling dialogs, where the system retrieves specific values for pre-defined entities, or slots, of a particular domain, such as food type, price range, and location, via dialog with the user for providing the desired information. In task-oriented dialog systems, user utterances are typically encoded in semantic representations by a natural language understanding module. The dialog management module then takes these representations as input for filling the respective semantic slots and selecting an appropriate system response according to a specific dialog policy. Therefore, user simulation for training and evaluating task-oriented systems usually produces output in the form of semantic representations of user actions (Kraus et al., 2017; Kraus et al., 2017; Kraus et al., 2017; Kraus et al., 2017). However, there are also alternative approaches that can generate natural language utterances (Kraus et al., 2018; Kraus et al., 2018). (Kraus et al., 2018) even presented a user simulator that generates spoken utterances based on pre-recorded speech files. User simulators can be classified into two types: rule-based and corpora-based (Kraus et al., 2018). Rule-based methods rely on hand-crafted rules and a range of user profiles (Kraus et al., 2018), while corpora-based approaches use probabilistic data-driven methods to model natural user behavior (Kraus et al., 2018; Kraus et al., 2018). More recent approaches use sequence-to-sequence or transformer models to produce output on a semantic or natural language level (Kraus et al., 2018; Kraus et al., 2018; Kraus et al., 2018). Social behavior modeling is also of interest when it comes to user simulation. Egges et al. (Egges et al., 2019) simulated personality, mood, and emotion, and Ferreira and Lefevre (Ef et al., 2019) incorporated social signals in the reward function of the RL-based system for training socially-effective dialog behavior. Several social signals, such as user satisfaction, rapport-building, small talk, and self-disclosure, have been used in different user simulation
approaches (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). We also aim to build a user simulator that can express task-related behavior and model specific user characteristics. However, as our interaction with the agent only involves a limited set of user actions, we opted for a simple combination of rule-based and data-driven mechanisms for simulating adequate user behavior. We compared two simulation approaches: using the complexity-based method, the user's task behavior was simulated depending on the agent's action and the complexity of a task step, while the task-step-based method incorporated information from a particular task step and the agent's action. For deciding which approach to use for training our RL-based proactive dialog agent, we evaluated the approaches according to common metrics for estimating the quality of user simulators (Wang et al., 2018). Here, we utilized the mean square error (MSE) and Kullback-Leibler (KL) divergence to measure the error and difference between the behavior of real users and simulated users.
## 3. Simulated Proactive Dialog Environment
As we chose to utilize a corpus-based method to appropriately model a user's task and trusting behavior, it was necessary to have access to a proactive dialog corpus that includes trust annotations. Although there are various data corpora that exist for conventional dialog modeling (Han et al., 2017; Wang et al., 2018), none of them are sufficient for modeling proactive dialog. This is due to the fact that proactive behavior is either absent or underrepresented in these corpora (Bahdan et al., 2016), and trust-related features are not adequately annotated. To fill this gap, a new data corpus was created in previous work (Liu et al., 2018), which involved developing an AI agent prototype for personal advising with a proactive dialog model. The agent collected personal and dialog data in a serious gaming scenario, resulting in a trust-annotated data corpus containing interactions with the proactive assistance system. To accurately model and predict trust during mixed-initiative dialog, a trust prediction model was required to simulate a user's trusting behavior. Previous research on trust has identified various user-, system-, and context-related factors that influence the trust relationship (Han et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). For including this in the development of proactive dialog policies, we collected the necessary features during data collection and build a trust estimation model in previous work (Liu et al., 2018). The next sections provide further details regarding the data collection method, the corpus, and the techniques employed to predict the HCT relationship.
### Data Collection
We acquired a dataset of 308 dialogs, which included a total of 3696 exchanges between users and a proactive dialog agent (Liu et al., 2018). The data was gathered using an online game that utilized the clickworker framework 1, where users had to make strategic decisions to manage a company with the agent's assistance. Each exchange was marked with the user's self-reported measures of trust in the system, including competence, predictability, and reliability. The data also included objective features such as task complexity, exchange duration, user actions, and static user information such as age, gender, personality, and domain expertise. The game was designed as a turn-based planning task where the user had to make decisions based on the agent's provided options. The agent selected various proactive dialog action types and employed natural language to offer suggestions and information to help the user make the best choice. We made sure that the agent's suggestions generated the most points based on the task step to avoid any unintended negative impacts on the system's trustworthiness.
Footnote 1: www.clickworker.de
### Trust Modelling and Estimation
Based on the collected data set, we created a new user model to predict online trust when interacting with proactive AI agents (Kraus et al., 2018). For this, we selected corpus parameters as features for prediction, including both numerical values (such as age and trust propensity) and categorical values (such as proactivity type). The resulting feature vector contained 57 features, which were categorized as personal user parameters, interaction parameters, and temporal interaction parameters. To predict trust, the variables for trust, competence, reliability, and predictability were combined to create a target variable with a range of 1 to 5 on a Likert-scale. The prediction problem was treated as a multi-class classification task with distinct trust values as target classes. We applied a Support Vector Machine (SVM) as a trust classifier, which provided the best results compared to Gated Recurrent Unit (GRU) Networks and Extreme Gradient Boosting. The trained SVM was used to predict the user's trust state at each task step and to evaluate the simulated user's trust.
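A minimal scikit-learn sketch of this trust classifier; the synthetic feature matrix stands in for the 57-dimensional corpus features, and the hyperparameters are illustrative rather than the tuned values:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(3696, 57))    # placeholder for the 57-dim feature vectors
y = rng.integers(1, 6, size=3696)  # placeholder trust labels on the 1..5 scale

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("trust-prediction accuracy:", clf.score(X_te, y_te))
```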
In summary, the simulated environment consisted of a corpus-based user simulator; a supervised-learning-based trust state model, which allows estimating the simulated user's current trust in the agent's actions and adapting the dialog to that estimate; and an RL-based proactive dialog agent, which is extensively described in our previous work (Kraus et al., 2018). Fig. 1 depicts a schematic of the simulated environment. In the following, we present the architecture of the user simulation model.
## 4. User Simulator Architecture
User simulation was based on two components similar to the work by Jain et al. (Jain et al., 2017): a _user model_ and a _user dialog manager_. The user model contained all the necessary information for modeling distinct user types whose specific task and trust behaviors were imitated. The user dialog manager was designed as a rule-based agent that triggered various behaviors dependent on the proactive system's actions and the current task context. See Table 1 for an overview of all parameters used for user simulation. First, the user model is described in detail.
### User Model
To create distinct user types, we utilized user-dependent information from the corpus, including age, gender, technical proficiency, the propensity to trust, domain expertise, and the Big 5 personality traits. We generated random distributions for these variables, with the exception of gender, which was based on its likelihood of occurrence in the corpus. Truncated
Figure 1. The proposed RL framework for implementing trust-adaptive proactive dialog agents. We formulate the collaboration process as an MDP and train an RL-based proactive dialog agent on interactions with a simulated user based on our data collection. For integrating user trust estimations in the state and the reward function of the agent, we utilize our trust estimation module for predicting user trust in the agent’s actions in real time.
Gaussian distributions were used for all other variables, as they were rated on 5-point Likert scales or were bounded due to study restrictions (user age was limited from 18 to 60). Our definition of a user's task behavior comprised the selection of options (represented as the game score), help requests about the game, and suggestion requests to the system. While all user-type features were used for trust estimation, only three variables (domain expertise, propensity to trust, and technical proficiency) were deemed relevant to the specific user's task behavior. Domain expertise influences decision-making, and novices may ask for more recommendations. Propensity to trust affects a user's reactions to proactive behavior, with low trust leading to rejections of offers or not asking for assistance. Technical proficiency affects decision-making when collaborating with an autonomous technical system. To simplify the selection of specific task behavior, these three user traits were transformed into binary values based on a threshold of 3 on the 5-point Likert scale; this reduced the rule space for simulating user behavior. User-specific task behavior and the system's actions affect task duration and perceived difficulty. Both variables were also randomized using truncated Gaussian distributions, as task duration was always greater than 20 seconds and subject to a timing limit, and perceived difficulty was measured on a 5-point Likert scale. The user's task behavior was based on a pre-defined rule set generated by a user dialog manager, which is described in the next section.
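A sketch of sampling one user type from such bounded distributions with scipy's truncated normal; the means, spreads, and gender frequencies below are illustrative, not the corpus estimates:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)

def bounded_normal(mean, sd, lo, hi):
    a, b = (lo - mean) / sd, (hi - mean) / sd  # bounds in standard-normal units
    return float(truncnorm(a, b, loc=mean, scale=sd).rvs(random_state=rng))

user = {
    "age": bounded_normal(35.0, 12.0, 18.0, 60.0),           # study-bounded range
    "trust_propensity": bounded_normal(3.2, 0.9, 1.0, 5.0),  # 5-point Likert
    "domain_expertise": bounded_normal(2.8, 1.0, 1.0, 5.0),
    "tech_affinity": bounded_normal(3.5, 0.8, 1.0, 5.0),
    "gender": rng.choice(["male", "female", "other"], p=[0.55, 0.43, 0.02]),
}
# Binarize the three behavior-relevant traits at the Likert midpoint of 3.
traits = "".join("1" if user[k] >= 3.0 else "0"
                 for k in ("domain_expertise", "trust_propensity", "tech_affinity"))
```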
### User Dialog Manager
Two distinct approaches, namely complexity-based and task-step-based, were used to generate task behavior in the serious dialog game. The game had 12 task steps, each with a varying level of complexity, which referred to the number of options a user had to select from for decision-making. The number of options per task varied sequentially in the order of 3, 4, 5, 3, 4, and 5. The complexity-based method simulated user behavior based on the system's action and the complexity level of the current task step. For instance, if the current proactive dialog act type was Notification and the current task step had a complexity level of 3, the simulator would use the corpus data distributions for these specific cases for generating task behavior. On the other hand, the task-step-based method incorporated information from a specific task step and the system's action to simulate user behavior. For instance, if the current proactive dialog act type was Notification and the user was working on the seventh task step, the simulator would use the corpus data distributions for these specific cases for generating task behavior. The advantage of incorporating task complexity in dialog management was that user behavior could be generated in a more generalized way, not dependent on the
\begin{table}
\begin{tabular}{l l l} \hline _Parameter_ & _Description_ & _Type_ \\ \hline Age & numerical value & user trait \\ Gender & categorical value: male, female, other & user trait \\ Technical Affinity & avg. numerical value based on 5-point Likert scale by Karrer et al. (2010) & user trait \\ Trust Propensity & avg. numerical value based on 5-point Likert scale by Merritt et al. (2015) & user trait \\ Domain Expertise & avg. numerical value based on self-developed 5-point Likert scale & user trait \\ Big 5 personality traits & avg. numerical value for each trait based on 5-point Likert scale by (Nagaghi et al., 2017) & user trait \\ Proactive DialAct & categorical value: None, Notification, Suggestion, Intervention & system action \\ Task Difficulty & numerical value based on 5-point Likert scale & task property \\ Task Complexity & numerical value: 3,4,5 & task property \\ Task Duration & numerical value & task property \\ User Selection & numerical value according to game score of the respective option & user action \\ Suggestion Request & categorical value: True, False & user action \\ Help Request & categorical value: True, False & user action \\ \hline \end{tabular}
\end{table}
Table 1. Overview and description of the user and interaction parameters from the ProDial-corpus.
specific task steps. However, the task-step-based method modeled sequential dependencies between task steps better. Hence, there existed a certain trade-off between both variants.
The simulation of user behavior in both approaches involved generating values for the game score, whether a user-initiated a suggestion or help request, the corresponding duration of the task step, and perceived task difficulty. The probabilities for each specific user behavior were based on structured datasets that depended on the user model and the current dialog situation. The overall dataset was sorted based on the occurrences of user behavior, dependent on the relevant user traits, such as domain experience, propensity to trust, and technical affinity. These user traits were represented as tuples of three binary values, i.e., "000" to "111". For instance, "000" represented low domain experience, low propensity to trust, and low technical affinity, while "111" represented high domain experience, high propensity to trust, and high technical affinity. In the complexity-based method, the user-dependent data was first summarized based on tasks of the same complexity, and then it was summarized according to the types of assistant proactivity, i.e., None, Notification, Suggestion, and Intervention. Lastly, the resulting data was structured according to the occurrences of help and suggestion requests, represented as binary values. In contrast, the task-step-based approach used the same method but summarized the user-dependent data based on the respective task step number. In case the occurrences of specific parameters did not exceed a specific threshold, both approaches used fallback datasets. If there was not enough data for a particular user trait, i.e., occurrences for a trait were below 10, then this parameter was omitted, and the means and standard deviations or counts of all user traits were used for calculating the probabilities for user behavior generation.
The following outlines the simulation process for the task-step-based approach 2. The complexity-based algorithm was structured similarly, but with complexity-based data distributions instead of task-step-based distributions. The algorithmic process begins with generating a user type with specific traits and loading approach-specific structured data sets. Then, the dialog game is initialized, and the task steps (1-12) with respective complexities (3, 4, 5) are iterated. For each task step, values for help and suggestion requests, duration, perceived difficulty, and the achieved game score are calculated based on the user type and the system's action. Relevant trait categories are queried, and the context is determined, such as the proactive system action and the complexity or task step number. The fallback threshold is checked, and if not exceeded, personality traits are neglected and overall means are used for probability calculation. A simulation is conducted to determine whether a help and/or suggestion request would be set, with a fallback check for the respective request types. Depending on the specific case, the perceived difficulty, task step duration, and the achieved game score are simulated; a condensed sketch follows the footnote below.
Footnote 2: The code for the user simulator is available online at [https://github.com/MattKraus90/ProactiveTraining](https://github.com/MattKraus90/ProactiveTraining) for reproducibility.
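The lookup-with-fallback logic described above can be condensed as follows; this is a sketch of the mechanism with a hypothetical table layout, not the released code:

```python
import numpy as np

def simulate_step(tables, traits, action, step, rng, fallback_min=10):
    """Sample one task step of user behavior from corpus-derived distributions.

    `tables` maps (task step, system action, trait tuple such as "101") to
    summary statistics; the "any" key pools over all traits as the fallback.
    """
    entry = tables.get((step, action, traits))
    if entry is None or entry["count"] < fallback_min:
        entry = tables[(step, action, "any")]  # too little data: drop traits
    return {
        "help_request": rng.random() < entry["p_help"],
        "suggestion_request": rng.random() < entry["p_suggest"],
        "score": rng.normal(entry["score_mu"], entry["score_sd"]),
        "duration": max(20.0, rng.normal(entry["dur_mu"], entry["dur_sd"])),
        "difficulty": int(np.clip(round(rng.normal(entry["diff_mu"],
                                                   entry["diff_sd"])), 1, 5)),
    }
```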
## 5. Experiments and Results
To determine the most suitable approach for training and testing proactive dialog strategies that adapt to trust, we conducted an evaluation that focused on assessing the realism of each user simulation method. Our goal was to identify the approach that best approximated the actual behavior of users, as observed in the data. To achieve this, we employed both methods to simulate user behavior, based on the user types and system actions recorded during data collection with real users. We then compared the simulated behavior with the real behavior, using the KL distance to measure the difference between the two distributions. This approach has been suggested by previous researchers such as Pietquin and Hastie (Pietquin and Hastie, 2017) and was used in a study by Jain et al. (Jain et al., 2018). KL distance is a measure of how one probability distribution
\(Q\) is different from a second, reference probability distribution \(P\):
\[D_{KL}(P\parallel Q)=\sum_{x\in\mathcal{X}}P(x)\log\left(\frac{P(x)}{Q(x)}\right) \tag{1}\]
where a distance of 0 means the distributions are identical, and larger values indicate increasingly different distributions. The lower the distance between the distributions, the more realistic the respective user simulator. For evaluation, we calculated the distances between the distributions of the complexity-based, respectively task-step-based, approach and the actual behavior for each task step. In Table 2, the overall mean distances as well as the individual distances for the game score, duration, help and suggestion requests, and perceived difficulty are listed.
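Concretely, the distances can be computed by histogramming real and simulated samples of a feature (e.g., game score per task step) on shared bins; the binning and smoothing constants below are our assumptions:

```python
import numpy as np

def kl_distance(real, simulated, bins=20, eps=1e-9):
    """D_KL(P || Q) between histogram estimates on shared bins."""
    lo = min(real.min(), simulated.min())
    hi = max(real.max(), simulated.max())
    p, edges = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(simulated, bins=edges)
    p = (p + eps) / (p + eps).sum()  # additive smoothing avoids log(0)
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
print(kl_distance(rng.normal(50, 10, 500), rng.normal(52, 11, 500)))
```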
## 6. Discussion and Conclusion
The study findings indicated that both types of user simulators had similar performance, as there were no significant differences in any measured feature (all p-values greater than 0.05). Table 2 reveals that the task-step-based approach produced behavior that was slightly more realistic than the complexity-based method. This was mainly attributed to the fact that the former approach simulated more authentic game scores and durations for specific task steps. One possible explanation is that the task-step approach used averages of individual task steps to generate distributions, while the complexity-based method used average values across four task steps of the same complexity. Help and suggestion requests were simulated nearly exactly as observed during data collection. The simulators performed in the same region as the user simulator by Jain et al. (Jain et al., 2019), which achieved a KL score of 0.109 and was then used to build an RL-based dialog manager (Jain et al., 2019). Thus, we deem both applicable for training and testing statistical proactive dialog strategies. Given the slightly more realistic outcomes, the task-step-based approach was chosen for constructing the training and test environment for developing statistical proactive dialog. We trained and tested an RL-based proactive dialog agent using the described user simulation approach and presented the results in Kraus et al. (Kraus et al., 2017). The evaluation showed the utility of our approach, as the user simulator yielded similar results on task effectiveness and user trust as observed in studies with real users (Kraus et al., 2017; Kraus et al., 2017).
As usual, our approach has limitations. Firstly, although we found that 308 participants provided meaningful results for creating user types, increasing the number of participants for data collection could be beneficial. This is especially important for very specific user types where there is limited information, necessitating the need for fallback strategies. Gathering data from more users could alleviate this issue. Secondly, we only focused on a restricted decision-making interaction where users used template natural language utterances to interact with the agent. Therefore, we used a simplistic approach which was deemed reasonable for the task at hand. However, for future work, using more
\begin{table}
\begin{tabular}{l|l|l||l|l} \hline & \multicolumn{2}{c}{**Complexity-based \(M\) (\(SD\))**} & \multicolumn{2}{c}{**Task-step-based \(M\) (\(SD\))**} \\ & KL & MSE & KL & MSE \\ \hline \hline
**Game Score** & 0.369 (.185) & 73.19 (64.6) & 0.354 (.166) & 70.94 (64.7) \\ \hline
**Duration** & 0.261 (.064) & 1722 (844) & 0.244 (.079) & 1530 (104) \\ \hline
**Difficulty** & 0.145 (.011) & 1.909 (.155 ) & 0.149 (.008) & 1.887 (.217) \\ \hline
**Help Request** & 0.029 (.009) & 0.088 (.028) & 0.031 (.011) & 0.097 (.035) \\ \hline
**Suggestion Request** & 0.084 (.006) & 0.352 (.025) & 0.082 (.010) & 0.337 (.034) \\ \hline \hline
**Overall** & 0.178 (.151) & 359.5 (780) & 0.172 (.142) & 320.6 (765) \\ \hline \end{tabular}
\end{table}
Table 2. Descriptive statistics of the KL distances and MSEs for each user simulator type with regard to the measures of game score, duration, help and suggestions request, and perceived difficulty.
sophisticated approaches such as Hidden Markov Models or RL-based approaches could be beneficial, especially when extending our approach to utilize a greater number of dialog acts. In addition, we aim to integrate natural language interaction, for which transformer-based user simulation approaches could be helpful in creating sophisticated models for more complex use cases. Nonetheless, our approach is among the first to incorporate user, task, and system information for modeling a user's task and trusting behavior during mixed-initiative interaction with a proactive dialog agent. Thus, this work is an important step towards enabling socially responsible and task-effective HAITs.
|
2307.08912 | CONTRACTFIX: A Framework for Automatically Fixing Vulnerabilities in
Smart Contracts | The increased adoption of smart contracts in many industries has made them an
attractive target for cybercriminals, leading to millions of dollars in losses.
Thus, deploying smart contracts with detected vulnerabilities (known to
developers) is not acceptable, and fixing all the detected vulnerabilities is
needed, which incurs high manual labor cost without effective tool support. To
fill this need, in this paper, we propose ContractFix, a novel framework that
automatically generates security patches for vulnerable smart contracts.
ContractFix is a general framework that can incorporate different fix patterns
for different types of vulnerabilities. Users can use it as a security fix-it
tool that automatically applies patches and verifies the patched contracts
before deploying the contracts. To address the unique challenges in fixing
smart contract vulnerabilities, given an input smart contract, ContractFix conducts
our proposed ensemble identification based on multiple static verification
tools to identify vulnerabilities that are amenable for automatic fix. Then,
ContractFix generates patches using template-based fix patterns and conducts
program analysis (program dependency computation and pointer analysis) for
smart contracts to accurately infer and populate the parameter values for the
fix patterns. Finally, ContractFix performs static verification that guarantees
the patched contract is free of vulnerabilities. Our evaluations on $144$ real
vulnerable contracts demonstrate that ContractFix can successfully fix $94\%$ of the
detected vulnerabilities ($565$ out of $601$) and preserve the expected
behaviors of the smart contracts. | Pengcheng, Peng, Yun, Qingzhao, Tao, Dawn, Prateek, Sanjeev, Zhuotao, Xusheng | 2023-07-18T01:14:31Z | http://arxiv.org/abs/2307.08912v2 | # ContractFix: A Framework for Automatically Fixing Vulnerabilities in Smart Contracts +
###### Abstract
The increased adoption of smart contracts in many industries has made them an attractive target for cybercriminals, leading to millions of dollars in losses. Thus, deploying smart contracts with detected vulnerabilities (known to developers) is not acceptable, and fixing all the detected vulnerabilities is needed, which incurs high manual labor cost without effective tool support.
To fill this need, in this paper, we propose ContractFix, a novel framework that automatically generates security patches for vulnerable smart contracts. ContractFix is a general framework that can incorporate different fix patterns for different types of vulnerabilities. Users can use it as a security "fix-it" tool that automatically applies patches and verifies the patched contracts before deploying the contracts. To address the unique challenges in fixing smart contract vulnerabilities, given an input smart contract, ContractFix conducts our proposed ensemble identification based on multiple static verification tools to identify vulnerabilities that are amenable for automatic fix. Then, ContractFix generates patches using template-based fix patterns, and conducts program analysis (program dependency computation and pointer analysis) for smart contracts to accurately infer and populate the parameter values for the fix patterns. Finally, ContractFix performs static verification that guarantees that the patched contract is free of vulnerabilities. Our evaluations on \(144\) real vulnerable contracts demonstrate that ContractFix can successfully fix \(94\%\) of the detected vulnerabilities (\(565\) out of \(601\)) and preserve the expected behaviors of the smart contracts.
Smart Contract
## 1 Introduction
As a paradigmatic application of blockchain [1], smart contracts enable the creation of decentralized general-purpose applications and have received wide adoption [2, 3, 4, 5]. While the correct execution of smart contracts is enforced by the consensus protocol of blockchain, it is challenging to create smart contracts without security vulnerabilities, partly
due to the lack of security knowledge by developers in the new ecosystem of smart contract languages (e.g. Solidity [6]) and platforms (e.g. permissionless blockchains such as Ethereum [2; 7]). Over the past few years, the blockchain community witnessed a number of critical vulnerabilities in smart contracts being exploited by attackers, leading to millions of dollars in losses [8; 9; 10; 11; 12; 13]. For example, the reentrancy attack on TheDAO contract [14] in 2016 resulted in $50M worth of Ether being stolen [12; 15].
Despite considerable research efforts [16; 17; 18; 19; 20; 21; 22] of tool support for detecting vulnerabilities in smart contracts, fixing these vulnerabilities is highly critical (yet lacking effective tool support) for two main reasons. First, unlike other software applications that can be released with known bugs [23], fixing all detected vulnerabilities before deployment is needed, thus incurring high manual labor cost without effective tool support. Such a need is partly due to the fact that smart contracts are immutable after deployment; in other words, deploying smart contracts with detected vulnerabilities (known to developers) is not acceptable. Second, manually fixing a smart contract with multiple detected vulnerabilities is often challenging, and many smart contracts are found to have multiple vulnerabilities (see Sec 2.3). For example, the best practice to avoid reentrancy vulnerabilities is to ensure that all internal state changes are performed before the external call is executed (i.e. the Checks-Effects-Interactions pattern) [24; 15]. Hence, the patch for a _reentrancy vulnerability_ requires (1) reordering multiple statements to ensure that all updates to contract state variables occur before the external call, and (2) creating temporary variables to store the values of these state variables for eliminating data dependencies on the external call (see Fig. 1).
Although various existing techniques of automated program repair [25; 26; 27; 28; 29] can automatically generate patches to fix the given program's bugs, these techniques are often not applicable to effectively fixing vulnerabilities in smart contracts for two main reasons. First, applying existing repair techniques to repair contract vulnerabilities typically requires a comprehensive test suite to assure that all detected vulnerabilities are fixed and no side effects are introduced by the generated patch. Previous work [30] shows that it is highly difficult to create a comprehensive test suite that can defend against all types of exploits. Second, applying existing search-based repair techniques [31; 26; 27; 28; 29; 32] (being mainstream ones) to fix contract vulnerabilities fails to generate patches for some important types of contract vulnerabilities. These techniques explore the search space of repairs based on syntactic mutators, by leveraging search algorithms such as genetic programming or random search. However, the strategies of these techniques are mostly adding conditional checks or replacing a statement with another existing statement, which is insufficient for fixing contract vulnerabilities that require temporary variable creations and statement reordering (e.g. fixing reentrancy vulnerabilities). Although one can simply adapt these techniques to include more complex fixing strategies, doing so tends to result in an exponential expansion of the search space [31; 33; 34], making patch generation ineffective.
To effectively fix vulnerabilities for smart contracts, in this paper, we propose ContractFix, which (1) automatically detects vulnerabilities in a smart contract, (2) applies patches to the multiple detected vulnerabilities, and (3) verifies the patched contract before the contract deployment. _This is the first end-to-end framework that ensembles detection, patching, and verification to fully automate the process of fixing vulnerabilities._ In particular, ContractFix is built upon our novel program analysis infrastructure that is specially optimized for Solidity, enabling ContractFix to _support fix strategies for different vulnerabilities and verify the correctness of the patched contracts_. In our work, the current instantiation of ContractFix focuses on fixing four major types of vulnerabilities: (1) _Reentrancy_, which allows the execution to re-enter a non-recursive function before its termination; (2) _MissingInputValidation_, which uses default values for function arguments; (3) _LockedEther_, which relies on other libraries to transfer ethers; (4) _UnhandledException_, which mishandles the raised exceptions.
ContractFix is powered by three innovative designs. First, to avoid the later high cost of searching for patches for detected vulnerabilities that are false positives or not amenable for automatic fix (e.g. handling external method calls without source code), ContractFix _synergistically combines multiple static verification tools [18; 35; 36] with
Figure 1: Example patch for _Reentrancy_ vulnerability
post-processing_. In particular, ContractFix first applies these static verification tools to detect vulnerability candidates and adopts majority voting [37] to determine which candidates are more likely to be real vulnerabilities rather than false-positive ones. ContractFix then conducts post-processing to extract the required information from the reported vulnerabilities (e.g. identifying the types of the data dependencies for reentrancy vulnerabilities in Sec. 3.2.3) and filters out candidates that are not amenable for automatic fix.
Second, to address the space explosion during the search for target patches and preserve expected contract behaviors, ContractFix generates patches using template-based fix patterns [38], conducting _static program analysis_ to accurately infer variable values from the contract program under analysis without the need for searching a huge repair space. Most smart contracts restrain the use of references at the language level (e.g. Solidity limits references to specific types), enabling our static analysis techniques to compute precise program dependencies for generating complex patches such as moving statements without violating data dependency constraints (Sec. 3.2.1). Particularly, our program analysis allows ContractFix to support fix patterns with different performance overheads. For example, ContractFix supports both adding global locks and moving statements to fix reentrancy vulnerabilities, and prefers moving statements as the resulting program incurs a much lower gas cost (\(5\) vs. \(25000\)).
Third, ContractFix reapplies the static verification techniques used to detect the vulnerabilities on the patched smart contract, and verifies that the detected security vulnerabilities are eliminated in the patched smart contract. In this way, not only can our template-based fix patterns with program analysis support guarantee that the patched smart contract preserves the expected contract behaviors, but the static verification techniques also ensure the elimination of the patched vulnerabilities.
This paper makes the following major contributions:
* We propose a novel framework, named ContractFix, that is the **first work** to (1) leverage the synergy of multiple static verification tools to detect vulnerabilities in a smart contract, (2) generate source code patches for the contract, and (3) perform static verification to verify the patched contract, ensuring the elimination of the vulnerabilities and preserving the expected contract behaviors.
* We propose a novel set of program analysis techniques that extract variable values from smart contracts to generate patches based on the fix patterns for four major types of vulnerabilities.
* We conduct an evaluation on \(50\) contracts (\(20,510\) LOC) selected from a widely used dataset [39] of smart contracts with injected vulnerabilities and \(94\) contracts (\(120,894\) LOC) selected among \(4,640\) real smart contracts with the largest number of transactions from Etherscan [40]. The results show that the majority voting scheme is highly precise in detecting vulnerabilities, and ContractFix changes \(7.97\) lines on average to successfully generate patches for \(565\) out of \(601\) vulnerabilities, _achieving a high success rate_ (\(>94\%\)).
* We crawl \(\sim 125,000\) transactions from Etherscan [40] and replay these transactions on both the original contracts and the patched contracts. The results show that the patched contracts preserve the original contract functionalities, and the increases of the gas caused by the extra security checks are negligible (\(\sim\$0.000027\)).
* We make a prototype implementation of ContractFix and the evaluation results publicly available [41].
## 2 Background and Empirical Study
### Smart Contract and Ethereum
The very first blockchain, Bitcoin [1], which supports limited scripting [42] for its transactions, can already run simple smart contracts such as freezing funds until a time stamp in the future [42] and decentralized lotteries [43].
Ethereum [7] and other blockchains (e.g. Hyperledger [44] and Corda [45]) support general-purpose computation for smart contracts, and thus it is far less complicated to build a much wider range of decentralized applications (Dapps). In Ethereum, the Ethereum Virtual Machine (EVM) is a virtual machine designed as the runtime environment for smart contracts.
### Vulnerabilities in Smart Contracts
Recently, an increasing number of high-profile attacks resulting in huge financial losses have been reported. We next illustrate a list of representative vulnerabilities [46].
**Reentrancy** In July 2016 [12], a fault in TheDAO contract allowed an attacker to steal $50M. The atomicity and sequentiality of transactions may make developers believe that it is impossible to re-enter a non-recursive function before its termination. However, this belief is not always true for smart contracts.
Fig. 2 shows an exploit of the _Reentrancy_ vulnerability. First, the _attack()_ function in the attacker contract is called, causing the attacker contract to deposit some ether in the victim contract and then invoke the victim's vulnerable _refund()_ function. Then, the _refund()_ function sends the deposited ethers to the attacker contract (Line 9 in A), also triggering the unnamed fallback function in the attacker contract (Line 9 in B). Next, the fallback function again calls the _refund()_ function in the victim contract (Line 11 in B). Since the victim contract updates the _userBalances_ variable (Line 10 in A) after the ether transfer call, _userBalances_ remains unchanged when the attacker re-enters the _refund()_ function, and thus the balance check (Line 8 in A) can still be passed. As a consequence, the attacker can repeatedly siphon off ethers from the victim contract and exhaust its balance.
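To make the exploit concrete, the following is a minimal Solidity sketch of the pattern described above; the contract and variable names are hypothetical and the logic is simplified relative to the actual contracts in Fig. 2.

```solidity
// Minimal sketch of the reentrancy exploit pattern (hypothetical names).
pragma solidity ^0.4.24;

contract Victim {
    mapping(address => uint256) public userBalances;

    function deposit() public payable {
        userBalances[msg.sender] += msg.value;
    }

    function refund() public {
        uint256 amount = userBalances[msg.sender];
        require(amount > 0);              // balance check: passes again on re-entry
        msg.sender.call.value(amount)();  // external call hands control to the caller
        userBalances[msg.sender] = 0;     // state update happens after the external call
    }
}

contract Attacker {
    Victim victim;
    uint256 count;

    constructor(address v) public {
        victim = Victim(v);
    }

    function attack() public payable {
        victim.deposit.value(msg.value)();
        victim.refund();
    }

    // Fallback: triggered by the victim's ether transfer; it re-enters refund()
    // before userBalances is zeroed (bounded here to keep the sketch simple).
    function () public payable {
        if (count < 2) {
            count++;
            victim.refund();
        }
    }
}
```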
**Locked Ether** In 2017, a vulnerable contract led to the freezing of millions of dollars [9]. The reason is that this contract relies on another library contract to withdraw its funds (using _delegatecall_). Unfortunately, a user accidentally removed the library contract from the blockchain (using the _kill_ instruction), and thus the funds in the wallet contract could not be extracted anymore.
**Unhandled Exception** In Solidity, there are multiple situations where an exception may be raised. Unhandled exceptions can affect the security of smart contracts. In February 2016, a vulnerable contract [8] forced the owner to ask the users not to send ether to the owner because of an unhandled exception in the _call_ instruction.
### Empirical Study
As static-verification-based solutions for vulnerability detection use different security properties to detect different types of vulnerabilities, we need to study the prevalence of vulnerabilities so that ContractFix's patch generation strategies can target the most effective properties. Fig. 3 shows the vulnerability distribution obtained by applying Securify to a set of \(4,640\) smart contracts (with the most transactions) collected from Etherscan. In summary, there are \(33,516\) vulnerabilities and each contract contains \(7.22\) vulnerabilities on average. The results show that vulnerabilities are commonly found in smart contracts and multiple vulnerabilities may often exist in one contract, making manual fixing labor intensive and error prone. This observation motivates the design of automated solutions to generate patches to fix vulnerabilities.
Based on the vulnerability distribution in Fig. 3, we select the types of vulnerabilities to include in ContractFix's fixing scope. The most common types of vulnerabilities are _MissingInputValidation_ and _UnrestrictedWrite_ (count \(>13,000\)). As _MissingInputValidation_ can be fixed via source code transformation, ContractFix includes it in its fixing scope. For _UnrestrictedWrite_, the security property used to detect the vulnerabilities is too strict, and most of the detected vulnerabilities are false positives. Thus, ContractFix excludes _UnrestrictedWrite_.
Figure 3: Vulnerabilities detected by Securify for 4,640 smart contracts collected from Etherscan
Figure 2: An exploit of _Reentrancy_ vulnerability
Such uncertainty is inherent in blockchain execution platforms and cannot be fixed by modifying the smart contract source code. Fixing them needs to change the operational semantics of Ethereum, requiring all the clients in the Ethereum network to upgrade.
As doing so is not a practical solution, we exclude these types of vulnerabilities from ContractFix's fixing scope. Furthermore, we include _Reentrancy_, _LockedEther_, and _UnhandledException_. These types of vulnerabilities are commonly found in smart contracts and their fixing strategies are different from each other, making them good candidates to demonstrate the effectiveness of ContractFix in both simple and complex patches. The four types of vulnerabilities that ContractFix focuses on account for \(53.0\%\) of the total vulnerabilities.
## 3 Design Of ContractFix
### Phase I: Vulnerability Detection
In this phase, ContractFix first conducts static verification to detect vulnerability candidates. Based on our empirical study (Sec. 2.3), ContractFix conducts static verification that checks the security properties for identifying four types of vulnerabilities: _Reentrancy_, _MissingInputValidation_, _LockedEther_, and _UnhandledException_. Static verification tools [18, 16, 17] adopt over-approximation analysis, which may produce false-positive violations. To address this issue, ContractFix combines three static verification tools: _Securify_ [18], _Slither_ [35], and _Smartcheck_ [36] to detect vulnerabilities, and leverages the majority voting mechanism to improve the detection accuracy.
As some of the detected vulnerabilities are not amenable for automatic fix (e.g. handling external method calls without source code), ContractFix focuses on the detected vulnerabilities that have severe security impacts based on our motivating study (Sec 2) and are amenable for automatic fix:
* _Reentrancy_: These vulnerabilities can be detected by _Slither_ and _Securify_. However, for some detected vulnerabilities, the return value of an external function call is used to control whether to update the state variables. As it is almost impossible to verify the behavior of external function calls, ContractFix cannot generate a patch properly. Also, some updates of the state variables depend on timestamps, and any patch that moves the updates will cause semantic changes. Thus, ContractFix ignores these kinds of reentrancy vulnerabilities.
* _MissingInputValidation_: This type of vulnerability can be detected by _Securify_. Except for function arguments of the _address_ type, function arguments of other data types (e.g. integers) can have a wide range of values and it requires dynamic analysis to determine the runtime values. Thus, ContractFix only fixes the _MissingInputValidation_ vulnerabilities that concern _address_-type arguments.
* _LockedEther_: This type of vulnerability can be detected by _Slither_, _Securify_, and _Smartcheck_. Some contracts are used by developers as libraries, which are not assumed to receive ether; for these contracts, it is not reasonable to require a function that can send out Ether. Thus, ContractFix will not fix the _LockedEther_ violations for these contracts.
* _UnhandledException_: This type of vulnerability can be detected by _Slither_, _Securify_, and _Smartcheck_. When developers assign the return value of the Ether transfer function _send()_ to a variable, they often provide more code to handle the exceptions. Thus, ContractFix fixes the violations where the developer does not process the return value of _send()_.
To identify these types of vulnerabilities that are amenable for automatic fix, ContractFix performs post-processing on the reported vulnerabilities based on the syntactic analysis on the AST and intra-procedural control and data flow
Figure 4: The Architecture of ContractFix
analysis. For example, for a reported reentrancy vulnerability, detecting whether an external function call is used to control the execution of a state variable update will require both control and data flow analysis.
### Phase II: Patch Generation
ContractFix performs static program analysis to extract the context information of the detected vulnerabilities, and supports fix patterns in three granularity levels: _statement level_, _method level_, and _contract level_. Tab. 1 summarizes the fix patterns supported by ContractFix. We can see that ContractFix can support a wide range of fix patterns, while existing work often supports one or two patterns [15; 22]. For example, sGUARD [22] supports only adding locks for fixing reentrancy vulnerabilities, while ContractFix additionally supports the fix pattern by reordering specific statements. Alg. 1 shows the patch generation algorithm of ContractFix.
#### 3.2.1 Program Analysis Infrastructure
ContractFix customized static program analysis techniques to extract the necessary context information for vulnerabilities in smart contracts, including involved variables and their control and data dependencies. We next describe the static program analysis techniques employed by ContractFix.
**Intra-Procedural Data-flow Analysis** ContractFix performs an intra-procedural data-flow analysis to collect the program points (i.e. statements) where a variable is created, read, modified, and deleted [50; 51]. Our intra-procedural data-flow analysis starts with building the method's control flow graph (CFG), where each statement is considered as a single basic block for the convenience of dependency analysis. It is worth mentioning that modifiers assigned to methods in smart contracts can be executed both before and after the execution of the method body and the parameters of methods can be used in the modifiers. Thus, the control flow of a method follows this sequence: modifier, method body, modifier. Once the CFG is built, existing data-flow analysis is employed to build the data-flow graph (DFG) for the method. Note that fixing _Reentrancy_ requires inter-procedural analysis, which is achieved by combining the method summaries with the intra-procedural analysis.
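As a small illustration of this control flow (a hypothetical contract, not taken from the paper), code placed before and after the `_;` placeholder in a Solidity modifier wraps the method body, which is why the CFG follows the sequence modifier, method body, modifier:

```solidity
// Hypothetical sketch: the modifier's prologue runs before the method body
// and its epilogue runs after it.
pragma solidity ^0.4.24;

contract ModifierOrder {
    uint256 public depth;

    modifier tracked() {
        depth += 1;   // executed before the method body
        _;
        depth -= 1;   // executed after the method body
    }

    function doWork() public tracked {
        // method body runs between the two modifier halves
    }
}
```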
**Pointer Analysis for Solidity** Pointer analysis is known to be expensive and is required for precise analysis (e.g. flow-sensitive and context-sensitive analyses). As smart contract languages such as Solidity restrain the usage of references, existing pointer analysis can be easily adapted for obtaining accurate point-to information for the contracts. In Solidity (\(\geq\)v0.6.1), there are three locations where a variable can be stored:
* _memory_: the variable in memory is not persistent and its lifetime is limited to an external method call.
* _storage_: the variable in storage is persistent and its lifetime is the same as the contract's.
* _calldata_: this location is only available for external method call parameters.
Solidity's reference types include \(struct\), \(array\), and \(mapping\). Fig. 5 shows an example contract to illustrate reference creations. There are only two situations where a variable of these types can be a reference: (1) assignments from a variable in storage to a local variable in storage create a reference (Line 6); (2) assignments from a variable in memory to another variable in memory create a reference (Line 7).
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Type** & **Level** & **Fix Pattern** \\ \hline
**Reentrancy** & Method & Lock [15; 22], Reorder statements [15] \\
**MissingInputValidation** & Method & Require check [47] \\
**LockedEther** & Contract & Withdraw function [48] \\
**UnhandledException** & Statement & Require check [49] \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of fix patterns
Figure 5: Reference creations in Solidity
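Fig. 5 itself is not reproduced here; the following hypothetical sketch illustrates the two reference-creating situations (and one copying assignment for contrast). The names `data`, `m`, and the locals are illustrative only.

```solidity
// Hypothetical sketch of reference creations in Solidity (>= 0.6.1).
pragma solidity ^0.6.1;

contract References {
    uint256[] data;                       // state variable resides in storage

    function refs(uint256[] memory m) public {
        require(m.length > 0);
        uint256[] storage s = data;       // storage -> local storage: a reference
        s.push(1);                        // visible through `data`
        uint256[] memory c = data;        // storage -> memory: a copy, not a reference
        c[0] = 7;                         // does not affect `data`
        uint256[] memory n = m;           // memory -> memory: a reference
        n[0] = 7;                         // visible through `m`
    }
}
```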
To determine which pointer analysis algorithm to use, we analyzed \(6,420\) real-world smart contracts to find how often the reference type is used in Solidity programs. We scanned all assignments among these contracts and checked if there are variables that meet the definitions of reference types mentioned above. In summary, we found \(3,210\) reference variables among \(199,724\) assignments in \(6,420\) contracts. That is, on average, only \(1.6\%\) of the assignments use reference types.
Thus, ContractFix adapts a flow-insensitive and context-insensitive pointer analysis by extending its point-to model to incorporate data locations and reference creations as described above, and computes the point-to information for each variable in a contract [50].
**Inter-Procedural Analysis via Method Summary**
To enable inter-procedural analysis, for each method, ContractFix builds a method summary that computes the side effects of the state variables, i.e. whether the state variables are modified in each method. Method summaries (or called function summaries) have been used to build inter-procedural program analysis in a modular way [52, 53, 54], which can be easily combined with other intra-procedural program analysis to enable inter-procedural analysis.
**Data Dependency Classification**
Based on the inter-procedural analysis, ContractFix further classifies data dependency into the following types:
* _Flow Dependence_ or _Read After Write (RAW):_ a statement \(s_{2}\) is flow dependent on \(s_{1}\) if and only if \(s_{1}\) modifies a resource that \(s_{2}\) reads and \(s_{1}\) precedes \(s_{2}\) in execution.
* _Anti-Dependence_ or _Write After Read (WAR):_ a statement \(s_{2}\) is antidependent on \(s_{1}\) if and only if \(s_{2}\) modifies a resource that \(s_{1}\) reads and \(s_{1}\) precedes \(s_{2}\) in execution.
* _Output Dependence_ or _Write After Write (WAW)_: a statement \(s_{2}\) is output dependent on \(s_{1}\) if and only if \(s_{1}\) and \(s_{2}\) modify the same resource and \(s_{1}\) precedes \(s_{2}\) in execution.
* _Input Dependence_ or _Read After Read (RAR):_ a statement \(s_{2}\) is input dependent on \(s_{1}\) if and only if \(s_{1}\) and \(s_{2}\) read the same resource and \(s_{1}\) precedes \(s_{2}\) in execution.
The classification of the data dependency will later be used by the fix patterns (e.g. fixing _Reentrancy_ in Sec. 3.2.3) to guide the patch generation.
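As a toy illustration (a hypothetical contract), consider three statements touching the same storage variable:

```solidity
// Hypothetical illustration of the dependency types defined above.
pragma solidity ^0.4.24;

contract Deps {
    uint256 x;
    uint256 y;

    function f() public {
        x = 1;    // s1: writes x
        y = x;    // s2: reads x  -> flow dependence (RAW) of s2 on s1
        x = 2;    // s3: writes x -> anti-dependence (WAR) of s3 on s2,
                  //                 output dependence (WAW) of s3 on s1
    }
}
```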
#### 3.2.2 Fix Patterns in Statement Level
Vulnerabilities at this level are usually caused by misuse of individual statements, such as _UnhandledException_ that forgets to check method return values.

**Fixing _UnhandledException_** The fix pattern (Lines 1-6 in Alg. 1) for this vulnerability is to check the return value of each coin transfer function: _send()_ and _value()_. The type of their return values is boolean because they indicate whether the transfers succeed. Transactions in which these transfers fail must be reverted to notify the caller, ensuring the coherence between the contract states and the transactions. Thus, to fix this vulnerability, ContractFix adds a _require()_ function call to validate the return values of _send()_ and _value()_ (Line 4 in Alg. 1) and ensures that their executions are successful before completing the whole transaction.
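A hypothetical before/after sketch of this pattern (the contract and function names are illustrative, and the shape is simplified relative to Alg. 1):

```solidity
// Hypothetical sketch of the UnhandledException fix pattern.
pragma solidity ^0.4.24;

contract Payout {
    // Before: the boolean return value of send() is silently discarded.
    function payBefore(address to, uint256 amount) public {
        to.send(amount);
    }

    // After: the patch wraps the transfer in require(), so the whole
    // transaction reverts when the transfer fails.
    function payAfter(address to, uint256 amount) public {
        require(to.send(amount));
    }
}
```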
#### 3.2.3 Fix Patterns in Method Level
Vulnerabilities at this level are usually caused by missing parameter checks or misused method calls in a method. The two types of vulnerabilities at this level are _Reentrancy_ and _MissingInputValidation_.

**Fixing _Reentrancy_** The preferred fix pattern for _Reentrancy_ (Lines 8-17 in Alg. 1) is to move all writes to storage ahead so that there is no write to storage after an external method call or a coin transfer call, such as the patch shown in Fig. 1. ContractFix first identifies the method that has the vulnerability (Lines 9-10), and computes the method summary, the pointer information, and the DFG of the method (Lines 11-13). ContractFix then identifies the writes that result in the vulnerability (Line 14), and further computes the data dependencies that are used to move the writes (Line 15). In particular, if any of these writes (represented as \(w\)) has data dependencies on the variables used by the external calls (represented as \(c\)), depending on the type of the dependencies, ContractFix may eliminate such dependencies without changing the semantics before moving the writes ahead:
* For _flow dependence_ from \(w\) to a statement \(s\), ContractFix creates a temporary variable to store the value of the variables in \(w\) before they are written and replaces the same variables in \(c\) with these temporary variables, so that \(w\) does not impact \(c\) if \(w\) is moved ahead.
* For _anti-dependence_ and _output dependence_ from \(w\) to a statement \(s\), ContractFix moves both \(w\) and \(s\) ahead if \(s\) is not the external call.
* For _input dependence_ from \(w\) to a statement \(s\), ContractFix simply moves \(w\) ahead since there are no side effects in this type of dependency.
However, when there is an _anti-dependence_ or _output dependence_ from \(w\) to \(c\), the data dependencies cannot be eliminated, because the updates in \(w\) must wait for the execution results of \(c\), and so \(w\) cannot be moved ahead of \(c\). In this case, ContractFix adopts another, more expensive fix pattern, which declares a new global _bool_ value as a lock to limit the method invocations [22]. This lock disallows an unexpected re-entrant call if the previous call has not finished its execution. As the modification of a global variable is much more expensive than the declaration of local variables in smart contracts, we find that for each transaction, the global lock increases the gas cost by \(\sim 25000\) for the function, while declaring temporary variables and moving statements increase the gas cost by only \(5\).
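A hypothetical sketch of this fallback fix pattern is shown below; writing the `locked` storage slot on each invocation is what causes the roughly \(25000\) extra gas noted above.

```solidity
// Hypothetical sketch of the global bool lock fix pattern (names illustrative).
pragma solidity ^0.4.24;

contract LockedRefund {
    mapping(address => uint256) public userBalances;
    bool private locked;

    function refund() public {
        require(!locked);                 // reject re-entrant invocations
        locked = true;
        uint256 amount = userBalances[msg.sender];
        require(amount > 0);
        msg.sender.call.value(amount)();
        userBalances[msg.sender] = 0;
        locked = false;
    }
}
```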
**Fixing _MissingInputValidation_** The fix pattern for _MissingInputValidation_ is to add conditional checks to validate the method parameters at the beginning of the method body (Lines 18-25 in Alg. 1). To patch this vulnerability, ContractFix first identifies the method parameters that are not validated (Line 21 in Alg. 1). To do so, ContractFix checks whether the parameters appear in any _require()_ method, which is often used in Solidity to perform validation. This checking can be easily done by using the DFG of the method. ContractFix inserts validations for the other unchecked parameters:
* For those parameters whose type is address, ContractFix adds a common validation to check whether this address is \(0x0\) because an address with value \(0x0\) is invalid.
* For parameters whose types are integers, ContractFix adds a safe math library to prevent integer overflow and underflow when doing calculations.
* For parameters whose types are bytes or self-defined, ContractFix may not add proper validations because it lacks sufficient contextual information about their valid ranges.
Consider the contract in Fig. 6. The parameter _vnd_ is validated by the highlighted statement (Line 6) and thus there is no need for further validation. For the parameters _src_ and _dst_, ContractFix adds a _require()_ method to validate their values (Lines 8-9).
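Fig. 6 is not reproduced here; the following is a hypothetical reconstruction of its shape, where only the two inserted _require()_ checks reflect the described patch and the rest of the body is illustrative:

```solidity
// Hypothetical sketch of the MissingInputValidation fix for address arguments.
pragma solidity ^0.4.24;

contract Token {
    mapping(address => uint256) balances;

    function transferFrom(address src, address dst, uint256 vnd) public {
        require(vnd > 0);                 // existing check validating vnd (illustrative)
        require(src != address(0));       // inserted: a 0x0 address is invalid
        require(dst != address(0));       // inserted: a 0x0 address is invalid
        balances[src] -= vnd;
        balances[dst] += vnd;
    }
}
```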
When all the identified vulnerabilities are patched, ContractFix converts its transformed AST back to source code and outputs the patched contract to Phase III.
### Phase III: Patch Verification
In this phase, ContractFix reapplies the static verification tools to ensure that the detected vulnerabilities are eliminated while the expected behaviors are preserved in the patched contract. Once a patched contract candidate is generated, ContractFix applies static verification again on the patched contract and checks whether the vulnerabilities reported in Phase I are eliminated. If the static verification tool no longer reports the same vulnerabilities, the patched contract is considered to pass the static verification.
## 4 Evaluation
We implemented ContractFix in JavaScript (\(\sim\)5000 lines of code). We adopted three state-of-the-art vulnerability detection tools, _Securify_ [18], _Slither_ [35], and _Smartcheck_ [36], as the static verification tools for ContractFix.
We built the parsing and transformation modules upon an open source Solidity parser [55] built on top of ANTLR4 [56]. We evaluated the effectiveness of ContractFix in fixing vulnerabilities in real-world smart contracts. Specifically, our evaluations aim to answer the following research questions.
* **RQ1:** How effective is ContractFix in generating successful patches for vulnerable contracts?
* **RQ2:** How effective is the synergy of static verification and post-processing in detecting vulnerabilities, compared to static verification only?
* **RQ3:** How efficient is ContractFix in generating patches?
### Evaluation Setup
Our evaluation datasets have \(144\) contracts (\(141,404\) lines of Solidity code), and the evaluations are conducted on a server with Intel(R) Xeon(R) CPU E5-2637 v4 (3.50GHz) and 256GB RAM.
\begin{table}
\begin{tabular}{r r r r r} \hline
**Type** & **Securify** & **Slither** & **Smartcheck** & **Majority** \\ \hline
**Reentrancy** & 107 & 461 & 0 & 23 \\
**MissingInputValidation** & 979 & 0 & 0 & 131 \\
**LockedEther** & 184 & 100 & 43 & 137 \\
**UnhandledException** & 83 & 129 & 36 & 60 \\
**Total** & 1353 & 690 & 79 & 351 \\ \hline \end{tabular}
\end{table}
Table 2: Vulnerabilities detected by each detector
Figure 6: Patch for _MissingInputValidation_ vulnerability
**Injected Vulnerabilities**. We use a widely adopted dataset [39] that contains smart contracts with various types of injected known vulnerabilities (e.g. integer overflow/underflow, reentrancy, and timestamp-dependency). We choose \(50\) contracts with injected reentrancy vulnerabilities as other types of vulnerabilities in the dataset are not the focus of ContractFix. These contracts provide the detailed information on the injected vulnerabilities and thus we can directly evaluate our patch generation without applying vulnerability detection. **Real Vulnerabilities**. We collect real smart contracts based on the addresses obtained from BigQuery [57], and select the top \(10,000\) contracts sorted by the number of transactions. We further download the source code from Etherscan [40] based on the addresses, and exclude the contracts whose source code is not available. We then apply static verification on these downloaded contracts and exclude the ones that cannot be analyzed by the Solidity compiler due to version incompatibility. In total, we obtain \(4,640\) contracts. For each contract to be used in our evaluation, we inspect the source code to confirm vulnerabilities, verify the patches, and prepare test cases that exercise the vulnerable behaviors of the original contracts, which requires non-trivial manual efforts. Within our affordable efforts, we select \(94\) vulnerable contracts that have the four types of vulnerabilities described in Sec. 2.3, as shown in Tab. 3. The test cases are shown in Tab. 4. If a contract has several types of vulnerabilities, we classify it into the _Mixed_ category.
**Semantic Validation**. To ensure that the patched contracts preserve the expected behaviors, semantic validation is conducted using a smart contract testing platform named Truffle [58]. Truffle allows us to deploy a contract on a local blockchain powered by Ganache and make a method call or issue a transaction. To obtain the test cases for checking contracts' expected behaviors, we made use of the public availability of all transactions on the blockchain: we downloaded the existing transactions of a contract and extracted the input values to create test cases because these transactions should reflect the functionality of a contract. But these constructed test cases lack test oracles to assert the expected results. To address this problem, we adopt the idea of multiple implementation testing [59; 60]: we consider the original contract and the patched contract as different implementations of the same requirements and assert that the states of both contracts should be the same after the testing. Specifically, ContractFix first deploys both the patched contract and the original contract on Truffle with the same initial states. Then, ContractFix runs the test cases and compares the states of both contracts. If the states of both contracts remain the same after the testing, ContractFix considers that the expected behaviors are preserved in the patched contract.
### RQ1: Effectiveness in Patch Generation
We run ContractFix on each of the contracts in our testing set to generate a patched contract. We then manually examine the patched contract and verify its correctness. We also examine why ContractFix fails to generate patches for certain contracts.
**Overall Results** Tab. 4 shows the patch generation results. On average, each contract in the testing set has \(4.17\) vulnerabilities. We consider a patch successful if the vulnerability to fix cannot be detected by the static verification tools in the patched contract and the patched contract passes the corresponding test cases, including the test cases written to exercise the vulnerable behaviors (Tab. 4) and the test cases built from the public-transaction records (Tab. 5). For contracts with injected vulnerabilities, we perform manual verification since these contracts lack public-transaction records for writing test cases. Overall, ContractFix successfully generates \(565\) patches for \(601\) vulnerabilities, achieving a very high success rate (\(94\%\)). These results indicate that the combination of template-based fix patterns and static program analysis is very effective in generating successful patches.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Type** & **Total** & **Success** & **Fail** & **Suc. Rate** & **Test Case** \\ \hline
**Reentrancy** & 23 & 21 & 2 & 0.91 & 73 \\
**MissingInputValidation** & 131 & 128 & 3 & 0.98 & 68 \\
**LockedEther** & 137 & 136 & 1 & 0.99 & 60 \\
**UnhandledException** & 60 & 60 & 0 & 1.00 & 245 \\
**Injected** & 250 & 220 & *30 & 0.88 & - \\
**Total** & 601 & 565 & 36 & 0.94 & 446 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effectiveness of ContractFix in patch generation
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Type** & **Contract Count** & **Lines of Code** \\ \hline
**Reentrancy** & 17 & 8,522 \\
**MissingInputValidation** & 21 & 26,102 \\
**LockedEther** & 20 & 17,277 \\
**UnhandledException** & 19 & 30,349 \\
**Mixed** & 17 & 38,644 \\
**Injected** & 50 & 20,510 \\
**Total** & 144 & 141,404 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Vulnerable contracts used in evaluation
**Patch Statistics** For the real contracts, on average, ContractFix needs to change \(3.7\), \(17.47\), \(3.71\), \(2.26\), and \(12.71\) lines of code to patch a contract with _Reentrancy_ vulnerabilities, _LockedEther_ vulnerabilities, _MissingInputValidation_ vulnerabilities, _UnhandledException_ vulnerabilities, and multiple types of vulnerabilities (i.e. _Mixed_), respectively. Without considering the type of vulnerabilities, ContractFix needs to change \(7.97\) lines of code on average to patch a contract. For the contracts with injected vulnerabilities, ContractFix needs to change \(4\) lines of code to patch the _Reentrancy_ vulnerabilities.
**Transaction Replay** To show that the patch preserves the original functionality of the smart contracts, we chose 25 contracts (the top 5 contracts with the most transactions in each vulnerability type) and crawled \(5000\) transactions for each of them (\(125,000\) transactions in total). We then deployed their original version and the patched version in the local testing environment (e.g. Ganache), and issued transactions to execute the contracts using the crawled transaction data. Tab. 5 shows the results of the transaction replay. Column "Status Diff" shows the number of different result states. Column "Gas Diff" shows the average gas usage differences. We can see that the number of contracts that have different result states is \(0\) in every category. This result shows that _the patches generated by ContractFix preserve the contracts' original functionality_. The results also show that gas usage has only a slight increase (the maximum being \(188.84\)). As each unit of gas equals \(10^{-9}\) Ether [61], which is \(\sim\$0.000027\), _the cost of the extra gas is negligible_.
**Failed Patches** For real contracts, ContractFix fails to generate patches for \(2\) _Reentrancy_ vulnerabilities, which is mainly due to the stack depth limit of the EVM. When fixing a _Reentrancy_ vulnerability, ContractFix usually needs to create temporary variables. Though infrequent, such behavior may trigger EVM exceptions about stack depth, causing the patches to fail the validation. This case may be improved by requesting the developers to limit their call stack, which will also help defend against the stack depth attack [62]. The main reason why fixing _MissingInputValidation_ and _LockedEther_ may fail lies in the limitation of our source code generator, which adopts the pre-order traversal of the patched AST to generate patches. However, when a method call is used as the argument for a function modifier, the generated source code will have syntax errors. This case can be fixed by improving the code generation mechanism.
For contracts with injected vulnerabilities, ContractFix fails to generate patches for \(30\) contracts as our dataflow analysis considers the injected vulnerabilities cannot be fixed by moving statements. But _all these \(30\) contracts can be fixed by ContractFix by adopting the global bool value as a lock to limit the method invocations_.
**Comparison to Existing Works**. As there are no existing works that combine detection, patching, and verification as ContractFix does, we cannot directly compare ContractFix with existing works. Thus, we compare only the patching step in our evaluations. As shown in Tab. 1, ContractFix provides the fix patterns (global lock for _Reentrancy_ and owner check for _LockedEther_) that EVMPatch [63] and sGuard [22] support. The results show that ContractFix can generate these patches as effectively as the existing works (91% for _Reentrancy_ and 99% for _LockedEther_) without compromising expected behaviors. Beyond that, ContractFix supports moving statements to fix _Reentrancy_ vulnerabilities with a much lower gas cost than sGuard (5 vs. 25000). EVMPatch cannot support this fix pattern, as binary code loses the source-code semantics and requires extra data analysis to move the statements.
+ var totalUnreleasedTokens_temp = totalUnreleasedTokens;
+ vestingSchedule.principleLockAmount = _principleLockAmount;
+ vestingSchedule.bonusLockAmount = _bonusLockAmount;
+ vestingSchedule.isPrincipleReleased = false;
+ vestingSchedule.isBonusReleased = false;
+ totalUnreleasedTokens = safeAdd(totalUnreleasedTokens, _totalAmount);
+ vestingSchedule.amountReleased = 0;
+ require(token.balanceOf(this) >= safeAdd(totalUnreleasedTokens_temp, _totalAmount));
- require(token.balanceOf(this) >= safeAdd(totalUnreleasedTokens, _totalAmount));
- vestingSchedule.principleLockAmount = _principleLockAmount;
- vestingSchedule.bonusLockAmount = _bonusLockAmount;
- vestingSchedule.isPrincipleReleased = false;
- vestingSchedule.isBonusReleased = false;
- totalUnreleasedTokens = safeAdd(totalUnreleasedTokens, _totalAmount);
- vestingSchedule.amountReleased = 0;
Based on the data-flow analysis, ContractFix finds that there is a WAR dependence between _safeAdd()_ and the writes to _totalUnreleasedTokens_. In this case, ContractFix generates a patch that saves _totalUnreleasedTokens_ before _safeAdd()_ by creating a temporary variable, replaces the parameters of _safeAdd()_ with the temporary variable, and moves all the writes ahead.
### RQ2: Ensemble of Static Verification Tools
In this RQ, we evaluate the effectiveness of ContractFix's ensemble of multiple static verification tools. Tab. 2 shows the vulnerabilities detected by the different static verification tools used by ContractFix.
Column _Majority_ shows the number of vulnerabilities confirmed by at least two static verification tools except for _MissingInputValidation_, because only _Securify_ supports the detection of this vulnerability. The results show that our post-processing filters out the candidates that cannot be fixed (Sec. 3.1). For example, _Securify_ reports 979 _MissingInputValidation_ vulnerabilities and only 131 of them are related to address types. We further manually examine the vulnerabilities detected by the majority voting, and confirm that all of them are true positives, indicating the effectiveness of majority voting in improving the detection performance. For example, _Slither_ reports 129 _UnhandledException_ vulnerabilities, while the majority voting confirms 60 of them. Similarly, _Slither_ misses 37 _LockedEther_ vulnerabilities, but the combination of _Securify_ and _SmartCheck_ finds these 37 vulnerabilities.
We can also see that the combination of post-processing and majority voting addresses the limitations of using only one static verification tool. For example, some security properties used by _Securify_ are too general: for _Reentrancy_ vulnerabilities, _Securify_'s property detects all the writes to storage after an external method call; however, if another external method call is used to determine the execution of the writes to storage, a false positive is reported. Based on the results, _Securify_ reports 26 false positives, which are first reduced by the post-processing to 3 and then by the majority voting to 0. These results demonstrate that static verification combined with post-processing greatly improves the precision of the vulnerability detection, making it feasible and practical to support the patch generation.
### RQ3: Runtime Performance
To understand the performance of ContractFix, we measure the execution time of ContractFix's three phases. Tab. 6 shows the results. Column _Detection_ shows the execution time for vulnerability detection (including pre-processing). Column _Patch_ shows the execution time for patch generation. Column _Validation_ shows the execution time for patch verification using Securify. We exclude the validation using Truffle since it requires manual interactions (e.g. sending transactions). As we can see, ContractFix takes \(1159.58s\) on average to finish the whole process. Without considering the time needed by static verification (i.e. detection and validation), ContractFix only takes \(3.75s\) to fix a contract on average, indicating that ContractFix's light-weight program analysis and patch generation based on template-based fix patterns are very efficient.
## 5 Discussion
**Static Verification** Static verification unavoidably produces false positives. A major reason is that some security properties are too general and cannot describe various specific behaviors (Sec. 3.1). ContractFix addresses this problem by leveraging majority voting to ensemble multiple static verification tools and employs post-processing to filter out vulnerabilities not amenable for automatic fix. Note that the patch generation of ContractFix does not depend on the intermediate information of any detectors, and thus can be integrated with various types of detectors. Post-processing is relatively easy to extend as it focuses on only local context (i.e. mainly intra-procedural analysis), while static verification considers the global context and is difficult to customize. Alternatively, more precise static verification with more flexible security properties can be used, but this direction requires further research efforts and is out of the scope of this paper.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Type** & **Detection (s)** & **Patch (s)** & **Validation (s)** \\ \hline
**Reentrancy** & 641.61 & 12.56 & 595.58 \\
**MissingInputValidation** & 765.61 & 0.44 & 468.17 \\
**LockedEther** & 781.75 & 0.82 & 802.73 \\
**UnhandledException** & 84.07 & 0.33 & 108.28 \\
**Mixed** & 855.05 & 4.62 & 871.44 \\
**AVG** & 586.59 & 3.75 & 569.24 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Runtime performance of ContractFix
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Type** & **Status Diff.** & **Gas Diff.** \\ \hline
**Reentrancy** & 0 & 53.38 \\
**MissingInputValidation** & 0 & 46.99 \\
**LockedEther** & 0 & 7.36 \\
**UnhandledException** & 0 & 27.68 \\
**Mixed** & 0 & 188.84 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Execution results about transaction replay |
2303.14614 | A Golden Decade of Polar Codes: From Basic Principle to 5G Applications | After the pursuit of seventy years, the invention of polar codes indicates
that we have found the first capacity-achieving codes with low-complexity
construction and decoding, which is a great breakthrough of coding theory
in the past two decades. In this survey, we retrospect the history of polar
codes and summarize the advancement in the past ten years. First, the primary
principle of channel polarization is investigated such that the basic
construction, coding method, and classic successive cancellation (SC) decoding
are reviewed. Second, in order to improve the performance of the finite code
length, we introduce the guiding principle and conclude five design criteria
for the construction, design, and implementation of the polar code in the
practical communication system based on the exemplar schemes in the literature.
Especially, we explain the design principle behind the concatenated coding and
rate matching of polar codes in a 5G wireless system. Furthermore, the improved
SC decoding algorithms, such as SC list (SCL) decoding and SC stack (SCS)
decoding, etc., are investigated and compared. Finally, the research prospects
of polar codes for the future 6G communication system are explored, including
the optimization of short polar codes, coding construction in fading channels,
polar coded modulation and HARQ, and the polar coded transmission, namely polar
processing. Predictably, as a new coding methodology, polar codes will shine a
light on communication theory and unveil a revolution in transmission
technology. | Kai Niu, Ping Zhang, Jincheng Dai, Zhongwei Si, Chao Dong | 2023-03-26T03:42:59Z | http://arxiv.org/abs/2303.14614v1 | # A Golden Decade of Polar Codes: From Basic Principle to 5G Applications
###### Abstract
After the pursuit of seventy years, the invention of polar codes indicates that we have found the first capacity-achieving codes with low-complexity construction and decoding, which is a great breakthrough of coding theory in the past two decades. In this survey, we retrospect the history of polar codes and summarize the advancement in the past ten years. First, the primary principle of channel polarization is investigated such that the basic construction, coding method and the classic successive cancellation (SC) decoding are reviewed. Second, in order to improve the performance at finite code lengths, we introduce the guiding principle and conclude five design criteria for the construction, design and implementation of the polar code in the practical communication system based on the exemplar schemes in the literature. Especially, we explain the design principle behind the concatenated coding and rate matching of polar codes in the 5G wireless system. Furthermore, the improved SC decoding algorithms, such as SC list (SCL) decoding and SC stack (SCS) decoding etc., are investigated and compared. Finally, the research prospects of polar codes for the future 6G communication system are explored, including the optimization of short polar codes, coding construction in fading channels, polar coded modulation and HARQ, and the polar coded transmission, namely polar processing. Predictably, as a new coding methodology, polar codes will shine a light on communication theory and unveil a revolution in transmission technology.
**Keywords:** polar codes; channel polarization; successive cancellation decoding; polar coded modulation; polar processing
## I Introduction
Channel coding, also known as error control coding, is not only a basic theory of modern information science, but also a core technique of modern communication systems.
In 1948, C. E. Shannon [1] discovered that there exist channel codes that can be decoded reliably when the code rate is no more than the channel capacity, but he did not provide constructive coding schemes. In the past seventy years, the pursuit of approaching capacity with practical en/decoding complexity has been a central challenge in coding theory [2]. In the 1990s, Turbo codes [3] and LDPC codes [4] were discovered to dramatically improve the error performance of digital communication and approach the channel capacity, which was a big step forward for coding theory. However, there is still a non-zero gap between the achievable rate of these codes and the channel capacity.
In 2008, Arıkan [5] invented polar codes to achieve the capacity of symmetric channels, which is a
great theoretic breakthrough. The name of polar codes is coined from the concept of channel polarization. That is to say, by using channel combining and splitting, a group of \(N\) identical binary-input discrete memoryless channels (B-DMCs) can be transformed into a group of polarized channels, some of which become noiseless with capacity tending to one (good channels), while the others get noisy with capacity going to zero (bad channels). When the channel number, that is, the code length, tends to infinity, the ratio of the number of good channels to the total number of channels will approach the capacity of the original channel. Unlike traditional channel codes, such as Turbo/LDPC codes, polar codes introduce a novel idea for coding design.
For practical application, channel coding is the key technology for reliable transmission, especially in wireless communications. Figure 1 shows the roadmap of channel code application in 3G\(\sim\)5G wireless systems. It follows that low-latency and highly reliable transmission are the main trends of wireless communication in the past twenty years. Low latency requires short code lengths and high reliability demands a low error rate. In pre-3G wireless systems, Reed-Solomon and convolutional concatenated (RS/CC) codes are applied to achieve high reliability (\(99.9\%\)) with a very long code length (\(10^{5}\sim 10^{6}\)). Then in 3G and 4G systems, turbo codes are utilized to approach the same performance with a long code length (\(10^{4}\)) and a low latency in terms of \(10ms\sim 100ms\). The 5G system proposes more rigorous requirements on transmission latency (\(1ms\)) and reliability (\(99.999\%\)), and turbo codes cannot meet this severe challenge. Due to the capacity-approaching feature, it seems that classic polar codes can be used as a candidate for 5G channel coding.
However, the theoretic advantage of polar codes cannot be directly transformed into superiority in practical applications. The error performance of the classic polar codes is not satisfactory in the case of finite code lengths. In order to improve the performance of polar codes, in the past decade, many advanced polar coding schemes and high-performance decoding algorithms have been proposed. Specifically, cyclic redundancy check (CRC) concatenated polar codes and CRC-aided decoding algorithms demonstrate outstanding error performance at short to medium code lengths [6][7][8] and outperform Turbo/LDPC codes. One highlighted merit of CRC-polar codes is that they have no error floor due to the algebraic coding structure [9]. On the contrary, both turbo and LDPC codes demonstrate the error floor effect in the high signal-to-noise ratio (SNR) region. Since CRC-polar codes fulfill the requirement of high-reliability transmission, polar codes have been accepted as the coding standard in the 5G wireless communication system [10].
In the previous survey papers [6][11], the basic principle of polar coding was briefly introduced. In 2019, the IEEE Communications Society published the best readings of polar coding online [12]. In these best readings, the polar coding theory, the construction and decoding of practical polar codes, as well as practical implementations are summarized, and exemplary works are selected from the literature. The interested reader can also refer to the book on polar codes [9] for further understanding.
In this paper, we retrospect the history of polar codes from birth to the present so as to review the development of this new direction. The remainder of the paper is organized as follows. Section II presents the basic principle of polar codes, including channel polarization, encoding and successive cancellation decoding. Then Section III describes the design criteria of polar codes with finite code length. We explain the design concerns behind the 5G polar codes. In Section IV, the construction algorithms, the encoding structure of concatenated polar codes and rate compatible methods are discussed from the view point of practice application. On the other hand, we address the improved decoding algorithms of polar codes in Section V, including successive cancellation list (SCL) decoding, successive cancellation stack (SCS) decoding and
Figure 1: Roadmap of channel coding in wireless communication systems.
CRC-aided SCL/SCS decoding etc. Simulation results show the superiority of polar codes compared to Turb-bo/LDPC codes. Then the future direction of polar coded technique is discussed in Section VI. Finally, Section VII concludes the paper.
## II Basic principle of polar codes
In this section, we first explain the channel polarization, that is, the core concept of polar codes. Then, the encoding and construction of polar codes and the classic SC decoding algorithm are presented.
### Channel Polarization
As stated in [5], channel polarization is an interesting phenomenon relating the source block to the received sequence, which can be explained using the chain rule of mutual information. Briefly, this phenomenon can be realized recursively by transforming multiple independent uses of a given B-DMC into a set of successive uses of synthesized binary-input channels.
Figure 2 illustrates the process of channel polarization. Given one binary erasure channel (BEC) \(W\) with the input bit \(x\) and the output signal \(y\) in Figure 2(a), the erasure probability of this BEC is 0.5 and the corresponding capacity is \(I(W)=0.5\). By using one modulo-2 operation between the two independent BECs, i.e. \(x_{1}=u_{1}\oplus u_{2}\), as shown in Figure 2(b), an equivalent compound channel can be obtained which has two input bits \(u_{1},u_{2}\) and two output bits \(y_{1},y_{2}\), as well as the associated capacity is \(I(u_{1},u_{2};y_{1},y_{2})\). Furthermore, by applying the chain rule of mutual information, this compound channel can be decomposed into two synthesized channels: channel \(W^{-}\) (indicated by the green line with the input bit \(u_{1}\) and the output signals \(y_{1}\) and \(y_{2}\)) and channel \(W^{+}\) (indicated by the pink line with the input bit \(u_{2}\) and the output signals \(y_{1}\), \(y_{2}\), and \(u_{1}\)). The mutual information relationship in this process can be expressed as \(I(u_{1},u_{2};y_{1},y_{2})=I(u_{1};y_{1},y_{2})+I(u_{2};y_{1},y_{2}|u_{1})=I(W^ {-})+I(W^{+})\).
Such an operation is called single-step (or two-channel) polarization, which means that two independent BECs with the same reliability are transformed into two polarized channels while the sum capacity remains unchanged, i.e., \(I(u_{1},u_{2};y_{1},y_{2})=2I(W)\). In [5], Arikan proved that the bad channel \(W^{-}\) has a smaller capacity than the given BEC \(W\) whereas the good channel \(W^{+}\) has a larger capacity, i.e., \(I(W^{-})<I(W)<I(W^{+})\). Specifically, in this example, given the capacity of the BEC \(I(W)=0.5\), the capacities of the two polarized channels are \(I(W^{-})=0.25\) and \(I(W^{+})=0.75\), respectively.
The single-step polarization transform can be extended to four independent BECs, as shown in Figure 2(c). Two good channels \(W^{+}\) can be further transformed into two polarized channels \(W^{+-}\) and \(W^{++}\). Similarly, two bad channels \(W^{-}\) can also be converted into two channels \(W^{--}\) and \(W^{-+}\). Obviously, the polarization effect is further strengthened.
In this way, the polarization transform can be recursively performed over \(N=2^{n}\) independent uses of a given BEC. As shown in Figure 2(d), when the code length \(N\) increases from \(2^{0}\) to \(2^{8}\), the polarization effect becomes more and more prominent, i.e., the capacities of all the polarized channels tend to either 1 (good/noiseless channels, marked by pink lines) or 0 (bad/noisy channels, marked by dashed blue lines), except for a vanishing fraction.
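To make the recursion concrete, the following Python sketch (ours, not from [5]; the helper name `polarize_bec` is illustrative) tracks the erasure probabilities of the polarized BECs and measures the fraction of near-noiseless channels:

```python
# A small sketch (ours) of recursive BEC polarization. One step maps a BEC
# with erasure probability e to W- (erasure 2e - e^2) and W+ (erasure e^2);
# the BEC capacity is I(W) = 1 - e.

def polarize_bec(eps, n):
    """Erasure probabilities of the N = 2**n polarized channels."""
    probs = [eps]
    for _ in range(n):
        probs = [p for e in probs for p in (2 * e - e * e, e * e)]
    return probs

n, eps, delta = 8, 0.5, 0.1
caps = [1 - e for e in polarize_bec(eps, n)]
frac_good = sum(c > 1 - delta for c in caps) / len(caps)
# By Eq. (1), this fraction approaches I(W) = 0.5 as N grows.
print(f"N = {2 ** n}: fraction of near-noiseless channels = {frac_good:.3f}")
```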
Theoretically, by using the martingale property, Arikan proved the stochastic convergence behaviour of the symmetric capacities of the polarized channels in [5]. That is, given a fixed constant \(\delta\in(0,1)\), as the number of channels (the code length \(N\)) tends to infinity, the proportion of noiseless channels is exactly equal to the symmetric capacity of the original B-DMC \(W\). This limit can be formally expressed
Figure 2: _Channel polarization scheme._
as follows,
\[\lim_{N\rightarrow\infty}\frac{\left|\left\{i:I\left(W_{N}^{(i)}\right)>1-\delta \right\}\right|}{N}=I(W). \tag{1}\]
**Remark 1**.: _Channel polarization is a new methodology for channel code design: we can assign the information bits to the noiseless channels and the fixed bits to the noisy channels so as to construct polar codes. It is well known that the joint asymptotic equipartition property (AEP) plays a central role in the proof of the channel coding theorem [13]. As pointed out by Niu et al. in [6], channel polarization can be regarded as an analog of the joint AEP. For the noiseless channels, the transmitted codeword and the received sequence form a jointly typical mapping, and about \(2^{NI(W)}\) codewords can be reliably transmitted over these channels. On the other hand, the joint typicality probability between a random vector and the received sequence is approximately \(2^{-NI(W)}\), tending to zero as the code length increases. We can thus conclude that channel polarization provides a constructive proof that the channel capacity is achievable._
### Basic Construction and Encoding
In order to construct polar codes, we should evaluate the reliability of each polarized channel in the channel polarization and select the high reliability channels to carry the information bits. Initially, Arikan proposed a construction based on the Bhattacharyya parameter. Given a B-DMC channel \(W:\mathcal{X}\rightarrow\mathcal{Y}\) with the input bit set \(\mathcal{X}=\{0,1\}\), the output alphabet \(\mathcal{Y}\) and the transition probabilities \(W(y|x),x\in\mathcal{X},y\in\mathcal{Y}\), the corresponding Bhattacharyya parameter is defined as \(Z(W)=\sum_{y\in\mathcal{Y}}\sqrt{W(y|0)W(y|1)}\).
Specifically, suppose a BEC \(W\) with Bhattacharyya parameter \(Z\left(W_{1}^{(1)}\right)=\epsilon\) is given and the Bhattacharyya parameters of the polarized channels \(Z\left(W_{N/2}^{(i)}\right),i=1,2,...,N/2\) have been obtained; the Bhattacharyya parameters of the \(N\) channels can then be recursively calculated as follows
\[\left\{\begin{aligned} Z\left(W_{N}^{(2i-1)}\right)& =2Z\left(W_{N/2}^{(i)}\right)-Z^{2}\left(W_{N/2}^{(i)}\right),\\ Z\left(W_{N}^{(2i)}\right)&=Z^{2}\left(W_{N/2}^{(i)} \right).\end{aligned}\right.. \tag{2}\]
Thus we can sort the Bhattacharyya parameters \(Z(W_{N}^{(i)})\) and select the channel indices with the highest reliability as the information set \(\mathcal{A}\). Although this construction has a low complexity of \(O(N\log_{2}N)\), it is only exact for code construction over a BEC. For other B-DMCs, such as the binary symmetric channel (BSC) and the binary-input additive white Gaussian noise (BI-AWGN) channel, exact calculation requires highly complex Monte Carlo integration, while approximate iterative calculations result in a performance loss. Therefore, the Bhattacharyya-parameter-based method serves as a basic reference for the construction of polar codes.
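As a rough illustration of this construction (exact for the BEC only, as noted above), the sketch below implements the recursion (2) and selects the \(K\) most reliable channels; the function name is ours:

```python
# A sketch of Bhattacharyya-parameter construction via Eq. (2);
# names are illustrative.

def bhattacharyya_construction(n, K, z0=0.5):
    """Return the information set A as 0-based indices of the best channels."""
    z = [z0]
    for _ in range(n):
        z = [v for zi in z for v in (2 * zi - zi * zi, zi * zi)]
    order = sorted(range(len(z)), key=lambda i: z[i])   # small Z = reliable
    return sorted(order[:K])

# For an (8,4) code this yields {3, 5, 6, 7}, i.e. channels {4, 6, 7, 8}
# in 1-based indexing, matching the example of Figure 3 below.
print(bhattacharyya_construction(n=3, K=4))
```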
Now we explain the encoding of polar codes. Given the code length \(N\), the information length \(K\) and the code rate \(R=K/N\), the indices of the polarized channels can be divided into two subsets: the information set \(\mathcal{A}\), which carries the information bits, and its complement \(\mathcal{A}^{c}\), which is assigned a fixed binary sequence of frozen bits. A message block consisting of \(K=|\mathcal{A}|\) bits is transmitted over the most reliable channels \(W_{N}^{(i)}\) with indices \(i\in\mathcal{A}\), while the other channels carry the frozen bits. Therefore, a binary source block \(u_{1}^{N}\) consisting of \(K\) information bits and \(N-K\) frozen bits is encoded into a codeword \(x_{1}^{N}\) by
\[x_{1}^{N}=u_{1}^{N}\mathbf{G}_{N}, \tag{3}\]
where \(\mathbf{G}_{N}=\mathbf{B}_{N}\mathbf{F}_{N}\) is the \(N\)-dimensional generator matrix, \(\mathbf{B}_{N}\) is the bit-reversal permutation matrix, \(\mathbf{F}_{N}=\mathbf{F}_{2}^{\otimes n}\) denotes the channel transformation matrix, \(\mathbf{F}_{2}=\left[\begin{smallmatrix}1&0\\ 1&1\end{smallmatrix}\right]\) is the \(2\times 2\) kernel matrix and \(\otimes n\) denotes the \(n\)-fold Kronecker power.
Figure 3 shows an encoder example of a polar code with \(N=8\), \(K=4\), and \(R=1/2\). According to the reliability order, the information set is \(\mathcal{A}=\{4,6,7,8\}\) and the frozen set is \(\mathcal{A}^{c}=\{1,2,3,5\}\). So the information bits \(\{u_{4},u_{6},u_{7},u_{8}\}\) are assigned to the polarized channels in set \(\mathcal{A}\) while the frozen bits are deployed in set \(\mathcal{A}^{c}\). Intuitively, each row of the generator matrix is associated with a polarized channel.
Furthermore, one butterfly unit is also depicted in Figure 3, which transforms two input bits \((a,b)\) into two output bits \((a\oplus b,b)\). This operation is specified by the kernel matrix \(\mathbf{F}_{2}\) and corresponds to two-channel polarization. For this example with the code length
\(N=8\), the polar encoder includes one bit-reversal stage and three stages of butterfly operations. In each stage, there are four butterfly units.
Generally, given the code length \(N=2^{n}\), the polar encoder contains \(n=\log_{2}N\) stages and each stage has \(N/2\) butterfly units. Thus, the encoding complexity of polar codes is \(O(N\log_{2}N)\). Since the bit-reversal matrix \(\mathbf{B}_{N}\) is a symmetric permutation matrix, it satisfies \(\mathbf{B}_{N}^{T}=\mathbf{B}_{N}\) and \(\mathbf{B}_{N}^{-1}=\mathbf{B}_{N}\). So the generator matrix \(\mathbf{G}_{N}\) has two equivalent expressions and the polar encoding can be written in two forms,
\[x_{1}^{N}=u_{1}^{N}\mathbf{G}_{N}=u_{1}^{N}\mathbf{B}_{N}\mathbf{F}_{2}^{\otimes n}, \tag{4}\]
and
\[x_{1}^{N}=u_{1}^{N}\mathbf{F}_{2}^{\otimes n}. \tag{5}\]
The polar encoder implemented by (4) is called the bit-reversal-order encoder, and that by (5) the natural-order encoder.
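A minimal natural-order encoder in the sense of (5) can be written with in-place XOR butterflies, as in the following sketch (ours, illustrative, using NumPy; the \((8,4)\) example reproduces the configuration of Figure 3):

```python
import numpy as np

# A minimal natural-order polar encoder per Eq. (5): x = u * F_2^{kron n},
# implemented with in-place XOR butterflies over GF(2) (illustrative sketch).

def polar_encode(u):
    x = np.array(u, dtype=int) % 2
    n = int(np.log2(len(x)))
    for stage in range(n):
        step = 1 << stage
        for i in range(0, len(x), 2 * step):
            x[i:i + step] ^= x[i + step:i + 2 * step]   # (a, b) -> (a^b, b)
    return x

# (8,4) example of Figure 3: information bits on channels {4, 6, 7, 8}.
u = np.zeros(8, dtype=int)
u[[3, 5, 6, 7]] = [1, 0, 1, 1]
print(polar_encode(u))
```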
### Successive Cancellation Decoding
Successive cancellation (SC) decoding, proposed by Arikan [5], is the basic decoding algorithm of polar codes. Essentially, SC decoding is a greedy algorithm with bit-wise soft-message calculation and hard decisions. Hence, it can be regarded as a soft/hard message-passing algorithm over the trellis or code tree of polar codes.
The trellis of polar codes is a regular structure with \(n\) stages and \(N\) levels. Each stage includes \(N/2\) butterfly units and each unit includes a pair of check and variable nodes. Figure 4 shows an example of the trellis for the \((8,4)\) polar code. The soft and hard messages are calculated and passed over the variable nodes in the trellis, denoted by \(s_{i,j},1\leq i\leq N,1\leq j\leq n+1\). The corresponding soft messages are the logarithmic likelihood ratios (LLRs) denoted by \(L_{i,j}=L\left(s_{i,j}\right)\). Similarly, the hard messages \(B_{i,j}\) are the bits associated with the nodes \(s_{i,j}\). Specifically, on the left side of the trellis, the soft messages are designated as \(L_{i,1}=L\left(\hat{u}_{i}\right)\), where \(\hat{u}_{i}=s_{i,1}\) are the estimated values of the source block. Similarly, on the right side of the trellis, the associated LLRs are denoted by \(L_{i,n+1}=\log\frac{P(y_{i}|0)}{P(y_{i}|1)}\) (consistent with the decision rule (8) below), where \(y_{i}\) are the received signals.
The soft/hard message update and decision rule can be summarized as follows.
1. **Soft Message Update Rule** \[L_{i,j}=\begin{cases}2\,\mathrm{artanh}\left[\tanh\left(\frac{L_{i,j+1}}{2}\right)\tanh\left(\frac{L_{i+2^{j-1},j+1}}{2}\right)\right],&\text{if }\left\lfloor\frac{i-1}{2^{j-1}}\right\rfloor\bmod 2=0,\\ \left(1-2B_{i-2^{j-1},j}\right)L_{i-2^{j-1},j+1}+L_{i,j+1},&\text{if }\left\lfloor\frac{i-1}{2^{j-1}}\right\rfloor\bmod 2=1,\end{cases} \tag{6}\] where \(i=1,2,\cdots,N\), \(j=1,2,\cdots,n\), \(\lfloor\cdot\rfloor\) is the floor function, and \(\tanh(\cdot)\) and \(\mathrm{artanh}(\cdot)\) are the hyperbolic tangent function and its inverse, respectively. The soft message at a check node, whose index satisfies \(\lfloor(i-1)/2^{j-1}\rfloor\bmod 2=0\) (an even-numbered node), is updated by the first formula of (6), while the message at a variable node (an odd-numbered node) is given by the second formula.

Figure 4: _Trellis of the \((8,4)\) polar code._

Figure 3: _An example of polar coding for \(N=8,K=4\)._
2. **Hard Message Update Rule** \[B_{i,j+1}=\begin{cases}B_{i,j}\oplus B_{i+2^{j-1},j},&\left\lfloor\frac{i-1}{2^{j-1}}\right\rfloor\bmod 2=0,\\ B_{i,j},&\text{otherwise},\end{cases} \tag{7}\] where \(\oplus\) is the modulo-2 operation. At an even-numbered node, the hard message is calculated by the first formula of (7); at an odd-numbered node, it is given by the second formula.
3. **Decision Rule** When the soft messages at stage \(1\) are obtained, the bit decision rule is \[\begin{cases}\hat{u}_{i}=0,&L_{i,1}\geq 0\text{ and }i\in\mathcal{A},\\ \hat{u}_{i}=1,&L_{i,1}<0\text{ and }i\in\mathcal{A},\\ \hat{u}_{i}=0,&i\in\mathcal{A}^{c}.\end{cases} \tag{8}\] Therefore, for an information bit, the bit value is decided from the soft message, and for a frozen bit it is simply set to the pre-defined value, i.e., \(\hat{u}_{i}=0\).
The computational complexity of SC decoding is mainly determined by the soft message calculations in the butterfly units. Since the trellis consists of \(N/2\log_{2}N\) butterfly units, the time complexity of the SC decoder is \(O(N\log_{2}N)\). Intuitively, the SC decoder needs \(N\log_{2}N\) memory units to store the LLRs. However, this memory consumption can be reduced to \(N\) units using the memory sharing mechanism.
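For concreteness, the following sketch implements a recursive SC decoder in the LLR domain; note that it uses the hardware-friendly min-sum approximation of the first line of (6) rather than the exact tanh rule, and all names are our illustrative choices:

```python
import numpy as np

# A compact recursive SC decoder sketch in the LLR domain (natural-order
# code; illustrative names). `f` is the min-sum form of the check-node
# update in Eq. (6); `g` is the variable-node update.

def f(a, b):
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g(a, b, c):
    # c is the hard estimate of the partial codeword from the first half
    return b + (1 - 2 * c) * a

def sc_decode(llr, frozen):
    """llr: channel LLRs (length N = 2**n); frozen: boolean mask over u."""
    N = len(llr)
    if N == 1:
        u = 0 if (frozen[0] or llr[0] >= 0) else 1   # decision rule (8)
        return np.array([u]), np.array([u])
    a, b = llr[:N // 2], llr[N // 2:]
    u1, x1 = sc_decode(f(a, b), frozen[:N // 2])      # decode first half
    u2, x2 = sc_decode(g(a, b, x1), frozen[N // 2:])  # then second half
    return np.concatenate([u1, u2]), np.concatenate([x1 ^ x2, x2])

# (8,4) example: frozen set {1,2,3,5} (1-based); noiseless all-zero codeword.
frozen = np.array([True, True, True, False, True, False, False, False])
print(sc_decode(10.0 * np.ones(8), frozen)[0])        # -> all zeros
```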
Given an \((N,K)\) polar code with the information set \(\mathcal{A}\), the block error rate (BLER) under SC decoding can be upper bounded by [5]
\[P_{e}(N,K,\mathcal{A})\leq\sum_{i\in\mathcal{A}}Z\left(W_{N}^{(i)}\right). \tag{9}\]
When the code length \(N\) goes to infinity, the asymptotic BLER decays as \(o\left(2^{-\sqrt{N}}\right)\); that is, the block error probability of polar codes decays exponentially in the square root of the code length [14].
In a nutshell, the theoretical advantages of polar codes are as follows.
1. **Channel capacity achieving** Polar codes are a class of constructive channel codes achieving the symmetric capacity of binary-input discrete memoryless channels. They reveal, for the first time, a coding process that approaches the channel capacity and thus equivalently provide a constructive proof of the channel coding theorem.
2. **Low complexity en/decoding** By using the Walsh-Hadamard transform, the encoding complexity of polar codes is \(O(N\log_{2}N)\). Correspondingly, using the SC algorithm, the decoding complexity of polar codes is also \(O(N\log_{2}N)\). Both the encoding and the decoding have low complexity.
3. **Error floor free** Due to the algebraic structure of polar codes, the block error probability under SC decoding decreases exponentially with the square root of the code length. This algebraic property ensures that polar codes have no error floor. On the contrary, both turbo and LDPC codes exhibit an error floor.
4. **Channel polarization universality** Channel polarization is a universal phenomenon in communication scenarios rather than a technique specific to a particular coding system. It thus provides a novel idea and methodology for designing and optimizing communication systems.
Unfortunately, polar codes with finite length have some critical drawbacks. From the viewpoint of practical application, we list the main problems of the original polar codes that should be solved.
1. **Channel dependent construction** The initial construction based on the Bhattacharyya parameter is channel-dependent and not precise for a general B-DMC. So it is important to precisely evaluate the reliability of the polarized channels for any B-DMC. Furthermore, a polar code construction independent of the channel condition is desirable for practical applications.
2. **Small minimum-Hamming distance**
At finite code lengths, the minimum Hamming distance of polar codes is very small and inferior to that of some famous algebraic codes, such as Reed-Muller (RM) codes or BCH codes. This drawback makes the performance of polar codes under maximum likelihood (ML) decoding worse than that of RM/BCH codes. Hence, it is important to improve the minimum Hamming distance of polar codes by using algebraic coding techniques.
3. **Code length constraint** Due to the structural constraint of the generator matrix of polar codes, the code length is restricted to powers of two, e.g., \(N=2^{10}=1024\). However, in data transmission the code length is required to be arbitrary and flexible, e.g., \(N=800,1000\), etc. This constraint therefore limits the application of polar codes in communication systems, and rate-compatible polar codes should be designed to fulfill practical requirements.
4. **Poor error performance of SC decoding** SC decoding is a suboptimal algorithm, since the bit-by-bit calculation and decision may result in error propagation. The error performance of SC decoding is therefore worse than that of turbo/LDPC codes, which is a major barrier to applying polar codes in communication systems. More powerful decoding algorithms should be designed to improve the error performance of polar codes.
In the next sections, we explain the design rules of finite-length polar codes, especially the design principles behind the 5G polar codes.
## III Design Criteria of Finite Length Polar Codes
In the past decade, many works have been proposed to overcome the problems of finite-length polar codes. Figure 5 shows the main contributions to polar coding. Considering the requirements of practical communications, the performance of finite-length polar codes can be improved by jointly designing the encoding structure and the decoding algorithm. We summarize the en/decoding design framework of polar codes in Figure 6.
Concatenated coding, consisting of an inner polar code and an outer code (e.g., a cyclic redundancy check (CRC) code), is the main form of powerful finite-length polar codes. In this framework, the outer code plays a double role. At the encoder side, a suitable outer code can raise the minimum Hamming distance and thereby improve the performance of the concatenated code; here, the problems of construction, encoding, and rate compatibility should be solved. At the decoder side, the outer code can be used to check the survivor paths and select the correct codeword, which dramatically enhances the error performance; here, powerful decoding algorithms and hardware implementation are the main concerns. Guided by this framework, the design criteria of finite-length polar codes can be summarized as follows.
**Criterion 1. Polar codes should be constructed by high-precise, low-complexity and channel-independent methods.**
The existing construction methods of polar codes can be mainly divided into three categories: (1) channel dependent construction, (2) channel independent construction, (3) weight distribution based construction.
In the first category, some channel parameters, e.g., the signal-to-noise ratio (SNR) of the binary-input AWGN channel, are utilized to evaluate the reliability of the polarized channels based on the recursive structure of polar coding. Recall that the Bhattacharyya-parameter-based construction is exact only for the BEC and approximate for other B-DMCs. Subsequently, Mori _et al._[15] designed a density evolution (DE) algorithm to track the LLR distribution and calculate the error probability under successive cancellation (SC) decoding with a complexity of \(O(N\xi\log\xi)\), although high accuracy requires a large number of samples \(\xi\). Afterward, Trifonov [16] advocated the use of the Gaussian approximation (GA) construction with a complexity of \(O(N)\). Tal and Vardy [17] proposed an iterative algorithm to evaluate upper/lower bounds on the error probability of each polarized channel with a complexity of \(O(N\mu^{2}\log\mu)\), where \(\mu\) is a fixed integer called the fidelity parameter; GA can approach the Tal-Vardy method with a lower complexity. Then Dai and Niu _et al._[18] proposed an improved GA algorithm for polar codes with long code lengths to further increase the accuracy of the GA construction.
Figure 5: _Milestones of polar coding, covering construction and encoding methods (Bhattacharyya parameter construction [5], density evolution [15], etc.) on one side and decoding algorithms (SC/BP decoding, etc.) on the other._
In the second category, the reliability of the polarized channels is ordered based on channel-independent characteristics of polar codes, which is desirable for practical implementation. Schurch _et al._[19] introduced the concept of partial order (PO) to capture the invariant reliability order of a subset of the polarized channels. Furthermore, He _et al._[20] proposed the polarized weight (PW) algorithm. Although PW is an empirical construction, it achieves almost the same performance as the GA construction. In fact, the polar codes adopted in the 5G standard [10] are constructed through a fixed reliability sequence of polarized channels obtained by computer search [21].
In the third category, the distance spectrum or weight distribution is considered in order to thoroughly interpret the behavior of polar codes and derive their construction. Valipour and Yousefi [22] designed a probabilistic method to estimate the weight distribution of polar codes, but its computational complexity is very high. Then, using a successive cancellation listing algorithm, [8] and [23] proposed methods to enumerate and calculate the distance spectrum of polar codes. Recently, Niu _et al._[24] introduced a new theoretical tool, named the polar spectrum, to analyze and construct polar codes. The polar spectrum provides new insight into the algebraic properties of polar codes and has potential value for their design.
**Criterion 2. Concatenated coding is a powerful coding scheme to improve the error performance of polar codes.**
In the case of finite length, increasing the minimum Hamming distance is the core idea for enhancing the error performance of polar codes, and according to classic coding theory, concatenated coding is a suitable method to fulfill this aim. As shown in Figure 6, the compound encoder consists of an inner polar encoder and an outer encoder. In the literature, Niu and Chen [7] first proposed the CRC concatenated polar (CRC-polar) codes, a typical example of concatenated coding, to improve the error performance at short to medium lengths. Since CRC-polar codes improve the minimum Hamming distance and the weight distribution, they can dramatically outperform turbo/LDPC codes. As a high-efficiency and low-complexity coding scheme, the CRC-polar code was cited by the 5G proposal [25] and accepted as the basic coding scheme in the 5G standard [10]. Furthermore, Zhang _et al._[26] investigated the optimization of CRC generator polynomials.
Trifonov and Miloslavskaya [27][28] proposed a dynamic-frozen coding scheme combining eBCH codes and polar codes, but this scheme is not flexible for arbitrary coding configurations. Guided by the idea of concatenated coding, Huawei proposed parity-check concatenated polar (PC-polar) codes [29], and the corresponding coding scheme was incorporated into the 5G standard [10]. A similar concept and an initial coding scheme were also introduced in [30]. Chen _et al._[31] designed the hash-polar code to improve the performance of polar codes under SCL decoding. Recently, Zhou _et al._[32] proposed a genetic algorithm to construct polar codes so as to obtain performance gains under SCL decoding.
As a representative coding scheme, CRC-polar codes point to a new direction for improving the performance of polar codes. In the next section, we explain the technical details.
**Criterion 3. Rate matching is the critical requirement for the application of polar codes.**
For the original polar codes [5], the code length \(N\) is limited to powers of two, i.e., \(N=2^{n}\). Consequently, designing good rate-compatible polar (RCP) codes, also called rate matching, to flexibly support arbitrary code lengths and code rates becomes the key issue for the practical application of polar codes. Rate matching usually falls into two categories: puncturing and shortening.
For the puncturing mode, Shin _et al._[33] advocated the use of a reduced generator matrix to efficiently improve the error performance of RCP codes under successive cancellation (SC) decoding, yet searching for good polarizing matrices is a time-consuming process. Niu _et al._[34] proposed an efficient universal puncturing scheme, named the quasi-uniform puncturing (QUP) algorithm, and the resulting RCP codes outperform the turbo codes used in 3G/4G wireless systems. Due to its high performance and flexibility, this scheme was cited by the 5G proposal [25] and used as the basic puncturing scheme of polar codes in the 5G standard [10].
On the other hand, for the shortening mode, Wang _et al._[35] devised a simple method that shortens the columns with weight \(1\), which is the same as the reversal quasi-uniform shortening (RQUS) algorithm proposed in [36]. This simple and highly efficient scheme is also accepted as the basic shortening scheme in the 5G standard [10]. Meanwhile, Miloslavskaya [37] exploited the structure of polar codes and jointly optimized the shortening patterns and the source-bit assignment, but the search complexity is still very high and the shortening pattern is not universal.
**Criterion 4. Designing the high-performance and low-complexity algorithms is the key issue of polar decoding.**
In the past decade, polar decoding algorithms have been deeply investigated from multiple viewpoints. Generally, the decoding algorithms can be divided into four categories: (1) SC enhanced decoding, (2) soft-output decoding, (3) SC flipping decoding, and (4) decoding for short codes.
In the first category, SC decoding is simplified or enhanced over the trellis or code tree. Alamdar-Yazdi and Kschischang [38] proposed a simplified successive-cancellation (SSC) decoder to remove redundant calculations in SC decoding without affecting the error performance. Among the improved SC decoding algorithms, the most important is successive cancellation list (SCL) decoding, which was independently proposed by two groups, Chen-Niu [39][40] and Tal-Vardy [41][42]. Unlike SC, which decodes only one path at each level, Chen and Niu [39][40] realized that the greedy search of SC on the code tree can be extended to a width-first search, so that a maximum of \(L\) candidate paths are kept as the survivor list and the final decision is selected from the list. Tal and Vardy [41][42] proposed a similar mechanism and designed a so-called "lazy-copy" operation to reduce the memory overhead of path copying. Thanks to the algebraic structure of polar codes, with a small list size (e.g., \(L=32\)) at medium lengths (e.g., \(N=1024\)), the BLER performance of SCL decoding can approach that of ML decoding, while the complexity of SCL is \(O(LN\log_{2}N)\). SCL decoding is a great step toward improving the performance of polar codes. Balatsoukas-Stimming _et al._[43] then designed an LLR-based SCL decoder to facilitate hardware implementation. Afterward, Zhang _et al._[44] proposed the split-reduced SCL decoder to further reduce the average decoding complexity.
On the other hand, Niu and Chen [45] realized that polar decoding can also be performed as a depth-first search on the code tree and proposed successive cancellation stack (SCS) decoding. This algorithm uses an ordered stack to store the candidate paths and dramatically decreases the computational complexity, down to the \(O(N\log_{2}N)\) of SC decoding. Later, based on a similar idea, Trifonov _et al._[46] proposed sequential decoding, whose path metric was optimized in [47] to further reduce the complexity.
Combining the principles of SCL and SCS, Chen and Niu _et al._[48] proposed successive cancellation hybrid (SCH) decoding, which provides a flexible configuration when the time and space complexities are limited. Later, Guan and Niu _et al._[49] designed successive cancellation priority (SCP) decoding to decrease the number of path-sorting operations. Under proper configurations, all these improved SC algorithms (SCL/SCS/SCH/SCP) can approach the performance of ML decoding with an acceptable complexity.
To further improve the performance of polar codes, Niu and Chen [7] proposed the CRC-aided SCL/SCS (CA-SCL/SCS) decoding schemes, in which the SCL/SCS decoder outputs the candidate paths to a CRC detector and the check results are utilized to detect the correct codeword. To lower the time complexity of SCL decoding caused by a large list size, Li _et al._[8] proposed an adaptive CRC-aided SCL decoder (aCA-SCL) that gradually increases the list size. Especially for short to medium code lengths, CA-SCL/SCS decoding can substantially improve the performance of polar codes and outperform turbo/LDPC codes. This is the key advantage that led to the adoption of polar codes in the 5G standard.
In the second category, we mainly focus on improving the performance and throughput of the BP decoder. Arikan [5] first pointed out that the BP algorithm can be used to decode polar codes over the trellis. The computational complexity of BP decoding is \(O(I_{max}N\log_{2}N)\), where \(I_{max}\) is the maximum number of iterations. Then, Hussami _et al._[50] designed a multiple-trellis BP algorithm to improve the performance of standard BP.
Yuan and Parhi [51] proposed the min-sum (MS) and scaled min-sum (SMS) algorithms to simplify the BP decoder. In addition, two early-stopping criteria were investigated in [52] and [53], respectively, in order to decrease the number of iterations and lower the complexity of BP decoding. Furthermore, by using soft-information calculation with SC-like scheduling, Fayyaz and Barry [54] introduced a low-complexity soft-output decoder, named soft cancellation (SCAN), to trade off performance against complexity. Unfortunately, all the above algorithms are inferior to SCL decoding. Recently, Elkelesh _et al._[55] proposed belief propagation list (BPL) decoding, which can approach SCL at the cost of increased complexity.
In the third category, the primary concern is successive cancellation flipping (SCF) decoding, which approaches the performance of SCL with low complexity. Afisiadis _et al._[56] first proposed a bit-flipping method to generate multiple decision attempts. Then Chandesris _et al._[57] introduced the concept of higher-order bit flips and designed a dynamic SCFlip decoding to balance the error performance and the average complexity. Zhang _et al._[58] constructed a critical-set tree search to build a progressive bit-flipping decoding that efficiently lowers the average complexity. Although these SCF decoding algorithms have low average complexity, their worst-case computational complexity is still very high.
In the fourth category, we pay attention to (quasi-)maximum-likelihood decoding of short polar codes. Goela _et al._[59] introduced linear programming (LP) decoding for polar codes, but this algorithm is only suitable for the BEC. Kahraman and Celebi [60] exploited the lower-triangular structure of the generator matrix and introduced sphere decoding (SD). Niu _et al._[61] simplified the standard SD based on optimum path metrics. Later, Guo and Fàbregas [62] designed fixed and dynamic bounds to further reduce the complexity of SD. Piao and Niu _et al._[63] considered the structure of CRC-polar codes and designed a CRC-aided SD (CA-SD) in order to achieve the performance of ML decoding. In addition, Wu _et al._[64] proposed ordered statistic decoding (OSD) to approximate ML decoding of short polar codes. Although SD and OSD can approach the performance of ML decoding, their computational complexity is very high; therefore, these algorithms only apply to polar codes with short block lengths.
**Criterion 5. Designing the high-throughput and low-latency architecture is the key issue of hardware implementation.**
For hardware implementation, SC and SCL decoders with high throughput and low latency are pursued for practical applications. Leroux _et al._ proposed the pipelined-tree architecture in [65] and the semi-parallel architecture in [66], respectively, to improve the throughput of the SC decoder. Then Zhang and Parhi [67] designed sequential and overlapped architectures to further reduce the decoding latency of the SC decoder. In addition, Yuan and Parhi [68] introduced multi-bit decisions to improve the throughput of the SC decoder. Specifically, Xu and Niu [69] designed an SC decoder based on stochastic computation, a new architecture with high throughput and low power consumption.
On the other hand, many works focus on the hardware implementation of the SCL decoder. Sarkis _et al._[70] proposed a fast list decoder based on the idea of SSC to improve the throughput. Fan _et al._[71] considered path-selective expansion and a double-threshold fast-sorting method to reduce the computation and latency of the SCL decoder. Recently, Xia _et al._[72] designed a high-throughput SCL decoder achieving a decoding throughput of \(1.103\) Gbps with list size \(L=8\).
**Remark 2**.: _In the 5G wireless communication system, high reliability is the basic requirement of data transmission, and the design of 5G polar codes has to balance high performance against implementation complexity. Although the DE/GA/Tal-Vardy constructions have high precision, the channel mapping sequence [10] is adopted in the 5G standard due to its channel independence and low complexity. Since CRC-polar codes [7] have a simple structure and excellent performance, they are utilized as the basic coding scheme in the 5G standard, and PC-polar codes [29] are adopted as a further supplement. Furthermore, in order to fulfill the requirement of flexible coding, the QUP [34] and RQUS [35][36] schemes are used as the rate-matching methods in the 5G standard. Certainly, behind all these design concerns, CRC-aided SCL/SCS decoding [7] is the most important factor whereby polar codes can be applied in
5G. By now, CA-SCL/SCS decoding has become the most popular algorithm and the standard reference in the field of polar codes._
## IV Construction and Encoding of Polar Codes
In this section, we first investigate efficient construction methods, such as GA and PW. Then the minimum distance and weight distribution of CRC-polar codes are analyzed to reveal the advantage of concatenated coding. Finally, we briefly explain the basic principles of the QUP and RQUS schemes. All these methods are critical parts of polar encoding in the finite-length case.
### Efficient Construction Methods
Gaussian approximation (GA) construction [16] is suitable for polar coding over the AWGN channel. Suppose the coded bits are modulated by binary phase shift keying (BPSK) and transmitted over the AWGN channel with noise variance \(\sigma^{2}\). The transition probability is \(W(y|x)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(y-(1-2x))^{2}}{2\sigma^{2}}}\), where \(x\in\{0,1\}\) and \(y\in\mathbb{R}\). The LLR of each received signal \(y\) is \(L(y)=\ln\frac{W(y|0)}{W(y|1)}=\frac{2y}{\sigma^{2}}\), which obeys a Gaussian distribution, that is, \(L(y)\sim\mathcal{N}\left(\frac{2}{\sigma^{2}},\frac{4}{\sigma^{2}}\right)\). Due to the symmetry of polar codes, the all-zero codeword can be assumed to be transmitted.
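As a quick sanity check of these statistics (our illustration, not part of [16]), one can simulate the channel LLRs directly:

```python
import numpy as np

# A quick Monte Carlo check (ours) that the channel LLR L(y) = 2y/sigma^2
# under BPSK with the all-zero codeword (x = 0 -> +1) is Gaussian with
# mean 2/sigma^2 and variance 4/sigma^2.

rng = np.random.default_rng(0)
sigma2 = 0.5
y = 1.0 + np.sqrt(sigma2) * rng.standard_normal(1_000_000)
llr = 2.0 * y / sigma2
print(llr.mean(), llr.var())   # approx. 4.0 and 8.0 for sigma2 = 0.5
```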
In GA construction, the LLR of each polarized channel \(W_{N}^{(i)}\) is assumed to obey a Gaussian distribution, that is, \(L_{N}^{(i)}\sim\mathcal{N}\left(m_{N}^{(i)},2m_{N}^{(i)}\right)\), where \(m_{N}^{(i)}\) is the LLR mean. Since the LLR mean indicates the reliability of each channel, we can trace these LLR means to select the good channels to carry the information bits.
Suppose the initial LLR mean \(m_{1}^{(1)}=\frac{2}{\sigma^{2}}\) is given and the LLR means of the polarized channels \(m_{N/2}^{(i)},i=1,2,...,N/2\) have been calculated; the LLR means of the \(N\) channels can then be computed recursively as follows
\[\left\{\begin{aligned} m_{N}^{(2i-1)}&=\phi^{-1} \left[1-\left(1-\phi\left(m_{N/2}^{(i)}\right)\right)^{2}\right],\\ m_{N}^{(2i)}&=2m_{N/2}^{(i)}.\end{aligned}\right. \tag{10}\]
where the function \(\phi(\cdot)\) is defined as
\[\phi(t)=\begin{cases}1-\frac{1}{\sqrt{4\pi t}}\int_{\mathbb{R}}\!\!\tanh \left(\frac{z}{2}\right)\!e^{-\frac{(z-t)^{2}}{4t}}\!dz,&t>0,\\ 1,&t=0.\end{cases}. \tag{11}\]
Obviously, the exact calculation of the LLR means at the check nodes involves complex integration with a high computational cost. Generally, in the conventional GA construction, the well-known two-segment function \(\varphi(t)\) is used to approximate \(\phi(t)\),
\[\varphi(t)=\begin{cases}e^{-0.4527t^{0.86}+0.0218},&0<t<10,\\ \sqrt{\frac{\pi}{t}}e^{-\frac{t}{4}}\left(1-\frac{10}{7t}\right),&t\geq 10.\end{cases} \tag{12}\]
Since the two-segment function \(\varphi(t)\) has a calculation error relative to \(\phi(t)\), the channel selection may become inaccurate when the code length grows. Hence, in [18], this function is replaced by more precise approximations, namely the new two-segment function \(\Omega_{2}(t)\), the three-segment function \(\Omega_{3}(t)\) and the four-segment function \(\Omega_{4}(t)\), defined as follows,
\[\Omega_{2}(t)=\begin{cases}e^{0.012t^{2}-0.421t}&0<t\leq 7.063,\\ e^{-0.294t-0.317}&t>7.063.\end{cases}, \tag{13}\]
\[\Omega_{3}(t)=\begin{cases}e^{0.0673t^{2}-0.491t}&0<t\leq 0.636,\\ e^{-0.453t^{0.86}+0.022}&0.636<t\leq 9.225,\\ e^{-0.283t-0.425}&t>9.225.\end{cases}, \tag{14}\]
and
\[\Omega_{4}(t)=\begin{cases}e^{0.105t^{2}-0.499t}&0<t\leq 0.191,\\ 0.998e^{0.053t^{2}-0.480t}&0.191<t\leq 0.742,\\ e^{-0.453t^{0.86}+0.022}&0.742<t\leq 9.225,\\ e^{-0.283t-0.425}&9.225<t.\end{cases}. \tag{15}\]
Using these modified approximation functions in the improved GA construction, polar codes with long code lengths (e.g., \(N=2^{14}\sim 2^{18}\)) retain excellent performance over BI-AWGN channels.
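The following sketch (ours; the bisection-based \(\phi^{-1}\) is one of several possible numerical inversions) combines (10) and the two-segment approximation (12) into a GA construction routine:

```python
import math

# A sketch of GA construction combining Eqs. (10) and (12); the bisection
# inverse of phi is one illustrative choice among several.

def phi(t):
    if t <= 0:
        return 1.0
    if t < 10:
        return math.exp(-0.4527 * t ** 0.86 + 0.0218)
    return math.sqrt(math.pi / t) * math.exp(-t / 4) * (1 - 10 / (7 * t))

def phi_inv(y):
    lo, hi = 0.0, 1e4                  # phi is decreasing on [0, inf)
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return (lo + hi) / 2

def ga_construction(n, K, sigma2):
    """Return the information set (0-based) for N = 2**n over BI-AWGN."""
    m = [2.0 / sigma2]                 # initial LLR mean m_1^(1)
    for _ in range(n):
        m = [v for mi in m
               for v in (phi_inv(1 - (1 - phi(mi)) ** 2), 2 * mi)]
    order = sorted(range(len(m)), key=lambda i: -m[i])   # large mean = good
    return sorted(order[:K])

print(ga_construction(n=3, K=4, sigma2=0.5))   # -> [3, 5, 6, 7]
```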
Another typical construction is the polarization
weight (PW) metric [20]. Given the code length \(N=2^{n}\), by using the so-called \(\beta\) expansion, PW metric can be calculated as follows,
\[PW_{N}^{(i)}=\sum_{s=1}^{n}i_{s}\beta^{n-s}, \tag{16}\]
where \(\beta\) is the weight factor and \((i_{1},i_{2},\cdots,i_{n})\) is the binary expansion vector associated with the channel index \(i\), that is, \(i-1=\sum_{s=1}^{n}i_{s}2^{n-s}\). When the code length falls in the range \(N=16\sim 1024\), the weight factor is optimized as \(\beta=2^{1/4}\approx 1.1892\).
Obviously, by (16), PW is a channel-independent metric: the larger the metric \(PW_{N}^{(i)}\), the more reliable the corresponding channel \(W_{N}^{(i)}\). On the basis of PW, after some adjustment via computer simulation, the channel mapping sequence of polar codes [10] in the 5G standard was obtained.
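The PW metric of (16) is simple enough to state in a few lines; in the sketch below (names are ours), the four most reliable channels for \(N=8\) come out as \(\{4,6,7,8\}\), matching the information set of the example in Figure 3:

```python
# A sketch of the PW metric in Eq. (16); the channel index i is 1-based and
# (i_1, ..., i_n) is the binary expansion of i - 1 with i_1 as the MSB.

def pw_order(n, beta=2 ** 0.25):
    N = 1 << n
    def pw(i):
        bits = [((i - 1) >> (n - s)) & 1 for s in range(1, n + 1)]
        return sum(b * beta ** (n - s) for s, b in enumerate(bits, start=1))
    return sorted(range(1, N + 1), key=pw)      # ascending reliability

# The four most reliable channels for N = 8 are {4, 6, 7, 8}.
print(pw_order(3)[-4:])
```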
### CRC Concatenated Polar Codes
CRC-polar codes are the primary concatenated coding scheme for polar codes. As shown in Figure 7, the CRC-polar encoder consists of a CRC encoder and a polar encoder. A \(k\)-bit source block \(c_{1}^{k}\) is input to the CRC encoder, which generates the coded block \(u_{1}^{K}\), where \(K=k+m\) is the block length and \(m\) is the number of CRC bits. These coded bits are mapped onto the information bit set \(\mathcal{A}\), i.e., \(u_{1}^{K}=u_{\mathcal{A}}\). Then the information bits \(u_{\mathcal{A}}\) and the frozen bits \(u_{\mathcal{A}^{c}}\) are fed into the polar encoder, which generates the final concatenated codeword \(x_{1}^{N}\). The overall code rate is thus \(R=k/N\).
Thus the encoding process of CRC-polar codes can be written as
\[\begin{cases}u_{1}^{N}\mathbf{G}_{N}=x_{1}^{N},\\ u_{\mathcal{A}}=c_{1}^{k}\mathbf{G}_{CRC},\quad u_{\mathcal{A}^{c}}=\mathbf{0},\end{cases} \tag{17}\]
where \(\mathbf{G}_{CRC}\) is the generator matrix of CRC codes.
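A toy end-to-end sketch of (17) is given below; the MSB-first CRC register, the CRC6 polynomial 0x43 (listed with Table 1 below), and the reuse of `polar_encode` from the encoder sketch in Section II are our illustrative choices:

```python
import numpy as np

# An illustrative CRC-polar encoder in the sense of Eq. (17). The MSB-first
# CRC register and the polynomial 0x43 (CRC6) are our choices;
# `polar_encode` is the natural-order encoder sketched in Section II.

def crc_bits(msg, poly=0x43, m=6):
    """Return the m CRC bits of the bit list `msg` (MSB-first division)."""
    reg = 0
    for bit in msg + [0] * m:          # append m zeros and divide by poly
        reg = (reg << 1) | bit
        if reg >> m:                   # degree-m term set -> reduce mod poly
            reg ^= poly
    return [(reg >> (m - 1 - j)) & 1 for j in range(m)]

def crc_polar_encode(msg, info_set, N):
    """msg: k info bits; info_set: |A| = k + m channel indices (0-based)."""
    u = np.zeros(N, dtype=int)
    u[list(info_set)] = msg + crc_bits(msg)   # u_A = CRC-encoded block
    return polar_encode(u)                    # u_{A^c} stays frozen at 0
```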
The key reason that CRC-polar codes obtain a performance gain is that the CRC code significantly improves the minimum Hamming distance and the weight distribution of the polar code. This principle is intuitively depicted in Figure 8. When a single polar code is used, as in Figure 8(a), the Hamming distance between neighbouring codewords, i.e., \(d_{min}\), is small. On the contrary, when concatenated coding is utilized, as shown in Figure 8(b), the minimum Hamming distance is enlarged, since the CRC check excludes many neighbouring candidates.
For CRC-polar codes, the BLER can be upper bounded by the union bound [73], that is,
\[\begin{split} P_{e}&\leq\sum_{w=d_{min}}^{N}A_{w}Q \left(\sqrt{\frac{2wRE_{b}}{N_{0}}}\right)\\ &\approx A_{d_{min}}Q\left(\sqrt{2d_{min}\frac{RE_{b}}{N_{0}}} \right),\end{split} \tag{18}\]
where \(A_{w}\) is the weight enumerator and \(\frac{E_{b}}{N_{0}}\) is the bit SNR. We can see that the weight distribution \(\{w,A_{w}\}\), especially the minimum weight \(d_{min}\) and its enumerator \(A_{d_{min}}\), determines the performance of CRC-polar codes.
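As a small numerical illustration (ours), the truncated bound in (18) can be evaluated with \(Q(x)=\frac{1}{2}\mathrm{erfc}(x/\sqrt{2})\), using the \((d_{min},A_{d_{min}})\) pairs reported in Table 1 below:

```python
import math

# A small numerical illustration (ours) of the truncated union bound (18),
# with Q(x) = erfc(x / sqrt(2)) / 2; (d_min, A_dmin) pairs from Table 1.

def bler_union_bound(d_min, A_dmin, R, ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    q_arg = math.sqrt(2 * d_min * R * ebn0)
    return A_dmin * 0.5 * math.erfc(q_arg / math.sqrt(2))

# (128,64) polar (d_min=8, A=432) vs. CRC6-opt-polar (d_min=12, A=300), 3 dB:
print(bler_union_bound(8, 432, 0.5, 3.0))
print(bler_union_bound(12, 300, 0.5, 3.0))
```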
Given the code lengths \(N=128,256\) and the code rate \(R=k/N=1/2\), we compare the weight distributions of polar and CRC-polar codes in Table 1 and Table 2.
In these two tables, the optimal CRC codes, such as CRC6-opt etc., are selected from [26].
Figure 8: Minimum Hamming distance comparison between polar code and CRC-Polar code.
Figure 7: CRC-polar encoder.
In Table 1, the generator polynomials of CRC6 and CRC6-opt are \(g(x)=\) 0x43 and 0x73. Similarly, in Table 2, the generator polynomials of CRC9, CRC9-opt, CRC10 and CRC10-opt are \(g(x)=\) 0x2CF, 0x269, 0x633, and 0x75F, respectively. Here, hexadecimal notation is used to represent the generator polynomials.
We can see that the (128,64) polar code has minimum Hamming weight \(d_{min}=8\) with weight enumerator \(A_{d_{min}}=432\). When CRC6-opt is applied, the \(d_{min}\) of the CRC-polar code increases to 12, and the enumerator \(A_{12}=300\) is also smaller than that of the polar code, \(A_{12}=2304\). Similar phenomena can be observed in Table 2: when the CRC codes are used, the minimum Hamming distance increases from 8 to 16. In a word, CRC concatenated coding is an efficient and powerful method to improve the performance of polar codes.
### Rate Compatible Polar Codes
According to the availability of prior information, rate-compatible polar codes can be divided into two modes. In the puncturing mode, some code bits are deleted at the encoder; these bits are unknown to the decoder and treated as if transmitted over zero-capacity channels. In the shortening mode, the values of the deleted code bits are predetermined and known to both the encoder and the decoder; the associated channels can thus be regarded as one-capacity channels.
Theoretically, the optimal puncturing table of rate-compatible polar codes could be found by a brute-force search over the distance spectrum (for ML decoding) or over BLER bounds (for SC decoding). However, an exhaustive search over all puncturing/shortening patterns is difficult to realize.
Quasi-uniform puncturing (QUP) [34] and reversal quasi-uniform shortening (RQUS) [35][36] are two simple and highly efficient schemes that achieve a good tradeoff between the error performance and the implementation complexity of RCP codes.
Since the code length of classical polar codes is limited to a power of \(2\), in order to generate an RCP code of arbitrary length \(M\), we start from a mother code of length \(N=2^{n}\), with \(n=\lceil\log_{2}M\rceil\). An \(N\)-bit codeword is then shrunk to \(M\) bits by appropriately deleting \(Q=N-M\) bits from it.
Define the puncturing/shortening table \(\mathscr{T}_{N}=(t_{1},t_{2},\cdots,t_{N})\), where \(t_{i}\in\{0,1\}\) and \(t_{i}=0\) means that the corresponding code bit \(x_{i}\) is deleted. For the QUP scheme, the length-\(N\) table \(\mathscr{T}_{N}\) is first initialized to all ones, and then the _first_ \(Q\) entries are set to zero. After a bit-reversal permutation of the table, the code bits whose indices correspond to zero elements in \(\mathscr{T}_{N}\) are punctured. Similarly, for RQUS, the table \(\mathscr{T}_{N}\) is first initialized to all ones and the _last_ \(Q\) entries are set to zero; after the bit-reversal permutation, the code bits indexed by the zero elements of \(\mathscr{T}_{N}\) are shortened.
Figures 9(a) and 9(b) show the processes of QUP and RQUS. Note that, since the natural-order encoding (5) is used in the 5G standard, the bit-reversal permutation is unnecessary there; the QUP (RQUS) scheme is then equivalent to puncturing (shortening) the first (last) \(Q\) bits of the original codeword, as shown in these two subgraphs.
| Code type | \(K\) | \(A_{8}\) | \(A_{12}\) | \(A_{16}\) | \(A_{20}\) |
| --- | --- | --- | --- | --- | --- |
| Polar | 64 | 432 | 2304 | 232440 | 1044823 |
| CRC6+Polar | 64+6 | 4 | 327 | 1301 | - |
| CRC6-opt+Polar | 64+6 | 0 | 300 | 972 | - |

Table 1: _Weight distribution of polar and CRC-polar codes for \(N=128\)._

| Code type | \(K\) | \(A_{8}\) | \(A_{16}\) | \(A_{20}\) | \(A_{24}\) |
| --- | --- | --- | --- | --- | --- |
| Polar | 128 | 96 | 131824 | 548864 | 119215 |
| CRC9+Polar | 128+9 | 0 | 539 | 2357 | - |
| CRC9-opt+Polar | 128+9 | 0 | 507 | 1946 | - |
| CRC10+Polar | 128+10 | 0 | 552 | - | - |
| CRC10-opt+Polar | 128+10 | 0 | 215 | - | - |

Table 2: _Weight distribution of polar and CRC-polar codes for \(N=256\)._
Since QUP is suitable for low code rates and RQUS for high code rates, the 5G standard [10] specifies that QUP is applied when \(R\leq 7/16\) and RQUS when \(R>7/16\).
Figures 9(c) and 9(d) give examples of QUP and RQUS, respectively. Given \(N=8,M=5,Q=3\), after the bit-reversal permutation the puncturing table is \(\mathscr{T}_{8}=(0,1,0,1,0,1,1,1)\), meaning that the code bits \(x_{1}\), \(x_{3}\), and \(x_{5}\) are punctured. Similarly, the shortening table is \(\mathscr{T}_{8}=(1,1,1,0,1,0,1,0)\), meaning that the code bits \(x_{4}\), \(x_{6}\), and \(x_{8}\) are shortened.
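The table construction is easy to express in code; the following sketch (ours, with illustrative names) reproduces the \(N=8,M=5\) patterns above:

```python
# A sketch (ours) of QUP/RQUS pattern generation; returns the 0-based
# indices of the code bits to delete, reproducing the N=8, M=5 example.

def bit_reverse(i, n):
    return int(format(i, f"0{n}b")[::-1], 2)

def qup_rqus_pattern(N, M, mode="puncture"):
    n = N.bit_length() - 1
    Q = N - M
    zero_head = range(Q) if mode == "puncture" else range(N - Q, N)
    # zeros in the table mark deleted bits; apply the bit-reversal permutation
    return sorted(bit_reverse(i, n) for i in zero_head)

print(qup_rqus_pattern(8, 5, "puncture"))   # -> [0, 2, 4]  i.e. x1, x3, x5
print(qup_rqus_pattern(8, 5, "shorten"))    # -> [3, 5, 7]  i.e. x4, x6, x8
```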
## V Improved polar decoding algorithms
In this section, we briefly describe the basic principles of the improved decoding algorithms for polar codes, such as SCL, SCS, SCH and SCP decoding. Finally, we demonstrate the superior performance of CA-SCL decoding by comparing 5G polar codes with turbo and LDPC codes.
### Code Tree and SC Decoding
In [6], we introduced the compact-stage code tree to uniformly describe SC decoding and its improved variants, such as SCL/SCS/SCH/SCP decoding. Figure 10 gives an example of SC decoding over a compact-stage code tree, which corresponds to the trellis of Figure 4 and consists of eight levels after compacting the stages.
In this code tree, except the leaf nodes and the frozen nodes, each node has two descendants and the corresponding branches are labeled with 0 and 1, respectively. The number written next to each node denotes the path metric from the root to that node, e.g. LLR or a posteriori probability (APP). A decoding path includes a series of branches from the root to one leaf node. In Figure 10, the black circles represent the nodes that are visited and the gray ones are those that are not visited in the search process.
Figure 10: _SC decoding example on the code tree._
Figure 9: _QUP and RQUS scheme for rate compatible polar codes._
Since, at each level associated with an information bit, only the branch with the larger/better path metric is selected for further extension, SC decoding can be regarded as a greedy search over the compact-stage code tree. Hence, if a decision error occurs, it propagates along the extended path. As shown in Figure 10, the SC decoding path "00000011" (marked by the red bold branches) is not optimal due to the level-by-level decision strategy.
### Successive Cancellation List Decoding
In order to mitigate the error propagation and improve the performance of SC decoding, Chen and Niu [39][40] and Tal and Vardy [41][42] independently proposed SCL decoding. In fact, Dumer [74] had proposed a similar idea to improve the performance of RM codes.
Unlike the SC decoder, which keeps only one path at each level, the SCL decoder extends up to \(2L\) paths at each level and selects the \(L\) most reliable paths as survivors. In the end, the path with the largest metric is selected from the survivor list as the final decision. The time complexity of SCL decoding is \(O(LN\log_{2}N)\) and the space complexity is \(O(LN)\).
Figure 11 depicts an example of SCL decoding with list size \(L=2\). At each level, four paths are extended and the corresponding path metrics are calculated. After path sorting, two survivor paths (marked by blue and red bold edges) are selected and stored. Finally, two survivors remain in the list: one path "00000011" with metric 0.20 and the other "00010000" with metric 0.36. Since \(0.36>0.20\), the more reliable path "00010000" is found, whereby the error propagation of SC is efficiently weakened.
### Successive Cancellation Stack Decoding
SCL decoding substantially improves the performance of finite-length polar codes, but the computational complexity is also increased. In order to reduce the decoding complexity, Niu _et al._[45] proposed the SCS algorithm, which performs a depth-first search over the code tree. The SCS decoder traverses the code tree and uses a stack to sort and store the candidate paths. First, the top path is extended. Then, the successor paths are sorted and inserted into the stack. When the top path with the largest metric reaches a leaf node, the decoding process stops and this path is output as the final decision. The main difference between SCL and SCS is that the candidate paths of the former all have the same length, whereas those of the latter have distinct lengths. Since the SCS decoder only extends and computes the necessary path nodes, its time complexity is far below that of the SCL decoder and even close to that of the SC decoder. On the other hand, to achieve the same performance as the SCL decoder, SCS needs a large stack depth \(D\), with a space overhead of \(O(DN)\); in the worst case, the space complexity is up to \(O(LN^{2})\) with \(D=LN\).
Figure 12 gives a simple example of SCS decoding. In the stack, the top path (marked by red bold edges) has the largest metric 0.36 and the second candidate (marked by blue bold edges) has the second largest metric 0.3. The top path "00010000" is therefore output as the final decision, the same as the result of SCL decoding. However, the length of the top path is 8 while that of the second path is 6. Therefore,
Figure 11: SCL decoding example on the code trees.
Figure 12: SCS decoding example on the code tree.
the number of visited nodes in SCS is smaller than that in SCL, so the time complexity of SCS is lower than that of SCL.
In order to trade off the time complexity against the space overhead, Chen and Niu _et al._ devised successive cancellation hybrid (SCH) decoding in [48] by combining SCL and SCS. This algorithm achieves the same performance as SCL and SCS with lower time and space complexity.
### Successive Cancellation Priority Decoding
Guan and Niu _et al._[49] proposed another algorithm, named successive cancellation priority (SCP) decoding, to reduce the time complexity of SCL. The SCP decoder performs priority-first decoding by iteratively interacting a priority queue with the trellis. On one hand, the priority information stored in the priority queue guides the extension of the candidate paths. On the other hand, the trellis calculates and stores the intermediate results. By pruning most of the unnecessary path extensions via the priority queue, the time complexity of the SCP decoder is much lower than that of the standard SCL decoder.
An example of SCP decoding is illustrated in Figure 13. In the priority queue, the survivor path (marked by red bold edges) has the largest metric 0.36 and is ranked at the head of the queue. According to the priority information, the trellis calculates and extends the candidate paths. Hence, the survivor path "00010000" is also output as the final decision. Since only the candidate paths in the queue are extended in the trellis, the time complexity of SCP is lower than that of the standard SCL decoder.
We summarize the characteristics of the four improved SC algorithms in Table 3. All four algorithms can approach the performance of ML decoding. SCL is the baseline for improving the performance of SC decoding, with computational complexity \(O(LN\log_{2}N)\). SCS is its counterpart, achieving the same performance with a low computational complexity of \(O(N\log_{2}N)\) at high SNR, yet with a high space complexity of up to \(O(LN^{2})\). As a hybrid decoding, SCH keeps a good balance between the computational and space complexity. Finally, SCP presents another technique achieving a good tradeoff.
All these improved SC decoding algorithms can be applied to CRC-polar codes. In this case, the SCL/SCS/SCH/SCP decoder outputs the candidate paths to a CRC detector, and the check results are utilized to select the correct codeword. Using these CRC-aided decoding schemes, the performance of CRC-polar codes can substantially outperform that of turbo/LDPC codes.
We compare the BLER performance of SC, CA-SCL, CA-SCS and CA-SCP decoding for the (1024,512) CRC-polar code under the AWGN channel in Figure 14. The polar code is constructed by the GA algorithm and a CRC8 code with the generator polynomial \(g(x)=\) 0x9F is used. We can see that SC decoding has the worst performance. When the search width
Figure 14: Performance comparison of SC, CA-SCL, CA-SCS and CA-SCP decoding for (1024,512) CRC-polar code in AWGN channel.
Figure 13: SCP decoding example on the code tree.
in CA-SCP is configured to be the same as the list size \(L\) in CA-SCL, these two decoders achieve the same performance. Furthermore, when the CA-SCS decoder is set to a large stack depth (e.g. \(D=1024\)), it can also approach the same performance as CA-SCL or CA-SCP.
### Performance Evaluation of 5G Polar Codes
In the 5G standard, CRC-concatenated rate-compatible polar codes (constructed by the QUP or RQUS algorithms) are the basic polar coding schemes. Next we compare the performance of 5G polar codes, turbo codes and LDPC codes under the AWGN channel. The BLER curves vs. \(E_{s}/N_{0}\) with information length \(K=400\) and code rates \(R=1/5\sim 8/9\) are shown in Figure 15. The 5G polar codes are constructed from the parent code with code length \(N=1024\) by the QUP or RQUS schemes. The CA-SCL decoding algorithm is employed with list size \(L=32\), and a CRC8 code with the generator polynomial \(g(x)=\) 0x9F is used. An eight-state turbo code from the 3GPP LTE standard [75] is used as a reference, and the LDPC codes proposed by Qualcomm in the 5G channel coding proposal [76] are applied. The logarithmic maximum a posteriori (Log-MAP) algorithm is applied in turbo decoding with a maximum of \(I_{\max}=8\) iterations, and the belief propagation (BP) algorithm is applied in LDPC decoding with a maximum of \(I_{\max}=50\) iterations.
In most cases, polar codes can achieve additional coding gains relative to turbo or LDPC codes. In the low code rate region \(R=1/5\sim 1/2\), as shown in Figure 15, the polar codes punctured by QUP achieve the same or slightly better performance than the turbo or LDPC codes. In particular, at code rate \(R=1/5\), polar codes outperform the turbo codes by \(0.5\sim 1\) dB. Compared with turbo or LDPC codes, polar codes are a powerful class of channel codes at short to medium code lengths, and the performance gain can be further improved by increasing the list size.
In the high code rate region \(R=2/3\sim 8/9\), compared to turbo codes, a maximum of \(1\sim 1.2\) dB additional gain can be obtained at code rate \(R=8/9\). On the other hand, compared to LDPC codes, a maximum of \(0.8\) dB performance gain can be attained at code rate \(8/9\). In contrast to the low code rate case, the RQUS scheme generates better polar codes here. In addition, both turbo and LDPC codes show an error floor with increasing SNR, whereas CRC-polar codes exhibit no such effect.
**Remark 3**.: _In the 5G standard [10], both the downlink (downlink control indicator (DCI) or broadcast channel (BCH)) and uplink (uplink control indicator (UCI)) message bits are encoded by CRC-polar codes with the QUP/RQUS rate-matching schemes. Accordingly, these control signaling channels mandate CA-SCL decoding [7] as the channel decoding algorithm in order to provide high reliability. As commented in the Matlab 5G toolbox [77], "it is well known that CA-SCL decoding can outperform turbo or LDPC codes and this was one of the major factors in the adoption of polar codes by 3GPP"._
## VI Polar Coding in the Future
In the future 6G system, ultra-reliable transmission, high spectrum efficiency and large system capacity are the primary requirements of wireless communications. In this section, we mainly discuss polar coding techniques to fulfill these requirements. First, we briefly introduce the performance limits of short polar codes for reliable transmission. Second, code construction methods are investigated to improve the performance of polar codes under fading channels. Third, polar coded modulation and polar coded HARQ are discussed in order to increase the spectrum efficiency. In the end, we design the framework of polar
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Algorithm & Searching strategy & Error Performance & Computational complexity & Space complexity \\ \hline SCL & width-first & approaching ML & \(O(LN\log_{2}N)\) & \(O(LN)\) \\ \hline SCS & depth-first & approaching ML & \(O(N\log_{2}N)\) in high SNR & \(O(LN^{2})\) \\ \hline SCH & width/depth & approaching ML & \(O(N\log_{2}N)\) in high SNR & \(O(LN)\sim O(LN^{2})\) \\ \hline SCP & trellis/priority queue & approaching ML & \(O(N\log_{2}N)\) in high SNR & \(O(LN\log_{2}N)\) \\ \hline \end{tabular}
\end{table}
Table 3: _Summary of improved SC algorithms._
processing to jointly optimize the wireless transmission systems.
### Optimal Short Polar Codes
Low latency (\(100\mu s\)) and ultra-high reliability (BLER \(<10^{-6}\sim 10^{-7}\)) are the key performance metrics of 6G wireless transmission [78]. These requirements pose serious challenges for short channel codes.
In 2019, Arikan [79] proposed the polarization-adjusted convolutional (PAC) codes with the Reed-Muller design rule to improve the ML performance of short polar codes. PAC codes have excellent performance and approach the normal approximation (NA) of the finite blocklength capacity in [80]. However, the code rate cannot be flexibly configured, and the complexity of sequential decoding becomes very high when the code rate approaches the capacity.
Piao and Niu _et al._[81] found that simple CRC-polar codes with optimized encoding and decoding can also approach the normal approximation. First, they used the sphere-constraint-based enumeration methods [82] to analyze the minimum weight distribution of CRC-polar codes. In fact, at short code lengths, the generator polynomial of the CRC code significantly affects the performance of CRC-polar codes. Table 4 gives the optimal generator polynomials of CRC codes for short CRC-polar codes with code length \(N=128\). Given the code rates \(R=1/3,1/2,2/3\) and the CRC bit lengths \(m=20,24,16\), the optimal polynomials \(g(x)\) (in hexadecimal) listed in this table are obtained by a brute-force search over the minimum weight distribution, where \(d_{min}\) is the minimum Hamming distance and \(A_{d_{min}}\) is the corresponding enumerator.
Then, they designed a new CRC-aided hybrid decoding (CA-HD) for CRC-polar codes by combining the CA-SCL and CA-SD algorithms. Figure 16 shows the diagram of CA-HD decoding, which includes two decoding modes: an adaptive CA-SCL mode and a CA-SD mode.
As shown in Figure 16, adaptive CA-SCL decoding is first run to estimate the transmitted codeword. If the list size is smaller than the maximum size \(L_{max}\) and the decision results pass the CRC check, the decoding is terminated. Otherwise, when the list size reaches the maximum and the decoding has failed, CA-SD decoding is activated. In order to reduce the search radius of sphere decoding, the CRC bits are re-encoded and input into the SD decoder. In
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(N\) & \(R\) & \(m\) & \(g(x)\) & \(d_{min}\) & \(A_{d_{min}}\) \\ \hline & 1/3 & 20 & 0x1005D1 & 24 & 171 \\ \cline{2-6}
128 & 1/2 & 24 & 0x10001E5 & 16 & 66 \\ \cline{2-6} & 2/3 & 16 & 0x117B7 & 10 & 167 \\ \hline \end{tabular}
\end{table}
Table 4: _Optimal generator polynomials of CRC-Polar codes with code length \(N=128\)._
Figure 15: _BLER performance comparison of RCP codes (punctured by QUP and RQUS algorithms), LTE turbo codes and LDPC codes with information length \(K=400\) and various code rates._
the end, the result of SD decoding is output as the final decision. CA-SCL decoding has low complexity but its performance is inferior to ML decoding. On the contrary, CA-SD decoding is identical to ML decoding, whereas its complexity is very high. Therefore, by carefully integrating CA-SCL and CA-SD, the CA-HD decoding can achieve the optimal tradeoff between performance and complexity.
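The two-mode control flow of CA-HD can be summarized by the short Python sketch below. Here `ca_scl`, `ca_sd` and `crc_check` are placeholders standing in for the full decoders and the CRC detector, and the list-doubling schedule is a common choice for adaptive CA-SCL that we assume for illustration; this is a sketch of the logic in Figure 16, not the exact implementation of [81].

```python
def ca_hd_decode(llr, ca_scl, ca_sd, crc_check, l_max=1024):
    """Schematic CA-HD: adaptive CA-SCL mode first, CA-SD mode as fallback."""
    L = 1
    while L <= l_max:
        candidate = ca_scl(llr, L)    # CA-SCL with the current list size
        if crc_check(candidate):      # CRC passes: terminate early
            return candidate
        L *= 2                        # assumed schedule: double the list size
    # all list sizes up to L_max failed: activate ML-equivalent CA-SD,
    # where the re-encoded CRC bits shrink the sphere radius (inside ca_sd)
    return ca_sd(llr)
```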
We investigate the performance of CRC-polar codes with the code length \(N=128\) and code rates \(R=1/3\), \(1/2\), and \(2/3\) (configured parameters are selected from Table 4) under various algorithms, such as ADSCL (maximum list size \(L_{max}=1024\)), CA-SCL (fixed list size \(L=32\)) and CA-HD (\(L_{max}=1024\)). Figure 17 depicts the BLER performance of the CRC-polar codes. Furthermore, the BLER performance of the (128,64) PAC code, an LDPC code and the NA bound are also illustrated in Figure 17. Here, the LDPC code is constructed as the 5G NR LDPC code with base graph 2 (BG2) and evaluated under the belief propagation (BP) decoding algorithm with 50 iterations.
We observe that the CRC-polar code under CA-SCL and ADSCL achieves better performance than the LDPC code. Furthermore, the performance under CA-HD decoding significantly outperforms that under CA-SCL and ADSCL and approaches the NA bound. In particular, for the (128,64) code, the CRC-polar code is superior to the PAC code and the LDPC code and comes very close to the NA bound, with a small gap of \(0.025\) dB. We conclude from these results that CRC-polar codes with the optimized generator polynomial and CA-HD decoding can almost approach the finite blocklength capacity. Therefore these optimized CRC-polar codes can be regarded as important candidates to provide low-latency and ultra-high-reliability transmission in the 6G wireless system.
On the other hand, for medium blocklengths, the performance of CRC-polar codes is evaluated and compared with turbo/LDPC codes in [6]. For example, given the code length \(N=1024\) and the code rate \(R=\frac{1}{2}\), the CRC-polar code can achieve a \(0.5\sim 1\) dB performance gain over the turbo/LDPC code at a BLER of \(10^{-4}\). Generally, for moderate to long blocklengths, polar codes under SCL decoding with a large list size reach similar or better performance than the turbo/LDPC codes. In addition, CRC-polar codes show no sign of error floors in the high SNR regime, which is a significant advantage over the turbo/LDPC codes.
### Construction in Fading Channels
The construction of polar codes in fast or block fading channels is an important direction for practical applications and has attracted wide attention. For the fast Rayleigh fading channel, Trifonov [83] first presented an iterative algorithm to calculate and track the diversity order and noise variance of the polarized channels. Later, Zhou and Niu _et al._[84] designed two algorithms to find a capacity-equivalent BI-AWGN channel for the Rayleigh channel and constructed the polar codes by using the GA algorithm.
Figure 16: CA-HD decoding for CRC-polar codes.
Figure 17: The performance comparison of CRC-polar codes with various decoding algorithms and PAC codes at the code length \(N=128\).
Recently, Niu and Li [85] established a systematic framework in terms of the polar spectrum to analyze and construct polar codes in various fast fading channels, such as the Rician, Rayleigh and Nakagami channels, which explicitly reveals the relationship between the diversity order and the codeword weight.
On the other hand, for the block fading channel, Bravo-Santos [86] proposed a recursive calculation of the Bhattacharyya parameter, yet with a time-consuming Monte-Carlo computation. Subsequently, Si _et al._[87] designed a two-stage polar coding over channel uses and fading blocks; however, the construction still depends on the recursive calculation of the Bhattacharyya parameter. Niu and Li [88] systematically analyzed the error performance of polar codes by introducing a new concept, named the split polar spectrum. The upper bound on the error probability of the polarized channel is derived to explicitly reveal the relationship between the diversity order \(L\) and the block-wise weight distribution of the codeword.
In a word, the polar codes constructed based on the polar spectrum [85][88] can achieve similar or better error performance than those built by conventional constructions. Due to the advantages of low complexity and high performance, these methods are suitable for the construction of polar codes in wireless communications.
### Polar Coded Modulation and HARQ
In order to fulfill the high spectrum efficiency requirement of 6G wireless communications, extending the concept of channel polarization and designing the polar coded modulation (PCM) and polar coded hybrid automatic repeat request (PC-HARQ) systems become key technologies in wireless data transmission.
Seidl _et al._[89] first established the framework of polar-coded modulation, which includes a two-stage channel polarization transform, i.e., coding polarization and modulation polarization. The PCM framework can be designed based on two coded-modulation schemes, namely the bit-interleaved polar-coded modulation (BIPCM) and the multi-level polar-coded modulation (MLPCM). Figure 18 shows the system architectures of the two schemes. For MLPCM, multiple polar encoders are utilized and a tuple of coded bits is directly mapped to a modulated symbol. On the other hand, for BIPCM, a single polar encoder is used and the coded bit sequence is interleaved and mapped to the modulated symbols, as sketched below. Generally, the bit-to-symbol mapping is very important for the PCM design in order to enhance the polarization effect: Gray mapping and set-partition mapping are suitable for BIPCM and MLPCM respectively. Furthermore, rate matching (e.g. the QUP scheme) and code construction (e.g. the GA algorithm) are also critical techniques in the PCM design.
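As a small illustration of the Gray-labeled bit-to-symbol step of BIPCM mentioned above, the sketch below maps groups of \(m\) interleaved coded bits to \(2^{m}\)-PAM amplitudes. The polar encoder, the interleaver and the power normalization are omitted, and the function is our own toy example rather than a standardized mapping.

```python
import numpy as np

def bipcm_gray_pam(coded_bits, m):
    """Map groups of m interleaved coded bits to 2^m-PAM levels under
    Gray labeling, so that adjacent amplitudes differ in a single bit."""
    b = np.asarray(coded_bits).reshape(-1, m)
    gray = b.dot(1 << np.arange(m - 1, -1, -1))  # m bits -> Gray label
    idx = gray.copy()                            # inverse Gray code gives
    shift = 1                                    # the amplitude index
    while shift < m:
        idx ^= idx >> shift
        shift <<= 1
    return 2 * idx - (2**m - 1)                  # levels ..., -3, -1, +1, +3, ...

# e.g. bipcm_gray_pam([0,0, 0,1, 1,1, 1,0], 2) returns [-3, -1, 1, 3]:
# the labels 00, 01, 11, 10 of neighboring levels differ in exactly one bit
```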
For MLPCM, Zhou and Niu _et al._[90] extended the PW metric and presented a universal construction for MLPCM to facilitate practical implementation. Then Khoshnevis _et al._[91] established a throughput maximization scheme by using the set-partition (SP) mapping and a rate-matching algorithm. Furthermore, Dai _et al._[92] introduced a spatially coupled structure among multiple coded blocks and designed an asynchronous polar-coded modulation scheme to improve the transmission reliability of MLPCM.
On the other hand, for BIPCM, Shin _et al._[93] found mapping patterns for pulse-amplitude modulation (PAM) with Gray labeling, but the search complexity is very high. Then, using the constellation symmetry, Chen and Niu _et al._[94] proposed an efficient search algorithm to find the optimal
Figure 18: System architecture of PCM.
mapping of BIPCM. In order to improve the performance of BIPCM, Tian _et al._[95] designed a joint successive cancellation decoding algorithm by combining the demapping and deinterleaving into the SC decoder. Mahdavifar _et al._[96] considered the multi-channel model of BICM and constructed the compound polar code.
Hybrid automatic repeat request (HARQ) is another key technology to enhance the link reliability and throughput in practical wireless systems. Chen and Niu first proposed two types of polar-coded HARQ schemes, namely the incremental redundancy (IR) PC-HARQ [97] and the chase combining (CC) PC-HARQ [98]. By using the QUP scheme, the original IR-PC-HARQ scheme [97] can achieve the same or higher throughput than turbo/LDPC-coded HARQ, but its latency is slightly higher. On the other hand, the CC-PC-HARQ [98] is easy to implement, while its performance is limited due to the small coding gain.
Later, Li _et al._[99] proposed the incremental freezing (IF) scheme so as to achieve coding gain over retransmissions. Further, an adaptive IR scheme based on the polarizing matrix extension (PME) was proposed in [100]. By using the QUP method, the PME-based PC-HARQ scheme constructs a longer polar code over multiple transmissions. Compared with the IF scheme, the PME scheme can obtain additional coding gain from the enhanced polarization effect.
In summary, polar coded modulation and HARQ will become key supports of 6G wireless transmission. For practical applications, the construction method, rate matching and mapping pattern need to be further explored in the future.
### Polar Processing
Polarization effects exist in almost every unit of the communication system, not only in the coding module. Theoretically, when the code length goes to infinity, polar coding can achieve the corresponding limits of various communication scenarios, such as lossless/lossy source coding, multiple access, broadcasting, relay and distributed communication. As stated in [101], Korada pointed out that polarization is almost optimal for everything. From the viewpoint of practical application, Niu _et al._[102] proposed the framework of polar coded transmission, named polar processing, as a new design methodology to fulfill the high spectrum efficiency requirement of the 6G wireless system.
Figure 19 illustrates the system architecture of polar processing. The architecture consists of three-stage polarization: coding polarization, modulation polarization and signal polarization. In the first stage, i.e., signal polarization, many signal processing techniques exhibit a general polarization effect. For example, in the multiple-input multiple-output (MIMO) system, different antenna links have distinct channel reliabilities. Similarly, in the non-orthogonal multiple access (NOMA) system, different users undergo diverse channel conditions. In multi-carrier systems, a similar phenomenon can be observed. Therefore, each individual antenna/user/carrier can be regarded as a generalized polarized channel, and the reliability distinction among these channels reveals the polarization in the signal space. In the second stage, i.e., modulation polarization, one signal stream is further decomposed into many modulated-bit subchannels with different reliabilities. Finally, in the third stage, i.e., coding polarization, one or multiple polar codes are used to match the modulated-bit polarized channels.
By using the three-stage polarization method, the polar processing transmitter glues the coding, modulation and signal processing into a joint polarization architecture so as to dramatically improve the system performance. Meanwhile, polar processing forms a joint successive cancellation structure, whereby CA-SCL decoding, soft demodulation and soft signal detection can be integrated into a low-complexity compound SCL receiver, rather than the complex iterative calculation of turbo processing.
Under the framework of polar processing, Dai and Niu _et al._[103] designed the polar-coded MIMO (PC-MIMO) system by bit/symbol/antenna polarization. Then they [104] proposed the polar-coded NOMA (PC-NOMA) system by bit/symbol/user polarization. Later, Li and Niu _et al._[105] investigated the polar-coded generalized frequency division multiplexing (GFDM) systems by similar methods. Recently, Piao and Niu _et al._[106] considered the polar-coded precoding system by designing a unitary finite-feedback transmit precoder.
Given different configurations of MIMO (\(1\times 1,2\times 2,4\times 4,8\times 8\)) and \(64\)-QAM modulation, in the case of BLER \(=10^{-4}\), we evaluate the spectrum efficiency of three coded MIMO systems, that is, PC-MIMO, turbo-coded MIMO (TC-MIMO) and LDPC-coded MIMO
(LC-MIMO) in Figure 20. The polar codes are constructed by using the method in [103] and CA-SCL decoding is used. The turbo codes are taken from the LTE standard [75] and decoded with the Log-MAP algorithm. The LDPC codes are taken from the 5G standard [10] and BP decoding is applied.
As shown in Figure 20, for all the MIMO configurations, PC-MIMO can achieve \(1\sim 2\) dB performance gain over TC-MIMO or LC-MIMO. Since polar processing jointly polarizes the entire communication system, PC-MIMO can significantly improve the spectrum efficiency. It follows that PC-MIMO is a powerful candidate technology to fulfill the high-efficiency transmission requirement of the 6G system.
**Remark 4**.: _By now, polar codes have been adopted as the coding standard of the control channels in the 5G wireless system. Due to the constraints of the control channels, only the concatenated coding, construction and rate matching of polar codes are standardized, while other advanced techniques remain open. The application in 5G is only a starting point for polar codes in practical implementation. Looking to the future, the polar coded transmission, or equivalently polar processing, will uncover a universal and powerful methodology to optimize communication systems. Due to the double advantages of performance and implementation, we believe that polar codes and polar processing will become more popular in the 6G wireless system, satellite communication systems, microwave communication systems, etc._
## VII Tao and polarization: the traditional and the modern
In this paper, we review the basic principles of polar codes, their application in the 5G standard and the promising directions in the future. As a great theoretical breakthrough, the invention of polar codes is an important milestone of information theory and channel coding, uncovering a constructive method for approaching the channel capacity. For practical application, polar codes with concatenated coding and CA-SCL decoding fulfill the high reliability requirement of the 5G wireless system at short to medium code lengths. In the future, polar processing may guide a revolution in system design and open a new era of communication technology.
When we look back at traditional Chinese culture, we find a fantastic correspondence between polarization and Tao. The famous work of Taoism, the I Ching [107], explains the principle of change. That is to say, Tai Chi gives rise to Liang Yi, Liang Yi gives rise to the four phenomena (Si Xiang), Si Xiang gives rise to the Eight Trigrams, and auspiciousness is understood through the Eight Trigrams. According to legend, the ancient Chinese figure Fu Xi invented the eight trigrams. The modern binary number system, the basis for binary code, was invented by Gottfried Leibniz in 1689. He encountered the I Ching
Figure 19: _System architecture of polar processing._
Figure 20: _Performance comparison of PC-MIMO, TC-MIMO and LC-MIMO._
and noted the similar representation between the binary numbers and the eight trigrams [108]. Recently, we found a one-to-one mapping between the eight trigrams and the code tree of polar codes, as shown in Figure 21.
We observe that Tai Chi is mapped to the root node, and Yin Yi and Yang Yi are associated with the polarized channels \(W_{2}^{(1)}\) and \(W_{2}^{(2)}\) respectively. Furthermore, the four phenomena, namely Greater Yang, Lesser Yang, Lesser Yin and Greater Yin, correspond to the four polarized channels \(W_{4}^{(i)},i=1,2,3,4\). Finally, the eight trigrams are mapped one-to-one to the eight polarized channels \(W_{8}^{(i)},i=1,2,\cdots,8\). The eight trigrams indicate everything in the world and have distinct manifestations. On the other hand, the polarized channels have different reliabilities due to the channel polarization. It seems that the I Ching provides an interesting interpretation of channel polarization. If we deeply explore the ideas of Taoism, such ancient Chinese philosophical thought may guide new insights into the design and optimization of polar codes.
Spanning 3,000 years, the traditional Taoism and the modern coding theory contrast finely with each other. This is an amazing orchestration between classical philosophy and modern technology!
## Acknowledgement
This work is supported in part by the Key Program of National Natural Science Foundation of China (No. 92067202), in part by the National Natural Science Foundation of China (No. 62071058), and in part by the Major Key Project of PCL (PCL2021A15).
## Notes
\({}^{1}\)In fact, Stolte [109] also independently found the construction method of polar codes, whereas the proof of the polarization phenomenon and the capacity-achieving property should first be credited to Arikan.
|
2310.17225 | Long-term Orbital Period Variation of Hot Jupiters from Transiting Time
Analysis using TESS Survey Data | Many hot Jupiters may experience orbital decays, which are manifested as
long-term transit timing variations. We have analyzed 7068 transits from the
Transiting Exoplanet Survey Satellite (TESS) for a sample of 326 hot Jupiters.
These new mid-transit time data allow us to update ephemerides for these
systems. By combining the new TESS transit timing data with archival data, we
search for possible long-term orbital period variations in these hot Jupiters
using a linear and a quadratic ephemeris model. We identified 26 candidates
that exhibit possible long-term orbital period variations, including 18
candidates with decreasing orbital periods and 8 candidates with increasing
orbital periods. Among them, 12 candidates have failed in our leave-one-out
cross-validation (LOOCV) test and thus should be considered as marginal
candidates. In addition to tidal interaction, alternative mechanisms such as
apsidal precession, R{\o}mer effect, and Applegate effect could also contribute
to the observed period variations. The ephemerides derived in this work are
useful for scheduling follow-up observations for these hot Jupiters in the
future. The Python code used to generate the ephemerides is made available
online. | Wenqin Wang, Zixin Zhang, Zhangliang Chen, Yonghao Wang, Cong Yu, Bo Ma | 2023-10-26T08:23:49Z | http://arxiv.org/abs/2310.17225v2 | Long-term Orbital Period Variation of Hot Jupiters from Transiting Time Analysis using TESS Survey Data
###### Abstract
Many hot Jupiters may experience orbital decays, which are manifested as long-term transit timing variations. We have analyzed 7068 transits from the Transiting Exoplanet Survey Satellite (TESS) for a sample of 326 hot Jupiters. These new mid-transit time data allow us to update ephemerides for these systems. By combining the new TESS transit timing data with archival data, we search for possible long-term orbital period variations in these hot Jupiters using a linear and a quadratic ephemeris model. We identified 26 candidates that exhibit possible long-term orbital period variations, including 18 candidates with decreasing orbital periods and 8 candidates with increasing orbital periods. Among them, 12 candidates have failed in our leave-one-out cross-validation (LOOCV) test and thus should be considered as marginal candidates. In addition to tidal interaction, alternative mechanisms such as apsidal precession, Romer effect, and Applegate effect could also contribute to the observed period variations. The ephemerides derived in this work are useful for scheduling follow-up observations for these hot Jupiters in the future. The Python code (PdotQuest) used to generate the ephemerides is made available online.
Exoplanet systems -- Transit photometry -- Transit timing variation method +
Footnote †: journal: ApJS
## 1 Introduction
Hot Jupiters (HJs) are a type of exoplanet with masses comparable to Jupiter and orbital periods shorter than ten days. As a result, they can be readily detected through ground-based transit surveys and have been the subject of extensive long-term investigations. Due to their close proximity to the host stars, they are expected to undergo intense tidal interactions with the stars (Ogilvie, 2014). The tidal bulges induced by hot Jupiters on their host stars generate torques that can transfer angular momentum from the planets to the stars (Rasio et al., 1996; Levrard et al., 2009), known as equilibrium tides (Counselman, 1973; Rasio et al., 1996). Particularly, when the orbital period of a hot Jupiter is shorter than the rotation period of its host star (Penev et al., 2018), the star may experience spin-up, and the hot Jupiter may spiral inward over time, potentially leading to tidal disruption or destruction by Roche lobe overflow or atmospheric evaporation (Levrard et al., 2009; Jackson et al., 2009; Matsumura et al., 2010). The decay rates can vary depending on the magnitude of the stellar tidal dissipation for a given configuration (Levrard et al., 2009; Matsumura et al., 2010). Population studies offer additional evidence for orbital decay, such as the scarcity of gaseous giants with periods less than one day (Jackson et al., 2008; Hansen, 2010; Penev et al., 2012; Ogilvie, 2014), the unusually rapid rotation of certain hot Jupiters' host stars (Penev et al., 2018), and the rarity of hot Jupiters around subgiants (Hansen, 2010; Schlaufman and Winn, 2013).
The orbital decay of hot Jupiters can be revealed through monitoring of their transit timings over decades, known as transit timing variations (TTVs). Currently,
the detection of orbital period decay has been confirmed in the WASP-12 system through various TTV studies (Maciejewski et al., 2016; Patra et al., 2017; Yee et al., 2020). Another candidate is WASP-4 b (Bouma et al., 2019), although the evidence is less compelling. Long-term TTVs of hot Jupiters can also be induced by other physical mechanisms, such as planetary mass loss (Valsecchi et al., 2015; Jackson et al., 2016), apsidal precession (Miralda-Escude, 2002; Ragozzine and Wolf, 2009), the Romer effect, i.e. the line-of-sight acceleration due to wide stellar companions (Bouma et al., 2019), and the Applegate mechanism (Applegate, 1992). Short-term TTVs can be used to detect extra planets in the same planetary system (Holman and Murray, 2005; Agol et al., 2005). Additionally, overestimation of the measurement precision of transit time data could also introduce false TTV signals (Mallonn et al., 2019).
To test the various scenarios of the long-term orbital period variations, a large and precise follow-up transit observation dataset is required. The Transiting Exoplanet Survey Satellite (TESS; Ricker et al., 2014), launched in 2018, has provided such a great opportunity. It offers the latest and precise transit timing measurements for a lot of known hot Jupiters, which are suitable for long-term TTV studies. By combining the high-precision 2-minute cadence transit data provided by TESS with archival data from previous work, new constraints can be placed on the period change rate of these hot Jupiters and the tidal dissipation factor of their host stars. Furthermore, transit ephemeris updates provided by TESS can significantly enhance our capability to predict the future transit times of hot Jupiters, which is crucial for future space telescopes targeting these hot Jupiters.
The paper is organized as follows. Section 2 describes our sample selection and timing analysis. Section 3 introduces the transit timing models and the model fitting processes. Section 4 presents our results, including the candidate systems exhibiting period decay, period increase, and constant period. In Section 5 and 6, we summarize our main findings, compare our study with previous studies, and discuss the possible physical origin of the observed long-term timing variations.
## 2 Sample Selection and TESS Data Analysis
In this study, we select transiting hot Jupiters (with an orbital period less than 10 days and a mass larger than 0.3 \(M_{J}\)) observed by TESS. We also require the planet to have transit data predating the TESS mission, thus providing a longer time baseline, which is crucial for detecting long-term orbital period variations. After a rigorous search, we find a total of 326 hot Jupiters in our whole sample.
We then download the 2-minute cadence Presearch Data Conditioning-Simple Aperture Photometry (PDC-SAP) TESS light curves (Team, 2021) from the MAST website. Light curves showing excessive noise are excluded from the following analysis. We fit a Mandel and Agol (2002) model to the phase-folded light curve of each TESS sector. The parameters used for the transit model include the zero epoch \(T_{0}\), the period \(P\), the impact parameter \(b\), the stellar density in \(g/cm^{3}\), and the planet-to-star radius ratio \(R_{p}/R_{*}\). The eccentricity and argument of periastron are fitted using the parameterization \(\sqrt{e}\cos\omega\) and \(\sqrt{e}\sin\omega\). In addition, two quadratic limb darkening parameters are fitted for each band. For each transiting hot Jupiter, we first fit the combined light curve before fitting the individual transits, with WASP-12 b shown as an example in Figures 1-3. In Figure 1 we show the phase-folded light curve of WASP-12 b and the best-fit transit light curve model, while in Figure 2 we show all the individual transit light curves of the first sector of TESS data for WASP-12 b. All the derived TESS transit times for WASP-12 b are displayed in Figure 3. The epoch number is calculated relative to a reference zero epoch from the literature. For our sample of 326 hot Jupiters, we obtain a total of 7,068 TESS light curves and mid-transit times (Table 1).
from the Exoplanet Transit Database (ETD; Poddany et al., 2010) in our analysis when necessary.
### Two Transit-timing Models
We fit two models to the transit timing data. The first model assumes a constant orbital period and hence a linear ephemeris:
\[T_{N}=T_{0}+NP, \tag{1}\]
where \(T_{N}\), \(T_{0}\), N and \(P\) are the calculated mid-transit times, the reference mid-transit time, the number of orbits counted from the designated reference transit, and the orbital period of the planet respectively. The free parameters \(T_{0}\) and \(P\) are to be fitted in this model, with initial guess values taken from discovery paper.
In the second model, we assume the planet has a constant rate of change of orbital angular momentum, which causes a constant period change rate \(a=\frac{dP}{dt}=\dot{P}\), usually in units of ms/yr. The corresponding ephemeris model can be calculated using the following two equations:
\[T_{N} = T_{N-1}+P_{N-1} \tag{2}\] \[P_{N} = P_{N-1}+a(P_{N-1}+P_{N})/2, \tag{3}\]
where \(T_{N}\), \(T_{0}\), \(N\), \(P_{0}\), and \(a\) are the calculated mid-transit times, the reference mid-transit time, the number of orbits counted from the designated reference transit, the initial orbital period of the planet, and the constant period change rate, respectively. Eqn. (3) means each successive orbital period is slightly different from the previous one. Notice that the orbital period change rate \(a\) can also be expressed as \(a=\frac{dP}{dt}=\frac{1}{P}\frac{dP}{dN}\), which can be used to convert our equations to the quadratic equation form used in Maciejewski et al. (2021). The three free parameters to be fitted are the reference epoch \(T_{0}\), the initial period \(P_{0}\) at the reference epoch, and the constant period change rate \(a\), with initial guesses for the first two taken from literature values.
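As a concrete illustration, a minimal Python sketch of the two ephemeris models follows, with the recursion of Eqs. (2)-(3) solved step by step; the helper names are ours, not part of PdotQuest, and note that \(a\) is dimensionless in the recursion, so a rate quoted in ms/yr must first be converted to days per day.

```python
import numpy as np

MS_PER_YR = 1000.0 * 365.25 * 86400.0   # divide a [ms/yr] by this to get day/day

def linear_ephemeris(t0, p, n_max):
    """Eq. (1): T_N = T_0 + N*P for N = 0..n_max."""
    return t0 + p * np.arange(n_max + 1)

def quadratic_ephemeris(t0, p0, a, n_max):
    """Eqs. (2)-(3) with a constant (dimensionless) period change rate a."""
    times, p = [t0], p0
    for _ in range(n_max):
        times.append(times[-1] + p)          # Eq. (2): T_N = T_{N-1} + P_{N-1}
        p = p * (1 + a / 2) / (1 - a / 2)    # Eq. (3) solved for P_N
    return np.array(times)

# e.g. for a WASP-12 b-like decay of -30 ms/yr: a = -30.0 / MS_PER_YR
```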
### A New Tool for Long-term Transit Timing Data Analysis
We have developed a new software tool, PdotQuest, to analyze long-term transit timing variations using both a linear ephemeris model and a quadratic ephemeris model. This software allows for efficient fitting of the data by simply inputting a column of mid-transit times, a column of transit time measurement uncertainties, and an initial orbital period corresponding to the earliest transit time. The best-fit results for the two ephemeris models can then be displayed and compared.
In the software, we employ the emcee package (Foreman-Mackey et al., 2013) to determine the optimal model parameters by minimizing the \(\chi^{2}\) statistic and calculate uncertainties of all fitting parameters. We run the MCMC sampling with 100 walkers and burn the initial 20% of the 500 steps used for each walker to ensure convergence. Broad uniform priors are applied to all fitting parameters.
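A minimal sketch of such a sampler set-up is shown below. It uses the quadratic form \(T_{N}\approx T_{0}+P_{0}N+\frac{1}{2}aP_{0}N^{2}\) implied by \(a=\frac{1}{P}\frac{dP}{dN}\); the toy data, the implicitly flat priors, and all variable names are illustrative assumptions rather than the exact PdotQuest implementation.

```python
import numpy as np
import emcee

def log_prob(theta, n, t_obs, sigma):
    """Gaussian log-likelihood of the quadratic ephemeris; theta = (T0, P0, a)."""
    t0, p0, a = theta
    model = t0 + p0 * n + 0.5 * a * p0 * n**2
    return -0.5 * np.sum(((t_obs - model) / sigma) ** 2)

# toy transit-time series standing in for real data
rng = np.random.default_rng(0)
n = np.sort(rng.choice(5000, size=80, replace=False)).astype(float)
sigma = np.full_like(n, 5e-4)                       # ~43 s timing uncertainties
t_obs = 2454000.0 + 1.09142 * n + rng.normal(0.0, sigma)

nwalkers, nsteps, ndim = 100, 500, 3
theta0 = np.array([2454000.0, 1.09142, 0.0])        # initial guess (T0, P0, a)
p0 = theta0 + np.array([1e-4, 1e-7, 1e-11]) * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(n, t_obs, sigma))
sampler.run_mcmc(p0, nsteps)
chain = sampler.get_chain(discard=nsteps // 5, flat=True)  # burn the first 20%
t0_fit, p0_fit, a_fit = np.median(chain, axis=0)           # parameter estimates
```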
A "n-sigma rejection" iterative fitting scheme has been employed for data clipping to remove outliers during the fitting process. The standard deviation of the fitting residuals is calculated, and any data point with residual value outside of a n-\(\sigma\) range from the residual mean is eliminated. After a few trial, we find the "3-sigma rejection" scheme sometimes removes useful data points from the fitting and the "5-sigma rejection" scheme sometimes keeps too many outliers. Thus, we decide to utilize both of the "3-sigma rejection" and the "5-sigma rejection" clipping strategy during our fitting, and reach a conclusion based on the two fitting results.
The Bayesian Information Criterion (BIC) is utilized to compare the relative quality of each model in describing the transit timing data, given the different number of free parameters among the models. The BIC is defined as:
\[BIC=\chi^{2}+k\log n, \tag{4}\]
where n is the sample size, and k is the number of free parameters in the model. A lower BIC score usually signals a better model, with \(\Delta BIC>10\) corresponding to'very strong' evidence in favor of the model with smaller BIC (Kass and Raftery, 1995). In our analysis, we select hot Jupiter candidates showing signs of long-term period change using the following two criteria: (a) the
\begin{table}
\begin{tabular}{c c c c} \hline \hline System & Epoch & \(T_{e}\)(BJD\({}_{\rm TDB}\)) & Uncertainty (days) \\ \hline CoRoT-1 b & 1458 & 2458469.06773 & 0.00085 \\ \hline \end{tabular} Note. – Only a portion of this table is displayed here to illustrate its format and content. The complete, machine-readable version of the table can be accessed in the online version of this work.
\end{table}
Table 1: Transit times.
constant period change rate \(a\) is at least 3-\(\sigma\) away from zero, and (b) \(\Delta BIC>10\), where \(\Delta BIC\) is the BIC difference between the linear and quadratic model fits.
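In code, the model comparison and the two selection criteria reduce to a few lines; the helper names below are ours, for illustration only.

```python
import numpy as np

def bic(t_obs, model, sigma, k):
    """Eq. (4): BIC = chi^2 + k*log(n)."""
    chi2 = np.sum(((t_obs - model) / sigma) ** 2)
    return chi2 + k * np.log(len(t_obs))

def is_candidate(a, a_err, bic_linear, bic_quad):
    """Criteria (a) and (b): |a| > 3 sigma_a and Delta BIC > 10."""
    return abs(a) > 3 * a_err and (bic_linear - bic_quad) > 10
```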
For all of our long-term period variation candidates, we also introduce a new test called "leave-one-out cross-validation" (LOOCV), in which we remove one transit data point at a time and re-fit the remaining data points. In this way, we are able to identify the most influential data points and assess the robustness of the quadratic model fitting against the removal of any individual data point. This new test is motivated by the fact that some transit time data from the literature have unrealistically small error bars and can yield significantly biased fitting results, with HD 189733 b as an example shown in Section 4.
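A sketch of the LOOCV loop follows, again with a placeholder fitting routine `fit_quadratic` that returns the period change rate, its uncertainty and the \(\Delta BIC\) of the clipped fit.

```python
import numpy as np

def loocv_test(n, t_obs, sigma, fit_quadratic):
    """Re-fit after removing each transit time in turn; the candidate passes
    only if criteria (a) and (b) survive the removal of any single point."""
    for i in range(len(n)):
        mask = np.arange(len(n)) != i
        a, a_err, dbic = fit_quadratic(n[mask], t_obs[mask], sigma[mask])
        if not (abs(a) > 3 * a_err and dbic > 10):
            return False, i               # index of the most influential point
    return True, None
```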
## 4 Results
In this section, we present the fitting results for each hot Jupiter analyzed from our sample. We divide the whole hot Jupiter sample into three different categories: 18 hot Jupiters showing signs of orbital period decay, 8 hot Jupiters showing signs of orbital period increase, and 300 hot Jupiters showing no signs of period change. All the ephemerides derived in this work are made available online (Table 2).
Figure 2: Individual transit light curve of WASP-12b from sector 1 observations of TESS.
### Candidates with Decreasing Orbital Periods
We have 18 hot Jupiters showing signs of orbital period decay. Among them, the last 7 (from Sections 4.1.12 to 4.1.18) have failed to pass the LOOCV test, i.e., upon the removal of a single data point, they no longer meet the two criteria we set above for identifying long-term period variation candidates.
#### 4.1.1 WASP-12 b
WASP-12 b is an ultra-hot Jupiter first reported by Hebb et al. (2009), with a mass of 1.47 \(M_{J}\) and radius of 1.9 \(R_{J}\). It orbits a 6150 K late F-type star with a period of 1.091 days. The host star properties are consistent with a \(1.3M_{\odot}\) main sequence star or a \(1.2M_{\odot}\) subgiant without a convective core (Weinberg et al., 2017). Its decreasing orbital period was first detected by Maciejewski et al. (2016), and subsequent studies have confirmed the period change (Patra et al., 2017; Baluev et al., 2019) and established orbital decay as its cause using transit and occultation observations (Yee et al., 2020; Turner et al., 2021). The most recent research conducted by Wong et al. (2022) revealed a decay rate of \(-29.81\pm 0.94\) ms/yr, while Hagey et al. (2022) obtained a result of \(-29.1\pm 1.0\) ms/yr based on the ETD transit times. We have analyzed 189 mid-transit time data for this system (107 from literature, 82 from TESS). Our best-fitting result is \(\dot{P}=-30.19\pm 0.92\) ms/yr, with the timing residual O-C diagram shown in Figure 4. The downward parabola in the plot clearly reveals the secular period decrease trend. WASP-12 b remains the best orbital period decay candidate.
#### 4.1.2 WASP-4 b
WASP-4 b is a hot Jupiter first identified by Wilson et al. (2008), with a mass of 1.2 \(M_{J}\) and radius of 1.4 \(R_{J}\). It orbits a G7V star on a 1.338-day orbit. Like WASP-12 b, WASP-4 b is also a well-studied planet showing signs of orbital decay. Bouma et al. (2019) first reported the orbital variation of WASP-4 b to be \(-12.6\pm 1.2\) ms/yr using TESS and ground-based observations, and Southworth et al. (2019) found a period decay rate of \(-9.2\pm 1.1\) ms/yr using only ground-based observations. Baluev et al. (2020) have re-analyzed 124 transit light curves of WASP-4 b and obtained a period derivative of \(-5.94\pm 0.39\) ms/yr. They have also analyzed radial velocity (RV) data and attributed the orbital period variation to the line-of-sight acceleration caused by a wide-orbit companion. Recently, a comprehensive analysis conducted by Turner et al. (2022) did not find acceleration in the RV data, and suggests the possible existence of another wide-orbit high-mass planet WASP-4 c in this system. They tend to use the orbital decay scenario to explain the TTV seen in WASP-4 b, with a rate of \(-7.33\pm 0.71\) ms/yr. We have analyzed 122 mid-transit time data (75 from literature, 47 from TESS). Our analysis yields a period change rate of \(\dot{P}=-6.43\pm 0.55\) ms/yr (Figure A1), which is consistent with the \(-5.81\pm 1.58\) ms/yr value reported by Ivshina and Winn (2022) and the \(-4.8\pm 1.4\) ms/yr value reported by Maciejewski et al. (2022), but with a smaller error bar.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{ System} & \(T_{0}\)(BJD\({}_{\rm TDB}\)) & Uncertainty (days) & \(P_{0}\)(days) & Uncertainty (days) & \(\dot{P}\) (ms/yr) & \(\sigma\) rejection \\ \hline CoRoT-1 b & 2454138.32729 & 0.00005 & 1.50896863 & 0.00000005 & / & / \\ CoRoT-2 b & 2453566.47840 & 0.00028 & 1.74300015 & 0.00000035 & -21.65 \(\pm\) 2.96 & 5 \\ CoRoT-2 b & 2453566.47894 & 0.00028 & 1.74299976 & 0.00000034 & -19.12 \(\pm\) 2.90 & 3 \\ \hline \end{tabular} Note. – The table contains ephemerides for both samples fitted by a linear model and TTV candidates fitted by a quadratic model. The ephemerides of TTV candidates are presented on two separate lines as they were fitted using different sigma-rejection schemes. Only a portion of this table is displayed here to illustrate its format and content. The complete, machine-readable version of the table can be accessed in the online version of this work.
\end{table}
Table 2: Ephemerides.
Figure 3: TESS transit times for WASP-12 b. The epoch number is calculated relative to a reference zero epoch from the literature.
#### 4.1.3 CoRoT-2 b
CoRoT-2 b is a hot Jupiter discovered by Alonso et al. (2008), with a mass of 3.3 \(M_{J}\) and radius of 1.5 \(R_{J}\). It orbits an active G7V star on a 1.743-day orbit. Since its discovery, CoRoT-2 b has been the subject of numerous studies, including observations of its atmosphere and studies of its orbital dynamics. The study of Ivshina and Winn (2022) gave a period change rate of \(\dot{P}=-103.76\pm 6.33\) ms/yr, where they have used data from Ozturk and Erdem (2019). We notice a significant disparity between the error bars of the transit times ranging from 2454237.53562 to 2454378.7143 JD (Ozturk and Erdem, 2019) and the fitting residuals, which is shown in the top panel of Figure 5. For the purpose of assigning realistic uncertainties to the data, we have employed the formula \(e_{new}=\sqrt{\sigma_{resi}^{2}+e^{2}}\) to manually inflate the error bars of the data points shown in Figure 5, where \(\sigma_{resi}\) is the standard deviation of the fitting residuals and \(e\) is the original measurement uncertainty. A comparison of the transit timing data before and after the uncertainty inflation is shown in Figure 5. We have analyzed 123 mid-transit time data (118 from literature, 5 from TESS). With the inclusion of TESS data, our best-fitting result is now \(\dot{P}=-21.65\pm 2.96\) ms/yr using the 5-\(\sigma\) rejection scheme and \(-19.12\pm 2.90\) ms/yr using the 3-\(\sigma\) rejection scheme (Figure A2), much smaller than the result from Ivshina and Winn (2022).
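For reference, the inflation step amounts to a single NumPy expression; a minimal sketch, assuming the residuals come from a preliminary linear-ephemeris fit as described above.

```python
import numpy as np

def inflate_errors(e, resid):
    """e_new = sqrt(sigma_resi^2 + e^2), with sigma_resi the standard
    deviation of the (linear-fit) timing residuals."""
    return np.sqrt(np.std(resid) ** 2 + np.asarray(e) ** 2)
```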
#### 4.1.4 Hat-P-37 b
HAT-P-37 b is a 1.2 \(M_{J}\), 1.2 \(R_{J}\) hot Jupiter discovered by Bakos et al. (2012). It orbits a G-type star with a period of 2.797 days. Baluev et al. (2019) homogeneously analyzed the light curves of HAT-P-37 b and found a period change rate of \(-27.2\pm 8.8\) ms/yr, which showed a slight preference for the orbital decay model
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multicolumn{1}{c}{ System} & \(\dot{P}\) (ms/yr) & BIC\({}_{linear}\) & BIC\({}_{quad}\) & \(\dot{P}\) (ms/yr) & BIC\({}_{linear}\) & BIC\({}_{quad}\) & LOOCV \\ & (5-\(\sigma\) rejection) & & & (3-\(\sigma\) rejection) & & & \\ \hline CoRoT-2 b & -21.65 \(\pm\) 2.96 & 634.56 & 579.03 & -19.12 \(\pm\) 2.90 & 332.00 & 274.41 & pass \\ HD189733 b & -8.44 \(\pm\) 2.44 & 58.98 & 47.33 & -8.69 \(\pm\) 2.42 & 41.95 & 25.08 & fail \\ HAT-P-37 b & -34.29 \(\pm\) 6.67 & 221.71 & 195.71 & -34.03 \(\pm\) 6.74 & 221.71 & 195.71 & pass \\ KELT-16 b & -27.82 \(\pm\) 4.76 & 103.63 & 70.77 & -26.64 \(\pm\) 4.80 & 86.95 & 56.42 & pass \\ TrES-1 b & -14.82 \(\pm\) 2.37 & 355.98 & 321.61 & -14.89 \(\pm\) 2.35 & 249.19 & 203.68 & pass \\ TrES-3 b & -8.77 \(\pm\) 0.47 & 1834.16 & 1510.6 & -8.51 \(\pm\) 0.48 & 1695.13 & 1377.28 & fail \\ TrES-5 b & -9.71 \(\pm\) 2.11 & 216.66 & 194.11 & -8.97 \(\pm\) 2.06 & 169.19 & 153.79 & pass \\ WASP-4 b & -6.43 \(\pm\) 0.56 & 342.39 & 213.62 & -6.47 \(\pm\) 0.58 & 276.84 & 160.39 & pass \\ WASP-10 b & -27.74 \(\pm\) 3.49 & 186.51 & 123.57 & -28.03 \(\pm\) 3.65 & 180.41 & 116.80 & pass \\ WASP-12 b & -30.19 \(\pm\) 0.92 & 1256.13 & 222.20 & -30.37 \(\pm\) 0.95 & 1101.08 & 216.94 & pass \\ WASP-16 b & -51.98 \(\pm\) 12.87 & 99.24 & 84.17 & -53.33 \(\pm\) 13.99 & 94.21 & 78.96 & fail \\ WASP-19 b & -1.64 \(\pm\) 0.29 & 693.77 & 661.02 & -1.59 \(\pm\) 0.29 & 618.58 & 604.08 & fail \\ WASP-22 b & -71.76 \(\pm\) 17.12 & 38.79 & 21.64 & -69.15 \(\pm\) 16.86 & 35.91 & 19.44 & pass \\ WASP-45 b & -169.21 \(\pm\) 21.09 & 350.59 & 289.88 & -166.11 \(\pm\) 21.67 & 344.01 & 285.29 & pass \\ WASP-47 b & -48.45 \(\pm\) 14.14 & 77.18 & 65.36 & -49.98 \(\pm\) 14.15 & 72.24 & 59.76 & fail \\ WASP-80 b & -23.04 \(\pm\) 5.15 & 4017.15 & 3996.25 & -31.99 \(\pm\) 4.91 & 343.96 & 301.65 & fail \\ XO-3 b & -31.59 \(\pm\) 5.24 & 350.35 & 313.85 & -27.69 \(\pm\) 5.29 & 310.94 & 276.60 & fail \\ XO-4 b & -62.57 \(\pm\) 17.48 & 90.84 & 77.61 & -61.16 \(\pm\) 17.38 & 90.84 & 77.62 & pass \\ HAT-P-7 b & 18.28 \(\pm\) 4.05 & 112.96 & 92.35 & 20.79 \(\pm\) 4.31 & 52.18 & 32.16 & pass \\ HAT-P-43 b & 94.40 \(\pm\) 22.55 & 70.24 & 53.24 & 108.34 \(\pm\) 23.91 & 63.60 & 33.84 & fail \\ HAT-P-44 b & 119.67 \(\pm\) 25.89 & 63.69 & 42.49 & 118.92 \(\pm\) 25.89 & 63.69 & 42.49 & fail \\ WASP-1 b & 22.50 \(\pm\) 5.23 & 88.21 & 69.68 & 24.33 \(\pm\) 5.19 & 60.07 & 38.75 & pass \\ WASP-6 b & 20.66 \(\pm\) 5.19 & 32.84 & 17.30 & 20.80 \(\pm\) 5.15 & 32.84 & 17.30 & fail \\ WASP-11 b & 23.32 \(\pm\) 6.69 & 102.65 & 89.70 & 23.44 \(\pm\) 6.59 & 102.65 & 89.70 & fail \\ WASP-17 b & 77.64 \(\pm\) 8.19 & 550.63 & 457.85 & 77.47 \(\pm\) 8.13 & 550.63 & 457.85 & fail \\ WASP-46 b & 51.68 \(\pm\) 2.86 & 2852.85 & 2512.55 & 63.22 \(\pm\) 2.78 & 2092.71 & 1305.92 & pass \\ \hline \end{tabular}
\end{table}
Table 3: Model fitting and statistical results for 26 hot Jupiters
over a constant period model. A-thano et al. (2022) have reported a period change rate of \(-80\pm 30\) ms/yr using ground-based observations. However, the corresponding \(Q^{\prime}_{*}\) is too small and inconsistent with theoretical estimates. As a result, they favor the apsidal precession model and suggest the TTV can be explained by the light-time effect (LiTE). TESS has observed HAT-P-37 b in seven sectors, but the first six cannot be used due to significant background contamination. We have analyzed 35 mid-transit time data (27 from literature, 8 from TESS). Upon inclusion of the only usable sector of TESS data, our best-fitting orbital decay rate is \(\dot{P}=-34.29\pm 6.67\) ms/yr (Figure A3).
#### 4.1.5 Kelt-16 b
KELT-16 b is a highly irradiated, ultra-short-period hot Jupiter discovered by Oberst et al. (2017), with a mass of 2.75 \(M_{J}\) and a radius of 1.4 \(R_{J}\). It orbits an F7V star with a period of 0.97 days. We have analyzed 111 mid-transit time data (46 from literature, 65 from TESS). Our best-fitting value of \(\dot{P}=-27.82\pm 4.76\) ms/yr (Figure A4) differs from previous studies, which all agree on a constant orbital period model (Maciejewski et al., 2018; Patra et al., 2020; Mancini et al., 2022). Given the relatively short time span since the planet's discovery, we anticipate that more observational data will be gathered in the future to refine our understanding of this system.
#### 4.1.6 TrES-1 b
TrES-1 b is a 0.8 \(M_{J}\), 1.1 \(R_{J}\) hot Jupiter discovered by the Trans-Atlantic Exoplanet Survey (Alonso et al., 2004). It orbits a K0V star with a period of 3.03 days. We have analyzed 75 mid-transit time data (41 from literature, 34 from TESS). Our model fitting analysis reveals a \(\dot{P}\) of \(-14.82\pm 2.37\) ms/yr (Figure A5), which is consistent with the \(-18.36\pm 3.73\) ms/yr value from Ivshina and Winn (2022) and \(-10.9\pm 2.1\) ms/yr value from Hagey et al. (2022).
#### 4.1.7 TrES-5 b
Figure 4: Timing residuals of WASP-12 b. The top panel displays the fitting results using a 5-\(\sigma\) rejection scheme, while the bottom panel shows the fitting results using a 3-\(\sigma\) rejection scheme. The blue curves and shaded areas indicate the best-fit quadratic model and corresponding \(5-\sigma\) (top) and \(3-\sigma\) (bottom) confidence regions. The orange points are based on literature data. The green points are based on TESS data. The gray points are clipped data.
Figure 5: Part of the transit timing residuals of CoRoT-2 b. The top panel presents the raw data, which are taken from Öztürk and Erdem (2019), while the bottom panel displays the same data with inflated error bars. In both panels, we show the residuals of the linear model fit to provide a clearer visual comparison, where it is clearly seen that the original measurement uncertainties from Öztürk and Erdem (2019) are too small.
TrES-5 b is a 1.8 \(M_{J}\), 1.2 \(R_{J}\) hot Jupiter discovered by Mandushev et al. (2011). It orbits a K-type star with a period of 1.482 days. Sokov et al. (2018) suggested the existence of a second planet in the system on a 1:2 resonance orbit, with a mass of 0.24 \(M_{J}\), based on an analysis of a TTV signal with a 99-day period. Maciejewski et al. (2021) could not confirm the presence of an additional planet but found a long-term variation of the orbital period of TrES-5 b at a rate of \(-20.4\pm 4.7\) ms/yr. They suggested this variation is caused by a line-of-sight acceleration of the system induced by a massive wide-orbit companion. Ivshina and Winn (2022) analyzed TESS data and found the period to be changing at a rate of \(-17.47\pm 3.79\) ms/yr. We have analyzed 121 mid-transit time data (55 from literature, 66 from TESS). Our analysis yields a period decay rate of \(-9.71\pm 2.11\) ms/yr based on three additional sectors of TESS data (Figure A6). TrES-5 b is an intriguing target that warrants further observations.
#### 4.1.8 WASP-10 b
WASP-10 b is a 3.2 \(M_{J}\), 1.1 \(R_{J}\) hot Jupiter discovered by Christian et al. (2009). It orbits a K5V star with a period of 3.093 days. Previous studies have investigated the short-term TTVs of WASP-10 b, with proposed causes including starspot occultations (Barros et al., 2013) or a 0.1 \(M_{J}\) companion with a 5.23-day period (Maciejewski et al., 2011), which remains unconfirmed. RV analysis of Knutson et al. (2014) suggested the presence of a potential massive distant companion. Using the ETD data, Hagey et al. (2022) derived a best-fitting decay rate of \(-21.9\pm 2.4\) ms/yr. We have analyzed 49 mid-transit time data (41 from literature, 8 from TESS). Our analysis finds a period change rate of \(\dot{P}=-27.74\pm 3.49\) ms/yr (Figure A7), which is consistent with the result from Hagey et al. (2022).
#### 4.1.9 WASP-22 b
WASP-22 b is a 0.6 \(M_{J}\), 1.2 \(R_{J}\) low-density hot Jupiter discovered by Maxted et al. (2010). It orbits a G1-type star with a period of 3.533 days. Knutson et al. (2014) confirmed its RV trend and provided evidence for the presence of a third body in the system, which could be a second planet or an M dwarf. There is no discussion of its long-term TTV trends in previous studies. We have analyzed 16 mid-transit time data (11 from literature, 5 from TESS). Our best-fitting result is \(\dot{P}=-71.76\pm 17.12\) ms/yr (Figure A8). Such a large amplitude of \(\dot{P}\) should be easy to verify with more observations in the future.
#### 4.1.10 WASP-45 b
WASP-45 b is a 1.0 \(M_{J}\), 1.1 \(R_{J}\) hot Jupiter discovered by Anderson et al. (2012). It orbits a K2V star with a period of 3.126 days. The most recent timing study of WASP-45 b is from Ivshina and Winn (2022), which found a period decay rate of \(-262.57\pm 28.35\) ms/yr. They labeled it as a mediocre candidate due to its early scattered data. We have analyzed 24 mid-transit time data (10 from literature, 14 from TESS). Our model fitting gives a best-fitting \(\dot{P}\) of \(-169.21\pm 21.09\) ms/yr (Figure A9), which is slightly smaller than the value reported by Ivshina and Winn (2022).
#### 4.1.11 Xo-4 b
XO-4 b is a 1.6 \(M_{J}\), 1.3 \(R_{J}\) hot Jupiter discovered by McCullough et al. (2008). It orbits an F5V star with a period of 4.125 days. There have been no previous reports of TTV from this planet. We have analyzed 37 mid-transit time data (21 from literature, 16 from TESS). Combining the archival data and three more sectors of TESS data, we find a best-fitting period change rate of \(\dot{P}=-62.57\pm 17.48\) ms/yr (Figure A10).
#### 4.1.12 HD 189733 b
HD 189733 b was discovered by Bouchy et al. (2005), with a mass of 1.1 \(M_{J}\) and a radius of 1.1 \(R_{J}\). It orbits a chromospherically active K1.5 V star with a period of 2.219 days. It is one of the most extensively studied hot Jupiters due to its close proximity to the Earth. Dowling Jones et al. (2018) reported a detection of period decay with \(\dot{P}=-18.8\pm 4.3\) ms/yr. Most of their mid-transit times were from the ETD. We have analyzed 34 mid-transit time data (14 from literature, 20 from TESS). We find \(\dot{P}=-8.44\pm 2.44\) ms/yr, with the best-fit result shown in Figure 6. During the LOOCV analysis, we have sequentially removed each data point from the dataset and re-fitted the model accordingly. We find that for three particular data points taken from Morvan et al. (2020), the resulting fits no longer satisfy the criteria for significant orbital variation (see Figure 7 for details). The LOOCV results suggest that the long-term orbital decay found by Dowling Jones et al. (2018) is questionable. Further investigation with additional data is necessary to verify this conclusion.
#### 4.1.13 TrES-3 b
TrES-3 b is a 1.9 \(M_{J}\), 1.3 \(R_{J}\) hot Jupiter discovered by O'Donovan et al. (2007). It orbits a G-type star in a 1.306-day period. Earlier studies have shown that TrES-3 b exhibits almost no orbital period change (Mannaday et al., 2020, 2022). We have analyzed 283 mid-transit time data (190 from literature, 94 from TESS). Including the TESS data, our best-fitting orbital decay model gives a period change rate
of \(\dot{P}=-8.77\pm 0.47\) ms/yr (Figure A11). Our LOOCV analysis finds that the removal of one data point from the literature (\(2457824.94673\pm 0.000009\) BJD\({}_{\rm TDB}\), Stefansson et al., 2017) results in the reversal of the sign of the period change rate, with a best-fitting value of \(4.38\pm 0.76\) ms/yr (Figure A12). Subsequently, further analysis of the BIC diagram shows that the removal of another point (\(2456885.79665\pm 0.00008\) BJD\({}_{\rm TDB}\), Saeed et al., 2020) also produces a significant decrease in BIC\({}_{quad}\). Given that these two data points were obtained via ground-based telescopes and possess notably lower uncertainty values compared to the remaining data, it is plausible that the accuracy of these two data points has been overestimated. After the exclusion of both data points, our model fitting yields a period change rate of \(-2.74\pm 0.89\) ms/yr with \(\Delta BIC=9.82\). Thus, we consider TrES-3 b as a mediocre candidate with a weak orbital decay trend.
#### 4.1.14 WASP-19 b
WASP-19 b is an ultra-short-period planet first identified by Hebb et al. (2010), with a mass of 1.1 \(M_{J}\) and radius of 1.4 \(R_{J}\). It orbits a G8V star on a 0.789-day period. It is one of the most favorable targets in the search for tidal orbital decay. Patra et al. (2020) found strong evidence of decay with a period change rate of \(-6.50\pm 1.33\) ms/yr. However, the study of Petrucci et al. (2020) showed that the evidence favors a constant period model and provided an upper limit on \(\dot{P}\) of \(-2.294\) ms/yr. The latest quadratic analysis performed by Ivshina & Winn (2022) yielded \(-3.54\pm 1.18\) ms/yr. We have analyzed 142 mid-transit time data (87 from literature, 55 from TESS). Our best-fitting decay rate is \(\dot{P}=-1.64\pm 0.29\) ms/yr with two additional sectors of TESS data (Figure A13). In the LOOCV analysis, we find that the removal of one data point near 2457796.59224 JD from Espinoza et al. (2019) results in a diminished decay rate of \(-0.88\pm 0.43\) ms/yr. Similar to the case of TrES-3 b, when we remove one additional data point near 2457448.71292 JD from Petrucci et al. (2020) that significantly reduces the BIC in our LOOCV test, the new rate of orbital decay is \(-1.40\pm 0.30\) ms/yr (Figure A14).

Figure 6: Timing residuals of HD 189733 b. The lines and symbols used are similar to Figure 4.

Figure 7: LOOCV analysis and corresponding \(\Delta\)BIC of HD 189733 b. The top panel displays the period change rate \(dP/dt\) obtained by fitting the quadratic model after the removal of each single transit timing data point. The orange squares show the \(dP/dt\) values that satisfy the criterion of being \(3\sigma\) away from zero, while the purple diamonds represent \(dP/dt\) values that fail to meet this criterion. The dash-dotted line marks the original best-fitting \(dP/dt\) value before the removal of any data, and the green shaded area marks the corresponding \(1\sigma\) confidence region. The bottom panel displays the corresponding \(\Delta\)BIC, where it is evident that for the three data points failing the \(3\sigma\) test, the resulting \(\Delta\)BIC also falls below the threshold of 10. The complete mosaic figure set is available in the online version of this paper.
#### 4.1.15 XO-3 b
XO-3 b is an 11.7 \(M_{J}\), 1.2 \(R_{J}\) hot Jupiter discovered by Johns-Krull et al. (2008). It orbits an F5V star in a 3.192-day period, with an eccentric and misaligned orbit (Hebrard et al., 2008). TESS transit timing analysis of this system was performed by Yang and Wei (2022), who found \(dP/dE=-6.2\times 10^{-9}\pm 2.9\times 10^{-10}\), which is equivalent to \(\dot{P}=-195\pm 9\) ms/yr. A subsequent study by Ivshina and Winn (2022) found a decay rate of \(-182.08\pm 12.96\) ms/yr. We have analyzed 47 mid-transit time data (35 from literature, 12 from TESS). With the inclusion of one new sector of TESS data, our best-fitting result is \(\dot{P}=-31.59\pm 5.24\) ms/yr using the \(5\sigma\) rejection scheme and \(-27.67\pm 5.21\) ms/yr using the \(3\sigma\) rejection scheme (Figure A15). Through our LOOCV analysis, we find that the primary contributor to the decay trend is the transit time near 2456419.0441 JD derived by Wong et al. (2014) using Spitzer IRAC 4.5 \(\mu\)m band data. The exclusion of this point would reduce the decay rate to a much smaller value of \(-8.83\pm 5.14\) ms/yr, with a new \(\Delta\)BIC of 2.38 (Figure A16). Thus we do not consider XO-3 b as a strong orbital decay candidate.
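As a back-of-the-envelope check of the equivalence quoted above (our reading, not stated explicitly in the source): the two numbers are consistent if the fitted derivative is interpreted as a dimensionless period derivative and multiplied by the number of milliseconds in a year,

\[
-6.2\times 10^{-9}\times 3.156\times 10^{10}\ \mathrm{ms\,yr^{-1}}\approx-196\ \mathrm{ms\,yr^{-1}},
\]

in agreement with the \(-195\pm 9\) ms/yr above; the same factor (\(3.156\times 10^{7}\) s yr\({}^{-1}\)) links the dimensionless and s/yr values quoted for WASP-161 b in Section 5.2.2.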
#### 4.1.16 WASP-16 b
WASP-16 b is a 0.9 \(M_{J}\), 1.0 \(R_{J}\) planet in a 3.12-day orbit around a G3V star (Lister et al., 2009). We have analyzed 17 mid-transit time data (11 from literature, 6 from TESS). Including the TESS data, our best-fitting model gives \(\dot{P}=-51.98\pm 12.87\) ms/yr (Figure A17). During the LOOCV analysis, we observed that the removal of one data point (\(2456037.70089\pm 0.0024\) BJD\({}_{\rm TDB}\), derived from TRESCA by Southworth et al. (2013)) led to the disappearance of the period decay trend (Figure A18). Thus, we consider WASP-16 b as a mediocre candidate.
#### 4.1.17 WASP-47 b
WASP-47 b is a 1.1 \(M_{J}\), 1.1 \(R_{J}\) planet in a 4.159-day orbit around a G9V star (Hellier et al., 2012). The WASP-47 system has attracted attention due to its remarkable four-planet configuration. WASP-47 b has two nearby neighbors: an interior super-Earth with an orbital period of 0.79 day (WASP-47 e) and an exterior hot Neptune with an orbital period of 9.0 days (WASP-47 d). The system also has a distant, moderately eccentric gas giant (WASP-47 c). We have analyzed 31 mid-transit time data (27 from literature, 4 from TESS). Our model fitting yields a decay rate of \(\dot{P}=-48.45\pm 14.14\) ms/yr (Figure A19). The analysis is complicated by the short-term TTV perturbations produced by the other planets in the system. The LOOCV analysis suggests that the long-term trend may not be real, as the removal of the first data point results in the complete disappearance of the trend (Figure A20).
#### 4.1.18 WASP-80 b
WASP-80 b is a 0.5 \(M_{J}\), 1.0 \(R_{J}\) planet in a 3.068-day orbit around a K7V star (Triaud et al., 2013). We have analyzed 33 mid-transit time data (28 from literature, 5 from TESS). Our model fitting finds a decay rate of \(\dot{P}=-23.04\pm 5.15\) ms/yr using the \(5\sigma\) rejection scheme and \(\dot{P}=-31.99\pm 4.91\) ms/yr using the \(3\sigma\) rejection scheme. The discrepancy arises from the inclusion of one data point near 2456459.80958 JD, which appears to be an outlier (Figure A21). Our LOOCV analysis identified the transit time data point near 2456125.42 JD from Triaud et al. (2013) as the primary contributor to the observed period decay trend (Figure A22). As this data point was collected using a ground-based telescope, it is possible that its precision had been overestimated. Hence, the current data do not provide strong support for significant orbital decay in WASP-80 b.
### Candidates with Increasing Orbital Periods
We find 8 hot Jupiters showing signs of an orbital period increase. Among them, 5 systems (4.2.4-4.2.8) have failed to pass the LOOCV test, i.e., upon the removal of a single data point, they no longer meet the two criteria we set above for identifying long-term orbital period variation candidates.
#### 4.2.1 HAT-P-7 b
HAT-P-7 b is an ultra-hot Jupiter discovered by Pal et al. (2008), with a mass of 1.8 \(M_{J}\) and radius of 1.5 \(R_{J}\). It orbits an F8 star in a 2.205-day period, likely possessing a retrograde or near-polar orbit with an inclination angle of \(\phi\approx 120^{\circ}\) (Campante et al., 2016; Benomar et al., 2014). Additionally, a common proper motion stellar companion has been identified within this system through high-contrast imaging techniques (Narita et al., 2012; Ngo et al., 2015). This planet has received significant attention due to its unusual atmospheric and orbital properties. We have analyzed 77 mid-transit time data (11 from literature, 66 from TESS). Our analysis of long-term archival and TESS transit time data indicates that the planet has a positive period change rate of \(\dot{P}=18.28\pm 4.05\) ms/yr (Figure A23).
#### 4.2.2 WASP-1 b
WASP-1 b is a 0.9 \(M_{J}\), 1.5 \(R_{J}\) planet in a 2.52-day orbit around an F7V star (Collier Cameron et al., 2007). A common proper motion stellar companion has been identified within this system (Ngo et al., 2015; Collier Cameron et al., 2007). We have analyzed 45 mid-transit time data (36 from literature, 9 from TESS). Our quadratic model fitting yields a best-fitting period change rate of \(\dot{P}=22.50\pm 5.23\) ms/yr using the \(5\sigma\) rejection scheme and \(24.33\pm 5.19\) ms/yr using the \(3\sigma\) rejection scheme (Figure A24).
#### 4.2.3 WASP-46 b
WASP-46 b is a 1.9 \(M_{J}\), 1.2 \(R_{J}\) hot Jupiter discovered by Anderson et al. (2012). It orbits a G6V star in a 1.430-day period. The host star exhibits weak Ca II H+K emission, indicating an active photosphere and chromosphere. Ciceri et al. (2016) conducted the first TTV study of this planet, finding that a linear ephemeris model yields an inadequate fit to the observations. Petrucci et al. (2018) investigated homogeneous TTVs for the planet using both new and previously published data, concluding that the potential for orbital decay cannot be excluded. Davoudi et al. (2021) observed a positive orbital variation and attributed it to stellar magnetic activity. We have analyzed 86 mid-transit time data (54 from literature, 32 from TESS). Our analysis yielded an increasing rate of \(\dot{P}=51.68\pm 2.86\) ms/yr using the \(5\sigma\) rejection scheme and \(63.22\pm 2.78\) ms/yr using the \(3\sigma\) rejection scheme (Figure A25). However, the large scatter in the residuals casts doubt on the ability of the quadratic model to explain the observations.
#### 4.2.4 HAT-P-43 b
HAT-P-43 b is a 0.7 \(M_{J}\), 1.3 \(R_{J}\) planet in a 3.333-day orbit around a G-type star (Boisse et al., 2013). We have analyzed 48 mid-transit time data (7 from literature, 41 from TESS). Our best-fitting model shows a positive period change rate of \(\dot{P}=94.40\pm 22.55\) ms/yr using the \(5\sigma\) rejection scheme and \(108.33\pm 23.91\) ms/yr using the \(3\sigma\) rejection scheme (Figure A26). However, upon removal of the first data point, adopted from Boisse et al. (2013), in our LOOCV test, we find that the system no longer meets the criteria to be classified as a long-term orbital period variation candidate (Figure A27). It is therefore a mediocre candidate for long-term period variation, and further observations are necessary to help clarify this result.
#### 4.2.5 HAT-P-44 b
HAT-P-44 b is a 0.35 \(M_{J}\), 1.2 \(R_{J}\) hot Jupiter discovered by Hartman et al. (2014). It orbits a G8V star in a 4.301-day period. An outer planet, HAT-P-44 c, with a mass of at least 1.6 \(M_{J}\) and a period of 219.9 days, is likely present in the system (Hartman et al., 2014). We have analyzed 27 mid-transit time data (13 from literature, 14 from TESS). Our analysis of HAT-P-44 b's TTV reveals an upward trend in the orbital period at a rate of \(\dot{P}=119.67\pm 25.89\) ms/yr (Figure A28). In our LOOCV analysis, the removal of the first data point near 2455696.94 JD from Hartman et al. (2014) results in a new period change rate of \(144.47\pm 57.56\) ms/yr, which falls short of the \(3\sigma\) requirement needed to be identified as an orbital period variation candidate (Figure A29). Nevertheless, this trend does not disappear entirely, and further observations would be necessary to confirm its existence.
#### 4.2.6 WASP-6 b
WASP-6 b is a 0.5 \(M_{J}\), 1.2 \(R_{J}\) planet in a 3.361-day orbit around a G8 star (Gillon et al., 2009). We have analyzed 25 mid-transit time data (12 from literature, 13 from TESS). Our best-fitting model shows a period increase rate of \(\dot{P}=20.66\pm 5.19\) ms/yr (Figure A30). Our LOOCV analysis indicated that once the first data point near 2454596.43 JD adopted from Gillon et al. (2009) was removed, the period increase rate would be reduced to \(13.87\pm 6.22\) ms/yr, which fails to meet the \(3\sigma\) criterion for orbital variation (Figure A31). Nevertheless, this positive period change rate still deviates more than \(2\sigma\) from zero, and additional data are needed to confirm or refute the existence of a positive \(\dot{P}\).
#### 4.2.7 WASP-11 b
WASP-11 b (HAT-P-10 b) is a 0.5 \(M_{J}\), 1.0 \(R_{J}\) hot Jupiter discovered by West et al. (2009) and Bakos et al. (2009). It orbits a K3V star in a 3.722-day period. The presence of a low-mass stellar companion in the system has been confirmed by both direct imaging and an RV trend (Knutson et al., 2014; Ngo et al., 2015). We have analyzed 35 mid-transit time data (24 from literature, 11 from TESS). Our analysis of its orbital period variation yields an increasing \(\dot{P}\) of \(23.32\pm 6.69\) ms/yr (Figure A32). In our LOOCV analysis, the exclusion of one data point near 2455898.76 JD adopted from Wang et al. (2014) would reduce the \(\dot{P}\) to \(17.13\pm 6.63\) ms/yr (Figure A33). This removed transit time data point from ETD was originally derived using self-reported data from a ground-based telescope (Wang et al., 2014), which may not be very reliable. The newly derived \(\dot{P}\) still sits near the \(3\sigma\) border, so we retain WASP-11 b as a good candidate.
#### 4.2.8 WASP-17 b

WASP-17 b is a 0.5 \(M_{J}\), 1.9 \(R_{J}\) planet in a 3.735-day orbit around an F4 star (Anderson et al., 2010). We have analyzed 31 mid-transit time data (19 from literature, 12 from TESS). Our best-fitting model has a positive period change rate of \(\dot{P}=77.64\pm 8.19\) ms/yr (Figure A34). Our LOOCV analysis has revealed that the transit data point near 2454559.19 JD derived by Bento et al. (2014) using ULTRACAM on ESO's NTT is the most significant contributor to the observed increasing trend. It is likely that the reported uncertainty of this transit time data point has been underestimated. After the removal of this data point, the re-fitted model shows a trend of \(3.36\pm 4.46\) ms/yr, which is consistent with a constant orbital period (Figure A35). Thus, we consider WASP-17 b as a mediocre candidate.
### Constant Period Candidates
Analyses of the transit time data of the remaining 300 hot Jupiters reveal no strong evidence of orbital period change. 133 of them are fitted well by both the linear and quadratic models but do not meet the \(3\sigma\) criterion, while the other 167 hot Jupiters are fitted well only by the linear model. We show one such example, HAT-P-16 b, in Figure 8. The quadratic model fits the data well, with a best-fit period change rate of \(\dot{P}=3.41\pm 4.99\) ms/yr, which is consistent with a zero period change rate. To assess the precision of the period change rate measurements, we plot the distribution of the \(3\sigma\) error bars of the period change rate \(\dot{P}\) for the 133 hot Jupiters with a good quadratic model fit (Figure 9). As can be seen from this distribution, we can detect an absolute period change rate \(|\dot{P}|\) greater than \(\sim 100\) ms/yr for most of these hot Jupiters. The lack of sufficient transit time data is the main reason why the other 167 hot Jupiters lack a good quadratic fit. More high-precision transit time data in the future are needed to put tighter constraints on their period change rates, such as from the PLAnetary Transits and Oscillations of stars mission (PLATO, expected around 2026; Rauer et al., 2014), the China Space Station Telescope (CSST, expected around 2025; Zhan, 2011), and the Earth 2.0 mission (ET, expected around 2027; Ge et al., 2022).
errors when de-trending the photometry light curves. For TrES-3 b, WASP-19 b, WASP-80 b and WASP-17 b, the problem points likely have unreasonably small error bars compared to those expected from similar ground-based observations. These issues serve as a reminder that prudence should be exercised when employing data from the literature.
### Comparison to Previous Works
Several groups have published long-term transit time variation studies en masse (Patra et al., 2020; Ivshina and Winn, 2022; Shan et al., 2023). We compare our results with theirs in this section.
#### 5.2.1 Comparison to Ivshina and Winn (2022)
Ivshina and Winn (2022) have compiled a database of transit times for 382 planets using data from the TESS mission and previously published transit times from the literature, and they use the database to update ephemerides. They also use the database to identify 10 cases with suggestive evidence of orbital period changes. Since we have used part of the data from their database, it is important to make a detailed comparison here.
For the 10 planet candidates flagged as exhibiting long-term period changes, we first use our code to fit their transit time data and find \(\dot{P}\) values similar to those reported in Ivshina and Winn (2022), usually within the \(1\sigma\) error bar. This confirms the robustness of our code; the main discrepancy between this work and theirs lies in the transit time data used, as we have additional TESS data and a longer time span for the ephemeris model fitting. For example, all but three planets (WASP-4 b, WASP-45 b, and WASP-99 b) have new TESS data available in this work.
For WASP-12 b, WASP-4 b, TrES-1 b, WASP-99 b and WASP-45 b, consistent period change rates (within \(1\sigma\)) have been obtained. For TrES-5 b, WASP-19 b, and XO-3 b, a smaller absolute period change rate (\(|\dot{P}|\)) is obtained with the updated TESS data from this work. For CoRoT-2 b, we have made adjustments to the apparently unreasonable error bars adopted in some of the literature data and obtained a significantly smaller absolute period change rate. We discuss WASP-161 b in Section 5.2.2.
There are 8 hot Jupiters flagged as period decay cases in this work but not flagged by Ivshina and Winn (2022). For HAT-P-37 b, WASP-80 b and KELT-16 b, the additional TESS observations available at the time of this study account for the difference in the fits. However, for HD 189733 b, WASP-16 b, WASP-22 b, and WASP-47 b, we can still derive period change rates that meet the \(3\sigma\) criterion using only data from Ivshina and Winn (2022), which suggests it is likely that they masked some of the transit time data during their fitting process. Another special case is TrES-3 b. Three mid-transit time data points adopted from the literature by Ivshina and Winn (2022) have unrealistically small error bars (as small as 1 second), two of which have a very large impact on the model fitting, as evidenced by our LOOCV analysis. Ivshina and Winn (2022) took care of only one data point by manually inflating its error bar by a factor of 30, while we manually remove both data points. This is the main reason why this planet failed to meet the selection criteria for being a period change candidate in Ivshina and Winn (2022) but meets them in this work. When we manually inflate the error bars of all three of these data points to a value of about 30 seconds, we still obtain a negative period change rate.
Ivshina and Winn (2022) did not report any candidate with an increasing period. For HAT-P-7 b, HAT-P-43 b, HAT-P-44 b, WASP-1 b, and WASP-11 b, the additional TESS observations available at the time of this study account for the difference in the fits. On the other hand, for WASP-6 b, WASP-17 b, and WASP-46 b, we can still identify them as period-increase candidates when fitting only the data from Ivshina and Winn (2022). This could be due to the fact that Ivshina and Winn (2022) were not interested in finding candidates with a positive period change rate.
#### 5.2.2 Comparison to Shan et al. (2023)
Shan et al. (2023) have analyzed the transit timing data from the TESS Objects of Interest Catalog for a sample of 262 hot Jupiters. They identified 31 hot Jupiters for which the TESS mid-transit times show at least \(1\sigma\) offsets from predictions calculated using ephemerides taken from the literature. They used tidal dissipation to explain some of the timing offsets seen in their results. Our approach differs from theirs in that we do not rely on previously published ephemerides but instead perform a new ephemeris model fit using both the new TESS data and archival data. We then use the new ephemeris model fitting results to identify possible candidates showing a positive or negative long-term orbital period change rate.
XO-3 b and WASP-17 b are the only two systems appearing on both our candidate list and theirs. For XO-3 b, as we have already pointed out in Section 4.1.15, the earlier work of Yang and Wei (2022) reported a decay rate of \(-195\pm 9\) ms/yr, whereas in this work we have obtained a much smaller value of \(-32\pm 5\) ms/yr with additional TESS data. For WASP-17 b, Shan et al. (2023) found that it shows the largest late timing offset, \(70.8\pm 11.7\) minutes, using a linear ephemeris model, while we find it to be consistent with \(\dot{P}=0\) ms/yr after the removal of a single transit time data point with a very small error bar (\(\sim 8\) seconds).
Another target worth mentioning on their candidate list is WASP-161 b. This is another case that demonstrates the importance of acquiring long-term, high-quality transit time data in studying the long-term orbital period variation of hot Jupiters. Shan et al. (2023) suggested that WASP-161 b is undergoing tidal dissipation with a dimensionless \(\dot{P}=-1.16\times 10^{-7}\pm 2.25\times 10^{-8}\), corresponding to \(\dot{P}=-3.66\pm 0.7\) s/yr. Ivshina & Winn (2022) reported a \(\dot{P}\) of \(-15.5\pm 0.35\) s/yr and suggested that there may be an error in the transit time data obtained by Barkaoui et al. (2019). Yang & Chary (2022) have re-analyzed the SSO-Europa light curve data used by Barkaoui et al. (2019) and obtained a new mid-transit time of 2458124.71742\(\pm\)0.00083 HJD, which can be converted to 2458124.718220742\(\pm\)0.00083 BJD\({}_{\rm TDB}\). We adopt this new transit time from Yang & Chary (2022) instead of the time data from Barkaoui et al. (2019) in our model fitting. Combining it with new TESS data available in January 2023, we derive a new orbital period change rate of \(\dot{P}=-0.786\pm 0.208\) s/yr, which is much lower in magnitude than the results reported by Ivshina & Winn (2022) and Shan et al. (2023). Further observations with precise timing are needed to better understand its orbital period variation trend.
#### 5.2.3 Comparison to Patra et al. (2020)
In their study, Patra et al. (2020) analyzed long-term transit timing data for 12 hot Jupiters orbiting bright host stars to seek direct evidence of tidal orbital decay. Three of their targets overlap with our long-term period change candidates (WASP-12 b, KELT-16 b and WASP-19 b), which have been discussed in Sections 4.1.1, 4.1.5, and 4.1.14. Here we discuss another five targets from their sample.
WASP-18 b is a 10.4 \(M_{J}\), 1.2 \(R_{J}\) "super-Jupiter" in a 0.941-day orbit around an F6 star (Hellier et al., 2009). It was considered a TTV candidate and has been extensively studied (Wilkins et al., 2017; McDonald & Kerins, 2018; Shporer et al., 2019; Patra et al., 2020; Maciejewski et al., 2022; Rosario et al., 2022). These studies do not find any sign of orbital decay in the system. Our best-fit period change rate is \(\dot{P}=0.73\pm 0.75\) ms/yr, which is consistent with a constant period model.
HATS-18 b is a 2.0 \(M_{J}\), 1.3 \(R_{J}\) planet in a 0.838-day orbit around a G-type star (Penev et al., 2016). It has received a lot of attention as an orbital decay candidate. However, none of the recent studies by Penev et al. (2016), Patra et al. (2020), and Southworth et al. (2022) finds strong evidence of period variation in HATS-18 b. Our analysis in this work yields a period change rate of \(-6.72\pm 3.01\) ms/yr and \(\Delta BIC=5\), which slightly favors a quadratic model over a linear model.
HAT-P-23 b is a 2.1 \(M_{J}\), 1.2 \(R_{J}\) planet in a 1.213-day orbit around a G5 star (Bakos et al., 2011). Maciejewski et al. (2018), Patra et al. (2020) and Basturk et al. (2022) have studied its transit timing data and found it to be consistent with a constant period model. Our quadratic model fitting yields a period change rate of \(1.46\pm 1.52\) ms/yr, which is also in good agreement with a constant period model.
WASP-43 b is a 2.0 \(M_{J}\), 1.0 \(R_{J}\) planet in a 0.813-day orbit around a K7V star (Hellier et al., 2011). It has been discussed as an orbital decay candidate for a long time. Ever since the first report of orbital decay in WASP-43 b by Jiang et al. (2016), all subsequent studies have effectively ruled out the existence of orbital decay by incorporating new timing data (Hoyer et al., 2016; Stevenson et al., 2017; Patra et al., 2020; Garai et al., 2021; Davoudi et al., 2021; Hagey et al., 2022). Our quadratic model fitting yields a period change rate of \(0.19\pm 0.34\) ms/yr, which also supports the constant period scenario. WASP-43 b serves as another notable example illustrating that more high-quality timing data can lead to more accurate results.
WASP-122 b is a 1.4 \(M_{J}\), 1.8 \(R_{J}\) planet in a 1.710-day orbit around a G4 star (Turner et al., 2016). Because of its favorable physical properties, Patra et al. (2020) included this system on their list of targets to watch for orbital decay. We obtain a best-fit period change rate of \(0.18\pm 4.25\) ms/yr using archival timing data from the literature and four sectors of TESS data for WASP-122 b. This result agrees well with a constant period model, showing no evidence of orbital decay.
### Possible Explanation for Long-term Period Variation
There are several possible scenarios to explain the long-term orbital period variations of exoplanets. We summarize a few of them in this section, including the Rømer effect, apsidal precession, the Applegate effect, mass loss, and tidal dissipation.
**Rømer effect**. Due to variations in light travel time, the accelerating motion of the center of mass of the system towards or away from the observer's line of sight can cause apparent changes in the period, which is known as the Rømer effect (Bouma et al., 2019). This acceleration is usually attributed to a wide-orbit companion. Acceleration towards the observer results in a decreasing period, while an increasing period corresponds to acceleration away from the observer. In fact, several of our candidate systems exhibit evidence of companions on a wide orbit, as discussed in Section 4. Because the observed period derivative of hot Jupiters would be correlated with the time derivative of the RV data, as shown in Equation 23 of Bouma et al. (2019), the Rømer effect can be verified through the fitting of RV data of the host star.
**Apsidal precession**. Apsidal precession is expected in systems where hot Jupiters are in orbits that are at least slightly eccentric (\(e>0.003\)), causing sinusoidal variations in transit times. It is driven mainly by the quadrupole of the planetary tidal bulge, with additional contributions from general relativity (Ragozzine and Wolf, 2009). For the most promising systems, the precession period can be as short as a few decades (Birkby et al., 2014). Therefore, within a relatively short observational window, the curvature of TTVs from apsidal precession can resemble signals of orbital decay or expansion. Apsidal precession has been ruled out for some systems (WASP-12 b, e.g., Patra et al., 2017; Yee et al., 2020; Turner et al., 2021). This could be due to the fact that hot Jupiters tend to have very low eccentricities. Unless triggered by some special conditions such as an external perturber, tidal interactions tend to circularize the orbits of hot Jupiters on a very short timescale. For most of our candidates, the upper limits of their measured eccentricities are well above 0.003. Thus we cannot rule out the possibility that apsidal precession may account for some of the period variations derived in this work. We anticipate that forthcoming transit and occultation observations over the next 5-10 years will help reveal the nature of apsidal precession in hot Jupiters.
**The Applegate effect.** The Applegate effect (Applegate, 1992) offers an explanation for quasi-periodic variations over timescales of years to decades in the eclipse times of eclipsing binaries. This effect suggests that magnetically active stars undergo cyclic changes in their internal structure, resulting in the exchange of angular momentum between their inner and outer zones. While originally applied to eclipsing binaries, this mechanism could also apply to hot Jupiters orbiting a star with a convective zone (Watson and Marsh, 2010). The changing gravitational quadrupole of the star would cause the planet's orbital period to vary over the timescale of the stellar activity cycle. The magnitude of the TTV driven by the Applegate mechanism is proportional to \(a^{-2}T_{\rm modulation}^{3/2}\) (Watson and Marsh, 2010), which is particularly strong for hot Jupiters due to their small star-planet separations. The modulation timescale \(T_{\rm modulation}\) is related to the period of the stellar activity cycle, which, for the Sun, could be 11 or 22 yr depending on the dynamo mechanism at work. Watson and Marsh (2010) estimated that TTV amplitudes caused by the Applegate effect range from a few seconds for a modulation timescale of 11 yr to several minutes for a modulation timescale of 50 yr. For short-period transiting hot Jupiters, such as WASP-18 b, the timescales and amplitudes of Applegate-driven TTVs are comparable to those of TTVs induced by tidal dissipation (Watson and Marsh, 2010). Thus, the Applegate effect may account for part of the TTVs observed in some of our candidates over decades-long timescales, and further investigation is needed to verify its presence.
**Mass loss.** When a hot Jupiter approaches its Roche limit, mass loss can occur, due either to escaping winds or to Roche lobe overflow. This mass loss leads to a decrease in planetary density, causing the Roche limit to expand outward. Consequently, the torques in the accretion disk act to exchange angular momentum with the planet. These torques can drive the planet away from the star, causing it to migrate outward with the expanding Roche limit. The subsequent evolutionary scenarios depend on various characteristics of the system, such as the core mass of the planet. Whether mass loss ultimately leads to orbital expansion or contraction depends on various conditions (Valsecchi et al., 2015; Jackson et al., 2016). Several planets in our sample are close to the Roche limit or are even expected to undergo mass loss at extremely high rates, such as WASP-12 b and WASP-121 b (Li et al., 2010; Salz et al., 2019). Although the mass loss effect may not explain the period increase seen in our candidates, it remains an intriguing mechanism that warrants further investigation.
**Tidal effects.** When the orbital period of a hot Jupiter is shorter than the rotation period of its host star, the planet is prone to undergo inward spiral migration (Rasio et al., 1996; Levrard et al., 2009; Matsumura et al., 2010). Conversely, when the rotation period of the host star is shorter than the orbital period of the hot Jupiter, angular momentum can be transferred from the rapidly rotating host star to the planet's orbit, leading to an increase in the orbital period (Ogilvie, 2014). However, for our orbital period increasing candidates in Section 4.2, both the stellar rotation periods derived from the literature and the ones we estimated through the \(v\sin i\) method are longer than the orbital periods of the planets. Therefore, tidal effects are unlikely to be a plausible explanation for the orbital period increases.
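For reference, the expected decay rate in the standard constant-phase-lag tidal model (as adopted in, e.g., Patra et al. 2017; our notation) is

\[
\dot{P}=-\frac{27\pi}{2Q_{\star}^{\prime}}\left(\frac{M_{p}}{M_{\star}}\right)\left(\frac{R_{\star}}{a}\right)^{5},
\]

where \(Q_{\star}^{\prime}\) is the reduced tidal quality factor of the host star; measured \(\dot{P}\) values like those in Section 4.1 thus translate directly into constraints on \(Q_{\star}^{\prime}\).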
## 6 Conclusion
We have analyzed transit time data from TESS and the literature for a total of 326 hot Jupiters, to identify candidates with a positive or negative long-term period change rate. We fit the transit time data using both
a linear and a quadratic ephemeris model. We find 18 hot Jupiters showing evidence of a negative period change rate and 8 showing evidence of a positive one. Our results will be useful to anyone interested in planning future observations of these systems.
We plan to expand the TTV study of hot Jupiters into the field of short-period transiting brown dwarfs (BDs), to help further explore the possible differences between the HJ population and the BD population raised by Grether & Lineweaver (2006), Ma & Ge (2014), and others. For example, Bowler et al. (2020) have argued that high-mass BDs predominantly form like stellar binaries by comparing the mass-eccentricity distribution of BDs to that of giant planets. Since TTV studies can also probe the dynamical properties of HJ and BD systems, we expect that TTV studies conducted on both populations can offer more observational evidence supporting distinct formation channels. Thus we encourage our colleagues to continue monitoring the transits of not only HJs, but also transiting BDs.
We are indebted to Ivshina & Winn (2022) for the development of the database that served as a valuable resource for our study. Their meticulous efforts in compiling and organizing the data were instrumental in our analysis. We acknowledge the significant contribution of the NASA Exoplanet Archive, which provided access to a wealth of observational data and resources. The availability of such comprehensive databases greatly facilitated our research endeavors.
Furthermore, we extend our appreciation to the TESS mission for its remarkable contribution to exoplanet science. The high-quality transit time measurements obtained from TESS played a crucial role in our investigation of transit timing variations (TTVs) in hot Jupiters.
|
2302.00548 | Designing, Synthesizing and Modeling Active Fluids | We review recent advances in the design, synthesis, and modeling of active
fluids. Active fluids have been at the center of many technological innovations
and theoretical advances over the past two decades. Research on this new class
of fluids has been inspired by the fascinating and remarkably efficient
strategies that biological systems employ, leading to the development of
biomimetic nano- and micro-machines and -swimmers. The review encompasses
active fluids on both the nano- and micro-scale. We start with examples of
biological active systems before we discuss how experimentalists leverage novel
propulsion mechanisms to power nano- and micro-machines. We then examine how
the study of these far-from-equilibrium systems has prompted the development of
new simulation methods and theoretical models in nonequilibrium physics to
account for their mechanical, thermodynamic and emergent properties. Recent
advances in the field have paved the way for the design, synthesis, and
modeling of autonomous systems at the nano- and micro-scale and open the door
to the development of soft matter robotics. | Ilham Essafri, Bappa Ghosh, Caroline Desgranges, Jerome Delhommelle | 2023-02-01T16:11:59Z | http://arxiv.org/abs/2302.00548v1 | # Designing, Synthesizing and Modeling Active Fluids
###### Abstract
We review recent advances in the design, synthesis, and modeling of active fluids. Active fluids have been at the center of many technological innovations and theoretical advances over the past two decades. Research on this new class of fluids has been inspired by the fascinating and remarkably efficient strategies that biological systems employ, leading to the development of biomimetic nano- and micro-machines and -swimmers. The review encompasses active fluids on both the nano- and micro-scale. We start with examples of biological active systems before we discuss how experimentalists leverage novel propulsion mechanisms to power nano- and micro-machines. We then examine how the study of these far-from-equilibrium systems has prompted the development of new simulation methods and theoretical models in nonequilibrium physics to account for their mechanical, thermodynamic and emergent properties. Recent advances in the field have paved the way for the design, synthesis, and modeling of autonomous systems at the nano- and micro-scale and open the door to the development of soft matter robotics.
## I Introduction
Fluids at low Reynolds numbers, _i.e._, fluids for which viscous forces dominate, have gained enormous interest over the last several decades. There is indeed a whole world of living organisms that thrive under these conditions. A great example is _E. coli_[1; 2; 3; 4]. Although such fluids have long been known to be important for engineers, in particular for fluidized beds, their significance has been steadily increasing since the seventies. This is primarily due to the realization that, in such fluids, the force \(\frac{\eta^{2}}{\rho}\), in which \(\rho\) denotes the fluid density and \(\eta\) the fluid viscosity, is independent of the inertial properties of the immersed system. As discussed by Purcell[5], this force will be able to tow anything, large or small, in fluids with a low Reynolds number. At the same time, moving in such a viscous fluid requires some ingeniousness. Following Purcell's 'scallop theorem', if a low-Reynolds-number swimmer executes a geometrically reciprocal motion, that is, a sequence of shape changes that are identical when reversed, then the net displacement of the swimmer must be zero in an incompressible, Newtonian fluid[6; 7]. To quote Purcell, "Fast, or slow, it exactly retraces its trajectory, and it's back where it started"[5]. Indeed, to be able to swim, bacteria use propulsion mechanisms, either cilia or flagella that beat or rotate, to make small moves, as well as periodic deformations of their bodies to execute non-reciprocal motion and keep moving[5; 8]. Understanding the principles underlying such propulsion mechanisms and navigation strategies is still an outstanding challenge, with potential applications in medicine, among others[9].
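For concreteness (our own numbers, using standard properties of water, \(\eta\approx 10^{-3}\) Pa s and \(\rho\approx 10^{3}\) kg m\({}^{-3}\)), the Reynolds number compares inertial to viscous stresses, and Purcell's force scale evaluates to about a nanonewton:

\[
\mathrm{Re}=\frac{\rho vL}{\eta},\qquad\frac{\eta^{2}}{\rho}\approx\frac{(10^{-3}\ \mathrm{Pa\,s})^{2}}{10^{3}\ \mathrm{kg\,m^{-3}}}=10^{-9}\ \mathrm{N},
\]

so the characteristic viscous force is on the order of a nanonewton, independent of the size or mass of the immersed object.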
This observation has also opened the door to the development of a new field that is now known as active matter[10; 11]. Active matter relies on the transduction of energy, often starting with the conversion of chemical energy, or "fuel", into mechanical energy, leading to the motion of the particles. This new bio-inspired research area leverages recent advances in the synthesis of self-propelled particles. Such particles can serve as elementary building blocks for active assemblies that mimic the response of groups, clusters, or colonies of biological swimmers. They, therefore, provide a path to study the onset of collective behavior and emergence in living systems. Experimental protocols have recently led to the design and synthesis of autonomous nanomachines and micromachines. They use different phoretic propulsion mechanisms, such as diffusiophoresis or thermophoresis. These synthetic machines also pave the way for multifunctional materials and devices. They react and adapt to environmental cues and signals emitted from other synthetic machines, leading to the design of novel intelligent active materials[12]. Achieving this goal will require a collective effort from a multidisciplinary team of scientists from biology, chemistry, physics, engineering, and mathematics to understand and control active matter.
Synthetic swimmers behave similarly to microorganisms as they change their swimming direction at regular intervals and interact with solid surfaces, as well as with each other[13]. From a theoretical perspective, the theory of Brownian motion is perhaps the simplest approximation to model the motion of a small particle immersed in a fluid. While its motion appears to be random, it is possible to describe it using Langevin dynamics. The idea is to partition the total force exerted on the Brownian particle by its environment (or heat bath) into a systematic part (or friction) and a fluctuating part (or noise). Each force is related to the other by the fluctuation-dissipation theorem[14]. This provides a relation between the strength of the random noise (or fluctuating force) and the magnitude of the friction (or dissipation). As discussed by Zwanzig[15], it characterizes the balance between friction, which tends to drive any system to a completely "dead" state, and noise, which tends to keep the system "alive". When Brownian particles[16] are self-propelled, we add an "active" swimming term to the equations of motion. Experiments show that such systems exhibit a larger diffusivity than passive Brownian particles. For instance, over 10 minutes, a passive particle may diffuse over a region of 35 \(\mu m^{2}\), whereas the corresponding active particle may explore a region greater than \(1\,mm^{2}\)[17]. This additional "active" force drives the system into a far-from-equilibrium state[18; 19]. Indeed, swimmers use energy from their environment, which they convert into directed motion[20; 21]. This results in a constant energy flow into the system and a breaking of time-reversal symmetry which, in turn, has led to tremendous recent developments in the field of nonequilibrium physics[22]. Models have also become more refined by incorporating hydrodynamic interactions, as well as interactions with complex environments. Such interactions take place at different spatial and temporal scales, prompting the development of new unifying principles at the mesoscale and of novel analytical tools to characterize emergent behavior in active assemblies[23; 24; 25; 26; 27].
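To make the passive-versus-active comparison above concrete, the minimal sketch below integrates the overdamped Langevin equations of a two-dimensional active Brownian particle. This is an illustrative toy model of ours, not a reconstruction of any specific experiment cited above, and all parameter values are arbitrary choices; setting the self-propulsion speed \(v_{0}\) to zero recovers the passive Brownian particle.

```python
import numpy as np

# 2D active Brownian particle (Euler-Maruyama integration):
#   dr/dt     = v0 * (cos(theta), sin(theta)) + sqrt(2*Dt) * noise
#   dtheta/dt = sqrt(2*Dr) * noise
# v0 = 0 -> passive particle (fluctuation-dissipation holds);
# v0 > 0 -> energy injection and broken time-reversal symmetry.
rng = np.random.default_rng(1)
N, dt, nsteps = 2000, 1e-3, 20000          # particles, time step, steps
Dt, Dr, v0 = 0.2, 1.0, 3.0                 # diffusivities and swim speed (illustrative)

pos = np.zeros((N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, N)
for _ in range(nsteps):
    heading = np.column_stack((np.cos(theta), np.sin(theta)))
    pos += v0 * heading * dt + np.sqrt(2.0 * Dt * dt) * rng.standard_normal((N, 2))
    theta += np.sqrt(2.0 * Dr * dt) * rng.standard_normal(N)

t = nsteps * dt
msd = np.mean(np.sum(pos**2, axis=1))      # ensemble-averaged mean-squared displacement
d_eff = Dt + v0**2 / (2.0 * Dr)            # long-time effective diffusivity in 2D
print(f"measured MSD = {msd:.1f}, theory 4*D_eff*t = {4.0 * d_eff * t:.1f}")
```

At times much longer than the persistence time \(1/D_{r}\), the mean-squared displacement approaches \(4(D_{t}+v_{0}^{2}/2D_{r})\,t\); for these parameters the effective diffusivity exceeds the passive one by more than an order of magnitude, qualitatively consistent with the areas explored by passive and active particles quoted above.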
This mini-review is organized as follows. In the first part, we provide an account of active fluids at the nanoscale, before focusing our attention on active fluids at the microscale in the second part. For each part, we start by presenting examples of real-world biological systems that inspire the design of biomimetic synthetic materials capable of responding and adapting to environmental cues like their real-life counterparts. We also discuss the latest advancements in nonequilibrium physics, as well as the recent advances in theoretical models and simulation methods, that promote the understanding of active matter and the rationalization of the novel emergent behavior observed in active systems. We finally summarize the main conclusions in the last section and discuss how recent progress in the field paves the way for the development of
autonomous soft matter robotics.
## II Active Fluids at the Nanoscale
### Real-life systems
#### Biological nanomachines
There are numerous examples of molecular machines in biological and living systems [28; 29; 30; 31; 32; 33; 34]. They exist in a myriad of forms and exhibit exceptional functionalities [35; 36; 37; 38; 39; 40; 41; 42]. For instance, nanomachines can synthesize complex molecules in the cell [43]. They can also serve as pumps [44] and generate concentration differences, or as motors to convert chemical energy into directed motion. Nanomachines often work cooperatively [45; 46] to achieve even more complicated tasks. For instance, during muscle contraction, myosin II motors act cooperatively by binding to the same single actin filament and pulling it against an external load [47]. More generally, cooperativity allows them to become biological factories [48] capable of controlling what happens in the cell.
#### Powering molecular machines with nanomotors
One of the most fascinating enzymes in biology is the ATP synthase [53]. It is also the smallest known biological nanomotor [54] and is found in almost all living organisms, including plants, animals, and bacteria. ATP synthase (see Fig. 1A) is composed of two rotary motors, the proton-driven \(F_{0}\) and the ATP-synthesizing \(F_{1}\), that are coupled via elastic torque transmission [55]. This high-revving nanomotor mechanism utilizes the transport of protons to drive the synthesis of ATP. Another example of a nanomachine is the ribosome, an organelle which consists of ribosomal RNA and proteins [56]. Together they can read messenger RNAs and translate the encoded information into proteins. Nanomotors such as RNA polymerase [57] (see Fig. 1B), DNA polymerase [58; 59], and helicases [60; 61], which deal with DNA and RNA reactions, walk along DNA strands to perform their functions. Molecular motors such as kinesins, myosins, or dyneins are responsible for directed motion. They use chemical energy to drive conformational changes, which lead to the active transport of material from one part of the cell to another. Dyneins [62], fueled by ATP hydrolysis, generate force and movement along microtubules, allowing for the motions of cilia and flagella. Myosins [63]
are actin-based motor proteins that use cellular ATP to power interactions with actin filaments and create directed movements important in intracellular transport and cell division. Kinesins[64] are important molecular motors that directionally transport various cargos, including membranous organelles, protein complexes, and mRNAs. Another example is the ATPase active ion pump that can use some of the free energy released by the hydrolysis of ATP to pump sodium ions across the cell membrane. Proton-gradient-driven motors have also been observed to be the driving force behind flagellar filaments, which are used by many micro- and nanoscale swimmers as propellers[65].

Figure 1: Biological nanomachines. A) F\({}_{1}\)-ATPase[49]. The \(\beta_{TP}\) subunit is shown in green and the \(\beta_{E}\) subunit is shown in red. The orientation of the central asymmetric \(\gamma\)-shaft is shown with a yellow arrow in the two free energy minima (initial position at 0\({}^{\circ}\) and metastable intermediate state at 70\({}^{\circ}\)) and at the end of the rotation cycle at 120\({}^{\circ}\). Free energy profiles (solid lines) are shown in the presence of ADP (blue) or ATP (red) in the binding site of \(\beta_{TP}\), and dashed lines show the corresponding profiles after the addition of the F\({}_{o}\)-generated torque potential with a constant slope of 0.1 kcal\(/^{\circ}\). Reprinted with permission from _J. Am. Chem. Soc._ **2017**, 139, 4025-4034. Copyright 2017 American Chemical Society. B) Impact of the N\({}^{6}\)-Methyladenine (N\({}^{6}\)-mA or 6 mA) epigenetic DNA modification of the DNA template on RNA polymerase II (pol II) transcription elongation, which causes site-specific pol II pausing/stalling[50]. Reprinted with permission from _J. Am. Chem. Soc._ **2017**, 139, 14436-14442. Copyright 2017 American Chemical Society. C) Enzymes as Nanomotors: both catalase and urease enzymes move towards areas of higher substrate concentration generated by the Y-shaped microfluidic device, thereby exhibiting chemotaxis[51]. Reprinted with permission from _J. Am. Chem. Soc._ **2013**, 135, 1406-1414. Copyright 2013 American Chemical Society. D) Urease single-enzyme diffusion enhanced by substrate catalysis[52]. Reprinted with permission from _J. Am. Chem. Soc._ **2010**, 132, 2110-2111. Copyright 2010 American Chemical Society.
#### Enzymes as energy transducers
A detailed understanding of how enzymes convert chemical energy into mechanical force has remained an outstanding challenge[66]. Enzymes act as catalysts in biological systems[67] (see Fig. 1C). Once a substrate is bound to an enzyme's active site, the enzyme-catalyzed reaction is associated with a rapid turnover, as well as a high specificity and efficiency, which can power biological nanomachines[68; 69; 70]. Recent work shows that enzymes could be at the center of the stochastic motion of the cytoplasm, the organization of metabolons, as well as the convective transport of fluid in cells[29]. Over the last decade, the development of enzyme-based nanomotors[51] has been the focus of intense research. Indeed, recent studies have revealed the existence of free-swimming enzymes capable of moving in low-Reynolds-number fluids. By harnessing the chemical energy released through the enzymatic turnover of substrates, these free-swimming enzymes can generate enough mechanical force to power their motion and to enhance their diffusion[71; 72] (see Fig. 1D). For instance, the diffusion of urease enzymes has been found to increase in the presence of urea[52]. Further analysis shows that this increase depends on the substrate concentration and is weakened when urease is inhibited with pyrocatechol, thereby showing that the enhancement of diffusion results from the enzyme catalysis. After each turnover, the catalytic reactions generate an 'impulsive force', leading to a new kind of mechanobiological event. In addition, when subjected to a gradient in substrate concentration, enzymes move up the gradient, triggering chemotaxis at the molecular level[73]. The design of "intelligent" enzyme-powered autonomous nanomotors, which have the ability to assemble and deliver cargo for biological applications, is thus now one of the most critical topics in nanotechnology. These discoveries could lead to the identification of the principles of fabrication and design of novel biomimetic molecular machines[29].
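The substrate dependence reported for urease is often summarized by a Michaelis-Menten-like enhancement of the diffusion coefficient; the expression below is the standard empirical parameterization used in this literature, with the dimensionless amplitude \(\alpha\) a system-specific fit parameter (the exact form may differ from the one adopted in Ref. [52]):

\[
D([\mathrm{S}])\simeq D_{0}\left(1+\alpha\,\frac{[\mathrm{S}]}{[\mathrm{S}]+K_{M}}\right),
\]

where \(D_{0}\) is the diffusion coefficient of the free enzyme, \([\mathrm{S}]\) the substrate concentration, and \(K_{M}\) the Michaelis constant; the enhancement saturates at high substrate concentration, mirroring the saturation of the catalytic turnover rate.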
### Synthetic systems
#### Artificial biology
Can we design and program nanosized synthetic machines to perform complex tasks similar to those performed by biological systems? The recent emergence of artificial biology[74; 75; 76; 77] has focused on addressing this challenge. This new discipline aims at designing and engineering the structure and function of biological entities. This approach relies on a combination of biological and abiotic building blocks. For instance, it is now possible to integrate organelles (_e.g._, ribosomes) and biomolecules (_e.g._, enzymes) into an artificial membrane[78; 79]. Another example is the recent realization of molecular walkers. Here, the idea is to start from proteins or nucleic acids. Chemical reactions are then used to drive conformational changes so that these molecular machines can walk on various materials [80; 81; 82; 83]. A related question is the following. Can we synthesize artificial molecular machines that can rival and potentially out-perform natural biological nanomachines? A possible route is supramolecular chemistry. The pioneering work by Sauvage, Stoddart, and Feringa has revolutionized the design and synthesis of molecular machines (see Fig. 2A). Sauvage introduced a new type of bond, known as the mechanical bond[84], in chemistry. Stoddart developed mechanically interlocked molecules (MIMs) such as rotaxanes and catenanes[85; 86], as well as molecular shuttles[87] and molecular switches to control the motion of an artificial molecular motor[88]. Feringa developed light-responsive rotors and used their rotary motions in mesoscopic and nanoscale applications, most notably in the well-known nanocar[89]. Since then, increasingly sophisticated molecular machines have been created, and the role played by responses to external stimuli, including redox conditions, pH, temperature, and light, has been emphasized[90; 91; 92; 93; 94; 95; 96; 97; 98; 99]. Another route towards the design of synthetic nanomotors with directed motion is the construction of tiny chemically powered motors without moving parts. They rely on an asymmetric chemical reactivity, through which active particles harness chemical energy that they translate into work. Different types of nanomotors have been developed, such as nanowires[100; 101], helical motors[102; 103; 104], and nanorockets[105; 106; 107]. To cope with the low-Reynolds-number environment surrounding them and to counteract Brownian motion, nanomotors can be designed as fuel-dependent or fuel-independent nanomotors. The first category consists of nanomotors that catalytically turn over fuel from their environment to generate motion. They can act as motors or pumps and have demonstrated great significance in active transport at the nanoscale[35; 36; 37; 38; 39]. The other category of nanomotors extracts energy from external sources and converts it into motion.
#### Artificial enzymes: Nanozymes
Recently, synthetic enzymes or nanozymes [110] have emerged as exciting tools at the interface between chemistry, biology, materials, and nanotechnology. Such nanosized objects contain a component that mimics an enzymatic activity and can act as a catalyst [109]. Nowadays, there are more than 900 nanomaterials classified as nanozymes. For instance, Fe\({}_{3}\)O\({}_{4}\), CuO, Au, Ag, Pt, and Pd nanomaterials have been shown to exhibit a catalytic efficiency comparable to that of natural enzymes [111; 112; 113; 114]. Nanozymes are also major agents in nanorobotics and nanomedicine. Recent work has shown that they have the potential to enable the building of the logic-control, sensing, driving, and functioning systems of nanorobots [115]. For instance, catalase-like nanozymes are essential to artificial motility. They can catalyze the decomposition of H\({}_{2}\)O\({}_{2}\) into molecular oxygen (O\({}_{2}\)) and water (H\({}_{2}\)O) and, as a result, power motion. Nanozymes can also help control motion. Recent work has shown that nanozymes decorated with targeting molecules can assist the motion and guidance of the nanomachine to the target position [116].
### Fuel-dependent systems: translational and rotational motion
Two general types of nanomotors can produce motion at the nanoscale. The first type encompasses nanomotors that operate thanks to a phenomenon known as self-electrophoresis. This concept was introduced by Mitchell in 1956 [117] and the underlying idea is the following. When a bacterium pumps ions across its membrane asymmetrically, it forms an electrical circuit. Indeed, if ions are pumped out at one end of the cell and back in at the other end, ions flow from the rear of the bacterium's body to the front and the organism moves forward. This self-generated flow field creates an autonomous motion, referred to hereafter as self-propulsion [118; 119]. The first synthetic design of this type of autonomous nanomotor was reported in 2004 by Paxton _et al._ [120]. They used a Pt-based nanomotor capable of powering a linear motion and showed that a platinum-gold nanorod underwent self-electrophoresis in a hydrogen peroxide solution. In other words, the Pt nanozyme, one of the metallic nanoparticles that possess catalase/peroxidase-like activity, catalyzes the decomposition of H\({}_{2}\)O\({}_{2}\) to generate O\({}_{2}\) or to oxidize other substrates. The concentration gradient in O\({}_{2}\) generates an interfacial tension gradient which, in turn, results in motion. Paxton _et al._ also reported the speed of the motor to be approximately several body lengths per second [121].

Figure 2: A) Examples of multistimulus-responsive materials [108]. Reprinted with permission from _ACS Cent. Sci._ **2017**, 3, 927-935. Copyright 2017 American Chemical Society. B) Active entities are combined into superstructures to mimic therapeutic cells [109]. Reprinted with permission from _Adv. Drug. Deliv. Rev._ **2017**, 118, 94-108. Copyright 2017 Elsevier.
Recent work has focused on designing novel catalytic nanomotors capable of different types of motion such as rotation [122]. Wang _et al._ used Tafel plots to predict the direction of motion of all possible bimetallic combinations through self-electrophoresis [123]. Fournier-Bidoz _et al._ designed a self-powered synthetic nanorotor from barcoded gold-nickel nanorods, with the gold end anchored to the surface of a silicon wafer [124]. They observed circular movements at constant velocity as hydrogen peroxide fuel is catalytically decomposed into oxygen at the unattached nickel end of the nanorod. By varying the concentration of hydrogen peroxide and the length of the nickel segment, Fournier-Bidoz _et al._ controlled the angular velocity of the rotating nanorods. Moreover, when several hundred nanorods are present in the solution, the authors observed novel rotational behaviors. For instance, a nanorod rotating clockwise can undergo a collision with another nanorod and the resulting nanorod pair can rotate counterclockwise. New designs can also lead to novel mechanized functions. For instance, Solovev _et al._ reported autonomous and remotely guided catalytically self-propelled InGaAs/GaAs/(Cr)Pt nanotubes [125]. These rolled-up tubes with diameters of 280-600 nm move in hydrogen peroxide solutions with speeds as high as 180 \(\mu\)m s\({}^{-1}\). The effective transduction of chemical energy into translational motion allows these nanotubes to perform tasks such as cargo transport (see Fig. 2B). Furthermore, while cylindrically rolled-up tubes move in a straight line, asymmetrically rolled-up tubes follow a corkscrew-like trajectory. These nanotubes can thus drill and embed themselves into biomaterials.
### Fuel-free nanomotors
Fuel-free nanomotors[126] have become leading candidates for applications in nanomedicine. Unlike fuel-dependent nanomotors, their propulsion mechanisms are biocompatible and sustainable. Instead of relying on chemically-powered propulsion, fuel-free nanomotors leverage external stimuli such as magnetic, chemical, thermal, or electrical fields. Several groups have been exploring fuel-free nanomachine propulsion mechanisms, including the utilization of magnetic[127; 128; 129; 130; 131], electrical[132; 133; 134], optical[135] and ultrasonic[136; 137; 138] fields. Magnetically-driven nanomotors are noteworthy devices since they require field strengths harmless to humans. They are particularly promising in a variety of _in vivo_ biomedical applications. Ghosh _et al._ recently reported the first "voyage" in human blood of magnetic nanomotors based on conformal ferrite coatings[139]. Other outstanding applications of magnetically-driven nanomotors deal with cellular internalization. Cellular functions and physiology are dependent on the rheological properties of the cell. Moreover, since the cellular interior is constantly reorganizing, this contributes to a highly complex mechanical environment characterized by heterogeneities across multiple length scales. Helical nanomotors recently helped monitor the motion of the cytoskeleton inside the cell. More specifically, researchers showed that variations of the hydrodynamic pitch in the helical propulsion define different elastic relaxation timescales, corresponding to the locations and motions of the cytoskeleton[140]. In other studies[141], helical nanomotors driven by small rotating magnetic fields allowed for the exploration of the interior of cancerous cells.
Ultrasound-driven (US) nanomotors belong to another class of synthetic fuel-free nanomotors. They are propelled by acoustic fields and are very effective in intracellular delivery. Indeed, US-powered nanomotors possess sufficient force to penetrate cellular membranes, rapidly internalize into cells, and actively deliver therapeutic cargoes. Mallouk's group demonstrated the first effective internalization of gold nanorod motors into HeLa cells after 24 h of incubation. It was also possible afterward to activate the intracellular propulsion of these internalized nanomotors with an acoustic field, involving axial propulsion and spinning[142]. Over the past years, several studies have demonstrated the advantages of acoustic nanomotors for intracellular applications[143; 144]. For instance, US-propelled nanomotors deliver small interfering RNA (siRNA) payloads inside cells for gene-silencing applications. More recently, acoustic nanomotors have been designed to transport oxygen inside cells toward promising therapeutic applications[145]. Additionally, the active internalization and motion of acoustic nanomotors inside cells have been exploited
for enhanced intracellular sensing of disease biomarkers, including specific nucleic acids and proteins [146; 147].
Finally, nanomotors propelled by light are very promising devices [148]. Light allows for the manipulation of nanomachines with spatial and temporal precision, as experimentalists can readily modulate light intensity, frequency, polarization, and propagation direction. This, in turn, enables excellent controllability and programmability of these nanomotors. Moreover, photo-catalysis can help design new light-powered nanomotors. The idea is to exploit well-known photo-electrochemical reactions to advance the development of these new nanodevices. In other words, a light-powered nanomotor is driven by the photovoltaic effect, and the resulting electric current is converted into propulsion through electrokinetic mechanisms [149]. Wang _et al._[150] proposed a visible-/near-infrared light-driven nanomotor based on a single silicon nanowire. The silicon nanomotor harvests energy from light and propels itself by self-electrophoresis. Importantly, due to the optical resonance inside the silicon nanowire, the spectral response of the nanowire-based nanomotor can be modulated by the nanowire's diameter. This pioneering work provides new opportunities to develop novel functions such as multiple communication channels in nanorobotics and controllable self-assembly. Other examples based on light within the "therapeutic window", and thus compatible with living tissues, have been developed recently. Nelson _et al._ used black TiO\({}_{2}\) to design a visible-light-driven nanomotor [151]. Tang _et al._ fabricated silicon-based nanomotors, which can be driven by visible and near-infrared radiation at ultralow intensity. They recently designed a visible-light-propelled nanotree-based microswimmer [152] using the principles of the dye-sensitized solar cell. By loading dyes with complementary absorption spectra, they were able to control the navigation of the microswimmers.
### Directed motion for therapeutics
Here, we examine directed motion created by external fields to guide nanomotors. External stimuli can guide nanomotors towards an area of interest with high efficiency. The idea is to control the motion of a group of particles rather than guiding every single one independently. This approach offers exceptional opportunities in drug transport and delivery [105]. Several methods for motion control, such as magnetic guidance, thermal control, chemical response, and phototaxis, have already proved to have a significant impact.
Catalytically-powered nanomotors are starting to play a significant role in cargo towing. Indeed,
magnetic guidance has enabled tremendous progress in cargo-carrying catalytic nanomotors with different loading and unloading mechanisms. Kagan _et al._[153] presented the first example of nanoshuttles for the transport and release of drugs. This pioneering study illustrated that catalytic nanowire shuttles could readily pick up drug-loaded particles and transport them over predetermined routes toward target destinations. Moreover, the combination of carbon nanotubes with nanomotors, such as CNT-based nanowire motors[154], showed excellent results when transporting 'heavy' therapeutic cargo. In this case, the nanomotors pick up, transport, and release varying-sized drug carriers towards predetermined destinations. Magnetic guidance also helped navigate and deliver a cargo with precision inside channel networks, with a drastic acceleration of the nanomotor [155; 156] and speeds close to those of natural biomolecular motors (\(50-60\)\(\mu\)m/s). Calvo-Marzal _et al._ designed an electrochemical switch to control and fine-tune the speed of a catalytic nanomotor. Here, the potential-induced motion control is attributed primarily to changes in the local oxygen level in connection with the interfacial tension gradient. Such reversible voltage-driven motion represents an attractive approach for the on-demand regulation of artificial nanomotors and opens the door to new and exciting operations of these nanoscale devices[157].
### Theory and simulations
**Continuum theory for phoretic propulsion**
The propulsion of synthetic nanomotors relies on phoretic mechanisms[158; 159; 160]. For instance, a Janus particle can be described as a spherical particle with a catalytic, or reactive, site on its surface[161]. Reactions take place at the catalytic site and result in concentration changes in the chemicals contained in the surrounding fluid. In turn, these concentration changes give rise to chemical gradients that trigger a hydrodynamic flow in the vicinity of the surface and, by conservation of momentum, lead to the motion of the particle with a momentum of the same magnitude, and in the opposite direction, as that of the surrounding fluid. Such a motion is termed self-diffusiophoresis[162]. Several theories have been developed to account for phoretic mechanisms[163], starting with the pioneering contribution of Derjaguin and Sidorenkov[164], including those triggered by gradients due to electric fields[165; 166] or temperature gradients[167; 168] in addition to concentration gradients[169].
Classical hydrodynamics can be invoked to derive the underlying equations, together with the
boundary conditions, and determine the self-propulsion velocity of, _e.g._, the Janus particles described above [162; 170; 171; 172; 173; 159; 161]. As discussed by Slomka and Dunkel [174], one can write the swimmer velocity field \(\mathbf{u}\) as a sum of two terms, \(\mathbf{v}\) and \(v_{0}\mathbf{P}\), in which \(\mathbf{v}\) is the solvent velocity field, \(\mathbf{P}\) is the local mean orientation of the swimmer and \(v_{0}\) the self-propulsion velocity relative to the solvent flow. The dynamics of the solvent velocity field \(\mathbf{v}\) is given by the Navier-Stokes equation (equations of conservation of momentum and mass) for an incompressible solvent flow as [175; 176]
\[\rho(\partial_{t}+\mathbf{v}\cdot\nabla)\mathbf{v}\ =\ -\nabla P+\eta\nabla^{2} \mathbf{v} \tag{1}\]
\[\nabla\cdot\mathbf{v}\ =\ 0 \tag{2}\]
in which \(\rho\) denotes the mass density, \(P\) the scalar pressure and \(\eta\) the viscosity. The stress tensor \(\sigma\) is given by \(\sigma=-P\mathbf{1}+\eta[\nabla\mathbf{u}+(\nabla\mathbf{u})^{T}]\), in which \(\mathbf{1}\) is the identity tensor. The hydrodynamic force \(\mathbf{F}\) and torque \(\mathbf{L}\) acting on the swimmer are found by integrating over the surface \(S\) of the swimmer [176]
\[\mathbf{F}(t)=\int\int_{S}\sigma\cdot\mathbf{n}\ dS \tag{3}\]
\[\mathbf{L}(t)=\int\int_{S}\mathbf{x}\times(\sigma\cdot\mathbf{n})\ dS \tag{4}\]
where \(\mathbf{x}\) is the position on the surface \(S\) and \(\mathbf{n}\) the unit vector normal to \(S\), pointing into the fluid.
Eqs. 1 and 2 assume that the suspension has reached a quasi-equilibrium state. As discussed by Slomka and Dunkel [174], this state is such that the net momentum transfer between swimmers and the surrounding fluid is negligible, the active particles can be regarded as force-free, and the solvent flow is driven by the stress field \(\sigma\) created by the swimmer [176; 177; 178; 179; 180]. We add that recent work has started to focus on how the dynamics of active particles change as their size increases and inertial effects become more significant [181]. In such cases, inertial effects due to, _e.g._, the unsteady acceleration of a swimmer, are described by the unsteady time-dependent Stokes equation [182].
The next step consists in introducing the Reynolds number \(R_{e}=VL/\nu\), in which \(\nu\) is the kinematic viscosity and \(V\) and \(L\) are characteristic speed and length scales of the system, and taking the low Reynolds number limit (\(R_{e}\to 0\)) to obtain the Stokes equation from Eq. 1
\[-\nabla P+\eta\nabla^{2}\mathbf{v}\ =\ 0 \tag{5}\]
\[\nabla\cdot\mathbf{v}\ =\ 0 \tag{6}\]
The flow field obtained from the solution to the Stokes equation, whose Green's function is the Oseen tensor, is known as a Stokeslet, and gives rise to what is known as a dipole swimmer model, in which the nanomotor is approximated by two point forces with opposite directions. The flow field is then given by an expansion, in which dipole contributions provide the leading term far away from the self-propelled particle [183; 184; 185]. The resulting flow lines (see, for instance, the flow lines shown in Fig. 3A) lead to the onset of effective interactions with surfaces and other nanomotors, which can be either attractive (inflow) or repulsive (outflow). Flow-induced interactions have been found to account for, among others, the nematic arrangement of elongated self-propelled particles [186; 187].
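To make the dipole picture concrete, the following minimal sketch (in Python, with purely illustrative force, viscosity, and dipole-length values that are not taken from the cited works) superposes two opposite Stokeslets to evaluate the far-field flow of an extensile (pusher-type) swimmer:

```python
import numpy as np

def stokeslet(x, F, eta=1.0):
    """Velocity field at displacement x from a point force F (a Stokeslet)."""
    r = np.linalg.norm(x)
    return (F / r + x * np.dot(x, F) / r**3) / (8 * np.pi * eta)

def pusher_flow(x, F0=1.0, l=0.5):
    """Force dipole: two opposite point forces, pointing outward along z."""
    e = np.array([0.0, 0.0, 1.0])                    # swimmer axis
    u = stokeslet(x - 0.5 * l * e, -F0 * e)          # rear force (flagella)
    u += stokeslet(x + 0.5 * l * e, F0 * e)          # front force (body drag)
    return u

# Sample the flow in the x-z plane: outflow along the axis, inflow from the sides
for z in np.linspace(-2.0, 2.0, 5):
    for x in np.linspace(-2.0, 2.0, 5):
        if abs(x) + abs(z) > 1e-9:
            u = pusher_flow(np.array([x, 0.0, z]))
            print(f"r=({x:+.1f},{z:+.1f})  u=({u[0]:+.3f},{u[2]:+.3f})")
```

Reversing the sign of the dipole length turns the pusher into a puller, with inflow along the swimming axis instead.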
The asymmetric catalytic reactions that occur at the surface of the self-propelled particles generate a mechanochemical coupling between the flow field and the concentration fields of the reactants. The concentration field of any given solute \(i\) can be determined from a reaction-diffusion equation. This equation follows from the conservation equation for \(i\), given by
\[\partial_{t}c_{i}=D\nabla^{2}c_{i}+R(c_{i}) \tag{7}\]
in which \(c_{i}\) is the concentration of solute \(i\), \(D\) the diffusion coefficient, and \(R(c_{i})\) stands for the changes in concentration arising from the chemical reactions occurring at the catalytic side of the nanomotor. Strictly speaking, Eq. 7 is valid in the low Péclet number (\(P_{e}\)) regime, i.e., when the ratio of solute advection to diffusion, measured by \(P_{e}=LV/D\), is very small [188]. Assuming that the concentration field has reached the steady state, Reigh and Kapral determined the concentration field (see Fig. 3B) and, from there, the propulsion velocity of the nanomotor as a function of the concentration field and of the rate constant for the self-propulsion reaction [189; 172].
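As a purely illustrative numerical companion to Eq. 7 (a sketch, not a reproduction of the cited calculations; grid size, diffusion coefficient, and reaction rate are arbitrary), the reaction-diffusion equation can be integrated explicitly on a periodic grid, with the source term confined to a small catalytic patch:

```python
import numpy as np

N, dx, D, k, dt = 64, 1.0, 1.0, 0.1, 0.2   # dt < dx**2/(4D) for stability
c = np.zeros((N, N))                        # concentration of the product B

# Catalytic patch: a few cells where A -> B proceeds at a fixed rate k
patch = np.zeros((N, N), dtype=bool)
patch[N // 2, N // 2 - 2:N // 2 + 3] = True

for step in range(2000):
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    c += dt * (D * lap + k * patch)         # diffusion + reactive source R

print("max concentration:", c.max())
```

In an actual calculation the far-field boundary condition \(c\to 0\), rather than the periodic boundaries used here, keeps the steady-state profile bounded; the sketch only illustrates how a localized source shapes an asymmetric concentration field.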
Gaspard and Kapral recently proposed a fluctuating chemohydrodynamics theory that accounts for the stochastic motion of self-diffusiophoretic particles [190; 173]. They derived equations of motion for the stochastic dynamics and reaction of an active Janus particle self-propelled by diffusiophoresis. Specifically, using Green-function methods and the Faxén theorem [173], they obtained the frequency-dependent force, torque, and reaction rate from the boundary conditions and the fluctuating chemohydrodynamic equations. This has led to the identification of coupled Langevin equations for the translation and rotation of self-propelled particles, as well as for the reaction. They showed that the equations so obtained are consistent with the Onsager-Casimir reciprocal relations between affinities and currents, thereby providing a thermodynamically consistent picture for self-diffusiophoretic particles.
**Coarse-grained simulation methods for nanomotors**
Simulations provide a fascinating alternative to visualize, understand and rationalize the mechanisms underlying the operation of nanomotors and their collective behaviors. Furthermore, they can provide access to properties that are difficult to obtain in experiments. Golestanian _et al._ proposed one of the first models for nanomotors [191; 192; 161]. The nanomotor is modeled as a spherical particle, with a reactive patch on its surface, and its motion driven by the asymmetric distribution of reaction products. This success inspired the development of one of the most popular and simple nanomotor models, known as the sphere-dimer motor [193; 194; 195; 196; 197; 198; 199; 200; 201; 202; 203; 204; 205]. The simulation of the
Figure 3: A) Examples of ”raspberry” swimmers, with in (top left panel) a sketch of the construction of pusher and puller raspberry swimmers (rods). A force \(\mathbf{F}\) (blue arrow) is applied to the central bead (blue cross) in the direction of the symmetry axis \(\hat{u}\) (black arrow). A counter-force \(-\mathbf{F}\) (red arrow) is applied to the fluid at a point \(l\hat{u}\) (red cross), with \(l\) the dipole length. \(l>0\) corresponds to a puller and \(l<0\) to a pusher. The other panels show the flow field around puller raspberry swimmers. Streamlines are shown in white and magenta arrow heads indicate the flow direction. Reprinted with permission from _J. Chem. Phys._ **2016**, 144, 134106. Copyright 2016 American Institute of Physics. B) Comparison between the normalized concentration field for the self-propulsion reaction product from (left) the continuum theory and (right) MD-MPCD simulations [172]. Reprinted with permission from _Soft Matter_ **2015**, 11, 3149-3158. Copyright 2015 Royal Society of Chemistry.
sphere-dimer motors is then carried out using a hybrid method that integrates the motion of the sphere-dimer particles with a classical molecular dynamics scheme [206] and the time-dependent evolution of the surrounding solvent with a multiparticle collision dynamics scheme [207; 208; 209; 210; 211; 212; 213; 214]. The sphere-dimer consists of two Lennard-Jones sites [206], labeled as \(C\) for the catalytic sphere and as \(N\) for the noncatalytic sphere, which are held together via a holonomic constraint at a fixed distance \(d_{CN}\)[189; 215]. Each of the Lennard-Jones sites interacts with other motor sites and solvent particles via the usual Weeks-Chandler-Andersen modification [206] of the Lennard-Jones interaction potential
\[V_{sf}=4\varepsilon_{sf}\left[\left(\frac{\sigma_{sf}}{r}\right)^{12}-\left(\frac{\sigma_{sf}}{r}\right)^{6}+\frac{1}{4}\right] \tag{8}\]
in which the index \(s\) denotes either \(C\) or \(N\), \(f\) refers to a fluid particle, and \(\varepsilon_{sf}\) and \(\sigma_{sf}\) are the Lennard-Jones parameters for the depth of the attractive well and for the exclusion diameter of the underlying Lennard-Jones potential. The minimal number of types of fluid particles is two, corresponding to type \(A\) before the fluid particle has undergone the irreversible chemical reaction \(A\to B\) that accounts for the self-propulsion of the motor and to type \(B\) after the reaction has taken place.
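Note that the \(+1/4\) term in Eq. 8 shifts the potential so that it vanishes at the cutoff \(r_{c}=2^{1/6}\sigma_{sf}\), the minimum of the Lennard-Jones potential, which makes the interaction purely repulsive. A minimal implementation (with illustrative parameter values) reads:

```python
import numpy as np

def wca(r, eps=1.0, sigma=1.0):
    """WCA potential of Eq. 8: Lennard-Jones truncated at its minimum and shifted."""
    rc = 2 ** (1 / 6) * sigma          # cutoff at the Lennard-Jones minimum
    if r >= rc:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6 + 0.25)
```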
The fluid particles are coarse-grained into effective particles and the fluid-fluid interactions and chemical reactions that take place within the fluid are then implemented through a reactive multiparticle collision dynamics [207; 208; 209; 210]. The two parts of the algorithm are the following:
- the nonreactive part of the algorithm performs stochastic rotations of all fluid particle velocities to mimic the effect of collisions. To this end, a stochastic rotation operator \(\omega_{\xi}\) is applied to all fluid particles located within the same cubic sub-region of the fluid. Specifically, the post-collision velocity \(\mathbf{v}_{\xi,i}^{{}^{\prime}}\) of a particle \(i\) within the cubic sub-region of edge \(a\) (denoted by \(\xi\)), is related to its pre-collision velocity \(\mathbf{v}_{\xi,i}\) via
\[\mathbf{v}_{\xi,i}^{{}^{\prime}}=\mathbf{V}_{\xi}+\omega_{\xi}\left[\mathbf{v} _{\xi,i}-\mathbf{V}_{\xi}\right] \tag{9}\]
in which \(\mathbf{V}_{\xi}=\frac{1}{N_{\xi}}\sum_{i=1}^{N_{\xi}}\mathbf{v}_{\xi,i}\) is the center-of-mass velocity for the \(\xi\) region and the operator \(\omega_{\xi}\) corresponds to a clockwise rotation by an angle \(\alpha\) around a unit vector \(\mathbf{n}_{\xi}\) with a random orientation,
- the reactive part of the algorithm is applied after the nonreactive part and consists of stochastic identity changes for the fluid particles. These identity changes mimic the outcome for the chemical reactions taking place in the fluid and which account for the self-propulsion of the nanomotor.
The algorithm so obtained allows for the conservation of momentum and kinetic energy. In addition, transport coefficients can be readily obtained through explicit Green-Kubo relations [210].
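The nonreactive collision step of Eq. 9 is compact enough to sketch directly; the version below (illustrative parameters, a plain loop over occupied cells rather than an optimized cell list, and no grid shifting) rotates the relative velocities in each cell by a fixed angle \(\alpha\) about a randomly oriented axis:

```python
import numpy as np

def srd_collision(pos, vel, rng, a=1.0, alpha=np.pi / 2):
    """One stochastic rotation (SRD) collision step, Eq. 9."""
    cells = np.floor(pos / a).astype(int)
    # flatten the 3D cell index into a single key per particle
    keys = (cells[:, 0] * 1000 + cells[:, 1]) * 1000 + cells[:, 2]
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        V = vel[idx].mean(axis=0)                   # center-of-mass velocity
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)                      # random rotation axis
        dv = vel[idx] - V
        # Rodrigues rotation of the relative velocities by alpha around n
        vel[idx] = V + (dv * np.cos(alpha)
                        + np.cross(n, dv) * np.sin(alpha)
                        + np.outer(dv @ n, n) * (1 - np.cos(alpha)))
    return vel

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, (1000, 3))     # solvent particles in a 10^3 box
vel = rng.normal(0.0, 1.0, (1000, 3))
vel = srd_collision(pos, vel, rng)
```

In a full MPCD scheme this collision step alternates with free streaming of the fluid particles and with the stochastic identity changes of the reactive step; a random grid shift is usually added at every step to restore Galilean invariance.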
Alternative hydrodynamic simulation approaches include the lattice Boltzmann (LB) method [216; 217] and the dissipative particle dynamics (DPD) methods [218]. The former couples a system of particles, standing for the self-propelled particle, to a lattice-Boltzmann model representing the solvent. Self-propelled particles, also termed squirmers [23; 219; 220; 221; 222], are coarse-grained into an arrangement of mass points with a frictional coupling to the solvent, and appropriate boundary conditions are applied at surfaces. Instead of directly solving the Stokes equations (Eq. 5), the LB method solves the Boltzmann transport equation, which obeys the same conservation laws and describes the evolution of the single-particle phase space probability distribution \(f(\mathbf{r},\mathbf{v},t)\), _i.e._ the probability of finding a fluid molecule with velocity \(\mathbf{v}\) and position \(\mathbf{r}\) at time \(t\). As discussed by Kuron _et al._[219], the LB method linearizes the relaxation of \(f\) to the Maxwellian equilibrium. This is achieved by discretizing space on a cubic lattice and by discretizing time. The probability is allowed to flow between neighboring cells through a finite set of velocities \(\mathbf{c}_{i}\). Fluid-particle interactions take place exclusively via boundary conditions, with no-slip boundary conditions introduced through reflections of the populations streaming into the boundary back into the fluid. Momentum transfers between particles and fluid are then accounted for by these reflections [219], leading to the following force \(\mathbf{F}_{bb}(t)\)
\[\mathbf{F}_{bb}(t)=a^{3}\sum_{\mathbf{r}_{b}}\sum_{i}\mathbf{c}_{i}\left(f_{ i}(\mathbf{r}_{b},t)-f_{i}(\mathbf{r}_{b}-\mathbf{c}_{i}\tau,t)\right) \tag{10}\]
and torque \(\mathbf{T}_{bb}(t)\)
\[\mathbf{T}_{bb}(t)=a^{3}\sum_{\mathbf{r}_{b}}\sum_{i}(\mathbf{r}_{b}-\mathbf{ r})\times\mathbf{c}_{i}\left(f_{i}(\mathbf{r}_{b},t)-f_{i}(\mathbf{r}_{b}- \mathbf{c}_{i}\tau,t)\right) \tag{11}\]
in which \(\mathbf{r}_{b}\) denotes a boundary (particle) node, \(a\) the grid spacing for the lattice discretization and \(\tau\) the time step for the discretization.
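For illustration, a minimal single-relaxation-time (BGK) lattice-Boltzmann step on a D2Q9 lattice can be sketched as follows (illustrative parameters; no moving boundary is included, so the momentum-exchange terms of Eqs. 10 and 11 are not implemented here):

```python
import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, u):
    """Second-order Maxwellian equilibrium in lattice units (c_s^2 = 1/3)."""
    cu = np.einsum('id,xyd->xyi', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho[..., None] * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq[..., None])

def lb_step(f, tau=0.8):
    rho = f.sum(axis=-1)
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]
    f += (equilibrium(rho, u) - f) / tau          # BGK collision
    for i, ci in enumerate(c):                    # streaming to neighbor cells
        f[..., i] = np.roll(f[..., i], ci, axis=(0, 1))
    return f

# Relax a small shear-wave perturbation toward equilibrium
x = np.arange(32)
u0 = np.zeros((32, 32, 2))
u0[..., 0] = 0.01 * np.sin(2 * np.pi * x / 32)[None, :]
f = equilibrium(np.ones((32, 32)), u0)
for _ in range(100):
    f = lb_step(f)
```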
DPD is a multi-particle method that is akin to molecular dynamics, but with pairwise momentum-conserving stochastic and friction forces; it has recently been applied to study the collective properties of self-propelled particles [223; 224; 225; 226]. In the DPD approach [218], the dynamics of the particles is governed by the following stochastic differential equations
\[d\mathbf{r}_{i} = \frac{\mathbf{p}_{i}}{m_{i}}dt \tag{12}\] \[d\mathbf{p}_{i} = \sum_{j\neq i}\left[\mathbf{F}_{ij}^{C}(\mathbf{r}_{ij})-\gamma\omega_{D}(r_{ij})(\mathbf{e}_{ij}\cdot\mathbf{v}_{ij})\mathbf{e}_{ij}\right]dt+\sum_{j\neq i}\sigma\omega_{R}(r_{ij})\,\mathbf{e}_{ij}\,dW_{ij} \tag{13}\]
where \({\bf r}_{ij}={\bf r}_{i}-{\bf r}_{j}\), \(r_{ij}=|{\bf r}_{ij}|\) and \({\bf e}_{ij}={\bf r}_{ij}/r_{ij}\) is the unit vector from particle \(j\) to particle \(i\). In the second equation, \({\bf F}_{ij}^{C}\) denotes the conservative interparticle forces, while the second term on the right-hand side corresponds to the dissipative forces and the third term to a Gaussian white-noise term, with \(dW_{ij}\) as independent increments of a Wiener process. The functions \(\omega_{D}\) and \(\omega_{R}\) are weight functions that quantify the range of interaction for the dissipative and random forces, and \(\gamma\) and \(\sigma\) are the friction coefficient and the amplitude of the noise.
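The pairwise structure of Eq. 13 translates directly into code. The sketch below evaluates the three DPD force contributions for a single pair, with the common weight choice \(\omega_{D}=\omega_{R}^{2}\) and standard illustrative parameter values (\(\sigma^{2}=2\gamma k_{B}T\) by fluctuation-dissipation):

```python
import numpy as np

def dpd_pair_force(ri, rj, vi, vj, rng, dt,
                   rc=1.0, a=25.0, gamma=4.5, sigma=3.0):
    """DPD force on particle i due to j: conservative + dissipative + random."""
    rij = ri - rj
    r = np.linalg.norm(rij)
    if r >= rc:
        return np.zeros(3)
    e = rij / r
    wR = 1.0 - r / rc                              # omega_R; omega_D = omega_R**2
    FC = a * wR * e                                # soft conservative repulsion
    FD = -gamma * wR**2 * np.dot(e, vi - vj) * e   # dissipative (friction) force
    FR = sigma * wR * rng.normal() * e / np.sqrt(dt)   # dW_ij scales as sqrt(dt)
    return FC + FD + FR

rng = np.random.default_rng(0)
F = dpd_pair_force(np.zeros(3), np.array([0.5, 0.0, 0.0]),
                   np.zeros(3), np.zeros(3), rng, dt=0.01)
```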
Finally, Langevin dynamics in the overdamped limit has also been used to model nanomotors [173; 190]. In this case, a nanomotor \(i\) is characterized by its position \({\bf r}_{i}(t)\) and axis \(\hat{\nu}_{i}(t)=(\cos\theta_{i}(t),\sin\theta_{i}(t))\) and obeys the following equations of motion
\[\begin{array}{rcl}\partial_{t}{\bf r}_{i}&=&v_{0}\hat{\nu}_{i}+\mu{\bf F}_{i }+\eta_{i}^{T}(t)\\ \partial_{t}\theta_{i}&=&\eta_{i}^{R}(t)\end{array} \tag{14}\]
Here, \(v_{0}\) denotes the self-propulsion velocity, \(\mu\) the mobility, \({\bf F}_{i}\) the force exerted on \(i\), and \(\eta_{i}^{T}(t)\) and \(\eta_{i}^{R}(t)\) are translational and rotational white-noise terms, respectively. This model was, for instance, recently applied to simulate the autonomous detection and repair of microscopic mechanical defects and cracks by self-propelled Au/Pt nanomotors [227].
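A minimal Euler-Maruyama integration of Eq. 14 for non-interacting swimmers is sketched below (all parameter values are illustrative; \(D_{T}\) and \(D_{R}\) are translational and rotational diffusion constants that set the amplitudes of the white-noise terms, and interparticle forces would enter through the force function):

```python
import numpy as np

rng = np.random.default_rng(0)
N, v0, mu, DT, DR, dt = 100, 1.0, 1.0, 0.1, 0.5, 1e-3

r = rng.uniform(0.0, 20.0, (N, 2))        # positions r_i
theta = rng.uniform(0.0, 2 * np.pi, N)    # orientations theta_i

def forces(r):
    return np.zeros_like(r)               # free swimmers; add F_i here

for step in range(10000):
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # axis nu_i
    r += dt * (v0 * n + mu * forces(r)) \
         + np.sqrt(2 * DT * dt) * rng.normal(size=r.shape)
    theta += np.sqrt(2 * DR * dt) * rng.normal(size=N)
```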
### Simulation-aided design of nanomotors
Responsive and active soft materials have drawn considerable attention over the past decade [228; 229; 230]. In particular, synthetic and biomimetic nanomotors could potentially revolutionize nanomedicine and nanorobotics. Theories and simulations now play an active role in the improvement of design and control strategies of nanomachines. For instance, coarse-grained simulations have been instrumental for the optimization of DNA nanotechnology. Ouldridge _et al._ proposed a new model for a two-footed DNA walker, designed to step along a reusable track. Applying a moderate tension to the track can provide a bias for the walker to step forward, but also help it recover from undesirable overstepped states. Moreover, these authors showed that the process by which spent fuel detaches from the walker strongly influences the motion of the walker along the track, and suggested several modifications to the walker to improve its operation [231]. Chen _et al._ found a novel way to characterize the swimming motion of a linear catalytic nanomotor in a 2D fluid [232]. The diffusion of the nanomotor was accelerated by the chemical propulsion as long as they confined its rotational degree of freedom. They also suggested how, in experiments, analyzing the
confined diffusive behavior of nanomotors collectively could prove more efficient than tracking the trajectory of each nanomotor individually. Simulations can also help enhance the capacity of experimental synthetic nanomachines. Ortiz _et al._[233] studied how simulations could help design and harness enzymatic catalysis for on-demand pumping in nanofluidic devices. In particular, they modeled urease-based pumps and identified novel spatiotemporal variations in pumping behavior, thereby suggesting how self-powered fluidic devices, based on enzymatic pumps, could be improved.
Simulations can also serve as _in silico_ experiments to advance applications in nanomedicine that rely on self-healing nanomaterials and targeted drug delivery. Li _et al._[227] developed a nanomotor-based autonomous repair system that sought and localized cracks and mimicked wound healing. Nanomotors were observed to form "patches" and repair scratched electrodes, thereby restoring the conductive pathway. Fluid pumps have also been an area of intense research. For instance, Tansi _et al._ presented a new method to construct, move, and organize particle islands using light-powered fluid pumping[234]. Their method relied on freely suspended nanoparticles to generate fluid pumping towards desired point sources. The pumping rates were found to depend on particle concentration and light intensity, making them easy to control. Molecular dynamics simulations can also help design nanomachines as drug delivery systems. Recently, Cai _et al._ carried out molecular dynamics simulations to help design a new rotary nanomotor for a drug delivery nanosystem, involving graphene origami to drive a carbon nanotube rotor[235]. Such screening methods can be of great significance when testing the different components of a nanomachine.
Physical models were proposed recently to improve our understanding of the self-assembly and collective motion of nano-objects. In particular, approaches taking into account both the hydrodynamics and the chemistry are crucial to elucidate the behavior of nanomotors propelled by self-diffusiophoresis[189]. The inhomogeneous concentration fields induced by asymmetric motor reactions are "felt" by other motors and, as a result, strongly influence their motion. Systems composed of a collection of Janus particles can, for instance, exhibit dynamic cluster states[236]. Particles can join, leave or be trapped within a cluster[237, 238]. Prior work has shown the importance of the catalytic cap size, motor density, interaction potentials and fluid properties in cluster formation[239]. Models also provide insight into forward- and backward-moving motors. For sphere-dimer motors, forward-moving motors are attracted towards the areas with high product
concentrations and, as a result, move toward other motors, while backward-moving motors tend to avoid other motors. The flow field is also found to depend on the bond length in the dimer, with forward-moving motors acting as pullers for short bond lengths and as pushers for long bond lengths [203; 240]. Chemical oscillations can also impact the collective behavior. Motors show little tendency to cluster where fuel concentration is low, while they form dynamic clusters where fuel concentration is high [241]. Other studies have assessed the ability of chemically propelled nanomotors to perform tasks in complex media crowded by obstacles of various kinds. For instance, in nature, molecular machines can carry out diverse biochemical and transport tasks by moving on biofilaments or operating in membranes. Simulations by Qiao _et al._[242] on oligomeric motors attached to a filament provide insight into the self-generated concentration fields produced by the catalytic reactions.
**Controlling the motion of nanomotors**
Brownian motion is a crucial feature at the nanoscale and considerably increases the difficulty of controlling nanosized objects. With the advent of molecular nanotechnology, increasing effort has focused on the production of the tiniest lego possible [243]. Improvements in the design and synthesis of molecular machines, such as nanoscale motors, rotors, switches, and pumps, have led to tremendous advances [38; 85; 244]. However, Brownian motion presents a real problem for the design and manufacture of molecular-scale machines and factories [82]. Recently, there has been a paradigm shift, according to which Brownian motion can be seen as an unexpected help rather than a disadvantage. For instance, Toyabe _et al._[245] designed smart devices that could power themselves using Brownian motion. More specifically, they showed that dimeric particles could rotate clockwise by converting information into energy through a feedback manipulation of Brownian motion. Millen _et al._ proposed a method utilizing Brownian motion to measure the temperature of nanoscale objects [246]. In this case, Brownian motion results from the collisions with the surrounding gas (O\({}_{2}\) and N\({}_{2}\)) molecules. The surface temperature of the nanoobject can then be inferred from the collision features. This new procedure is thus an exceptional opportunity to better operate and control nanoscale systems. Indeed, it opens the door to the use of thermal energy as a lever for fine-tuning their activity. Microscopically, the temperature is often calculated from the kinetic energy stored in the velocity degrees of freedom of atoms or molecules. In the case of colloidal particles suspended in a fluid (overdamped motion), the kinetic energy is constantly dissipated
into heat. This heat is then turned back into motion via the fluctuations of the hydrodynamic velocity field of the solvent. Fluctuation-dissipation relations, such as, _e.g._, the Stokes-Einstein relation, give a route for the calculation of the temperature and transport properties of the fluid from the Brownian fluctuations of suspended probes. "Hot Brownian motion" recently emerged as a new non-isothermal out-of-equilibrium concept. This is the case, for instance, when various degrees of freedom of the particle (_i.e._, for a sphere, translational and rotational positions, and momenta) are each predicted to have their own effective temperatures. This departure from the equipartition principle is evidence that the system is very far from equilibrium [247; 248; 249]. Schachoff _et al._ used heated gold nanoparticles to reveal the impact of a radially symmetric temperature profile around the nanoparticle. Their results show that an effective temperature and the friction properties of hot Brownian motion can be defined in terms of a fluctuation-dissipation relation similar to that of isothermal systems. Moreover, asymmetric temperature profiles were found to induce a self-thermophoretic propulsion, and coupling DNA origami to the nanoparticle was shown to allow for the control of rotational motion [250]. Environmental features can also outweigh the increased rotational diffusion of nanoswimmers. Wu _et al._ reported an anomalously rapid transport of self-propelled nanoswimmers in a porous matrix [251]. In addition, they showed that nanoswimmers escaped from cavities more than an order of magnitude faster than expected, when compared to the corresponding Brownian particles. Moreover, self-propulsion resulted in qualitatively different phenomena, such as surface-mediated searching and the cancellation of energy barriers at hole exits [252]. Finally, recent reports show that collective effects and emergence can lead to a controllable directed motion [253].
## III Active Fluids at the Microscale
### Real-life Systems
**Microbiology**
Microbiology studies biological microorganisms, such as bacteria, viruses, and protozoa. Famous microbiologists include, but are not limited to, Jenner with his vaccine against smallpox and Fleming with the discovery of penicillin. Microbiology is a fascinating field that explores life diversity on Earth and the existence of life elsewhere in the Universe. Many microorganisms have
yet to be discovered. It is estimated that there are one trillion microbial species on Earth and that 99.999 percent of microbial species have not been identified[254; 255]. Many microorganisms fulfill important tasks by helping make drugs, manufacturing biofuels, and cleaning up pollution. Unicellular swimmers such as, _e.g._, the _Escherichia coli_ bacterium, are typically a few to several tens of micrometers in size. Because these microswimmers live in a world where viscous forces are greater than inertial forces (low Reynolds number), microorganisms have refined over time their propulsion strategies, which successfully overcome and even utilize viscous drag[1]. Moreover, microswimmers hardly ever swim alone. Indeed, in assemblies of motile microorganisms, cooperativity reaches a new level of complexity as they exhibit highly organized movements with remarkable large-scale patterns such as networks, complex vortices, or swarms[256].
**Bacteria and algae**
Bacteria use a system of helical filaments called flagella for propulsion. Different bacteria have different arrangements of flagella depending on what they need to achieve in terms of motility. They can have either a single flagellum or multiple flagella located at a particular spot on their surface. Alternatively, there can be a single flagellum on each of the two opposite ends or multiple flagella pointing in all directions[257]. An example of the latter is _E. coli_, shown in Fig. 4A. In this case, the flagella arrangement allows for a very interesting swimming pattern, known as "run-and-tumble" motion[258; 259; 260]. During the "run" phase, flagella form a bundle (counterclockwise rotation), which pushes the bacterium forward in one direction. In the "tumble" phase, one or a few flagella leave the bundle (clockwise rotation), which leads to a random change in the orientation of the bacterium. The frequency of each of these two steps informs on the bacterium's local environment. For instance, the frequency of the "run" phase correlates with how favorable the local environment is. If there are nutrients close to the bacterium, the bacterium will undergo a rapid forward motion to access the nutrients. When there are no nutrients, the bacterium undergoes the "tumble" phase and starts looking for nutrients elsewhere. The run-and-tumble motion is thus akin to a goal-oriented navigation. In this case, the bacterium reacts to a chemical gradient in nutrients through chemotaxis. Bacteria may also react to other stimuli, including temperature changes (thermotaxis), pressure changes (barotaxis), and flow changes (rheotaxis). In addition to bacteria, algae such as volvocine green algae have emerged as model organisms for flagellar propulsion. In particular, Volvox has allowed for an improved understanding of the transition from unicellular to
multicellular life. This multicellular green alga forms spherical colonies of up to 50,000 cells. The cells move their flagella in a coordinated fashion, which enables the colony to swim, for example, towards light [261, 262].
Figure 4: A) (Top) Swimming motility and structural parameters of _E. coli_ ATCC10798 and W3110 through sequential phase-contrast images [263]. Images are taken at 50 ms intervals for 10 s and integrated with an intermittent color code: “red \(\rightarrow\) yellow \(\rightarrow\) green \(\rightarrow\) cyan \(\rightarrow\) blue.” Scale bar, 20 \(\mu\)m. (Bottom) Electron micrographs of _E. coli_ ATCC10798 (left) and W3110 (right) cells. Scale bars, 2 \(\mu\)m. Adapted with permission from _Sci Rep._ **2020**, 10, 15887. Copyright 2020 Springer Nature. B) Ensemble behavior of _Pseudomonas aeruginosa PA01_ at oil-aqueous interfaces [264]. Individual bacteria trajectories are shown only over a 20 s time span for clarity, with active motions of (i) interfacial visitors, persistent circular motions including (ii) pirouettes and (iii) curly trajectories. A fourth trajectory type (iv), inert Brownian diffusive bacteria, is also present. Reprinted with permission from _Langmuir_ **2020**, 36, 6888-6902. Copyright 2020 American Chemical Society. C) Swarming bacteria migrate via a Lévy Walk, with in (a-b), a phase contrast imaging of a _B. subtilis_ swarming colony, in (c), fluorescent microscopy showing the fluorescently labelled bacteria, and in (d-e) example trajectories of individual bacteria inside the swarm at high (d) and low (e) magnifications [265]. Reprinted with permission from _Nat. Commun._ **2015**, 6, 8396. Copyright 2015 Springer Nature. D) Slime trails [266]. Deposition of slime by _M. xanthus_ as it glides on agar. (A) A\({}^{+}\)S\({}^{+}\) strain DK1622. (B) A\({}^{+}\)S\({}^{-}\) strain DK10410. (C) A\({}^{-}\)S\({}^{+}\) strain ASX1. Photographs of the swarming edge were taken after 1 day. Reprinted with permission from _Curr. Biol._ **2002**, 12, 369-377. Copyright 2002 Elsevier.
**Swarming and gliding**
Bacteria can also swim as a group and exhibit collective motion (see Figs. 4B and C). For instance, the formation of swarms of bacteria has been reported close to a moist surface or in thin liquid films [265; 266; 267; 268; 269]. Contrary to swimming, swarming implies specific changes in cell shape, with the cells becoming more elongated through the suppression of cell division. Swarming also leads to the formation of a new entity with a very large number of flagella. This points to a significant role of flagella and flagella-flagella interactions between adjacent cells in the swarming process [270; 271; 272; 273; 274; 275]. Myxobacteria (see Fig. 4D) are other examples of bacteria traveling in swarms. They exhibit a different form of bacterial motility, known as gliding. In this case, cells move on a substrate (or through a gel or porous material). For instance, _Myxococcus xanthus_ moves many cell lengths over surfaces without flagella, giving rise to rippling patterns. Genetic and cell behavioral studies have identified two motion patterns for the gliding motion of _M. xanthus_. These two patterns are governed by different genes, corresponding to A (for adventurous) and S (for social) cell behaviors. In the former, \(A^{+}\) stands for 'A-motility', for which single cells move, resulting in a spatial distribution with many single cells. \(S^{+}\) stands for S-motility, in which isolated cells do not move, but cells close to one another undergo motion [276]. While \(A^{-}S^{-}\) strains are nonmotile and never move more than a quarter of a cell length, both \(A^{+}S^{-}\) and \(A^{-}S^{+}\) strains are motile. However, their swarm patterns and swarming rates differ from those of \(A^{+}S^{+}\)[277; 278; 279; 266]. This demonstrates how bacteria adapt and leverage two propulsion systems in a synergistic way.
### Synthetic Systems
**Active colloids and Janus micromotors**
Microtechnology has undergone tremendous progress in recent years. It is now possible to design and synthesize microstructured materials with dimensions matching the size of a cell or collections of cells. These structures are exceptional tools as they offer the possibility to control the interface between cells and their interactions with their chemical and physical environment. Exciting developments in this field stem from soft lithography, named for its use of soft, elastomeric elements in pattern formation [280; 281]. Soft lithography enables printing, molding, and embossing using an elastomeric stamp with feature sizes ranging from 30 \(nm\) to 100 \(\mu m\).
Recent advances include the design of three-dimensional curved structures, the ability to work with different materials, and the generation of a well-defined and controllable surface chemistry. Soft lithography can yield channel structures appropriate for microfluidics, as well as pattern and manipulate cells[282, 283].
Several sophisticated micromotors were synthesized in recent years. Helical microswimmers were developed for targeted therapies[284], environmental sensing[131] and monitoring, cell manipulation and analysis, and lab-on-a-chip devices. Qiu _et al._ proposed the first functionalized artificial bacterial flagella (f-ABFs). These 3D microswimmers can deliver plasmid DNA into targeted cells using rotating magnetic fields. Cells targeted by f-ABFs were successfully transfected by the transported pDNA and expressed the encoding protein[285]. Patchy particles are an emerging tool for the synthesis of intelligent, structured micro-objects. Such colloidal particles can be anisotropically patterned, either by surface modification (localized attractive spots) or through their shape. Janus particles are unique among these micro-objects as they can have different chemical or physical properties and directionality within a single particle. In particular, the 'two-faced' spherical Janus particles play a significant role as micromotors[286; 287]. Active Janus colloids are capable of propelling themselves in fluidic environments via localized and asymmetric catalytic reactions that decompose, for instance, hydrogen peroxide[288; 289]. Gao _et al._ proposed catalytic iridium-based Janus micromotors that only require 0.001% of the chemical fuel to self-propel at 20 body lengths s\({}^{-1}\). In this case, Janus micromotors are composed of Ir and SiO\({}_{2}\) hemispheric layers. The catalytic decomposition of hydrazine at the Ir interface creates propulsion through osmotic effects. Such a low fuel concentration represents a 10,000-fold decrease in the level required for catalytic nanomotors[290]. Janus micro-objects can also serve as a building block for selective functionalization in biomedical applications. Wu _et al._ developed an autonomous self-propelled Janus capsule motor that can also serve as smart cargo[291]. This capsule motor is composed of partially coated dendritic platinum nanoparticles on one side, allowing for the catalytic decomposition of hydrogen peroxide and the generation of oxygen bubbles. The resulting bubble recoil then propels the capsule motor. The capsules can autonomously move at 125 body lengths/s while exerting large forces exceeding 75 pN. Finally, these asymmetric hollow capsules can achieve directed motion using an external magnetic field. Recently, Janus micromotors have been applied to water treatment[292, 293] and analytical sensing[294, 295].
**Modular microswimmers and directed self-assembly**
Can we construct artificial microswimmers from different components? Differently put, is it possible to create modules combining different functions and assemble them altogether? Recent research has started to address this challenge [296]. For instance, autonomous microswimmers, which include active and inactive functional components, have been assembled to create self-propelling complexes. The resulting modular microswimmers can exhibit different types of modular swimming. More generally, two kinds of modular swimmers were designed in recent years. The first kind consists of swimmers with bound structures. In this case, chemical bonds or electrostatic interactions link all components, thus limiting the rotational degrees of freedom. Dreyfus _et al._ created a flexible artificial flagellum with a linear chain of colloidal magnetic particles linked by DNA and attached to a red blood cell, and used an external uniform magnetic field to align the filaments. Oscillating a transverse field then allowed for the actuation of the movement, thereby inducing a beating pattern that propelled the structure [95]. Colloidal rotors [297], self-propelled sphere dimers [194], a colloidal chain made of Janus particles with a zigzag-shaped arrangement to form rotators [298] and magnetic microlassos for cargo delivery [299] were also recently designed. The second kind consists of dynamic structures. In this case, the composition and organization of components can change dynamically, as the modular swimmers can rearrange, disassemble and re-assemble in response to external fields or self-generated gradients. For instance, Snezhko _et al._ designed a self-propelling snake from a dispersion of magnetic microparticles at a liquid-air interface that is energized by an alternating magnetic field [300]. Helical ribbons have been self-assembled from paramagnetic beads in an external magnetic field [301]. An external periodic magnetic field also served to create throwers and rowers from asymmetric paramagnetic beads [302]. Other fascinating examples are reconfigurable microswimmers. Du _et al._ designed a two-body swimmer from paramagnetic beads under an eccentric magnetic field [303]. Palacci _et al._ developed colloidal dockers from a peanut-shaped hematite, guided by a weak magnetic field to the vicinity of a colloid. The two then couple via a light-activated phoretic force induced by a chemical gradient [304]. Electric fields can also be used to generate a rotating pinwheel formed by Janus particles around a homogeneous colloid under an AC electric field [305]. Asymmetric colloidal dimers can be propelled by an electrohydrodynamic flow [306], and mobile microelectrodes made of Janus particles can selectively attract or repel colloids by dielectrophoresis in a vertical electric field [307]. An alternative strategy, based on self-generated fields, can be used for the self-assembly of dynamic modular swimmers. This is the case for autonomous movers capable of gliding across the surface of a liquid without an external power source [308]. For such systems, the motion of individual
objects was powered by the catalytic decomposition of hydrogen peroxide, while self-assembly resulted from capillary interactions at the fluid/air interface [308]. Different types of interactions can play a significant role in the self-organization of micromachines. For instance, two tadpoles can self-assemble into a cluster bound by van der Waals interactions [309]. Janus micromotors and a non-catalytic hydrophobic colloid can couple via hydrophobic interactions [310]. Catalytic reactions can also give rise to hydrodynamic forces that self-assemble microrotors and swimmers [311]. Martinez _et al._ used reconfigurable photoactivated magnetic microdockers to assemble and transport microscopic cargos [312]. In the latter, the photoactivation process induces a phoretic flow capable of attracting cargos toward the surface of the propellers. At the same time, a rotating magnetic field is used to transport the composite particles to any targeted location. The method allows for the assembly of small colloidal clusters of various sizes, composed of a skeleton of mobile magnetic dockers, which cooperatively keep, transport, and release the microscopic cargos. Modular phoretic microswimmers were also designed using a concentration gradient induced by ion exchange [313] and heat-induced phoretic forces [314].
### Microswimmers in complex environments
While recent studies have shed light on propulsion mechanisms at the microscale, a complete understanding of the dynamics of microswimmers in complex environments still eludes us [20]. For instance, micromotors travel in mucus gels, blood vessels, and microfluidic chips to perform their tasks. These complex environments can be seen as various types of confinements [317]. The transport dynamics of micromotors depends on the geometry of the confinement. In addition to thermal fluctuations and self-propulsion, the behavior of confined microswimmers is also impacted by interfaces. Studies have indicated a strong coupling between the microswimmers' motion and interfaces, leading to intriguing transport behaviors near interfaces [251]. For example, in their study of spherical Janus colloids close to solid surfaces, Das _et al._ reported an active quenching of the particles' Brownian rotation, which leads to constrained in-plane swimming along the wall. This new steering mechanism leads to a constrained 2D-enhanced diffusion at the walls [318]. Here, the main contribution comes from the dynamic flow field at the interface, but not from the interactions between the wall and the particles. Other studies have shown how directed motion can be induced by combining geometric constraints. Another example is the study by Brown _et al._ of catalytic Janus particles swimming and hopping between colloids in a two-dimensional colloidal crystal.
The hopping rate was found to vary inversely with fuel (hydrogen peroxide) concentration [319]. This has led to new approaches to achieve directional guidance of chemically active microswimmers. Simmchen _et al._ used various topographic features (stripes, squares, or circular posts) to guide the motion of Janus microswimmers [320]. Microswimmers followed step-like topographical
Figure 5: A) Light-activated self-propelled colloids with (a-b) Scanning electron microscope (SEM) images of hematite particles embedded in a TPM (3-(Trimethoxysilyl)propyl methacrylate) shell, (c) Trajectories of colloidal microswimmers in the absence of light (black) or with the light on (red), and (d) Averaged mean square displacement (red symbols), compared to a random-walk dynamics (blue dashed line), (e) self-propulsion velocity against light intensity for blue light (blue symbols) and UVA-violet light (violet symbols), and with a red dashed line corresponding to a fit with a Michaelis–Menten kinetics, and (f) Self-propulsion velocity against the Debye length of the solution, varied by addition of sodium chloride salt (black symbols) and withdrawing of the surfactant (blue symbol) [315]. Reprinted with permission from _Phil. Trans. R. Soc. A_ **2014**, 372, 20130372. Copyright 2014 The Royal Society. B) Living crystals of light-activated colloidal swimmers, with in (B.A) TPM polymer colloidal sphere and protruding hematite cube (dark). Living crystals (B.B) are assembled from a homogeneous distribution (inset) when a blue light is switched on, and melt (B.C) by thermal diffusion when the light is off. (B.C) shows the system 10s after light is turned off (inset, after 100s). (B.D to B.G) Colors indicate the time evolution of particles belonging to different clusters. In (B.B-B.G), scale bars correspond to \(10\,\mu m\)[316]. Reprinted with permission from _Science_ **2013**, 339, 936-940. Copyright 2013 American Association for the Advancement of Science.
features that were only a fraction of the particle radius in height.
Optical traps can also be used to confine microswimmers[321; 322]. Nedev _et al._ designed the first microswimmer "elevator"[323] by trapping Janus particles (silica spheres with a gold half-shell) with optical tweezers and by moving them along the axis of the laser beam, thereby allowing for the upward and downward motions of the Janus particles. They showed that this process arose from a complex interplay between optical and thermal forces, with scattering forces orienting the asymmetric particle, while the strong absorption on the metal side induced a thermal gradient, resulting in particle motion. Thus, an increase in laser power led to upward motion, while a decrease in laser power resulted in downward motion. Gao _et al._ performed an angular trapping of Janus particles by controlling the laser polarization direction[324]. Optical tweezers were also used to trap self-propelled Janus particles in a "round-trip" motion[325] and trochoidal trajectories[326]. However, hematite-based microswimmers are generally ejected from conventional optical tweezers. Recently, Abacousnac _et al._ developed dark optical tweezers as holographic optical trapping systems. They were able to trap dielectric spheres enclosing a hematite cube (dark-seeking particles) and move them along all three dimensions[327].
**Toward living materials: collective motion in active fluids**
Active nematics constitute a new class of active fluids, which combine the properties of liquid crystals with those of active matter. Nematic liquid crystals are generally modeled as rod-like particles, mimicking the shape of elongated micro-objects. Depending on temperature or concentration, these elongated particles can predominantly align in a given direction, _i.e._, in a nematic phase with long-range orientational order. Structural inhomogeneities or the application of an external force can lead to mismatches between neighboring domains with different directions, resulting in topological defects and singularities in the orientation field[328]. There are two types of topological defects in 2D nematic liquid crystals: comet-like \((+1/2)\) or trefoil-like \((-1/2)\), with the topological charge \(+1/2\) or \(-1/2\) standing for the angle by which the particles turn (\(+180^{\circ}\) for a charge of \(+1/2\) and \(-180^{\circ}\) for a charge of \(-1/2\)). Cortes _et al._ have shown that topological defects in active nematic liquid crystals can be annihilated in pairs (\(+1/2\) and \(-1/2\)) or created in pairs (\(+1/2\) and \(+1/2\), or \(-1/2\) and \(-1/2\))[329]. Bacteria can be mimicked by adding activity to elongated microparticles, making them self-propelled. This autonomous motion can be viewed as a force capable of creating/annihilating topological defects. Several groups [330; 331; 332] have
established that topological defects could lead to the onset of complex streaming flows. Active turbulence is a chaotic-like feature that destroys the long-ranged nematic order. The challenge is thus to control the direction of the bacteria but also command their chaotic behavior. Zhou _et al._ used motile rod-shaped bacteria in water-based nontoxic liquid crystals to obtain living liquid crystals (LLCs)[333]. They reported that the coupling between the long-range orientational order in the liquid crystal and the swimming activity of the bacteria dramatically altered both the individual and collective bacterial dynamics. For instance, the motion of the bacteria perturbs the orientational order of the liquid crystal, even resulting in local melting and allowing for a direct observation of the bacteria's motion. Collective motion in this active fluid also gives rise to self-organized textures unseen in equilibrium liquid crystals[333]. Peng _et al._ have shown that active matter can be controlled via topological defects and patterns[334]. Orientational order in a liquid crystal can direct the flow of self-propelling bacteria which, in turn, impact the patterning of the liquid crystal molecules. Patterns on a substrate can lead to surface anchoring of the liquid crystals that results in the ordering of the bacteria. Bacteria were found to be able to differentiate between different types of topological defects, as bacteria headed toward defects of positive topological charge and avoided negative charges. Understanding the interplay between hydrodynamics and topology will be key in the design, control, and manipulation of soft active matter for biosensing and biomedical applications.
Swarming and pattern formation are large-scale phenomena that demonstrate a form of intelligence as exhibited by bacteria when forming colonies. Palacci _et al._ studied the self-organized clustering of light-activated colloidal surfers[316]. They demonstrated that they could control the formation of clusters of microsurfers by switching on and off blue light (see Fig. 5). Palacci _et al._ were able to relate this observation to the general property known as giant-number fluctuations found in swarming systems[335]. The microsurfer clusters were named two-dimensional "living crystals," as clusters formed, broke, exploded, and formed again elsewhere. More generally, in another type of dissipative system, Narayan _et al._ observed long-lived giant number fluctuations in a swarming granular nematic system, and showed that an agitated monolayer of rodlike particles could exhibit liquid crystalline order. This showed that the onset of flocking, coherent motion, and large-scale inhomogeneities could take place in a system in which particles did not communicate except by contact, had no sensing mechanisms, and were not impacted by the environment[336]. Dissipative building blocks, such as photoactive components, can also be used to control
self-assembly. Aubret _et al._ succeeded in the targeted formation and synchronization of self-powered microgears. These self-spinning microgears can follow spatiotemporal light patterns, demonstrating the possibility to program interactions and direct self-assembly. This lays the groundwork for the autonomous construction of dynamical architectures and functional micro-machinery [337]. Other reconfiguring self-assembly systems were designed by Kang _et al._[338], who formed chains and flower structures by using photoresponsive hybrid colloids. They also succeeded in transforming the chains into flower-like structures with decreasing UV intensity and triggered the formation of orientationally ordered clusters by applying an external magnetic field. Several discoveries have recently paved the way for the control and reconfiguration of micro-object self-assembly. This opens the door to the design of novel smart materials such as reconfigurable robots and programmable soft robotic swarms for cooperative grasping, collective cargo transportation, and the building of micro-factories [339; 340; 341].
### Theory and Simulations
**Modeling across the scales: the Vicsek model**
The transition to collective motion is often termed flocking. In fact, it is commonly observed in a wide range of living systems [342; 335]. This includes small systems at the subcellular or cellular level [343], bacterial colonies [268], and large systems such as schools of fish, flocks of birds [344] and herds of mammals [345]. A well-known model for active particles is the Vicsek model [346]. This model captures the minimal ingredients for active particles to undergo a transition from single self-propelled particles to collective motion [346; 347; 348; 349; 35]. The Vicsek model accounts for the overdamped dynamics of a system composed of \(N\) self-propelled particles that interact through a local alignment rule. Each particle \(j\) is characterized by \(\mathbf{r}_{j}\), which stands for its position, \(\mathbf{v}_{j}=v_{0}\mathbf{s}_{j}\) its velocity, \(v_{0}\) the norm of the velocity, and \(\mathbf{s}_{j}=(\cos\theta_{j},\sin\theta_{j})\) its heading, or direction of motion. Accounting for angular noise, the equations of motion for the active particles are given in 2D by
\[\begin{array}{rcl}\mathbf{r}_{j}(t+\Delta t)&=&\mathbf{r}_{j}(t)+\Delta t\,v_{0}\mathbf{s}_{j}(t+\Delta t)\\ \theta_{j}(t+\Delta t)&=&\mathrm{Arg}\left[\sum_{k\in D_{j}}e^{i\theta_{k}(t)}\right]+\eta\,\xi_{j}\\ \mathbf{s}_{j}(t+\Delta t)&=&\left(\cos\theta_{j}(t+\Delta t),\sin\theta_{j}(t+\Delta t)\right)\end{array} \tag{15}\]
where the average heading at time \(t\) is calculated over all neighboring particles \(k\) within a unit
disk \(D_{j}\) around particle \(j\), \(\xi_{j}\) is an angle chosen randomly between \(-\pi\) and \(\pi\), and \(\eta\) denotes the noise intensity. The random angle \(\xi_{j}\) is a zero-average, delta-correlated scalar noise termed white noise because of its flat Fourier spectrum [335]. Several studies have introduced variations on this model that include a short-range repulsion, or steric interactions, between particles [347] or allow the norm of the self-propelling velocity \(v_{0}\) to fluctuate [350].
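To make the update rule concrete, the following is a minimal R sketch of one time step of Equation 15 (angular-noise variant); all parameter values are illustrative assumptions, not taken from any particular study.

```r
# Minimal sketch of one update of the Vicsek model (Equation 15), angular
# noise variant; all parameter values are illustrative assumptions.
set.seed(1)
N <- 200; L <- 10        # number of particles, periodic box size
v0 <- 0.1; dt <- 1       # speed and time step
eta <- 0.3               # noise intensity

x <- runif(N, 0, L); y <- runif(N, 0, L)
theta <- runif(N, -pi, pi)

vicsek_step <- function(x, y, theta) {
  theta_new <- numeric(N)
  for (j in 1:N) {
    # neighbours of j within a unit disc (minimal-image periodic distances)
    dx <- abs(x - x[j]); dx <- pmin(dx, L - dx)
    dy <- abs(y - y[j]); dy <- pmin(dy, L - dy)
    nbr <- which(dx^2 + dy^2 < 1)   # includes j itself
    # Arg of the sum of neighbour headings, plus angular noise
    theta_new[j] <- atan2(sum(sin(theta[nbr])), sum(cos(theta[nbr]))) +
      eta * runif(1, -pi, pi)
  }
  # update positions with the new headings, periodic boundaries
  list(x = (x + dt * v0 * cos(theta_new)) %% L,
       y = (y + dt * v0 * sin(theta_new)) %% L,
       theta = theta_new)
}
state <- vicsek_step(x, y, theta)
```

Iterating `vicsek_step` and lowering `eta` (or raising the density) makes the transition from a disordered gas to collective motion directly visible.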
Several features characterize the Vicsek model and account for the transition to collective motion. First, particles are self-propelled. Second, they change their relative positions in a complex manner, _i.e._, according to their velocity fluctuations, leading to far-from-equilibrium behavior [335]. Finally, there are no conservation laws apart from the conservation of the number of self-propelled particles, which means that there is no momentum conservation. Strictly speaking, the latter differs from what takes place in the case of microswimmers in a suspension. Indeed, momentum is transferred from the microswimmers to the surrounding fluids and hydrodynamic interactions are expected to play a significant role, especially for 3D suspensions [351]. The Vicsek model exhibits a disordered gas-like phase (high external noise), microphase separation with propagating bands of high density (intermediate external noise) and a polar liquid (low external noise) [335, 347, 348]. Toner and Tu developed fluctuating hydrodynamic equations for the Vicsek model [352, 353, 354, 355, 356] and showed via a dynamic renormalization group calculation that these polar flocks possess a long-range orientational order, even in 2D. Although this theory only applies to dilute, aligning, dry active matter [11, 357], its predictions include the concept of giant number fluctuations which is relevant to a wide range of active matter systems [358, 359, 360, 361, 362, 363, 364, 365, 366, 367]. Indeed, for orientationally ordered phases of active matter, the variance of the number of particles in subsystems of increasing size increases faster than the mean. A detailed analysis of the phase transitions in the Vicsek model has highlighted its similarity with the liquid-gas transition [364, 349]. This is in agreement with the solutions found using hydrodynamic equations for flocking models [365, 366, 367].
Partial differential equations for the density field \(\rho(x,t)\) and the momentum field \(W(x,t)=\rho(x,t)P(x,t)\), in which \(P(x,t)\) is a polarization field (between 0 and 1), can be written as
\[\begin{array}{rcl}\partial_{t}\rho&=&-\partial_{x}W\\ \partial_{t}W&=&-\xi W\partial_{x}W+a_{2}W-a_{4}W^{3}-\lambda\partial_{x}\rho+ D\partial_{xx}W\end{array} \tag{16}\]
These equations account for a continuous mean-field transition from a homogeneous isotropic state with \(\rho=\rho_{0}\) and \(P=0\) when \(a_{2}<0\) to a homogeneous polarized state with \(P=\rho_{0}^{-1}\sqrt{a_{2}/a_{4}}\) when \(a_{2}>0\). The \(\lambda\) term corresponds to the pressure gradient induced by density heterogeneities,
while \(\xi\) and \(D\) are transport coefficients for the advection and the diffusion of the local order parameter [365]. After integration [349], the solutions exhibit traveling bands with both smectic microphases and phase-separated profiles. We finally add that extensions of the Vicsek model to binary mixtures have involved systems with two sub-populations with different external noises [368], and mixtures of passive and active particles [369].
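As an illustration of how Equation 16 can be solved numerically, the sketch below integrates the two fields on a 1D periodic grid with central finite differences and an explicit Euler scheme; all parameter values are illustrative assumptions, and a careful study would use a more robust integrator.

```r
# Minimal sketch: Equation 16 integrated on a 1D periodic grid with central
# finite differences and explicit Euler; parameter values are illustrative.
set.seed(1)
nx <- 256; Lx <- 100; dx <- Lx / nx; dt <- 0.01
xi <- 1; a2 <- 0.5; a4 <- 1; lambda <- 0.5; D <- 1

# small random perturbations around the homogeneous isotropic state
rho <- 1 + 0.01 * rnorm(nx)
W   <- 0.01 * rnorm(nx)

# first and second derivatives on a periodic grid
ddx  <- function(f) (c(f[-1], f[1]) - c(f[nx], f[-nx])) / (2 * dx)
d2dx <- function(f) (c(f[-1], f[1]) - 2 * f + c(f[nx], f[-nx])) / dx^2

for (it in 1:10000) {
  drho <- -ddx(W)
  dW   <- -xi * W * ddx(W) + a2 * W - a4 * W^3 - lambda * ddx(rho) + D * d2dx(W)
  rho  <- rho + dt * drho
  W    <- W + dt * dW
}
```

With \(a_{2}>0\), as here, the polarization grows from the perturbed isotropic state toward the ordered branch, consistent with the mean-field picture above.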
#### MIPS at the micron scale
Following the success of the Vicsek model, the next step has consisted in developing a minimal model for the simulation of wet active matter. This has led to the proposal of the Active Brownian Particles (ABP) model [370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384]. This model quickly proved to be instrumental in furthering our understanding of active matter. Indeed, it provided a testing ground for the hypothesis that clustering and phase separation were intrinsic properties of active systems [385, 386], and resulted from the flux of chemical energy that drives motility and breaks detailed balance [22]. Fily and Marchetti [387, 388, 389, 390] used an ABP model to show that clustering could be observed even in the absence of any alignment rule, unlike in the Vicsek model. In the ABP model, particles are modeled as soft repulsive disks, characterized by their positions \(\mathbf{r}\) and their axis \(\hat{\mathbf{v}}=(\cos\theta,\sin\theta)\). The equations of motion for a particle \(i\) are as follows
\[\begin{array}{rcl}\partial_{t}\mathbf{r}_{i}&=&v_{0}\hat{\mathbf{v}}_{i}+ \mu\sum_{j\neq i}\mathbf{F}_{ij}+\eta_{i}^{T}(t)\\ \partial_{t}\theta_{i}&=&\eta_{i}^{R}(t)\end{array} \tag{17}\]
in which \(v_{0}\) denotes the self-propulsion velocity, \(\mu\) the mobility, \(\eta_{i}^{T}(t)\) and \(\eta_{i}^{R}(t)\) are the translational and rotational white noise terms, respectively. The force \(\mathbf{F}_{ij}\) between two particles \(i\) and \(j\) is short-ranged and repulsive with, in this case, \(\mathbf{F}_{ij}=-k(\frac{2a}{r_{ij}}-1)\mathbf{r}_{ij}\) if \(r_{ij}<2a\) and \(0\) otherwise, in which \(k\) is a force constant and \(a\) the radius of the particle. Other repulsive potentials have also been employed, such as a Weeks-Chandler-Andersen potential, leading to the same type of clustering and phase separation [391]. Alternate models have also included attractive interactions between active particles [392; 393; 394; 395; 396; 397]. We add that ABP exhibit similar features to those observed for Vicsek models, including the onset of giant fluctuations in the system [387]. The equations of motion can also be extended to account for active particles of different shapes [384], including dumbbells [398, 399, 400, 401, 402, 403, 404], which opens the door to simulations of active nematics [330, 332, 358, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420].
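A minimal sketch of the ABP dynamics of Equation 17 follows, using the soft repulsive force quoted above and a simple Euler–Maruyama step; the box size, diffusivities, and all other parameter values are illustrative assumptions.

```r
# Minimal sketch of one Euler-Maruyama update of the ABP model (Equation 17)
# with the soft repulsion F_ij = -k (2a/r_ij - 1) r_ij; illustrative values.
set.seed(1)
N <- 100; L <- 20; a <- 0.5          # particles, periodic box, radius
v0 <- 0.5; mu <- 1; k <- 10; dt <- 1e-3
DT <- 0.01; DR <- 0.1                # translational/rotational diffusivities
x <- runif(N, 0, L); y <- runif(N, 0, L); th <- runif(N, -pi, pi)

abp_step <- function(x, y, th) {
  fx <- fy <- numeric(N)
  for (i in 1:N) {
    dx <- x - x[i]; dy <- y - y[i]
    dx <- dx - L * round(dx / L); dy <- dy - L * round(dy / L)  # minimal image
    r <- sqrt(dx^2 + dy^2)
    ov <- which(r < 2 * a & r > 0)   # overlapping neighbours
    # soft repulsion, summed over overlapping neighbours
    fx[i] <- -k * sum((2 * a / r[ov] - 1) * dx[ov])
    fy[i] <- -k * sum((2 * a / r[ov] - 1) * dy[ov])
  }
  x <- (x + dt * (v0 * cos(th) + mu * fx) + sqrt(2 * DT * dt) * rnorm(N)) %% L
  y <- (y + dt * (v0 * sin(th) + mu * fy) + sqrt(2 * DT * dt) * rnorm(N)) %% L
  th <- th + sqrt(2 * DR * dt) * rnorm(N)
  list(x = x, y = y, th = th)
}
state <- abp_step(x, y, th)
```

Note that, unlike in the Vicsek model, no alignment rule appears anywhere in the update: clustering in this model emerges purely from self-propulsion and repulsion.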
Cates and Tailleur[421] proposed the concept of active simple fluids as fluids composed of spherical self-propelled particles, whose interactions are isotropic. Isotropic interactions encompass attractive and repulsive potentials, as well as, _e.g._, different types of chemical signaling and quorum sensing in bacteria[422]. These fluids were found to exhibit a far richer phase behavior than non-active, or passive, fluids which interact through the same isotropic potential. For instance, purely repulsive ABP undergo a liquid-gas phase separation[423; 424; 425; 426; 427; 428; 429; 430], while repulsive soft spheres do not. This phenomenon, termed motility-induced phase separation (MIPS)[431; 38], arises from the nonequilibrium nature of the active fluid. It can be characterized as the coexistence of a dilute active gas with a dense liquid cluster of reduced motility.
#### Thermodynamics for microswimmers
Recent studies have focused on establishing the phase diagram (see Fig. 6A) of ABPs and microswimmer suspensions[433; 391; 37; 434; 435; 436; 437; 438; 392; 439; 440; 441; 442; 443].
Nonequilibrium and living systems are also associated with irreversibility and time-reversal symmetry breaking. In other words, the injection of energy into the system can give rise to the onset of a steady state, the emergence of steady-state currents and of a global entropy production rate. This has been quantified by fluctuation theorems [464; 465; 466; 467; 468; 469]. The determination of the entropy production rate has drawn significant interest in recent years, and the development of theoretical tools to characterize steady-state currents, in the context of nonequilibrium thermodynamics and through data science-based approaches, is currently under way [25; 26; 27; 480; 481; 482; 483; 484].
### Dynamical behavior and external stimuli
Trapping microswimmers to form static or dynamic patterns (see Fig. 6B) is also an area of active research [20; 432]. One of the simplest ways to trap particles is to use spatial confinement. Fily _et al._ studied the dynamics of non-interacting and non-aligning self-propelled particles under strong confinement (_i.e._, when the box dimension is smaller than the persistence length of the particles). They found that they could modify the particles' spatial distribution by changing the geometry of the simulation box. For instance, particles are packed in areas of strong curvature in a 2D ellipse-shaped container, while particles concentrate in sharp corners in a 2D polygon-shaped container. Moreover, the greater the persistence time of the particles, the longer particles will remain trapped [485]. This could provide insight into how specific spatial distributions of bacteria arise in nature such as, _e.g._, in biofilm formation. In systems of 2D active particles interacting via soft repulsive interactions, Yang _et al._[443] found that particles aggregated on the sides of the container to form clusters, which left the center of the container virtually unoccupied. This form of segregation, which results from a combination of confinement and activity, could contribute to our understanding of cell sorting in embryonic development.
Both rigid and deformable microswimmers can exhibit intriguing trajectories and dynamical patterns in complex flows. Shape deformability, combined with the motility of microswimmers, adds a layer of complexity. For instance, red blood cells can deform when they pass through microchannels that are smaller than the size of blood cells. Fedosov _et al._ employed mesoscale hydrodynamic simulations to predict the phase diagram for the shapes and dynamics of red blood cells in flow through cylindrical microchannels [486]. They found a rich dynamical behavior, with snaking and tumbling discocytes, slippers performing a swinging motion, and stationary parachutes. One of the simplest flows is a linear shear flow. Recently, Gaffney _et al._[487] showed that shape-deforming swimmers moving in the plane of a shear flow followed Jeffery's orbits [488]. Tarama _et al._ proposed a 2D theoretical and numerical framework based on the dynamics of the particles' orientation and deformation to understand the behavior of deformable active particles under shear flow. They found a variety of different dynamical modes, including active straight motion, periodic motions, motions on undulated cycloids, winding motions, as well as quasi-periodic and chaotic motions. The validity of the model was tested against experimental
data on self-propelled droplets undergoing a linear shear flow [489; 490]. Numerous studies have focused on microswimmers in complex flows [491; 183]. For instance, Zöttl and Stark[492] studied the three-dimensional dynamics of a spherical microswimmer in cylindrical Poiseuille flow. They found that microswimmers display swinging and tumbling trajectories. In 2D, such trajectories are equivalent to oscillating and circling solutions of a pendulum. Hydrodynamic interactions between the swimmer and confining channel walls were found to lead to dissipative dynamics and result in stable trajectories, different for pullers and pushers. Most biological fluids such as mucus and blood are viscoelastic and non-Newtonian. However, most simulation studies on motile microorganisms have focused on Newtonian fluids so far. Recently, Mathijssen _et al._ developed a model for microswimmers in non-Newtonian Poiseuille flows. Unlike in Newtonian fluids, swimmers' trajectories show oscillatory motion about the centerline. More specifically, swimmers in shear-thickening (-thinning) fluids migrate upstream more (less) quickly than in Newtonian fluids. The direct upstream migration is related to viscoelastic normal stress differences[493]. We finally add that the determination of transport coefficients of active fluids is an active area of research[494; 495; 496; 497].
## IV Conclusions and perspectives
In this review, we discuss recent advances in the design, synthesis and modeling of active fluids at the nano- and micro-scale. The design of active materials on both scales is driven by observations of real-life, biological, active systems. On the nanoscale, these include biological nanomachines, nanomotors that power molecular machines and enzymes as energy transducers. On the microscale, examples of biological systems belong to the microbial world with bacteria and algae, among others. Advances in synthetic methods have led, on the nanoscale, to the development of an artificial biology, artificial enzymes or nanozymes, fuel-dependent or fuel-free nanomotors and to the directed motion of nanomotors for therapeutics. On the microscale, synthetic active fluids encompass suspensions of active colloids and Janus micromotors, the design of modular microswimmers and their directed self-assembly, the control of microswimmers in complex environment and of collective motion in active fluids to achieve the synthesis of living materials. Theoretical advances include the development of a continuum theory for phoretic propulsion, of coarse-grained simulation methods for nanomotors, the simulation-aided design of nanomotors and control of their motion. On the microscale, progress has been made through the proposal of
minimal models that capture the essential features of active matter, from the well-known Vicsek model for dry active matter to the Active Brownian Particle model for wet active matter. The latter has provided novel insight into the behavior of active matter, including the onset of a motility-induced phase separation as a general feature of active fluids.
Active fluids are a fascinating and thriving research field. Ongoing research focuses on the design of multi-component, modular, programmable and adaptive nano- and micro-machines. On the theoretical front, current developments focus on the establishment of a thermodynamics for microswimmers, measures for entropy production in these far-from-equilibrium systems and the understanding of their dynamical behavior and collective response to external stimuli. It is expected that concerted experimental and theoretical efforts will lead to the emergence of an autonomous soft matter robotics, able to perform tasks in a controlled manner on both scales. We conclude this review by highlighting two rapidly emerging areas of research on active matter. First, the application of machine learning and artificial intelligence to these systems promises to shed light on the behavior of these systems and to provide new ways to control their function [498]. Recent work has shown how phase changes, and the onset of MIPS, can be detected via the use of fully connected networks in conjunction with graph neural networks [499]. Another example is the application of a convolutional long-short-term-memory (ConvLSTM) algorithm for the forecasting of the dynamics of active nematics [500]. Combining autoencoders and recurrent neural networks with residual architecture has also recently made it possible to map out the spatiotemporal variation of multiple hydrodynamic parameters and forecast the chaotic dynamics of active nematics solely from image sequences of their past [501]. Most notably, recent work has shown how the adaptive behavior and learning usually demonstrated by living systems could be extended to synthetic active particles via reinforcement learning [502; 503; 504; 505]. The second emerging area of research focuses on the use of active matter to design and build soft matter robots. For instance, recent work has shown how the control of self-propelled colloidal particle propulsion speeds can lead to the cooperative capture and transport of cargo particles [506]. Very interestingly, such control can be exerted through the application of specific light patterns both in the case of synthetic active particles [507; 314; 32] and in the case of bacteria [508; 509; 66]. Indeed, _E. coli_ cells under anaerobic conditions can express proteorhodopsin, a green-photon-driven proton pump [510], and have their self-propulsion velocity controlled via green light illumination. Many other types of devices leveraging active matter are currently under development, including the application of active soft materials, such as actin-, tubulin-, and cell-based systems, to perform logic operations and thus
perform computations [51].
###### Acknowledgements.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC-0020976.
## Author Declarations
### Conflict of interest
The authors have no conflict to disclose.
### Data availability statement
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
|
2308.15678 | Understanding step selection analysis through numerical integration | Step selection functions (SSFs) are flexible models to jointly describe
animals' movement and habitat preferences. Their popularity has grown rapidly
and extensions have been developed to increase their utility, including various
distributions to describe movement constraints, interactions to allow movements
to depend on local environmental features, and random effects and latent states
to account for within- and among-individual variability. Although the SSF is a
relatively simple statistical model, its presentation has not been consistent
in the literature, leading to confusion about model flexibility and
interpretation. We believe that part of the confusion has arisen from the
conflation of the SSF model with the methods used for parameter estimation.
Notably, conditional logistic regression can be used to fit SSFs in exponential
form, and this approach is often presented interchangeably with the actual
model (the SSF itself). However, reliance on conditional logistic regression
reduces model flexibility, and suggests a misleading interpretation of step
selection analysis as being equivalent to a case-control study. In this review,
we explicitly distinguish between model formulation and inference technique,
presenting a coherent framework to fit SSFs based on numerical integration and
maximum likelihood estimation. We provide an overview of common numerical
integration techniques, and explain how they relate to step selection analyses.
This framework unifies different model fitting techniques for SSFs, and opens
the way for improved inference. In particular, it makes it straightforward to
model movement with distributions outside the exponential family, and to apply
different SSF formulations to a data set and compare them with AIC. By
separating the model formulation from the inference technique, we hope to
clarify many important concepts in step selection analysis. | Théo Michelot, Natasha J. Klappstein, Jonathan R. Potts, John Fieberg | 2023-08-30T00:26:54Z | http://arxiv.org/abs/2308.15678v1 | # Understanding step selection analysis through numerical integration
###### Abstract
Step selection functions (SSFs) are flexible statistical models used to jointly describe animals' movement and habitat preferences. The popularity of SSFs has grown rapidly, and various extensions have been developed to increase their utility, including the ability to use multiple statistical distributions to describe movement constraints, interactions to allow movements to depend on local environmental features, and random effects and latent states to account for within- and among-individual variability. Although the SSF is a relatively simple statistical model, its presentation has not been consistent in the literature, leading to confusion about model flexibility and interpretation. We believe that part of the confusion has arisen from the conflation of the SSF model with the methods used for statistical inference, and in particular, parameter estimation. Notably, conditional logistic regression can be used to fit SSFs in exponential form, and this model fitting approach is often presented interchangeably with the actual model (the SSF itself). However, reliance on conditional logistic regression reduces model flexibility, and suggests a misleading interpretation of step selection analysis as being equivalent to a case-control study. In this review, we explicitly distinguish between model formulation and inference technique, presenting a coherent framework to fit SSFs based on numerical integration and maximum likelihood estimation. We provide an overview of common numerical integration techniques (including Monte Carlo integration, importance sampling, and quadrature), and explain how they relate to popular methods used in step selection analyses. This general framework unifies different model fitting techniques for SSFs, and opens the way for improved inferential methods. In this approach, it is straightforward to model movement with distributions outside the exponential family, and to apply different SSF model formulations to the same data set and compare them with AIC. By separating the model formulation from the inference technique, we hope to clarify many important concepts in step selection analysis.
## 1 Introduction
The increased availability of animal tracking data has led to the widespread use of statistical methods to estimate habitat selection at the scale of the observed movement step. Perhaps the most common model is the step selection function (SSF; Rhodes et al., 2005; Fortin et al., 2005), whereby the likelihood of moving to the spatial location \(\mathbf{x}_{t+1}\) given previous locations \(\mathbf{x}_{1:t}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{t}\}\) takes the following form,
\[p(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})=\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi(\mathbf{x}_{t +1}\mid\mathbf{x}_{1:t})}{\int_{\Omega}w(\mathbf{x}_{t},\mathbf{x})\phi(\mathbf{x}\mid\mathbf{x}_{ 1:t})d\mathbf{x}}\,, \tag{1}\]
where \(\Omega\) is the study region. The function \(w\) describes the effects of environmental variables (e.g., resources, risks, and environmental conditions; Matthiopoulos et al., 2023), and \(\phi\) accounts for the effects of movement constraints (e.g., on the range of observed step lengths). The habitat selection function is often assumed to take an exponential (or "log-linear") form, i.e., \(w(\mathbf{x}_{t},\mathbf{x}_{t+1})=\exp\{h(\mathbf{x}_{t},\mathbf{x}_{t+1})\mathbf{\beta}_{h}^{ \intercal}\}\), where \(h(\mathbf{x}_{t},\mathbf{x}_{t+1})\) is a vector of habitat variables for the step, and \(\mathbf{\beta}_{h}\) is the vector of associated selection parameters. The form of \(\phi\) reflects assumptions about movement patterns of the animal, and it is often written as a function of the step length and turning angle to capture the speed and tortuosity of an animal's movement. We call "step selection function" the numerator of Equation 1, but the terminology is not consistent across the literature, and the term has been used variously to refer to \(w\), to \(w\times\phi\), or to the whole right-hand side of Equation 1. We choose to define \(w\times\phi\) as the SSF to reflect the fact that an animal's selection of a step is based on both habitat preferences and movement constraints. Figure 1 shows an example step selection function with the two model components.
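As a minimal sketch of these two components in R, the code below assumes an exponential habitat selection function and, purely for illustration, an isotropic Gaussian movement kernel centred on the current location; `get_cov` is a hypothetical helper returning covariate values at a location.

```r
# Minimal sketch of the two SSF components of Equation 1, assuming an
# exponential habitat selection function and, purely for illustration, an
# isotropic Gaussian movement kernel centred on the current location xt.
# `get_cov` is a hypothetical helper returning covariates at a location.
w <- function(z, beta) exp(sum(beta * get_cov(z)))
phi <- function(z, xt, sigma) {
  dnorm(z[1] - xt[1], sd = sigma) * dnorm(z[2] - xt[2], sd = sigma)
}
# unnormalised step density (the SSF) at a candidate end point z
ssf <- function(z, xt, beta, sigma) w(z, beta) * phi(z, xt, sigma)
```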
Step selection analysis refers to a wide range of methods for applying SSFs to animal tracking data, with the aim to estimate the parameters of the habitat selection function \(w\) and the movement kernel \(\phi\). Although the data-generating mechanism for this model is described by Equation 1, this is a difficult statistical problem due to the presence of the integral in the denominator. This integral is required in Equation 1 to ensure that \(p(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})\) integrates to 1, i.e., that it is a valid probability density function with respect to \(\mathbf{x}_{t+1}\)(Rhodes et al., 2005; Forester et al., 2009). It also has a more intuitive interpretation: to evaluate the likelihood of a given step, we weigh its suitability against the suitability of all other possible steps in the study region. Here, "suitability" refers both to the habitat quality of a location (as captured by \(w\)) and to its accessibility (as captured by \(\phi\)). This integral cannot generally be calculated analytically, because the integrand (i.e., the expression that is integrated) depends on \(w\), which is usually a function of environmental covariates with no mathematically-convenient functional forms. That is, the integral cannot be rewritten in terms of simple functions that could be directly implemented with a computer, and so Equation 1 cannot generally be evaluated for a given movement track and set of parameters.
Although the integral in Equation 1 cannot be computed directly, methods have been developed to replace the expression by a tractable approximation. In some cases, such approximations are equivalent to applying
Figure 1: Example model components at some time \(t\), where the last point \(\mathbf{x}_{t}\) is at the centre of each panel and the previous steps are shown as black segments: (a) habitat selection function \(w(\mathbf{x}_{t},\mathbf{z})\), (b) movement kernel \(\phi(\mathbf{z}|\mathbf{x}_{t})\) based on distance from \(\mathbf{x}_{t}\), and (c) resulting step selection function \(w(\mathbf{x}_{t},\mathbf{z})\phi(\mathbf{z}|\mathbf{x}_{t})\). The integral on the denominator of Equation 1 is the volume under the step selection function, which is required to transform the step selection function into a probability distribution (sometimes called the step density).
conditional logistic regression (CLR) to a case-control data set, where each observed location ("case") is associated with a set of locations from the landscape ("controls"; Forester et al., 2009). This has been a popular framework for step selection analysis, because CLR can be fitted quickly and conveniently using statistical software (e.g., using the survival package in R; Therneau, 2023). Consequently, SSFs are often conflated with CLR, even though the latter is merely a convenient tool to fit the former by approximating the likelihood in some special cases. In our view, this presentation can lead to confusion about model interpretation, and reduces the flexibility of step selection analyses. In particular, we can avoid the need to make strong assumptions about the functional forms of \(w\) and \(\phi\) if we are willing to use numerical methods other than CLR for parameter estimation.
In this review, we show that most methods used to estimate parameters in step selection analyses can be viewed as applications of numerical integration. A variety of numerical integration techniques are commonly applied in statistics and mathematics to approximate integrals in cases where there is no known formula to compute them exactly. We will present several numerical tools developed for this purpose, and describe their utility in step selection analyses. This perspective suggests we can contrast existing methods (e.g., to identify those with lower approximation errors), and opens the way for improved inferential techniques in step selection analysis. Lastly, we hope that our review will motivate further exploration of numerical integration and estimation methods, which have broad utility across a wide range of ecological applications.
## 2 Likelihood approximation in step selection analysis
### Maximum likelihood estimation
We consider \(n\) observed locations \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\), recorded without error at regular time intervals. The goal of step selection analysis is to estimate the parameters \(\mathbf{\beta}_{h}\) of the habitat selection function \(w\), which quantify the strength of selection or avoidance of spatial covariates, and the parameters \(\mathbf{\beta}_{m}\) of the movement kernel \(\phi\), which quantify movement tendencies (e.g. speed and tortuosity). We denote as \(\mathbf{\beta}=(\mathbf{\beta}_{h},\mathbf{\beta}_{m})\) the vector of all parameters. Based on the standard assumptions that each step follows the model of Equation 1, and that successive steps are conditionally independent (i.e., given past locations), the likelihood of the parameters under the step selection model can be written as
\[L(\mathbf{\beta};\mathbf{x}_{1},\ldots,\mathbf{x}_{n})=\prod_{t=1}^{n-1}p(\mathbf{x}_{t+1}\mid \mathbf{x}_{1:t})=\prod_{t=1}^{n-1}\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi(\mathbf{x}_{t +1}\mid\mathbf{x}_{1:t})}{\int_{\Omega}w(\mathbf{x}_{t},\mathbf{z})\phi(\mathbf{z}\mid\mathbf{x}_{ 1:t})d\mathbf{z}}.\]
Note that both \(w\) and \(\phi\) depend on some of the parameters in \(\mathbf{\beta}\) (specifically, \(w\) depends on \(\mathbf{\beta}_{h}\) and \(\phi\) on \(\mathbf{\beta}_{m}\)), but we do not make this explicit for notational convenience. The likelihood (or log-likelihood) can be optimised numerically with respect to \(\mathbf{\beta}\), for example using the function optim or nlm in R, to obtain maximum likelihood estimates of \(\mathbf{\beta}\). For an overview of maximum likelihood estimation in an ecological context, see for example Chapters 6-7 of Bolker (2008) or Chapter 10 of Fieberg (2023).
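As a minimal sketch of this workflow, the code below minimises an approximate negative log-likelihood with optim; here `approx_step_llk` (a step-level likelihood approximation, such as those derived below), `steps` (the prepared data) and `n_par` (the number of parameters) are all placeholders.

```r
# Minimal sketch of the estimation workflow: implement an approximate
# negative log-likelihood and minimise it numerically with optim().
# `approx_step_llk`, `steps` and `n_par` are placeholders.
nllk <- function(beta, steps) {
  -sum(sapply(steps, function(s) approx_step_llk(beta, s)))
}
fit <- optim(par = rep(0, n_par), fn = nllk, steps = steps, hessian = TRUE)
beta_hat <- fit$par
se <- sqrt(diag(solve(fit$hessian)))  # standard errors from inverse Hessian
```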
Model fitting requires computing the likelihood function for the observed data, and therefore evaluating the integral
\[I=\int_{\Omega}w(\mathbf{x}_{t},\mathbf{z})\phi(\mathbf{z}\mid\mathbf{x}_{1:t})d\mathbf{z} \tag{2}\]
for each time step \(t\in\{1,2,\ldots,n-1\}\). In this section, we describe several methods to approximate \(I\) by some quantity \(\widehat{I}\), which can be substituted in the likelihood formula to carry out approximate inference. The main approaches are summarised in Table 1.
### Monte Carlo integration
We use the term "Monte Carlo integration" to refer to all forms of numerical integration that rely on random sampling (i.e., all methods presented in this review, except for quadrature). Monte Carlo integration is a method for evaluating an integral of the form \(\int_{\Omega}f(\mathbf{z})g(\mathbf{z})d\mathbf{z}\), where \(f\) is a probability density function (Section 3.2 of Robert and Casella, 2010). The general idea is to generate a sample from the distribution \(f\), and use it to find an unbiased estimate of the integral. It can be shown that
\[\int_{\Omega}f(\mathbf{z})g(\mathbf{z})d\mathbf{z}\approx\frac{1}{K}\sum_{k=1}^{K}g(\mathbf{z} _{k}),\quad\text{where }\mathbf{z}_{k}\sim f, \tag{3}\]
with the error of the approximation decreasing as \(K\) increases. Throughout this review, we use the notation "\(\mathbf{z}_{k}\sim f\)" to indicate that \(\mathbf{z}_{k}\) follows the distribution with probability density function \(f\).
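A minimal R illustration of Equation 3 follows, for a case where the integral is known exactly: with \(f\) a standard normal density and \(g(\mathbf{z})=z^{2}\), the integral is the variance of a standard normal variable, i.e., 1.

```r
# Minimal sketch of Equation 3: estimate the integral of f(z) g(z) dz with
# f a standard normal density and g(z) = z^2; the true value is 1.
set.seed(1)
K <- 1e5
z <- rnorm(K)   # z_k ~ f
mean(z^2)       # Monte Carlo estimate, close to 1
```

Increasing `K` shrinks the Monte Carlo error, which is the behaviour exploited by all the approaches described below.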
The most common approaches to fitting SSFs can be viewed as different forms of Monte Carlo integration applied to the integral in Equation 2, which result from different choices of the functions \(f\) and \(g\) (always with the constraint that \(f\times g=w\times\phi\)). Generally, this choice may impact the accuracy and precision of the approximation, so choosing it requires thought (see Section 2.2.3 and Rizzo, 2019).
Maximum likelihood estimation based on the numerical approximation of the integral in Equation 3 defines a general framework of approximate inference for SSFs. Methods of inference based on Monte Carlo likelihood approximations are common in econometrics, where they are usually called _simulated_ maximum likelihood estimation, to highlight their inherent stochasticity (Section 3.1.2 of Gourieroux and Monfort, 1996). Even though the Monte Carlo estimator of the integral given in Equation 3 is not biased, simulated maximum likelihood estimators generally are, due to the log-transformation of the likelihood before it is maximised (Gourieroux and Monfort, 1996). As a result, Monte Carlo-based step selection parameter estimators are biased, but this bias decreases with the number of integration points.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Approximate likelihood & \(\mathbf{z}_{k}\sim\)? & References \\ \hline MC & \(\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})}{\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_{k})}\) & \(\phi(\cdot\mid\mathbf{x}_{1:t})\) & Fortin et al. (2005) \\ UMC & \(\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\sum_{k=0}^ {K}w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})}\) & uniform & Forester et al. (2009) \\ IS & \(\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\sum_{k=0}^ {K}w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})/h(\mathbf{z}_{k}\mid\bm {x}_{1:t})}\) & user-defined \(h\) & Forester et al. (2009) \\ UQ & \(\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\sum_{k=0}^ {K}w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})}\) & regular grid & Rhodes et al. (2005) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of most common numerical integration approaches used in step selection analyses: Monte Carlo with known movement kernel (MC), uniform Monte Carlo (UMC), importance sampling (IS), and uniform quadrature (UQ). The columns include the approximate likelihood of a step from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t+1}\), method for determining the distribution of integration points \(\mathbf{z}_{k}\), and references to key papers presenting each approach.
Using Monte Carlo integration, Equation 2 can be approximated as \(I\approx\frac{1}{K}\sum_{k=1}^{K}g(\mathbf{z}_{k})\), for functions \(f\) and \(g\) chosen such that \(\mathbf{z}_{k}\sim f\) and \(f\times g=w\times\phi\). In this review, we include the observed location \(\mathbf{x}_{t+1}\) as an additional integration point in the random sample \(\{\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\}\), and we denote \(\mathbf{z}_{0}=\mathbf{x}_{t+1}\) for convenience. This slight deviation from the formal definition of Monte Carlo integration is justified in this context for three reasons: (1) the resulting formulas have clear links to those presented in the literature, (2) it improves numerical stability and decreases bias for small \(K\) (as illustrated with simulations in Appendix A), and (3) the effect of this change vanishes for large values of \(K\). The approximate likelihood of a step under Monte Carlo integration is then
\[p(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})\approx\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi( \mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\frac{1}{K+1}\sum_{k=0}^{K}g(\mathbf{z}_{k})},\quad \text{where }\mathbf{z}_{k}\sim f,\]
for some general \(f\) and \(g\). We present several important special cases below, which encompass most existing methods for step selection analysis (e.g., Fortin et al., 2005), including extensions proposed by Forester et al. (2009), Duchesne et al. (2015), and Avgar et al. (2016). A key characteristic of the first approach, described in Section 2.2.1, is that it assumes that the movement kernel \(\phi\) is known prior to the step selection analysis. It has become increasingly common to jointly estimate the movement kernel and habitat selection, and the approaches in Sections 2.2.2-2.2.3 focus on that situation.
#### 2.2.1 Assuming that the movement kernel is known
A popular approach to step selection analysis is to define the movement kernel prior to fitting the SSF, typically from empirical or parametric distributions of step lengths and turning angles (Fortin et al., 2005). Then, since \(\phi\) is assumed to be known, we can apply Monte Carlo integration to the integral of Equation 2 by choosing \(f=\phi\) and \(g=w\). Equation 2 becomes
\[I\approx\frac{1}{K+1}\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_{k}),\quad\text{where }\mathbf{z}_{k}\sim\phi(\cdot\mid\mathbf{x}_{1:t}).\]
That is, we generate the random Monte Carlo sample from the movement kernel \(\phi\), and we take the mean of the habitat selection function at those random points to evaluate the integral. This characteristic of the typical step selection analysis workflow has led practitioners to view the \(\mathbf{z}_{k}\) as a sample from the "available" landscape. The key limitation of this approach is that the movement kernel cannot be estimated jointly with habitat selection parameters, because the points \(\mathbf{z}_{k}\) are sampled from it prior to model fitting. In addition, this approach to estimating \(\phi\) without consideration of \(w\) has been shown to result in biased parameter estimators since the observed movements are a function of both processes (Forester et al., 2009).
Using this method, the likelihood of a step (Equation 1) is approximated by
\[p(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})\approx\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi( \mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\frac{1}{K+1}\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_ {k})}\propto\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})}{\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z} _{k})}, \tag{4}\]
where \(\mathbf{z}_{k}\sim\phi(\cdot\mid\mathbf{x}_{1:t})\). Note that multiplicative constants, i.e., terms that do not depend on the parameters \(\mathbf{\beta}\), can be omitted from the likelihood function with no effect on inference. Here, \(\phi\) can be omitted from the numerator because it is assumed known, and therefore does not depend on any estimated parameter, and \(1/(K+1)\) is omitted from the denominator. When \(w\) is written in the usual exponential form, this is the likelihood of a conditional logistic regression (CLR) model, and parameters can be estimated using standard
software such as the clogit function in the survival R package (Therneau, 2023). However, the method of Equation 4 has also been used for non-exponential functional forms for \(w\), by using likelihood maximisation procedures other than CLR (Potts et al., 2014).
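As a minimal sketch of implementing the log of Equation 4 directly (rather than through CLR software), the function below assumes an exponential \(w\); `cov_obs` and `cov_z` are placeholders for covariate values extracted at the observed end point and at the \(K+1\) integration points \(\mathbf{z}_{0}=\mathbf{x}_{t+1},\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\), the latter sampled beforehand from the assumed movement kernel.

```r
# Minimal sketch of the log of Equation 4 for an exponential w. `cov_obs`
# (covariates at the observed end point) and `cov_z` (a (K+1) x p matrix of
# covariates at z_0 = x_{t+1}, z_1, ..., z_K) are placeholders for values
# extracted from covariate rasters.
step_llk <- function(beta, cov_obs, cov_z) {
  log_w_obs <- sum(beta * cov_obs)
  log_w_z <- as.vector(cov_z %*% beta)
  log_w_obs - log(sum(exp(log_w_z)))  # log of Equation 4, up to a constant
}
```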
This approach was initially proposed by Fortin et al. (2005), and has been widely used due to the convenience of implementation using CLR, and the intuitive appeal of interpreting the random locations as a sample of availability. Estimating the movement kernel separately has drawbacks, however: as previously mentioned, this approach leads to biased parameter estimators and does not propagate statistical uncertainty about the movement model to the habitat selection parameters. In addition, it does not allow for interactions between the movement parameters and local environmental features. As a consequence, recent research in step selection analysis has focused on formulations where \(\phi\) and \(w\) are estimated simultaneously (Rhodes et al., 2005; Forester et al., 2009; Avgar et al., 2016), and we present those in the next subsections. The interpretation of the random points \(\mathbf{z}_{k}\) is different when \(\phi\) is estimated as part of the SSF, and we discuss this in Section 4.
#### 2.2.2 Uniform Monte Carlo sampling
If the movement kernel \(\phi\) is not known a priori, we must choose a different probability density function \(f\) from which to generate random points. One natural choice is to use a uniform distribution over the domain \(\Omega\), because it is easy to sample from. In this case, the two functions in Equation 3 are defined as \(f(\mathbf{z})=1/A(\Omega)\) (where \(A(\Omega)\) is the area of \(\Omega\)), and \(g(\mathbf{z})=A(\Omega)w(\mathbf{x}_{t},\mathbf{z})\phi(\mathbf{z}\mid\mathbf{x}_{1:t})\). Then, Equation 2 can be approximated as
\[I\approx\frac{A(\Omega)}{K+1}\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z }_{k}\mid\mathbf{x}_{1:t}),\quad\text{where }\mathbf{z}_{k}\sim\text{Unif}(\Omega).\]
Intuitively, the random points are used to estimate the mean value of the SSF over \(\Omega\), and the integral is approximated by the product of that mean value and \(A(\Omega)\) (see Figure 2a for a one-dimensional example).
This method, called "uniform sampling" by Forester et al. (2009), allows for joint estimation of habitat selection and movement parameters (e.g., Schlägel and Lewis, 2014). However, the uniform sampling approach can be computationally demanding, because good performance requires that the integration points provide adequate coverage of the study region \(\Omega\), and this can often only be achieved for large values of \(K\). In most step selection analyses, the number of points can be greatly reduced based on the observation that, at each step, the SSF decreases sharply with distance from the start point \(\mathbf{x}_{t}\) (due to movement constraints of the animal). Points far from \(\mathbf{x}_{t}\) therefore contribute a negligible amount to the integral, and the approximation is virtually unchanged if the domain of integration is truncated to a disc around \(\mathbf{x}_{t}\), with radius large enough to encompass any possible step (Boyce et al., 2003; Craiu et al., 2008). Fewer points are needed to ensure good coverage of this disc, which reduces the computational cost of evaluating the integral (Klappstein et al., 2022). Uniform sampling on a truncated interval is illustrated in Figure 2b.
Using this approximation, Equation 1 becomes
\[p(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})\approx\frac{w(\mathbf{x}_{t},\mathbf{x}_{t+1})\phi( \mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\frac{A(\Omega)}{K+1}\sum_{k=0}^{K}w(\mathbf{x}_{t },\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})}\propto\frac{w(\mathbf{x}_{t},\mathbf{x} _{t+1})\phi(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})}{\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_{ k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})}, \tag{5}\]
where \(z_{k}\sim\text{Unif}(\Omega)\). Like before, we remove \(A(\Omega)/(K+1)\) from the denominator because it is a constant and thus will not affect maximum likelihood estimation. Here, we cannot omit \(\phi\) from the numerator because
Figure 2: Illustration of numerical integration in one dimension, for the function \(f\times g\) shown as a black line, over the interval \(\Omega=[-5,5]\). The orange dots are the integration points, and the blue dots are the corresponding function evaluations. (a) Monte Carlo integration with a uniform sample over \(\Omega\); the height of the grey rectangle is the mean of function evaluations. (b) Monte Carlo integration with a uniform sample over \([-3,3]\). (c) Importance sampling with random points generated from a normal distribution that roughly approximates the integrand \(f\times g\). (d) Quadrature over a regular grid using a Riemann sum. In (a), (b) and (d), the shaded area approximates the area under the curve; there is no such simple visualisation method for importance sampling. In this small simulated example, the true integral is 2.53, and the approximations are (a) 2.83, (b) 2.28, (c) 2.41, and (d) 2.80.
it is not assumed to be known, and therefore is a function of the parameters \(\mathbf{\beta}\) of interest. If both \(\phi\) and \(w\) have an exponential form, then Equation 5 is equal to the CLR likelihood, as long as we include the observed location as an integration point.
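A minimal sketch of how spatially uniform integration points can be generated on a disc of radius \(R\) around \(\mathbf{x}_{t}\) is given below; note the square root on the sampled radius, which is what makes the points uniform in space rather than in distance (see the note in the next paragraph).

```r
# Minimal sketch: K spatially uniform points on a disc of radius R around
# xt, for the approximation in Equation 5. The sqrt() on the sampled radius
# makes the points uniform in space, not in distance.
runif_disc <- function(K, xt, R) {
  r <- R * sqrt(runif(K))
  angle <- runif(K, 0, 2 * pi)
  cbind(xt[1] + r * cos(angle), xt[2] + r * sin(angle))
}
z <- runif_disc(100, xt = c(0, 0), R = 5)
```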
Note that uniform Monte Carlo sampling refers to _spatially uniform points_, and this should not be confused with sampling points with uniform distances from \(\mathbf{x}_{t}\)(Avgar et al., 2016). Generating uniform distances will not result in a spatially uniform distribution of end points \(\mathbf{z}_{k}\), and so the above formulas do not hold in that case. This is due to the fact that the set of possible long steps is spread over a larger area than the set of possible short steps; therefore if distances are uniform, points will be relatively more concentrated around the origin than far from it (Rhodes et al., 2005). The case where distances are sampled from uniform distributions can in fact be viewed as a special case of importance sampling (Section 2.2.3).
#### 2.2.3 Importance sampling
The precision of numerical integration depends on the choice of integration points; generally, the variability in the approximation is lower if points are concentrated in areas where the function takes large values. Importance sampling is a method to increase the precision of Monte Carlo integration by generating random points from a user-defined distribution \(h\), called the importance function (Section 3.3 of Robert and Casella, 2010, and see an illustration in Figure 2c). To apply importance sampling to an SSF, we choose \(f=h\) and \(g=(w\times\phi)/h\) (so that \(f\times g=w\times\phi\) as required), and Equation 3 gives us the following approximation for the SSF integral,
\[I\approx\frac{1}{K+1}\sum_{k=0}^{K}\frac{w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_ {k}\mid\mathbf{x}_{1:t})}{h(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})},\quad\text{where }\mathbf{z}_{k}\sim h(\cdot\mid\mathbf{x}_{1:t}). \tag{6}\]
The only constraint on \(h\) is that it should be strictly positive over \(\Omega\); when this is not the case, the approximation of the integral is truncated to the support of \(h\) (i.e., the geographical area over which \(h>0\)). Note that, in Equation 6, the importance function \(h\) is used to weigh the contribution of each sampled point to the approximation; this is required to correct for the preferential sampling of some points over others when generating \(\mathbf{z}_{k}\).
Importance sampling is useful because the function \(h\) can be chosen in such a way that the variance of the integral estimator decreases (i.e., its precision increases). The aim is to choose a function \(h\) with a shape that is as similar as possible to \(g\)(Section 6.6 of Rizzo, 2019). This is a convenient framework in the context of SSFs, because it is often possible to determine where the SSF will take large values, based on the movement constraints of the animal. Animals are likely to avoid long steps, and so the SSF often decays rapidly as distance from the start point increases (Figure 1). The speed of this decay can be determined approximately from the data, for example by fitting a distribution to the observed step lengths, and this information can be used to define the importance function \(h\). For example, \(h\) could be chosen as a bivariate normal distribution centred on the last location \(\mathbf{x}_{t}\), or the two-dimensional spatial distribution implied by step length and turn angle distributions estimated from the data.
Using importance sampling with function \(h\), the approximate likelihood of a step under the SSF model is
\[p(\boldsymbol{x}_{t+1}\mid\boldsymbol{x}_{1:t}) \approx\frac{w(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})\phi( \boldsymbol{x}_{t+1}\mid\boldsymbol{x}_{1:t})}{\frac{1}{K+1}\sum_{k=0}^{K}w( \boldsymbol{x}_{t},\boldsymbol{z}_{k})\phi(\boldsymbol{z}_{k}\mid\boldsymbol{ x}_{1:t})/h(\boldsymbol{z}_{k}\mid\boldsymbol{x}_{1:t})}\] \[\propto\frac{w(\boldsymbol{x}_{t},\boldsymbol{x}_{t+1})\phi( \boldsymbol{x}_{t+1}\mid\boldsymbol{x}_{1:t})}{\sum_{k=0}^{K}w(\boldsymbol{ x}_{t},\boldsymbol{z}_{k})\phi(\boldsymbol{z}_{k}\mid\boldsymbol{x}_{1:t})/h( \boldsymbol{z}_{k}\mid\boldsymbol{x}_{1:t})}, \tag{7}\]
where \(\mathbf{z}_{k}\sim h\). The two other Monte Carlo approaches (Sections 2.2.1-2.2.2) can be viewed as special cases of importance sampling. If \(h\) is the probability density function of a uniform distribution over geographical space, it is constant and can be omitted in Equation 7, and we obtain Equation 5. Alternatively, if \(\phi\) is assumed to be known, and we choose \(h=\phi\), then \(h\) and \(\phi\) cancel out in the denominator of Equation 7; omitting \(\phi\) from the numerator because it is known, this simplifies to Equation 4.
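The sketch below implements the log of Equation 7, assuming a bivariate normal importance function \(h\) centred on \(\mathbf{x}_{t}\), as suggested above; `log_w` and `log_phi` are hypothetical placeholders for the log habitat selection function and log movement kernel, assumed vectorised over the rows of `z`, whose first row is the observed location \(\mathbf{x}_{t+1}\).

```r
# Minimal sketch of the log of Equation 7, with a bivariate normal
# importance function h centred on xt. `log_w` and `log_phi` are
# hypothetical placeholders, assumed vectorised over the rows of z;
# the first row of z is the observed location x_{t+1}.
h_sample <- function(K, xt, sigma) {
  cbind(rnorm(K, xt[1], sigma), rnorm(K, xt[2], sigma))
}
h_dens <- function(z, xt, sigma) {
  dnorm(z[, 1], xt[1], sigma) * dnorm(z[, 2], xt[2], sigma)
}
step_llk_is <- function(beta, z, xt, sigma) {
  lw <- log_w(z, xt, beta)
  lp <- log_phi(z, xt, beta)
  lh <- log(h_dens(z, xt, sigma))
  (lw[1] + lp[1]) - log(sum(exp(lw + lp - lh)))  # log of Eq. 7, up to a constant
}
```

The standard deviation `sigma` can be chosen from the empirical step lengths, so that \(h\) roughly matches the shape of the SSF.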
Although the term "importance sampling" has rarely been used in the SSF literature, this and equivalent methods have been widely advocated, starting with the recommendation of Forester et al. (2009) to distinguish between the sampling function \(h\) and the movement model \(\phi\). Forester et al. (2009) derived a formula very similar to Equation 7, but with one small difference: their numerator is divided by \(h(\boldsymbol{x}_{t+1}\mid\boldsymbol{x}_{1:t})\). This difference is inconsequential because \(h(\boldsymbol{x}_{t+1}\mid\boldsymbol{x}_{1:t})\) does not depend on the estimated parameters, so excluding it does not affect inference. The widely-used methods of Avgar et al. (2016), often called integrated step selection analysis, are based on Forester et al. (2009) and can also be viewed as importance sampling. They focus on cases where \(h\) and \(\phi\) are chosen to be from the same exponential family of distributions, so that the calculations simplify and CLR can be used, but there is no such restriction when the approach is implemented with maximum likelihood estimation. In the approach of Avgar et al. (2016), the parameter estimates are adjusted after model fitting to correct for the sampling design. Here, the bias is corrected directly by including \(h\) in Equation 7 when implementing the likelihood function. Another notable example is Johnson et al. (2008), who explicitly suggested importance sampling for a weighted distribution model analogous to the SSF in Equation 1, and used maximum likelihood estimation to fit the model without the need for CLR. More recently, Klappstein et al. (2023) and Pohle et al. (2023) also recognised that the approach proposed by Forester et al. (2009) was a form of importance sampling.
Figure 3 contrasts importance sampling and uniform Monte Carlo sampling (Section 2.2.2) for an example SSF. Similar to the one-dimensional example shown in Figure 2, the intuition is that the precision of the approximation of the integral depends on the coverage of areas where the SSF takes high values. More specifically, the variance of the approximation is minimised when the distribution of integration points (i.e., the importance function \(h\)) is proportional to the SSF \(w\times\phi\). In practice, \(w\) and \(\phi\) are not known, but a good heuristic is to approximate the movement kernel with parametric distributions (which can then be sampled from), and use these to define \(h\).
### Quadrature
Quadrature is a deterministic (non-random) alternative to Monte Carlo sampling, where the function is evaluated on a user-defined grid of points over the domain of integration. The simplest example of quadrature is the Riemann sum (see Figure 2d); in one dimension, the integral is approximated by the sum of the areas of rectangles with heights determined by function evaluations along a regular grid of points. The two-dimensional extension of this approach consists of evaluating the function at regularly-spaced points along a
two-dimensional grid, and approximating the integral by the sum of the volumes of cuboids.
In the context of an SSF, quadrature can be applied by evaluating \(w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t})\) at the centres of \(K+1\) grid cells, \(\{\mathbf{z}_{0},\mathbf{z}_{1},\mathbf{z}_{2},\ldots,\mathbf{z}_{K}\}\), and calculating
\[I\approx A\sum_{k=0}^{K}w(\mathbf{x}_{t},\mathbf{z}_{k})\phi(\mathbf{z}_{k}\mid\mathbf{x}_{1:t }),\]
where \(A\) is the area of each grid cell. Unlike Monte Carlo approaches, there is no variance in the estimate of the integral obtained from quadrature, because the grid is fixed (for a given \(K\)) rather than random. However, there is some error in the estimate (akin to statistical bias), and this error decreases as the spatial resolution of the quadrature grid increases (i.e., as \(K\) increases).
Rhodes et al. (2005) proposed this approach, and pointed out that the grid cells need not cover the entire study area, and the approximation can be truncated to the region where the function is non-negligible (within some distance of \(\mathbf{x}_{t}\)). This can speed up computations, similar to the observation in Section 2.2.2 that Monte Carlo integration with uniform sampling can be performed over a disc centred on the last location, as long as the radius of the disc is big enough. We call the approach where integration points are on a regular grid "uniform quadrature", and it is illustrated in Figure 3c for an example SSF. The equation for approximating the integral using uniform quadrature is identical to that obtained for uniform Monte Carlo sampling, and so the expression for the SSF likelihood is given by Equation 5.
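A minimal sketch of a uniform quadrature grid over a disc of radius \(R\) around \(\mathbf{x}_{t}\) is given below; each retained cell has area `delta^2`, so the integral is approximated by `delta^2` times the sum of \(w\times\phi\) over the grid centres.

```r
# Minimal sketch: regular quadrature grid over a disc of radius R around xt;
# `delta` is the grid spacing, so each cell has area delta^2 and the integral
# is approximated by delta^2 times the sum of w * phi over the grid centres.
quad_grid <- function(xt, R, delta) {
  g <- seq(-R, R, by = delta)
  pts <- expand.grid(x = xt[1] + g, y = xt[2] + g)
  keep <- (pts$x - xt[1])^2 + (pts$y - xt[2])^2 <= R^2
  as.matrix(pts[keep, ])
}
z <- quad_grid(xt = c(0, 0), R = 5, delta = 0.25)
```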
In step selection analysis, environmental covariates are often available only over the discrete cells of a raster, making the centroids of the raster cells a natural choice for the grid of quadrature (Rhodes et al., 2005), but this is not the only possible choice. Arce Guillen et al. (2023) described a new method of inference for SSFs, where the function to integrate is evaluated at the nodes of a (deterministic) triangular mesh. This is another form of quadrature, which works well for the general class of spatial point processes implemented in the inlabru spatial modelling software that they use (Simpson et al., 2016; Bachl et al., 2019).
Figure 3: Illustration of three integration designs for an example SSF. The triangle in the centre shows the last location \(\mathbf{x}_{t}\), and the heatmap shows the function that needs to be integrated, i.e., \(w(\mathbf{x}_{t},\mathbf{z})\phi(\mathbf{z}\mid\mathbf{x}_{1:t})\). This function decreases with distance to \(\mathbf{x}_{t}\) due to the animal’s movement constraints. The black dots in each panel represent 100 points used for numerical integration: (a) uniform points over a disc, (b) points generated from an importance function based on distance to \(\mathbf{x}_{t}\), and (c) regular quadrature grid over a disc. Importance sampling generates more points in areas where the function is high, and will typically lead to a better approximation of the integral.
Some have argued that deterministic numerical integration is preferable in habitat selection modelling because it has better properties than Monte Carlo integration for low-dimensional integrals like the one in Equation 2 (Warton and Shepherd, 2010; Arce Guillen et al., 2023). However, this comparison assumed uniform Monte Carlo sampling, for which the performance can be improved substantially using importance sampling with a well-chosen function \(h\). An interesting alternative might therefore be adaptive quadrature, where the spatial arrangement of points in a (deterministic) spatial grid is chosen based on the shape of the function. The general idea is to iteratively subdivide the domain of integration, in such a way that regions where the function is more irregular are subdivided further. This is analogous to the idea in importance sampling of generating points in regions where the function is more complex or takes higher values (Pinheiro and Chao, 2006).
## 3 Illustration
We illustrate some of the key concepts and methods using simulations and a real data analysis. The general approach to fitting SSFs presented in Section 2 requires implementing the (approximate) likelihood function, rather than relying on existing CLR software. Writing custom code greatly increases the flexibility of the model formulation and inference methods. To help readers implement their own step selection analyses, we provide R code that can be used as a starting point. We aimed for a trade-off between simplicity and flexibility, and we provide basic functions that can be tailored to fit a wide range of model formulations. The documented code and detailed examples are provided in Appendix C.
### Comparing sampling designs in simulations
Different methods of fitting SSFs can be viewed as different numerical integration approaches for the same underlying model. When using Monte Carlo and quadrature methods, the placement of integration points is known to affect the precision of the results (Rizzo, 2019), and this provides a rigorous grounding for the intuition that some methods of generating locations perform better than others. The closer the distribution of integration points is to the true SSF, the more precise the approximation will be. In this section, we use simulations to assess the effect of several design choices on our ability to recover the parameters of an SSF. We do not intend these simulations to be exhaustive, or to provide general guidelines to select the best sampling design in every application, as the choice of method and model is study-specific. Rather, our aim is to showcase the qualitative effect of different design choices, encouraging researchers to explore the various options for themselves when they perform step selection analysis.
We considered three designs for the integration points: (1) points sampled uniformly at random over a disc of radius equal to the maximum observed step length, (2) importance sampling where \(h\) was based on a gamma distribution of distances and a von Mises distribution of turning angles, and (3) uniform quadrature where integration points were defined on a regular spatial grid over a disc of radius equal to the maximum observed step length. For the importance sampling design, the parameters of the importance function were chosen based on the empirical distributions of step lengths and turning angles.
We first simulated a movement track of length 1000 from an SSF with known parameters. For the habitat selection component, we used \(w(\mathbf{x}_{t},\mathbf{x}_{t+1})=\exp(\beta\times c_{1}(\mathbf{x}_{t+1}))\), where \(c_{1}\) was a simulated spatial covariate
shown in Appendix B and \(\beta=5\) was the corresponding selection parameter. The movement kernel was chosen as \(\phi(\mathbf{x}_{t+1}\mid\mathbf{x}_{1:t})=\exp(-4L_{t}+3\cos(\alpha_{t}))\), where \(L_{t}\) and \(\alpha_{t}\) are the step length and turning angle at time \(t\), respectively. This formulation implies that step lengths followed a gamma distribution with mean \(0.5\) and standard deviation \(0.35\), and turning angles followed a von Mises distribution with mean zero and concentration \(3\). For each scenario, we ran the following steps for increasing numbers of random points, \(K\in\{5,10,\ldots,195,200\}\):
1. Generate \(K\) integration points for each observed step, based on a given numerical integration method;
2. Estimate the selection parameter \(\beta\) using maximum likelihood estimation, based on the integration points from step \(1\).
For Monte Carlo sampling methods, we repeated these steps \(50\) times to capture estimator variability, yielding \(50\) estimated parameters \(\widehat{\beta}\) for each approach and each number of random points. For deterministic quadrature, there is no sampling variability and so the procedure only needed to be run once for any given \(K\). Note that, for quadrature, the number of integration points was constrained by the design of the grid, and so it does not always exactly match the values of \(K\) listed above. For each simulation, we evaluated estimator bias due to the numerical integration, as the difference between the estimated parameter and the asymptotic estimate obtained when \(K\) is very large (here, \(K=5000\)). We did not compare the estimates to the true parameter values used in simulation, because this would not make it possible to separate bias due to integration error (which we are interested in) and finite-sample bias inherent to maximum likelihood estimation (which does not depend on integration method).
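The evaluation loop can be sketched as follows; `generate_points` and `fit_ssf` are hypothetical helpers (one per integration design) that return the integration points for each observed step and the maximum likelihood estimate of \(\beta\), respectively, and `beta_ref` stands for the asymptotic estimate obtained with very large \(K\).

```python
import numpy as np

def evaluate_design(generate_points, fit_ssf, steps, K_values,
                    beta_ref, n_rep=50):
    """Bias of the estimated selection parameter for each K.
    Set n_rep=1 for deterministic quadrature (no sampling variability)."""
    bias = {}
    for K in K_values:
        estimates = []
        for _ in range(n_rep):
            points = [generate_points(step, K) for step in steps]
            estimates.append(fit_ssf(steps, points))
        bias[K] = np.asarray(estimates) - beta_ref
    return bias
```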
Figure 4 compares the bias in the selection parameter \(\beta\) for the three numerical integration approaches, over a range of numbers of random points. The bias and variance decreased as the number of random points increased for all tested methods. Overall, uniform Monte Carlo sampling had lower precision (i.e., higher variation) than importance sampling; this is to be expected, as importance sampling uses observed movement patterns to generate integration points efficiently. Uniform quadrature was generally positively biased (i.e., the selection parameter was overestimated) and, although the bias decreased as \(K\) increased, both random sampling methods seemed to perform better in this analysis. As described in Section 2.2, the bias is due to the log-transformation of the likelihood for optimisation: the parameter value that maximises the approximate log-likelihood can be biased even if the estimator of the likelihood is unbiased (Gourieroux and Monfort, 1996).
These simulations show that the performance of any method of numerical integration is dependent on the number of integration points. In practice, there is no consensus about how high this number should be, and Thurfjell et al. (2014) reported that different studies have used a wide range of numbers of random points, between \(K=2\) and \(K=200\). The minimum number needed for a given analysis depends on many factors, such as the length of the observed time series, the sampling scheme used, and the complexity of the SSF model formulation. For this reason, we recommend that practitioners try several numbers of random points, until the parameter estimates stabilise, to ensure that the approximation error in the results is small. This is consistent with the results presented in Figure 2 of Fieberg et al. (2021), and similar to the advice of Warton and Shepherd (2010) and Northrup et al. (2013) in the context of resource selection functions. Small numbers of integration points can also lead to numerical instability and failure to converge during model fitting.
### Comparing models using AIC and the same set of integration points
To demonstrate how an understanding of numerical integration techniques and their use in step selection analyses can facilitate model comparisons, we considered location data from a red deer (_Cervus elaphus_) in Northern Germany, automatically loaded with the amt R package as the data object deer (Signer et al., 2019). The locations are on a regular 6-hour time grid, and the package also provides a binary raster layer for forest cover (through the get_sh_forest() function). We will use importance sampling with a single set of integration points to fit multiple models and compare them using AIC.
We generated integration points using gamma-distributed step lengths and von Mises-distributed turning angles, with parameters estimated from their empirical distributions. We compared four SSF formulations, all with the same habitat selection component \(w\) (with forest cover as covariate), but with different families of distributions of step lengths \(L_{t}\) and turning angles \(\alpha_{t}\) in the movement kernel: (i) \(L_{t}\sim\text{Exp}(\theta_{1})\) and \(\alpha_{t}\sim\text{uniform}(-\pi,\pi)\), (ii) \(L_{t}\sim\text{gamma}(\theta_{1},\theta_{2})\) and \(\alpha_{t}\sim\text{uniform}(-\pi,\pi)\), (iii) \(L_{t}\sim\text{gamma}(\theta_{1},\theta_{2})\) and \(\alpha_{t}\sim\text{von Mises}(0,\kappa)\), and (iv) \(L_{t}\sim\text{Weibull}(\theta_{1},\theta_{2})\) and \(\alpha_{t}\sim\text{wrapped Cauchy}(0,\kappa)\). Note that we make a distinction between the distributions used to generate random locations (which determine the importance function \(h\)) and the distributions used to specify the movement kernel \(\phi\). This allows us to fit all four SSFs using the same integration points, i.e., the exact same data set, such that the models can be compared using AIC.
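The key implementation detail is that the importance function \(h\) enters the approximate likelihood only through the weights of the integration points, so swapping the movement kernel \(\phi\) does not require regenerating locations. A hedged sketch of this computation follows (details such as whether the observed step is included among the integration points vary between implementations):

```python
import numpy as np

def approx_loglik(observed, z_samples, w, phi, h):
    """Importance-sampled SSF log-likelihood. For each observed step,
    the normalising integral is estimated as the mean of
    w(z_k) * phi(z_k) / h(z_k) over the K random locations z_k.
    `w`, `phi` and `h` are hypothetical callables; only `phi` changes
    between the four candidate models, so the same z_samples are
    reused and the resulting fits can be compared with AIC."""
    ll = 0.0
    for x_obs, z_k in zip(observed, z_samples):
        numerator = w(x_obs) * phi(x_obs)
        denominator = np.mean(w(z_k) * phi(z_k) / h(z_k))
        ll += np.log(numerator) - np.log(denominator)
    return ll
```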
The results are shown in Table 2. For this data set, AIC favoured the SSF formulation (iv), where step lengths were modelled with a Weibull distribution and turning angles with a wrapped Cauchy distribution. This example illustrates two advantages of implementing maximum likelihood estimation using numerical
Figure 4: Results of simulation study. Bias (estimate \(-\) truth) in estimated selection parameter for one spatial covariate, as function of the number of integration points. Results are compared for three methods for performing numerical integration: uniform Monte Carlo (blue boxes), importance sampling with gamma-distributed distances and von Mises-distributed angles (red boxes), and uniform quadrature (black crosses). Each box represents 50 estimated values. For quadrature, the number of integration points was constrained by the design of the grid, so values of \(K\) do not exactly match those used for other methods.
integration: the flexibility of choosing non-exponential models for the movement kernel (the Weibull and wrapped Cauchy distributions are not in the exponential family), and the ability to compare models with different movement formulations.
## 4 Rethinking step selection analysis
### Beyond conditional logistic regression
It is common for step selection analysis to be presented as conditional logistic regression (CLR), where each observed step is a "case" associated with a set of "control" (random) steps. Indeed, in many important special cases, the SSF likelihood is approximately equivalent to that of a CLR model (Fortin et al., 2005; Forester et al., 2009), which allows parameters to be estimated with CLR software (Signer et al., 2019; Therneau, 2023). To justify that approach, Forester et al. (2009) described an SSF as a discrete-choice model, which is a popular way to present resource selection functions (Cooper and Millspaugh, 1999). In that framework, the animal is assumed to have access to a finite number of discrete and mutually exclusive resource units, and the model describes how it chooses one unit over the others. In a step selection model, however, the animal has infinitely many movement options (because it moves over a continuous space). To reduce the problem to a discrete choice, Forester et al. (2009) conditioned each movement decision on a set of random ("control") points, and assumed that those represented the animal's options for that time step. This can be viewed as an approximation of the target model, akin to the numerical integration approaches that we presented in Section 2, and the two approaches lead to equivalent formulas.
The equivalence between CLR and the SSF likelihood has allowed ecologists to leverage standard statistical software for quantifying drivers of movement and habitat selection for many years. Recently, Muff et al. (2020) and Chatterjee et al. (2023) demonstrated how a similar equivalence between CLR and Poisson regression with fixed stratum-specific intercepts could be exploited to model individual variability in habitat selection and movement parameters using random effects. Despite these equivalencies, we emphasise that CLR is not the model of interest, but rather a tool to fit the target SSF model, shown in Equation 1. Notably, the equivalence between CLR and the SSF likelihood only holds when both the habitat selection function \(w\) and the movement kernel \(\phi\) have an exponential form. CLR is therefore limited to distributions from the exponential family when modelling \(\phi\), and most step selection modelling research has therefore defaulted to using an exponential or gamma distribution for step lengths and the von Mises distribution for turn angles (Duchesne et al., 2015; Avgar et al., 2016). There is no such restriction in the general model, however,
| Model | Step length | Turning angle | AIC | \(\Delta\)AIC |
| --- | --- | --- | --- | --- |
| (iv) | Weibull | wrapped Cauchy | 2500 | 0 |
| (ii) | gamma | uniform | 2507 | 7 |
| (iii) | gamma | von Mises | 2509 | 9 |
| (i) | exponential | uniform | 2521 | 21 |

Table 2: Model comparison for deer analysis. The four SSFs are specified using different distributions of step lengths and turning angles, and they are shown in order of increasing AIC (i.e., the better model is at the top). \(\Delta\)AIC is the difference in AIC between each model and the better model.
which allows for the use of a much wider range of distributions, such as the Weibull distribution for step length and the wrapped Cauchy distribution for turning angle. As our applied example illustrates (Section 3.2), these alternatives can potentially lead to improved model fit. Likewise, it is possible to model habitat selection using functions other than the exponential (e.g., Potts et al., 2014; Schlagel et al., 2017), although it is a natural choice for continuous variables since selection functions should be positive and unbounded (McDonald, 2013).
### Separating model and inference
As we have already mentioned, it is important to distinguish between the choice of model formulation and the method of parameter estimation. Here, the model is of the form shown in Equation 1, and the main modelling decision is defining the functional forms of \(w\) and \(\phi\). For a given model formulation, many possible numerical integration methods can be used to approximate the likelihood, and therefore to estimate the model parameters, as described in Section 2. These methods, together with any implementation scheme such as CLR or other likelihood maximisation, constitute the inference procedure, and are not the model itself. Choices of model and inference procedure are both important for the analysis: the model formulation should capture important features of the data-generating process (i.e., animal movement and habitat selection), whilst the inference procedure should be chosen carefully to reduce the approximation error. We posit that much confusion has arisen in the context of step selection analysis due to the conflation of model and inference, especially when comparing techniques.
One particular area of confusion has been the interpretation of the integration points \(\{\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\}\) generated as part of model fitting. Due to the historical influence of Fortin et al. (2005), who assumed that the movement model \(\phi\) was known and used it to generate random locations (Section 2.2.1), the \(\mathbf{z}_{k}\) are commonly assumed to be a sample of "available" points (or, equivalently, steps connecting the previous location and \(\mathbf{z}_{k}\) are seen as possible movements). However, this is crucially not the correct interpretation in other approaches, such as uniform sampling (Section 2.2.2) and importance sampling (Section 2.2.3). Indeed, with those methods, the sample of random locations does not have any particular biological interpretation, and it merely constitutes a numerical tool to approximate an integral over space. This is the case in most modern step selection analyses, in which availability is estimated jointly with habitat selection through the parameters of the function \(\phi\), rather than assumed known a priori (Rhodes et al., 2005; Forester et al., 2009; Avgar et al., 2016).
Another important distinction is between the distributions used to model movement, and the distributions used to generate random locations. The modelled distributions are specified through the choice of \(\phi\), independently of the choice of sampling function (e.g., \(h\) in importance sampling). Forester et al. (2009) and Avgar et al. (2016) showed that it is mathematically convenient to generate random locations from the same family of distributions that is used in \(\phi\), but this is not a necessity. As we demonstrate in Section 3.2, it is for example possible to use a gamma distribution of distances to generate random locations (i.e., in the importance function \(h\)), but specify \(\phi\) so as to model step lengths with a Weibull distribution in the analysis. Furthermore, it is straightforward to fit SSFs with different movement kernel formulations on the exact same data set (including identical random locations). This makes it possible to select the formulation for the movement kernel based on standard model selection criteria such as AIC. This is not possible within
the workflow outlined by Avgar et al. (2016) and implemented by Signer et al. (2019), which requires \(h\) and \(\phi\) to be from the same family. When the model and method of inference are separate, it is also easier to determine how different functional forms for the movement kernel \(\phi\) can be implemented in practice, and we discuss this in Appendix D.
In fact, the distribution from which the random locations are generated only matters insofar as different distributions might require different numbers of integration points to achieve low error (Section 3.1). For a large enough number of integration points, the choice of distribution is inconsequential. In practice, when the computational cost is high, it might not be an option to increase the size \(K\) of the random sample arbitrarily; in this case, it is preferable to choose an importance function \(h\) that reduces the estimation variance for a given \(K\). Importantly, the choice of \(h\) does not reflect a modelling assumption, and it is used merely as an inferential tool to reduce the approximation error.
## 5 Conclusion
The general problem of evaluating complex integrals has been studied extensively, and many different approaches could be adapted in the context of step selection analysis. For example, the Laplace approximation is a versatile method where it is not necessary to evaluate the function at integration points. Instead, it is assumed that the integrand is well approximated by a normal distribution, for which the integral is known. In step selection analyses, the integrand typically combines the movement kernel and the habitat selection function, resulting in a complex (possibly multimodal) function, and it is not clear whether the Gaussian assumption would be reasonable. As suggested in Section 2.3, another promising direction is to combine ideas from quadrature and importance sampling, and choose the nodes of a deterministic grid to improve the approximation in areas where the SSF is irregular. This is similar to adaptive quadrature, which is already widely used in ecology for non-Gaussian generalised linear mixed models (Bolker et al., 2009).
All model fitting approaches are approximately equivalent in the limit where the number of points in the Monte Carlo sample or the quadrature grid is large (i.e., as \(K\rightarrow\infty\)). It might therefore seem unnecessary to concern ourselves with the design choices described in this review, because we can always increase \(K\) until the bias and/or variance in the estimation are negligible. Because SSFs are relatively simple models, computation time is often small for modest-sized data sets, and the additional cost of increasing \(K\) might be moderate. However, we have shown that using simplistic numerical integration techniques can cause bias to persist even for quite large \(K\) (e.g., uniform Monte Carlo and quadrature; Figure 4), so thinking carefully about the numerical integration technique employed may often be preferable to simply increasing \(K\). Furthermore, step selection models are becoming more sophisticated and complex, so computational effort might become the bottleneck of many analyses (e.g., multistate models; Nicosia et al., 2017). We anticipate that the sampling design will become increasingly critical in those cases, as well as in studies with large data sets.
We are certainly not recommending that all biologists stop using CLR software for step selection analyses, as it is a convenient, fast, and stable implementation for many purposes. In cases where the habitat selection and movement are modelled in exponential form, the CLR approach outlined by Forester et al. (2009), Duchesne et al. (2015), and Avgar et al. (2016) can safely be applied, e.g., with the amt R package (Signer
et al., 2019). However, even in that context, understanding the role numerical integration plays in parameter estimation can shed light on several important technical details. In particular, this approach clarifies the role of "control" points in step selection analysis: although those locations are generally a numerical tool rather than a biologically-relevant set of spatial locations, placing them in areas where the animal is most likely to move decreases the required number of points. In addition, importance sampling explains the post-hoc parameter adjustment proposed by Avgar et al. (2016), which is needed to correct for the choice of importance function (see also Appendix S3 of that paper). Overall, we have shown that the interpretation of step selection models ought to be separated from any specific model fitting approach, and that understanding this dichotomy between model construction and model fitting leads to much broader application of step selection functions than those confined to a CLR approach.
### Acknowledgements
JF was supported by National Aeronautics and Space Administration award 80NSSC21K1182 and received partial salary support from the Minnesota Agricultural Experimental Station.
|
2307.12792 | Deep Homography Prediction for Endoscopic Camera Motion Imitation
Learning | In this work, we investigate laparoscopic camera motion automation through
imitation learning from retrospective videos of laparoscopic interventions. A
novel method is introduced that learns to augment a surgeon's behavior in image
space through object motion invariant image registration via homographies.
Contrary to existing approaches, no geometric assumptions are made and no depth
information is necessary, enabling immediate translation to a robotic setup.
Deviating from the dominant approach in the literature which consist of
following a surgical tool, we do not handcraft the objective and no priors are
imposed on the surgical scene, allowing the method to discover unbiased
policies. In this new research field, significant improvements are demonstrated
over two baselines on the Cholec80 and HeiChole datasets, showcasing an
improvement of 47% over camera motion continuation. The method is further shown
to indeed predict camera motion correctly on the public motion classification
labels of the AutoLaparo dataset. All code is made accessible on GitHub. | Martin Huber, Sebastien Ourselin, Christos Bergeles, Tom Vercauteren | 2023-07-24T13:42:19Z | http://arxiv.org/abs/2307.12792v1 | # Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning
###### Abstract
In this work, we investigate laparoscopic camera motion automation through imitation learning from retrospective videos of laparoscopic interventions. A novel method is introduced that learns to augment a surgeon's behavior in image space through object motion invariant image registration via homographies. Contrary to existing approaches, no geometric assumptions are made and no depth information is necessary, enabling immediate translation to a robotic setup. Deviating from the dominant approach in the literature, which consists of following a surgical tool, we do not handcraft the objective and no priors are imposed on the surgical scene, allowing the method to discover unbiased policies. In this new research field, significant improvements are demonstrated over two baselines on the Cholec80 and HeiChole datasets, showcasing an improvement of 47% over camera motion continuation. The method is further shown to indeed predict camera motion correctly on the public motion classification labels of the AutoLaparo dataset. All code is made accessible on GitHub1.
Footnote 1: [https://github.com/RViMLab/homography_imitation_learning](https://github.com/RViMLab/homography_imitation_learning)
Keywords:Computer vision Robotic surgery Imitation learning
## 1 Introduction
Automation in robot-assisted minimally invasive surgery (RMIS) may reduce human error that is linked to fatigue, lack of attention and cognitive overload [8]. It could help surgeons operate such systems by reducing the learning curve [29]. And in an ageing society with shrinking workforce, it could help to retain accessibility to healthcare. It is therefore expected that parts of RMIS will be ultimately automated [5, 30]. On the continuous transition towards different levels of autonomy, camera motion automation is likely to happen first [14].
Initial attempts to automate camera motion in RMIS include rule-based approaches that keep surgical tools in the center of the field of view [4, 9, 21]. The assumption that surgical tools remain centrally is, however, simplistic, as in
many cases the surgeon may want to observe the surrounding anatomy to decide their next course of action.
Contrary to rule-based approaches, data-driven methods are capable to capture more complex control policies. Example data-driven methods suitable for camera motion automation include reinforcement learning (RL) and imitation learning (IL). The sample inefficiency and potential harm to the patient currently restrict RL approaches to simulation [23, 22, 1], where a domain gap remains. Work to bridge the domain gap and make RL algorithms deployable in real setups have been proposed [3, 20], but clinical translation has not yet been achieved. For IL, on the other hand, camera motion automation could be learned from real data, thereby implicitly tackling the domain-gap challenge. The downside is that sufficient data may be difficult to collect. Many works highlight that lack of expert annotated data hinders progress towards camera motion automation in RMIS [19, 13, 7]. It is thus not surprising that existing literature on IL for camera motion automation utilizes data from mock setups [12, 26].
Recent efforts to make vast amounts of laparoscopic intervention videos publicly available [19] drastically change how IL for camera motion automation can be approached. So far, this data is leveraged mainly to solve auxiliary tasks that could contribute to camera motion automation. As reviewed in [18], these tasks include tool and organ segmentation, as well as surgical phase recognition. For camera motion automation specifically, however, there exist no publicly available image-action pairs. Some work, therefore, continues to focus on the tools to infer camera motion [15], or learns on a robotic setup altogether [17] where camera motion is accessible. The insight that camera motion is intrinsic to the videos of laparoscopic interventions, and that it could be learned from harvested actions, was first exploited in [11], and later in [16]. This comes with the additional advantage that no robot is necessary to learn behaviors and that one can directly learn from human demonstrations.
In this work, we build on [11] for computationally efficient image-action pair extraction from publicly available datasets of laparoscopic interventions, which yields more than \(20\times\) the amount of data used in the closed-source dataset of [16]. Contrary to [16], our camera motion extraction does not rely on image features, which are sparse in surgical videos, and is intrinsically capable of differentiating between camera and object motion. We further propose a novel importance sampling and data augmentation step for IL of camera motion automation.
## 2 Materials and Methods
The proposed approach to learning camera motion prediction is summarized in Fig. 1. The following sections will describe its key components in more detail.
### Theoretical Background
Points on a plane, as observed from a moving camera, transform by means of the \(3\times 3\) projective homography matrix \(\mathbf{G}\) in image space. Thus, predicting future
camera motion (up to scale) may be equivalently treated as predicting future projective homographies.
It has been shown in [6] that the four point representation of the projective homography, _i.e.,_ taking the difference between four points in homogeneous coordinates \(\Delta\mathbf{u}\mathbf{v}=\{\mathbf{p}_{i}-\mathbf{p}_{i}^{\prime}\,|\,i\in[0,4 )\}\in\mathbb{R}^{4\times 2}\) that are related by \(\mathbf{G}\mathbf{p}_{i}\sim\mathbf{p}_{i}^{\prime}\,\,\forall i\), is better behaved for deep learning applications than the \(3\times 3\) matrix representation of a homography. Therefore, in this work, we treat camera motion \(\mathcal{C}\) as a sequence of four point homographies on a time horizon \([T_{0},T_{N+M})\), \(N\) being the recall horizon's length, \(M\) being the preview horizon's length. Time points lie \(\Delta t\) apart, that is \(T_{i+1}=T_{i}+\Delta t\). For image sequences of length N+M, we work with four point homography sequences \(\mathcal{C}=\{\Delta\mathbf{u}\mathbf{v}_{t}\,|\,t\in[T_{0},T_{N+M})\}\).
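For concreteness, the mapping between the two representations can be sketched with OpenCV; the choice of reference points (here the four corners of a \(240\times 320\) frame, an assumption for illustration) fixes the scale of \(\Delta\mathbf{u}\mathbf{v}\).

```python
import numpy as np
import cv2

# Reference points p_i in (x, y) pixel coordinates: the image corners
UV = np.float32([[0, 0], [320, 0], [0, 240], [320, 240]])

def four_pt_to_homography(duv):
    """Recover G from duv = {p_i - p'_i}, i.e. G maps p_i to p_i - duv_i."""
    return cv2.getPerspectiveTransform(UV, UV - np.float32(duv))

def homography_to_four_pt(G):
    """Project the reference points with G and take the difference."""
    p = np.hstack([UV, np.ones((4, 1), np.float32)]) @ G.T
    return UV - p[:, :2] / p[:, 2:]
```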
### Data and Data Preparation
Three datasets are curated to train and evaluate the proposed method: two cholecyst datasets (laparoscopic gallbladder removal), namely Cholec80 [25] and HeiChole [27], and one hysterectomy dataset (laparoscopic uterus removal), namely AutoLaparo [28].
To remove status indicator overlays from the laparoscopic videos, which may hinder the camera motion estimator, we identify the bounding circle of the circular field of view using [2]. We crop the view about the center point of the bounding circle to a shape of \(240\times 320\), so that no black regions are prominent in the images.
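A corresponding crop can be sketched in a few lines; boundary handling and the centre estimate from [2] are omitted for brevity, and the helper name is illustrative.

```python
def crop_about_centre(frame, centre, out_h=240, out_w=320):
    """Crop a frame about the detected centre (row, col) of the
    circular field of view, yielding an out_h x out_w view."""
    r0 = int(round(centre[0])) - out_h // 2
    c0 = int(round(centre[1])) - out_w // 2
    return frame[r0:r0 + out_h, c0:c0 + out_w]
```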
All three datasets are split into training, validation, and testing datasets. We split the videos by frame count into \(80\pm 1\,\%\) training and \(20\pm 1\,\%\) testing. Training and testing videos never intersect. We repeat this step to further split the training dataset into (pure) training and validation datasets.
Due to errors during processing the raw data, we exclude videos 19, 21, and 23 from HeiChole, as well as videos 22, 40, 65, and 80 from Cholec80. This results in dataset sizes of: Cholec80 - \(4.4e6\) frames at 25 fps, HeiChole - \(9.5e5\) frames at 25 fps, and AutoLaparo - \(7.1e4\) frames at 25 fps.
Figure 1: Training pipeline, refer to Section 2.3. From left to right: Image sequences are importance sampled from the video database and random augmentations are applied per sequence online. The lower branch estimates camera motion between subsequent frames, which is taken as pseudo-ground-truth for the upper branch, which learns to predict camera motion on a preview horizon.
### Proposed Pipeline
#### 2.3.1 Video Database and Importance Sampling
The curated data from Section 2.2 is accumulated into a video database. Image sequences of length \(N+M\) are sampled at a frame increment of \(\Delta n\) between subsequent frames and with \(\Delta c\) frames between the sequence's initial frames. Prior to adding the videos to the database, an initial offline run is performed to estimate camera motion \(\Delta\mathbf{u}\mathbf{v}\) between the frames. This creates image-motion correspondences of the form \((\mathbf{I}_{n}\,,\mathbf{I}_{n+\Delta n}\,,\Delta\mathbf{u}\mathbf{v}_{n})\). Image-motion correspondences for which \(\mathbb{E}(||\Delta\mathbf{u}\mathbf{v}_{n}||_{2})>\sigma\), where \(\sigma\) is the standard deviation over all motions in the respective dataset, define anchor indices \(n\). Image sequences are sampled such that the last image in the recall horizon lies at index \(n=N-1\), marking the start of a motion. The importance sampling samples indices from the intersection of all anchor indices, shifted by \(-N\), with all possible starting indices for image sequences.
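A hedged sketch of this sampling step is given below; the index bookkeeping follows the description above (anchors shifted by \(-N\)), while details such as the random generator are illustrative.

```python
import numpy as np

def sample_start_indices(duv_all, N, M, n_samples, seed=0):
    """duv_all: array of shape (n_frames, 4, 2) with the estimated
    four point homographies between subsequent (strided) frames."""
    rng = np.random.default_rng(seed)
    motion = np.linalg.norm(duv_all, axis=-1).mean(axis=-1)  # E(||duv_n||_2)
    anchors = np.flatnonzero(motion > motion.std())          # anchor indices n
    # Shift anchors by -N and intersect with valid sequence starts,
    # so the anchor falls at the end of the recall horizon
    valid = np.arange(len(motion) - (N + M) + 1)
    starts = np.intersect1d(anchors - N, valid)
    return rng.choice(starts, size=min(n_samples, len(starts)), replace=False)
```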
#### 2.3.2 Geometric and Photometric Transforms
The importance sampled image sequences are fed to a data augmentation stage. This stage entails geometric and photometric transforms. The distinction is made because downstream, the pipeline is split into two branches. The upper branch serves as camera motion prediction whereas the lower branch serves as camera motion estimation, also refer to the next section. As it acts as the source of pseudo-ground-truth, it is crucial that the camera motion estimator performs under optimal conditions, hence no photometric transforms, i.e. transforms that change brightness / contrast / fog etc., are applied. Photometrically transformed images shall further be denoted as \(\tilde{\mathbf{I}}\). To encourage same behavior under different perspectives, geometric transforms are applied, i.e. transforms that change orientation / up to down / left to right etc. Transforms are always sampled randomly, and applied consistently to the entire image sequence.
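The per-sequence consistency can be implemented by sampling transform parameters once and reusing them for every frame; the concrete transforms below (flips and a brightness gain) are illustrative placeholders for the full augmentation set.

```python
import numpy as np

def augment_sequence(frames, seed=None):
    """Returns (geometric, photometric) versions of a frame sequence.
    Geometric frames feed the estimator; photometrically transformed
    frames additionally feed the predictor (cf. Figure 1)."""
    rng = np.random.default_rng(seed)
    flip_lr = rng.random() < 0.5        # geometric, sampled once per sequence
    flip_ud = rng.random() < 0.5        # geometric
    gain = rng.uniform(0.8, 1.2)        # photometric

    geometric, photometric = [], []
    for f in frames:                    # apply identically to every frame
        g = f[:, ::-1] if flip_lr else f
        g = g[::-1, :] if flip_ud else g
        geometric.append(g)
        photometric.append(np.clip(g.astype(np.float32) * gain, 0, 255))
    return geometric, photometric
```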
#### 2.3.3 Camera Motion Estimator and Predictor
The goal of this work is to have a predictor learn camera motion computed by an estimator. The predictor takes as input a photometrically and geometrically transformed recall horizon \(\{\tilde{\mathbf{I}}_{t}\,|\,t\in[T_{0},T_{N})\}\) of length \(N\), and predicts camera motion \(\tilde{\mathcal{C}}=\{\Delta\tilde{\mathbf{u}}\tilde{\mathbf{v}}_{t}\,|\,t \in[T_{N},T_{N+M})\}\) on the preview horizon of length \(M\). The estimator takes as input the geometrically transformed preview horizon \(\{\mathbf{I}_{t}\,|\,t\in[T_{M},T_{N+M})\}\) and estimates camera motion \(\mathcal{C}\), which serves as a target to the predictor. The estimator is part of the pipeline to facilitate on-the-fly perspective augmentation via the geometric transforms.
## 3 Experiments and Evaluation Methodology
The following two sections elaborate the experiments we conduct to investigate the proposed pipeline from Fig. 1 in Section 2.3. First the camera motion estimator is investigated, followed by the camera motion predictor.
### Camera Motion Estimator
#### 3.1.1 Camera Motion Distribution
To extract the camera motion distribution, we run the camera motion estimator from [11] with a ResNet-34 backbone over all datasets from Section 2.2. We map the estimated four point homographies to up/down/left/right/zoom-in/zoom-out for interpretability. Left/right/up/down corresponds to all four point displacements \(\Delta\mathbf{uv}\) consistently pointing left/right/ up/down respectively. Zoom-in/out corresponds to all four point displacements \(\Delta\mathbf{uv}\) consistently pointing inwards/outwards. Rotation left corresponds to all four point displacements pointing up right, bottom right, and so on. Same for rotation right. Camera motion is defined static if it lies below the standard deviation in the dataset. The frame increment is set to \(0.25\,\mathrm{s}\), corresponding to \(\Delta n=5\) for the \(25\,\mathrm{f}\mathrm{p}\mathrm{s}\) videos.
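This mapping can be sketched as below; the corner ordering (top-left, top-right, bottom-left, bottom-right), the strict sign tests, and the helper name are assumptions for illustration, and the rotation classes are handled analogously via the tangential components.

```python
import numpy as np

# Outward unit directions of the four corners (tl, tr, bl, br)
OUT = np.float32([[-1, -1], [1, -1], [-1, 1], [1, 1]]) / np.sqrt(2)

def classify_motion(duv, sigma):
    """Map a four point homography (shape (4, 2), columns (dx, dy))
    to an interpretable motion class."""
    if np.mean(np.linalg.norm(duv, axis=1)) < sigma:
        return "static"
    dx, dy = duv[:, 0], duv[:, 1]
    radial = np.sum(duv * OUT, axis=1)   # projection on outward directions
    if np.all(dx < 0): return "left"
    if np.all(dx > 0): return "right"
    if np.all(dy < 0): return "up"
    if np.all(dy > 0): return "down"
    if np.all(radial < 0): return "zoom_in"
    if np.all(radial > 0): return "zoom_out"
    return "mixed"
```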
#### 3.1.2 Online Camera Motion Estimation
Since the camera motion estimator is executed online, memory footprint and computational efficiency are of importance. Therefore, we evaluate the estimator from [11] with a ResNet-34 backbone, SURF & RANSAC, and LoFTR [24] & RANSAC. Each estimator is run 1000 times on a single image sequence of length \(N+M=15\) with an NVIDIA GeForce RTX 2070 GPU and an Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz.
### Camera Motion Predictor
#### 3.2.1 Model Architecture
For all experiments, the camera motion predictor is a ResNet-18/34/50, with the number of input features equal to the recall horizon \(\mathrm{N}\times 3\) (RGB), where \(\mathrm{N}=14\). We set the preview horizon \(\mathrm{M}=1\). The frame increment is set to \(0.25\,\mathrm{s}\), or \(\Delta n=5\) for the \(25\,\mathrm{f}\mathrm{p}\mathrm{s}\) videos. The number of frames between clips is also set to \(0.25\,\mathrm{s}\), or \(\Delta c=5\).
#### 3.2.2 Training Details
The camera motion predictor is trained on each dataset from Section 2.2 individually. For training on Cholec80/HeiChole/AutoLaparo, we run \(80/50/50\) epochs on a batch size of 64 with a learning rate of \(2.5e-5/1.e-4/1.e-4\). The learning rates for Cholec80 and HeiChole relate approximately to the dataset's training sizes, see Table 2. For Cholec80, we reduce the learning rate by a factor 0.5 at epochs 50, 75. For HeiChole/AutoLaparo we drop the learning rate by a factor 0.5 at epoch 35. The loss in Fig. 1 is set to the mean pairwise distance between estimation and prediction \(\mathbb{E}(||\Delta\tilde{\mathbf{u}}\tilde{\mathbf{v}}_{t}-\Delta\mathbf{u}\mathbf{v}_{t}||_{2})+\lambda\mathbb{E}(||\Delta\tilde{\mathbf{u}}\tilde{\mathbf{v}}_{t}||_{2})\) with a regularizer that discourages the identity \(\Delta\tilde{\mathbf{u}}\tilde{\mathbf{v}}_{t}=\mathbf{0}\) (i.e. no motion). We set \(\lambda=0.1\).
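A PyTorch sketch of this loss is shown below. Note that, as printed above, the regularizer enters with a positive sign, whereas discouraging the identity prediction would require subtracting it; the sign is therefore exposed as a parameter rather than fixed.

```python
import torch

def motion_loss(duv_pred, duv_est, lam=0.1, reg_sign=1.0):
    """duv_* have shape (..., 4, 2). The first term is the mean pairwise
    distance between predicted and estimated four point homographies;
    the second is the norm regularizer with weight lam (use
    reg_sign=-1.0 to reward non-zero motion predictions)."""
    dist = torch.norm(duv_pred - duv_est, dim=-1).mean()
    reg = torch.norm(duv_pred, dim=-1).mean()
    return dist + reg_sign * lam * reg
```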
#### 3.2.3 Evaluation Metrics
For evaluation we compute the mean pairwise distance between estimated and predicted motion \(\mathbb{E}(||\Delta\tilde{\mathbf{u}}\tilde{\mathbf{v}}_{t}-\Delta\mathbf{u}\mathbf{v}_{t}||_{2})\). All camera motion predictors are benchmarked against a baseline, that is a \(\mathcal{O}(1)/\mathcal{O}(2)\)-Taylor expansion of the estimated camera motion \(\Delta\mathbf{u}\mathbf{v}_{t}\). Furthermore, the model that is found to perform best is evaluated on the multi-class labels (left, right, up, down) that are provided in AutoLaparo.
## 4 Results
### Camera Motion Estimator
#### 4.1.1 Camera Motion Distribution
The camera motion distributions for all datasets are shown in Fig. 2. It is observed that for a large fraction of the sequences there is no significant camera motion (Cholec80 76.21%, HeiChole 76.2%, AutoLaparo 71.29%). This finding supports the importance sampling that was introduced in Section 2.3. It can further be seen that e.g. left/right and up/down motions are equally distributed.
#### 4.1.2 Online Camera Motion Estimation
The results of the online camera motion estimation are summarized in Table 1. The deep homography estimation with a Resnet34 backbone executes \(11\times\) quicker and has the lowest GPU memory footprint of the GPU accelerated methods. This allows for efficient implementation of the proposed online camera motion estimation in Fig. 1.
| Method | Execution time [s] | Speed-up [a.u.] | Model / Batch [Mb] |
| --- | --- | --- | --- |
| ResNet-34 | \(\mathbf{0.016\pm 0.048}\) | **11.1** | **664/457** |
| LoFTR & RANSAC | \(0.178\pm 0.06\) | 1.0 | 669/2412 |
| SURF & RANSAC | \(0.131\pm 0.024\) | 1.4 | NA |

Table 1: Memory footprint and execution time of different camera motion estimators, refer to Section 3.1.
Figure 2: Camera motion distribution, refer to Section 3.1. AutoLaparo: 2.81% up, 1.88% down, 4.48% left, 3.38% right, 0.45% zoom-in, 0.2% zoom-out, 0.3% rotate-left, 0.3% rotate-right, 14.9% mixed, 71.29% static.
### Camera Motion Prediction
The camera motion prediction results for all datasets are highlighted in Table 2. It can be seen that significant improvements over the baseline are achieved on the Cholec80 and HeiChole datasets. Whilst the learned prediction performs better on average than the baseline, no significant improvement is found for the AutoLaparo dataset.
The displacement of the image center point under the predicted camera motion for AutoLaparo is plotted against the provided multi-class motion annotations and shown in Fig. 3. It can be seen that the camera motion predictions align well with the ground truth labels.
| Dataset | Train Size [Frames] | Taylor \(\mathcal{O}(1)\) | Taylor \(\mathcal{O}(2)\) | ResNet-18 | ResNet-34 | ResNet-50 |
| --- | --- | --- | --- | --- | --- | --- |
| Cholec80 | \(3.5e6\) | \(27.2\pm 23.1\) | \(36.4\pm 31.2\) | \(\mathbf{14.8}\pm 11.7\) | \(\mathbf{14.4}\pm 11.4\) | \(\mathbf{14.4}\pm 11.4\) |
| HeiChole | \(7.6e5\) | \(29.7\pm 26.4\) | \(39.8\pm 35.9\) | \(\mathbf{15.8}\pm 12.5\) | \(\mathbf{15.8}\pm 12.5\) | \(\mathbf{15.8}\pm 12.5\) |
| AutoLaparo | \(5.9e4\) | \(19.4\pm 18.4\) | \(25.8\pm 24.7\) | \(\mathbf{11.2}\pm 11.0\) | \(\mathbf{11.3}\pm 11.0\) | \(\mathbf{11.3}\pm 11.0\) |

Table 2: Camera motion predictor performance, refer to Section 3.2. Mean pairwise distance is given in pixels. Taylor baselines predict based on previous estimated motion, ResNets based on images.
Figure 3: Predicted camera motion on AutoLaparo, refer to Section 3.2. Camera motion predictor trained on Cholec80 with ResNet-50 backbone, see Table 2. Shown is the motion of the image center under the predicted homography. Clearly, for videos labeled left/right, the center point is predicted to move left/right and for up/down labels, the predicted left/right motion is centered around zero (a). Same is observed for up/down motion in (b), where left/right motion is zero-centered.
## 5 Conclusion and Outlook
To the best of our knowledge, this work is the first to demonstrate that camera motion can indeed be learned from retrospective videos of laparoscopic interventions, with no manual annotation. Self-supervision is achieved by harvesting image-motion correspondences using a camera motion estimator, see Fig. 1. The camera motion predictor is shown to generate statistically significantly better predictions than a baseline in Table 2, as measured using pseudo-ground-truth, and on multi-class manually annotated motion labels from AutoLaparo in Fig. 3. An exemplary image sequence in Fig. 4 demonstrates successful camera motion prediction on HeiChole. These results were achieved through the key finding from Fig. 2, which states that most image sequences, i.e. static ones, are irrelevant to learning camera motion. Consequently, we contribute a novel importance sampling method, as described in Section 2.3. Finally, we hope that our open-source commitment will help the community explore this area of research further.
A current limitation of this work is the preview horizon \(M\) of length 1. One might want to extend it for model predictive control. Furthermore, to improve explainability to the surgeon, but also to improve the prediction in general, it would be beneficial to include auxiliary tasks, e.g. tool and organ segmentation, surgical phase recognition, and audio. There also exist limitations for the camera motion estimator. The utilized camera motion estimator is efficient and isolates object motion well from camera motion, but is limited to relatively small camera motions. Extending the camera motion estimator to handle large camera motions would help increase the preview horizon \(M\).
In future work, we will execute this model in a real setup for investigating transferability. This endeavor is backed by [10], which demonstrates how the learned homography could immediately be deployed on a robotic laparoscope
Figure 4: Exemplary camera motion prediction, refer to Section 3.2. In the image sequence, the attention changes from the right to the left tool. We warp the past view (yellow) by the predicted homography and overlay the current view (blue). Good alignment corresponds to good camera motion prediction. Contrary to the baseline, the proposed method predicts the motion well. Data taken from HeiChole test set, ResNet-50 backbone trained on Cholec80, refer to Table 2.
holder. It might prove necessary to fine-tune the presented policy through reinforcement learning with human feedback.
#### 5.0.1 Acknowledgements
This work was supported by core and project funding from the Wellcome/EPSRC [WT203148/Z/16/Z; NS/A000049/1; WT101957; NS/A000027/1]. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 101016985 (FAROS project). TV is supported by a Medtronic / RAEng Research Chair [RCSRF1819\(\backslash\)7\(\backslash\)34]. SO and TV are co-founders and shareholders of Hypervision Surgical. TV holds shares from Mauna Kea Technologies.
|
2303.00661 | Spherical Harmonic Representation of Energetic Neutral Atom Flux
Components Observed by IBEX | The Interstellar Boundary Explorer (IBEX) images the heliosphere by observing
energetic neutral atoms (ENAs). The IBEX-Hi instrument onboard IBEX provides
full-sky maps of ENA fluxes produced in the heliosphere and very local
interstellar medium (VLISM) through charge exchange of suprathermal ions with
interstellar neutral atoms. The first IBEX-Hi results showed that in addition
to the anticipated globally distributed flux (GDF), a narrow and bright
emission from a circular region in the sky, dubbed the IBEX ribbon, is visible
in all energy steps. While the GDF is mainly produced in the inner heliosheath,
ample evidence indicates that the ribbon forms outside the heliopause in the
regions where the interstellar magnetic field is perpendicular to the lines of
sight. The IBEX maps produced by the mission team distribute the observations
into $6\deg\times6\deg$ rectangle pixels in ecliptic coordinates. The overlap
of the GDF and ribbon components complicates qualitative analyses of each
source. Here, we find the spherical harmonic representation of the IBEX maps,
separating the GDF and ribbon components. This representation describes the ENA
flux components in the sky without relying on any pixelization scheme. Using
this separation, we discuss the temporal evolution of each component over the
solar cycle. We find that the GDF is characterized by larger spatial scale
structures than the ribbon. However, we identify two isolated, small-scale
signals in the GDF region that require further study. | P. Swaczyna, M. A. Dayeh, E. J. Zirnstein | 2023-03-01T17:01:08Z | http://arxiv.org/abs/2303.00661v2 | # Spherical Harmonic Representation of Energetic Neutral Atom Flux Components Observed by IBEX
###### Abstract
The Interstellar Boundary Explorer (IBEX) images the heliosphere by observing energetic neutral atoms (ENAs). The IBEX-Hi instrument onboard IBEX provides full-sky maps of ENA fluxes produced in the heliosphere and very local interstellar medium (VLISM) through charge exchange of suprathermal ions with interstellar neutral atoms. The first IBEX-Hi results showed that in addition to the anticipated globally distributed flux (GDF), a narrow and bright emission from a circular region in the sky, dubbed the IBEX ribbon, is visible in all energy steps. While the GDF is mainly produced in the inner heliosheath, ample evidence indicates that the ribbon forms outside the heliopause in the regions where the interstellar magnetic field is perpendicular to the lines of sight. The IBEX maps produced by the mission team distribute the observations into 6\({}^{\circ}\)\(\times\)6\({}^{\circ}\) rectangle pixels in ecliptic coordinates. The overlap of the GDF and ribbon components complicates qualitative analyses of each source. Here, we find the spherical harmonic representation of the IBEX maps, separating the GDF and ribbon components. This representation describes the ENA flux components in the sky without relying on any pixelization scheme. Using this separation, we discuss the temporal evolution of each component over the solar cycle. We find that the GDF is characterized by larger spatial scale structures than the ribbon. However, we identify two isolated, small-scale signals in the GDF region that require further study.
Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA
Southwest Research Institute, San Antonio, TX 78238, USA
Department of Physics and Astronomy, University of Texas at San Antonio, San Antonio, TX 78249, USA
## 1 Introduction
Imaging the heliosphere through observations of energetic neutral atom (ENA) emission is an important tool for understanding the global structure of the interaction of the solar wind with the very local interstellar medium (VLISM) and its evolution over solar cycles (Gruntman, 1997). The Interstellar Boundary Explorer (IBEX) is the first mission to focus on observations of heliospheric ENAs (McComas et al., 2009). The spacecraft includes two ENA imagers: IBEX-Hi covering energies from 0.5 to 6 keV (Funsten et al., 2009) and IBEX-Lo observing neutral atoms from 10 eV to 2 keV (Fuselier et al., 2009). Most ENA observations used to analyze the global heliosphere are collected with IBEX-Hi. The first full-sky maps of ENA flux showed a narrow circular emission of ENAs called the IBEX ribbon, clearly visible in addition to the globally distributed flux (GDF) in all IBEX-Hi energy steps (Fuselier et al., 2009; McComas et al., 2009, 2012, 2014, 2017, 2020; Schwadron et al., 2009).
Prior to IBEX, the understanding was that heliospheric ENAs are primarily created from suprathermal ions in the inner heliosheath. Therefore, modeled ENA flux maps showed broad structures connected to the latitudinal configuration of the solar wind and heliospheric nose-tail asymmetry (Heerikhuisen et al., 2008, 2009). The ribbon, discovered as a narrow structure stretching over most of the sky, is not clearly organized
by the interstellar flow direction or heliographic latitude. The first ribbon analyses revealed, however, that it is observed in directions where the lines of sight are perpendicular to the interstellar magnetic field (Schwadron et al. 2009). Soon after, more than a dozen hypotheses were formulated to explain the IBEX ribbon, placing the source region of the ribbon ENAs from the termination shock to a distant boundary layer in the interstellar medium (see review by McComas et al. 2014b). Detailed analyses of the IBEX ribbon indicate that it is likely created in the secondary ENA mechanism (Chalov et al. 2010; Heerikhuisen et al. 2010; Schwadron & McComas 2013; Zirnstein et al. 2013). In this mechanism, ions in the solar wind are neutralized and expand as neutral solar wind to distances beyond the heliopause, where two subsequent charge exchanges produce secondary ENAs. Some secondary ENAs travel back to the proximity of the Sun and are visible as the IBEX ribbon.
In each energy step, the ribbon follows a circle in the sky with a radius of \(\sim\)75\({}^{\circ}\) centered at a point close to the B-V plane defined by the directions of the interstellar flow and magnetic field (Funsten et al. 2013). The centers of the ribbon in subsequent ESA steps show a systematic progression, mainly in the direction perpendicular to the B-V plane. This progression is related to the structure of the solar wind (Swaczyna et al. 2016b) and is present throughout the solar cycle (Dayeh et al. 2019). The geometry of the ribbon is also critical to determine the interstellar magnetic field in the VLISM (Zirnstein et al. 2016) and to understand the details of the secondary ENA mechanism (Zirnstein et al. 2015, 2019a, 2019b, 2021a).
Analyses of the IBEX ribbon are difficult because of the overlapping GDF and ribbon components. Several methodologies have been developed to perform a global separation of these two signals assuming either a functional form of the ribbon profile (Funsten et al. 2013, 2015; Dayeh et al. 2019, 2023a; Swaczyna et al. 2016a; Reisenfeld et al. 2021) or by interpolating the GDF inside the ribbon region (Schwadron et al. 2011, 2014, 2018). Beesley et al. (2023) developed statistical methods for separating these sources applied to higher-resolution maps generated by Osthus et al. (2022). Furthermore, Swaczyna et al. (2022a, hereafter Paper I) proposed an approach in which spherical harmonics are used as a basis to represent the GDF, enabling estimation of the ribbon flux as the difference between the IBEX observed maps and the GDF reproduced from the spherical harmonic decomposition. In this paper, we further develop this methodology by extending the spherical harmonic representation to both components.
## 2 Methodology
The spherical harmonic decomposition of ENA flux maps presented in this paper builds on the methodology of Paper I. However, we implement several changes that we discuss in this section. First, we do not calculate the spherical harmonic coefficients but find them by minimizing a least-squares term (Section 2.1). Furthermore, we introduce a regularization term to eliminate overfitting in underconstrained regions (Section 2.2). Our analysis includes higher degree spherical harmonics to properly reconstruct narrow structures observed, especially within the IBEX ribbon region (Section 2.3). We also modify the ribbon mask definition used to estimate the region with the ribbon component (Section 2.4). Finally, in Section 2.5, we discuss the uncertainty analysis of the separated components.
### Least-squares Minimization
Standard maps provided in the IBEX data releases are represented by ENA flux values in 6\({}^{\circ}\times\)6\({}^{\circ}\) pixels defined on a regular grid in the ecliptic coordinates. This pixelization scheme allows for an approximately uniform distribution of exposure times because data collected by IBEX for a spin axis pointing exactly in the ecliptic plane are distributed into vertical strips at longitudes \(\pm\)90\({}^{\circ}\) from the spin axis pointing longitude.
The regular repointing provides full sky maps on a half-year cadence but the observations are often separated into maps constructed from one year of observations in the ram and anti-ram hemispheres. We focus on yearly ram maps in this study, but the procedure can be applied to other types of IBEX maps. Map files are organized so that the first row represents the pixels near the south ecliptic pole (centered at latitude \(\beta=-87^{\circ}\)) for increasing longitudes \(\lambda=3^{\circ},9^{\circ},...,357^{\circ}\). The subsequent rows provide fluxes in pixels at higher latitudes. However, for the purpose of the study, we organize each IBEX map into a single vector in which the data from all rows are joined together. Therefore, for each map, we construct a vector of observed ENA flux values: \(\mathbf{j}=\{j_{k}\}_{k=1,...,N_{\text{pix}}}\) in \(N_{\text{pix}}=1800\) pixels and a diagonal covariance matrix \(\mathbf{V}=\text{diag}\left(\left\{\sigma_{k}^{2}\right\}_{k=1,...,N_{\text{pix}}}\right)\) with the flux variances at the matrix diagonal. The data gaps can be included with any value in the vector \(\mathbf{j}\). However, the inverted covariance matrix \(\mathbf{V}^{-1}\) must have zeros at the corresponding diagonal positions.
As in Paper I, we use the real representation of spherical harmonics \(Y_{\ell m}(\theta,\phi)\), where \(\ell\) and \(m\) are the degree and order of the spherical harmonic, while \(\theta\) and \(\phi\) are the azimuth and polar angle. We transform the ecliptic latitudes to their co-latitudes, which are polar angles. There are \(2\ell+1\) orders of spherical harmonics \(m=-\ell,-\ell+1,...,\ell\) for each degree \(\ell\). Therefore, we have \((\ell_{\text{max}}+1)^{2}\) spherical harmonics for all degrees \(\ell\leq\ell_{\text{max}}\). In this study, we organize them into a vector ordered by degree and order. Let \(y_{k,\ell m}\) denote the average of the spherical harmonic \(Y_{\ell m}(\theta,\phi)\) over the solid angle corresponding to pixel \(k\) (see Swaczyna et al. 2022a for details). Next, we define a vector \(\mathbf{y}_{k}=\left\{y_{k,\ell m}\right\}_{(\ell m):\ell\leq\ell_{\text{max}}}\) of these average values for each pixel \(k\) and all spherical harmonics with \(\ell\leq\ell_{\text{max}}\). Finally, these vectors for all pixels can be organized into rows of an \(N_{\text{pix}}\times(\ell_{\text{max}}+1)^{2}\) matrix \(\mathbf{Y}=\{\mathbf{y}_{k}\}_{k=1,...,N_{\text{pix}}}\).
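As an illustrative sketch, the matrix \(\mathbf{Y}\) can be assembled from SciPy's complex spherical harmonics; here the harmonics are evaluated at pixel centres, which only approximates the solid-angle averages \(y_{k,\ell m}\) used in our analysis, and the real-harmonic sign convention shown is one of several in use.

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(l, m, az, polar):
    """Real spherical harmonic from the complex sph_harm(m, l, az, polar)."""
    if m > 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(m, l, az, polar).real
    if m < 0:
        return np.sqrt(2) * (-1) ** m * sph_harm(-m, l, az, polar).imag
    return sph_harm(0, l, az, polar).real

def design_matrix(lon_deg, colat_deg, l_max):
    """N_pix x (l_max + 1)^2 matrix with one column per (l, m),
    ordered by degree and order as in the text."""
    az, polar = np.radians(lon_deg), np.radians(colat_deg)
    cols = [real_sph_harm(l, m, az, polar)
            for l in range(l_max + 1) for m in range(-l, l + 1)]
    return np.column_stack(cols)
```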
Here, we want to find a vector of spherical harmonic coefficients \(\mathbf{c}=\{c_{\ell m}\}_{(\ell m):\ell\leq\ell_{\text{max}}}\) that minimizes the following least-squares sum:
\[\chi_{\text{LS}}^{2}(\{c_{\ell m}\}_{(\ell m):\ell\leq\ell_{\text{max}}})= \sum_{k=1}^{N_{\text{pix}}}\frac{\left(\sum_{(\ell m)}y_{k,\ell m}c_{\ell m}-j _{k}\right)^{2}}{\sigma_{k}^{2}}. \tag{1}\]
This minimization finds the coefficients for which the residual fluxes, weighted by their uncertainties, are the smallest. Note that this approach differs from the one introduced in Paper I because we explicitly include uncertainties in this procedure. Using the terminology presented above, we can rewrite Equation (1) in a condensed form:
\[\chi_{\text{LS}}^{2}(\mathbf{c})=(\mathbf{Y}\mathbf{c}-\mathbf{j})^{\text{T}}\mathbf{V}^{ -1}(\mathbf{Y}\mathbf{c}-\mathbf{j}). \tag{2}\]
Note that the data gaps for which the inverted covariance is zero are effectively excluded from this sum. The minimization of the above expression represents the best-fit spherical harmonic representation of the ENA map represented in vector \(\mathbf{j}\). However, high-degree spherical harmonics, used in our study, can sometimes represent features smaller than the data gaps, which this expression would not constrain. Therefore, to minimize the impact of these gaps, we use a regularization term, as described in the next section.
### Tikhonov Regularization and L-curve
The spherical harmonic decomposition of a single IBEX map is described by \((\ell_{\rm max}+1)^{2}\) coefficients. The minimization of the term given in Equation (1) is clearly ill-posed (underconstrained) if \(\ell_{\rm max}>\sqrt{N_{\rm pix}}-1\approx 41.4\). However, because the IBEX maps frequently include gaps, we need to introduce a regularization term in the minimization even though we are not using as many degrees of spherical harmonics (see Section 2.3). Moreover, the ribbon masking procedure discussed further in the paper also requires regularization.
In our analysis, we implement Tikhonov regularization (see, e.g., Tikhonov et al. 1995; Calvetti et al. 2000) in the following form:
\[\chi^{2}_{\rm reg}=\mathbf{c}^{\rm T}\mathbf{R}\mathbf{c}, \tag{3}\]
where \(\mathbf{R}\) is a regularization matrix. The selection of the regularization matrix depends on the regularization goal. In our case, we want to minimize gradients, i.e., we assume the flux does not significantly change over the gaps. Metzler & Pail (2005) showed that to minimize the absolute gradients of the spherical harmonic decomposition integrated over the sphere, one needs to implement the following diagonal regularization matrix:
\[\mathbf{R}=\left\{\ell_{i}(\ell_{i}+1)\delta_{\ell_{i} \ell_{j}}\delta_{m_{i}m_{j}}\right\}_{i,j}, \tag{4}\]
where \(i\) and \(j\) enumerate the spherical harmonics up to the degree of \(\ell_{\rm max}\), and \(\delta_{i,j}\) is the Kronecker delta. The regularization term penalizes high-degree spherical harmonics, which describe smaller-scale structures and therefore contribute larger gradients.
The minimization of the two terms given in Equations (2) and (3) cannot be performed independently but needs to be implemented as a single minimization requirement. Therefore, the joint minimization is typically performed on a sum of these two terms with an unknown regularization parameter \(\alpha\):
\[\chi^{2}=\chi^{2}_{\rm LS}+\alpha\chi^{2}_{\rm reg}=(\mathbf{Y}\mathbf{c}-\mathbf{j})^{\rm T}\mathbf{V}^{-1}(\mathbf{Y}\mathbf{c}-\mathbf{j})+\alpha\,\mathbf{c}^{\rm T}\mathbf{R}\,\mathbf{c}. \tag{5}\]
The minimization of Equation (5) can be performed analytically because our model is linear. The coefficients \(\widehat{\mathbf{c}}(\alpha)\) that minimize this equation are (e.g., Metzler & Pail 2005):
\[\widehat{\mathbf{c}}(\alpha)=\left(\mathbf{Y}^{\rm T}\mathbf{V}^{-1}\mathbf{Y}+\alpha\mathbf{R}\right)^{-1}\mathbf{Y}^{\rm T}\mathbf{V}^{-1}\mathbf{j}. \tag{6}\]
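A direct sketch of Equation (6), reusing the arrays from the snippets above; the value of `alpha` is an arbitrary placeholder until the L-curve analysis discussed below.

```python
def solve_coefficients(Y, j, v_inv_diag, alpha, degrees):
    """Regularized least-squares solution of Equation (6)."""
    r_diag = degrees * (degrees + 1.0)   # diagonal of R, Equation (4)
    YtVinv = Y.T * v_inv_diag            # Y^T V^{-1} without forming V^{-1}
    lhs = YtVinv @ Y + alpha * np.diag(r_diag)
    return np.linalg.solve(lhs, YtVinv @ j)

c_hat = solve_coefficients(Y, j, v_inv_diag, alpha=0.003, degrees=degrees)
```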
The optimal regularization parameter \(\alpha\) can be obtained from analysis of the trajectory of the logarithms of the minimization terms \((\log\chi^{2}_{\rm LS},\log\chi^{2}_{\rm reg})\) obtained from the minimization of Equation (5) for various values of the regularization parameter, known as the L-curve (e.g., Hansen & O'Leary 1993; Calvetti et al. 2000). Figure 1 presents this trajectory for the ram-only 2016 map for energy step 1.7 keV. This map includes significant data gaps; thus, it is a good example to show the role of regularization. The L-curve owes its name to its shape: the trajectory includes a corner, quantified as the point of the highest curvature, near which the optimal \(\chi^{2}_{\rm LS}\) and \(\chi^{2}_{\rm reg}\) change the least with respect to each other. In other parts of the curve, a slight change in one of these terms results in a significant change in the other. The bottom panels of Figure 1 present the maps reconstructed from the spherical harmonic representation for \(\ell_{\rm max}=22\) for three regularization parameter values. If the regularization parameter is lower than optimal, the map reconstructed from the spherical harmonic representation shows structure inside the data gap region (e.g., a bright spot near the center of the map in the bottom left panel of Figure 1). If the regularization parameter is larger than optimal, the reconstructed map is smoothed and does not preserve the small-scale features of the original map.
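The corner search might be sketched as follows: scan \(\alpha\) on a logarithmic grid, trace \((\log\chi^{2}_{\rm LS},\log\chi^{2}_{\rm reg})\), and pick the point of maximum curvature; the grid bounds are illustrative.

```python
r_diag = degrees * (degrees + 1.0)
alphas = np.logspace(-6, 2, 60)
log_ls, log_reg = [], []
for a in alphas:
    c = solve_coefficients(Y, j, v_inv_diag, a, degrees)
    res = Y @ c - j
    log_ls.append(np.log(np.sum(v_inv_diag * res ** 2)))   # chi^2_LS
    log_reg.append(np.log(np.sum(r_diag * c ** 2)))        # chi^2_reg

# Curvature of the L-curve parameterized by t = log(alpha).
t = np.log(alphas)
x, y = np.array(log_ls), np.array(log_reg)
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
curvature = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
alpha_opt = alphas[np.argmax(np.abs(curvature))]
```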
### Maximum Degree of Spherical Harmonics \(\ell_{\text{max}}\)
Most of the globally distributed flux can be reconstructed from spherical harmonics up to the degree of 3 (see Paper I). However, here we provide an alternative representation of all features, including the IBEX ribbon and other small-scale structures. Therefore, we significantly increase the maximum degree \(\ell_{\text{max}}\). In this section, we justify that \(\ell_{\text{max}}=22\) allows for a sufficient representation of the IBEX maps.
Our first argument utilizes decomposition to a much higher degree of \(\ell_{\text{max}}=30\) for the time-combined maps in IBEX Data Release #16, covering data collected between 2009 and 2019. The time-combined maps have the lowest uncertainties. Therefore, we use them to estimate the degree required to represent all small-scale structures.
Figure 1: Role of the regularization for spherical harmonic decomposition shown for the IBEX map at energy step 1.7 keV observed in 2016, which includes a significant gap in the data (top left panel). The top middle and right panels show the L-curve and the curvature as a function of the regularization parameter, respectively. The three bottom panels show the reconstructed maps for three values of the regularization parameter: \(\alpha=0.000079\), \(0.002815\), and \(0.125893\). The middle panel shows the result for optimal regularization. The maps shown here and in other figures in this paper use the equirectangular projection in ecliptic coordinates where the top and bottom edges correspond to the north and south ecliptic poles, respectively. The left and right edges correspond to ecliptic longitude of \(72^{\circ}\).
We perform the fitting described in the sections above to find the coefficients \(c_{\ell m}\) and calculate the related uncertainties \(\delta c_{\ell m}\) (see Section 2.5). For each degree of spherical harmonics, we calculate the mean statistical significance over all orders of spherical harmonics:
\[\sigma_{\ell}=\frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell}\left|\frac{c_{\ell m}}{ \delta c_{\ell m}}\right|, \tag{7}\]
which are shown for all energy steps in the top panel of Figure 2. This figure shows that the highest-degree statistically significant coefficients (\(\sigma_{\ell}>1\)) depend on the energy step and range from \(\ell=17\) (for the energy step 0.7 keV) to \(\ell=22\) (for 1.7 keV). The contribution from spherical harmonics of degrees above this limit is, on average over the entire sky, not statistically significant.
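A sketch of Equation (7); the coefficient covariance is propagated linearly here, anticipating Equation (13) of Section 2.5 below.

```python
YtVinv = Y.T * v_inv_diag
M = np.linalg.solve(YtVinv @ Y + alpha_opt * np.diag(r_diag), YtVinv)
V_diag = np.where(v_inv_diag > 0.0, 1.0 / v_inv_diag, 0.0)  # gaps drop out
W = (M * V_diag) @ M.T                   # coefficient covariance, cf. Eq. (13)
c = M @ j

delta_c = np.sqrt(np.diag(W))
sigma_ell = np.array([np.mean(np.abs(c[degrees == ell] / delta_c[degrees == ell]))
                      for ell in range(l_max + 1)])
significant = np.flatnonzero(sigma_ell > 1.0)   # degrees with mean |c/dc| > 1
```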
Additionally, we consider the Akaike Information Criterion (AIC, Akaike 1974), calculated as
\[\text{AIC}=\chi^{2}_{\text{LS,min}}+2(\ell_{\text{max}}+1)^{2}, \tag{8}\]
which adds a penalty term proportional to the number of model parameters. While the minimum \(\chi^{2}_{\text{LS,min}}\) always decreases as the highest degree \(\ell_{\text{max}}\) increases because additional parameters allow for a better representation of the original map, the penalty term allows us to find the degree that minimizes the AIC. The bottom panel of Figure 2 presents the AIC value as a function of the maximum degree. The minimum AIC is obtained for \(\ell_{\text{max}}\) from 17 (0.7 keV) to 22 (2.7 keV).
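The AIC scan might look as follows; for brevity it reuses a single regularization parameter, whereas in the paper the parameter would be re-derived from the L-curve for each fit.

```python
def aic(ell):
    cols = degrees <= ell
    c = solve_coefficients(Y[:, cols], j, v_inv_diag, alpha_opt, degrees[cols])
    res = Y[:, cols] @ c - j
    return np.sum(v_inv_diag * res ** 2) + 2.0 * (ell + 1) ** 2   # Eq. (8)

aic_values = {ell: aic(ell) for ell in range(5, l_max + 1)}
l_best = min(aic_values, key=aic_values.get)
```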
Figure 2: Optimization of the maximum degree of spherical harmonics. The top panel shows the mean statistical significance of spherical harmonic contributions for all orders as a function of the degree. The bottom panel shows the AIC used to select among models with various numbers of parameters. The optimal maximum degree ranges from 17 to 22. The top scale shows the angular size of structures that can be reconstructed.
Generally, spherical harmonics up to some degree \(\ell\) can only represent structures with characteristic sizes of the order of \(180^{\circ}/\ell\). Therefore, the IBEX-Hi \(\sim\)\(6^{\circ}\times\)\(6^{\circ}\) field-of-view limits the smallest structures that can be observed to about \(\ell\approx 30\). However, the pixelization scheme decreases the effective resolution. Because the angular pixel size changes from the ecliptic plane to the poles, the effective resolution differs, but one can make an approximation that this leads to an effective resolution of \(\sim\)\(((6^{\circ})^{2}+(6^{\circ})^{2})^{0.5}\approx 8.5^{\circ}\), which corresponds to \(\ell\approx 21\).
Figure 3 compares the reconstructed full maps for \(E=1.7\) keV and \(\ell_{\rm max}=14\), \(22\), and \(30\), together with the normalized residuals calculated as differences between the original and reconstructed maps divided by the uncertainty of the original fluxes. The residuals for \(\ell_{\rm max}=14\) show systematic patterns that coincide with the regions of the strongest gradients in the IBEX ribbon. Therefore, we need more spherical harmonics to describe the ribbon. On the other hand, the result for \(\ell_{\rm max}=30\) shows a significant reduction in the relative residuals. In general, the variation is much smaller than expected from the uncertainties, which means that a significant amount of the statistical variation from the original map is reflected in the spherical harmonic decomposition. Moreover, the magnitude of the relative residuals increases toward the poles, where the angular distances between pixels are smaller, and thus the spherical harmonics cannot reconstruct this pixel-scale variation. This effect is, to some extent, visible for \(\ell_{\rm max}=22\). Nevertheless, we decide to use \(\ell_{\rm max}=22\) for our study. While some small-scale structures may be a result of statistical fluctuations, it is important that we can represent the ribbon structure fully. This decomposition may generally be used for filtering of the IBEX maps, but that is not our goal in this study.
### Ribbon Mask and Spherical Harmonic Decomposition of the GDF
The main characteristic that led to the discovery of the IBEX ribbon was that it is limited to a ring-shaped region of the sky (Funsten et al., 2013; Dayeh et al., 2019). While some ENA emission originating in the outer heliosheath may form broader structures (Zirnstein et al., 2019), we cannot separate them from the GDF as they have similar geometric structures. Therefore, these broad structures are not considered part of the ribbon. In other words, our working definition of the ribbon focuses on the geometric property of the ribbon, i.e., that it is a narrow circular structure seen in the IBEX maps, rather than on the source region of the ENAs.
We need to find a set of pixels in IBEX maps that include the ribbon component to mask them out for the derivation of the GDF. In Paper I, an iterative procedure was employed to select pixels where the magnitude of the residual flux calculated between the original IBEX map and the GDF is statistically high. The pixels with high values of this sum were classified into the ribbon mask. However, this criterion tends to provide a ribbon mask that is narrower where the ribbon is weak compared to the underlying GDF because weaker ribbon fluxes result in lower statistical significance. Lower ribbon fluxes, however, do not mean that the ribbon is narrower. This effect can be noted in Figure 1 of Paper I in pixels near the nose and ecliptic plane, especially for \(\ell_{\rm max}\geq 3\). As we use higher-degree spherical harmonics, we cannot follow the same procedure here. Therefore, we introduce a new method to find the ribbon based on the observation that it is limited to a ring-shaped region. Moreover, we define a separate mask for each energy step because the ribbon position depends strongly on energy but only slightly changes over time (Funsten et al., 2013; Swaczyna et al., 2016; Dayeh et al., 2019; Zirnstein et al., 2023). Therefore, we use one mask for all years. For the derivation of the mask, we use the combined data from 2009 to 2011. We do not use later observations because the ribbon fluxes drop significantly, reducing the contrast of the ribbon to the GDF emissions, especially in the highest energy step.
The ribbon masks considered for this study are defined by center (\(\lambda,\beta\)), radius \(r\), and width \(w\). The ribbon mask is generally defined as pixels for which the angular distance between the pixels' centers and the mask's center is within the range \([r-w/2,r+w/2]\). From the perspective of the GDF, the mask should be as narrow as possible to reconstruct as many small-scale structures of the GDF as possible. On the other hand, it needs to be wide enough to encompass the ribbon region. Spherical harmonics up to some degree \(\ell\) describe structures that have sizes not smaller than \(180^{\circ}/\ell\). We decide that we want to be able to reconstruct the GDF structures up to the degree of \(\ell=4\), and thus we select the width of \(w=180^{\circ}/4=45^{\circ}\) for our study. While Paper I showed that spherical harmonics could capture most structures up to \(\ell=3\), we want to describe smaller-scale structures in this study, and the power spectrum suggests that the spherical harmonics for \(\ell=4\) contribute significantly to the GDF in some energy steps (see Section 4.1). Nevertheless, we will verify our results for the choice of the full width corresponding to \(\ell=3\) and \(5\) (\(w=60^{\circ}\) and \(36^{\circ}\)) to illustrate the role of this parameter. While the selected mask sizes correspond to low-degree spherical harmonics, we estimate the decomposition of the GDF with the same \(\ell_{\text{max}}=22\) as used in the decomposition of the combined flux in Section 2.3. This procedure ensures that the variations outside the ribbon are equally well captured for the combined and GDF-only decompositions.
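A sketch of the ring-shaped mask and its effect on \(\mathbf{V}^{-1}\) follows; the center and radius below are illustrative placeholders, not the fitted values used in the paper.

```python
def angular_distance(lon1, lat1, lon2, lat2):
    """Great-circle distance (radians) between directions given in radians."""
    cosd = (np.sin(lat1) * np.sin(lat2)
            + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.arccos(np.clip(cosd, -1.0, 1.0))

center_lon, center_lat = np.deg2rad(220.0), np.deg2rad(40.0)  # placeholders
r, w = np.deg2rad(75.0), np.deg2rad(45.0)                     # radius, width

dist = angular_distance(LON.ravel(), LAT.ravel(), center_lon, center_lat)
ribbon_mask = np.abs(dist - r) <= w / 2.0

# Masked pixels are treated like data gaps: zero their entries in V^{-1}.
v_inv_gdf = np.where(ribbon_mask, 0.0, v_inv_diag)
```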
Figure 3: Spherical harmonic reconstruction of the time-combined IBEX map at energy step 1.7 keV. The top panel shows the original map, and the middle panels show the reconstruction for the maximum degree of spherical harmonics \(\ell_{\text{max}}=14\), 22, and 30 (left to right). The bottom panels show the residual signal normalized by the original map uncertainties.
The ribbon center and radius are refined in an iterative procedure. As the starting parameters, we use the time- and energy-averaged ribbon center and radius obtained by Dayeh et al. (2019). Based on these, we define the pixels inside the ribbon mask. These pixels are excluded from the derivation of the spherical harmonic decomposition of the GDF. Later, the analysis is performed identically to the one presented in Sections 2.1 and 2.2. This removal is achieved by replacing the diagonal elements corresponding to the mask's pixels in the diagonal matrix \(\mathbf{V}^{-1}\) with zeros. After the separation, a new ribbon center and radius are calculated using the method presented in Appendix A. This method uses the spherical harmonic representation of the ribbon calculated as the difference between the spherical harmonic coefficients obtained for the original map and the map with the ribbon masked. We find the ribbon center position for which the ribbon flux as a function of the distance from this center forms the narrowest structure when averaged over the sky. The radius of the next mask is adopted from the mean distance of the ribbon flux from this center. Finally, we repeat the calculation of the spherical harmonic representation using the new mask.
The iteration is performed until the last mask is identical to one of the masks obtained earlier in the iteration. In some cases, the iteration converges on a single mask, i.e., a mask leads to the ribbon center and radius that define the same mask. However, the iteration may result in a cycle of ribbon centers. Within each cycle, the ribbon centers and radii are close to each other, much closer than the uncertainty of these parameters. However, these small changes affect at most 4% of the pixels included in the mask. Therefore, from the masks in the cycle, we select the one resulting in the narrowest ribbon, as defined in the procedure given in Appendix A.
Figure 4 shows the separation results applied to the ENA flux map at the energy step 1.7 keV for three masks obtained in the above-described process for the assumed width of the ribbon \(w=36^{\circ}\), \(45^{\circ}\), and \(60^{\circ}\). The comparison shows that the separation results are similar for these three values, but subtle differences show the width's importance. First, the narrow mask (\(w=36^{\circ}\)) results in a GDF estimate that shows some structures following the edges of the ribbon mask (Figure 4, third column, top panel). This effect is evident in the northern part of the map and results from the fact that some regions with flanks of the ribbon flux remain outside of the mask, and therefore part of the ribbon flux is included in the GDF. On the other hand, the wide mask (\(w=60^{\circ}\)) cannot fully reconstruct the GDF enhancement near the nose, which overlaps partially with the ribbon near the center of the map. This mask extends on the south side beyond the enhancement, so the GDF estimation cannot correctly capture it. The middle mask (\(w=45^{\circ}\)) minimizes these two problems, and we use it further in the analysis.
All GDF maps obtained from the above procedure show smoothing inside the ribbon mask because this part is constrained through the regularization term (Section 2.2), which aims to minimize gradients. Consequently, any analysis of small-scale structures in the GDF should be limited to the regions outside of the mask. The ribbon flux is obtained by subtracting the spherical harmonic decompositions of the total and GDF maps. Consequently, the ribbon estimation may result in negative fluxes, especially outside the ribbon mask. Negative fluxes are not statistically significant, i.e., their absolute values are, on average, smaller than the related uncertainties. Nevertheless, to avoid data selection bias, the pixels with negative values should not be removed from comparisons with models, nor should these values be replaced with zeroes. Both procedures would make the pixels with positive fluxes unbalanced, suggesting a small positive background. Alternatively, comparisons for the ribbon maps can be limited to the extent of the ribbon mask.
### Spherical Harmonic Representations and Their Uncertainties
The spherical harmonic representations of the total and GDF maps are calculated from Equation (6) for the optimal regularization parameter obtained as discussed in Section 2.2. Writing this equation separately for the total and GDF maps, we obtain:
\[\mathbf{c}_{\mathrm{T}}=\underbrace{\left(\mathbf{Y}^{\mathrm{T}}\mathbf{V}_{\mathrm{T}}^{-1}\mathbf{Y}+\alpha_{\mathrm{T}}\mathbf{R}\right)^{-1}\mathbf{Y}^{\mathrm{T}}\mathbf{V}_{\mathrm{T}}^{-1}}_{\equiv\,\mathbf{M}_{\mathrm{T}}}\mathbf{j}, \tag{10}\]
\[\mathbf{c}_{\mathrm{G}}=\underbrace{\left(\mathbf{Y}^{\mathrm{T}}\mathbf{V}_{\mathrm{G}}^{-1}\mathbf{Y}+\alpha_{\mathrm{G}}\mathbf{R}\right)^{-1}\mathbf{Y}^{\mathrm{T}}\mathbf{V}_{\mathrm{G}}^{-1}}_{\equiv\,\mathbf{M}_{\mathrm{G}}}\mathbf{j}, \tag{11}\]
where subscripts T and G denote the quantities for the total and GDF maps, respectively. As discussed above, for the total map, the covariance matrix is adopted from the data \(\mathbf{V}_{\mathrm{T}}=\mathbf{V}\), which is a diagonal matrix with the flux variances. It is important to note that for the missing data (gaps), the variance is undefined (infinite) even though it is reported in the IBEX data releases as 0. Therefore, the inverted matrix has zeros at the diagonal positions corresponding to the missing data points. Similarly, in the inverted covariance matrix \(\mathbf{V}_{\mathrm{G}}^{-1}\), we replace the values at the positions corresponding to pixels within the ribbon mask with zeros. The regularization parameters \(\alpha_{\mathrm{T}}\) and \(\alpha_{\mathrm{G}}\) are obtained separately for each map, as discussed in Section 2.2. Finally, the ribbon representation (subscript R) is obtained as the difference of the coefficients:
\[\mathbf{c}_{\mathrm{R}}=\mathbf{c}_{\mathrm{T}}-\mathbf{c}_{\mathrm{G}}=\underbrace{\left(\mathbf{M}_{\mathrm{T}}-\mathbf{M}_{\mathrm{G}}\right)}_{\equiv\,\mathbf{M}_{\mathrm{R}}}\,\mathbf{j}. \tag{12}\]
Equations (10-12) show that the spherical harmonic representations are a linear combination of the observed ENA fluxes. Therefore, the covariance matrix of these coefficients can be obtained from uncertainty propagation as:
Figure 4: Results of the GDF and ribbon separation for combined ENA flux observed in 2009-2011 in energy step 1.7 keV and three possible mask widths of 36\({}^{\circ}\), 45\({}^{\circ}\), and 60\({}^{\circ}\) (top to bottom). The maps from left to right show the original IBEX map, with the white outlines showing the extent of the mask, the reconstructed total flux map, the GDF map, and the ribbon map.
\[\mathbf{W}_{a}=\mathbf{M}_{a}\mathbf{V}\mathbf{M}_{a}^{\mathrm{T}}, \tag{13}\]
where \(a=\mathrm{T}\), \(\mathrm{G}\), or \(\mathrm{R}\).
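A sketch of Equations (10)-(13); for brevity, the same regularization parameter is reused for both maps, whereas the paper derives \(\alpha_{\mathrm{T}}\) and \(\alpha_{\mathrm{G}}\) separately from their L-curves.

```python
def solver_matrix(v_inv, alpha):
    YtVinv = Y.T * v_inv
    return np.linalg.solve(YtVinv @ Y + alpha * np.diag(r_diag), YtVinv)

M_T = solver_matrix(v_inv_diag, alpha_opt)   # total map, Equation (10)
M_G = solver_matrix(v_inv_gdf, alpha_opt)    # ribbon pixels zeroed, Eq. (11)
M_R = M_T - M_G                              # Equation (12)

c_T, c_G, c_R = M_T @ j, M_G @ j, M_R @ j

# Equation (13): gap columns of M are zero, so the gap variances drop out.
W_T = (M_T * V_diag) @ M_T.T
W_G = (M_G * V_diag) @ M_G.T
W_R = (M_R * V_diag) @ M_R.T
```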
The spherical harmonic coefficients and their covariance matrix can be transformed into flux values and their covariance matrix on the original IBEX pixelization using simple transformations:
\[\tilde{\mathbf{J}}_{a}=\mathbf{Y}\mathbf{c}_{a}, \tag{14}\]
\[\widetilde{\mathbf{V}}_{a}=\mathbf{Y}\mathbf{W}_{a}\mathbf{Y}^{\mathrm{T}}. \tag{15}\]
We use tildes to mark that these quantities are reconstructed from the spherical harmonic decomposition. The results of these transformations for the 2016 map and the energy step 1.7 keV are shown in the top and middle rows of Figure 5. This map has a significant data gap near the middle of the map, and thus is a good example to discuss the method's performance. The uncertainty maps show the square root of the diagonal values of the covariance matrix. While for the original map this is complete information, the maps reconstructed from the spherical harmonic representation have spatial correlations that cannot be represented in this figure. We use the same scale for all uncertainty maps to show that the uncertainties are smaller because we effectively use information from neighboring pixels to obtain the reconstructed flux in each of the considered pixels. The methodology uses the information that the structures are described by spherical harmonics up to \(\ell_{\mathrm{max}}=22\). The uncertainty of the ribbon map shows that the uncertainty is the highest inside the ribbon mask and is reduced outside, which is due to the assumption of our analysis that outside of the ribbon, the flux is only from the GDF. However, the GDF map shows that the uncertainty is reduced inside the ribbon mask and is only higher at the edges of the mask. A similar situation is visible in the total map for the data gap regions.
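The reconstruction step of Equations (14)-(15) then reduces to two matrix products, shown here for the GDF component.

```python
j_gdf = Y @ c_G                          # Equation (14)
V_gdf = Y @ W_G @ Y.T                    # Equation (15)
sigma_gdf_map = np.sqrt(np.diag(V_gdf)).reshape(30, 60)   # 1-sigma per pixel
```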
Figure 5: Separation for the 2016 map in energy step 1.7 keV. The top panels show the original map and the reconstructed fluxes from the spherical harmonic representations of the total, GDF, and ribbon flux (left to right). The middle and bottom rows show the uncertainties obtained from Equations (13) and (16–17), respectively. The uncertainty map for the original map is the same in both rows.
The underestimation of the uncertainty inside the ribbon mask and data gap regions results from the fact that inside these regions, the estimated flux is governed by the regularization term, which is described by a single regularization parameter. Therefore, the values inside these regions depend little on the uncertainties of the original IBEX map. We thus need to use a different method to estimate these uncertainties. For this purpose, we assume that the global variance of the GDF flux inside the gaps and ribbon mask is the same as outside of the mask. Therefore, we construct a covariance matrix \(\mathbf{V}_{\text{G}}^{\prime}\) in which the entries corresponding to the data points in the gaps or ribbon mask are replaced with the variance of the observed fluxes over the rest of the sky. Similarly, we define \(\mathbf{V}_{\text{T}}^{\prime}\) where only the data gap variances are changed. Let \(\mathbf{M}_{\text{T}}^{\prime}\) and \(\mathbf{M}_{\text{G}}^{\prime}\) denote matrices as defined in Equations (10)-(11), except matrices \(\mathbf{V}_{\text{T}}^{\prime}\) and \(\mathbf{V}_{\text{G}}^{\prime}\) replace matrices \(\mathbf{V}_{\text{T}}\) and \(\mathbf{V}_{\text{G}}\), respectively. With these definitions, the uncertainties of the spherical harmonic coefficients for the total and GDF map can be calculated as follows:
\[\mathbf{W}_{a}^{\prime}=\mathbf{M}_{a}^{\prime}\mathbf{V}_{a}^{\prime}( \mathbf{M}_{a}^{\prime})^{\text{T}} \tag{16}\]
This equation does not apply to the ribbon uncertainties because the calculations of the total and GDF coefficients use different uncertainties for pixels inside the ribbon mask. To appropriately account for the uncertainty in the ribbon mask region, we need to define a covariance that includes both possibilities in the ribbon mask. Let \(\mathbf{V}^{\prime\prime}\) denote a diagonal matrix constructed by expanding matrix \(\mathbf{V}_{\text{T}}^{\prime}\) by \(N_{\text{mask}}\) columns and rows, where \(N_{\text{mask}}\) is the number of pixels in the ribbon mask. Then, on the diagonal of the expanded part, we put the variance of the observed ENA fluxes from outside the ribbon mask.
Matrices \(\mathbf{M}_{\text{T}}\) and \(\mathbf{M}_{\text{G}}\) need to be redefined with the new dimensionality of the covariance matrix. In both matrices, the covariance matrices are replaced with one common matrix \(\mathbf{V}^{\prime\prime}\). Additionally, in the definition of matrix \(\mathbf{M}_{\text{T}}\) (Equation (10)), we extend matrix \(\mathbf{Y}\) by adding \(N_{\text{mask}}\) zero vectors as additional columns of this matrix. For matrix \(\mathbf{M}_{\text{G}}\), matrix \(\mathbf{Y}\) is changed by moving the columns corresponding to the mask pixels to the end of the matrix, while the values at the original columns are replaced with zeroes. Let \(\mathbf{M}_{\text{T}}^{\prime\prime}\) and \(\mathbf{M}_{\text{G}}^{\prime\prime}\) denote these modified matrices. The spherical harmonic representation uncertainty of the ribbon is given by the covariance matrix:
\[\mathbf{W}_{\text{R}}^{\prime\prime}=(\mathbf{M}_{\text{T}}^{\prime\prime}-\mathbf{M}_{\text{G}}^{\prime\prime})\mathbf{V}^{\prime\prime}(\mathbf{M}_{\text{T}}^{\prime\prime}-\mathbf{M}_{\text{G}}^{\prime\prime})^{\text{T}}. \tag{17}\]
We reconstruct the covariance matrix in the standard IBEX pixelization from Equations (16-17) using Equation (15). The square root of the diagonal values is presented in the bottom row of Figure 5, which shows that the newly obtained uncertainties are significantly higher in the ribbon mask and data gap regions. These uncertainties are reported in the derivative product release in connection with this paper.
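The modified-covariance step of Equation (16) might be sketched as follows for the GDF; the expanded-matrix construction of Equation (17) follows the same pattern and is omitted for brevity.

```python
# Variances at gap and mask pixels are replaced by the flux variance over
# the rest of the sky, so all entries of the modified variance are positive.
valid = (v_inv_diag > 0.0) & ~ribbon_mask
var_g_prime = np.where(valid, V_diag, np.var(j[valid]))
M_G_prime = solver_matrix(1.0 / var_g_prime, alpha_opt)
W_G_prime = (M_G_prime * var_g_prime) @ M_G_prime.T       # Equation (16)
```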
## 3 Results
We apply the methodology presented in Section 2 to the Compton-Getting and survival probability corrected ram-only IBEX maps from IBEX Data Release #16. This data release includes observations over a full solar cycle from 2009 through 2019. We use the corresponding mask obtained for each energy step, as discussed in Section 2.4. The spherical harmonic representation obtained from our analysis is used to reconstruct IBEX maps with 6\({}^{\circ}\times\)6\({}^{\circ}\) pixelization separately for the GDF and ribbon components.
The GDF-only maps are shown in Figure 6. We adjust the color scale for each energy step to represent the ENA flux range over the considered period. The maps show that the strongest temporal evolution of the GDF is observed in the highest energy step, while the evolution of the GDF in the energy step 0.7 keV is very weak. Furthermore, the enhancement near the heliospheric nose, which is visible in the center of the
presented maps, shows the importance of the separation because this enhancement overlaps the ribbon region. The recent enhancement in observed ENA fluxes near this enhancement, which follows the increase in the solar wind dynamic pressure (McComas et al., 2019; Zirnstein et al., 2022), is clearly visible in separated maps from observations in 2017 and later, especially in the higher energy steps.
However, small-scale variations observed in single-year maps need to be carefully considered, as they may result from statistical uncertainties of the original maps (see Section 2.3). The lowest energy step appears to be the most spatially dynamic in Figure 6, but only low-degree spherical harmonics are needed to reproduce most of the GDF, as discussed in Section 4.2. Consequently, the variations visible in these maps are due to statistical noise and do not show real small-scale structures. Similar variations are also present in the source IBEX maps but are less apparent because the standard IBEX maps are rendered using a perceptually non-uniform "rainbow" color scale, and their color range is broader to include the sum of the GDF and ribbon flux. Moreover, some of these structures appear aligned with the ribbon mask region. The regularization term suppresses high-degree spherical harmonic coefficients. However, the spherical harmonic degree is related to the combined angular scale of structures. Therefore, structures aligned with the mask are less constrained.
Figure 7 presents the ribbon flux maps. Similar to the GDF, the lowest energy step does not show significant time evolution, but time changes are more prominent in higher energy steps. The three lowest energy steps have a clear ribbon structure over the entire solar cycle period, but the energy steps 2.7 and 4.3 keV show that the ribbon observed in recent years is weaker than in the early years of the mission. The employed color scale in the maps makes small changes visible even though most are not statistically significant. The time-combined maps show fewer small-scale structures than those presenting results for individual years. Most visible variations are caused by the statistical noise present in the original IBEX maps. While the high-degree spherical harmonics are needed to reproduce the ribbon profile, they also result in the reproduction of the statistical noise. The IBEX observations do not provide sufficient statistics to resolve the small-scale variations predicted by some IBEX ribbon models (Giacalone & Jokipii, 2015; Zirnstein et al., 2020). In addition, the ribbon structure in energy step 4.3 keV appears less coherent because the ribbon flux is very low, and for the adopted color scheme for these maps, the statistical noise becomes visible in this energy step. The GDF reconstructions do not account for the statistical noise inside the ribbon mask. Therefore, the ribbon maps include the total statistical noise present in the original IBEX maps from both the GDF and ribbon components.
Unlike the GDF maps, the ribbon maps do not clearly indicate "recovery" of the ribbon in the last three years. The brightening of the ribbon in the highest energy step does not have a ribbon-like structure and is likely caused by the increase in the GDF over this region. Some enhancement is present in energy steps 1.1, 1.7, and 2.7 keV in years 2017-2019, but it is generally weaker than the one observed in the GDF. On the other hand, the northernmost portion of the ribbon (in the left upper part of each map) weakens universally in energy steps 1.7, 2.7, and 4.3 keV without any signs of recovery, which suggests that this part of the ribbon does not recover. A more detailed analysis of the ribbon evolution in 2009 vs. 2019 can be found in Dayeh et al. (2023b).
Figure 6: GDF maps reconstructed from the spherical harmonic representation for energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV (left to right panels). Rows from top to bottom show the results from the time-combined maps and single-year maps from 2009 to 2019.
Figure 7: Ribbon maps reconstructed from the spherical harmonic representation for energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV (left to right panels). Rows from top to bottom show the results from the time-combined maps and single-year maps from 2009 to 2019.
The time-evolution analyses of the GDF in previous studies often avoided the regions overlapping with the ribbon. Utilizing our methodology, we can analyze the time series of each component in any desired direction in the sky. Here, we calculate the time series of the GDF and ribbon flux averaged over the entire sky and six regions centered at the heliospheric nose, tail, north and south ecliptic poles, and starboard and port sides. We average over a portion of the sky within 20\({}^{\circ}\) from each direction. Appendix B presents tools allowing for transformation from the spherical harmonic coefficients to averages over any predefined region of the sky. The position of the nose is at ecliptic (255.59\({}^{\circ}\), 5.14\({}^{\circ}\)) (Swaczyna et al. 2022b), the tail direction is antipodal to the nose, and the port and starboard are centered at points in the ecliptic plane and 90\({}^{\circ}\) away from the nose. We note that we scale the ribbon flux by a factor of 4 in Figure 8 to make visual comparison clearer.
The time series of the GDF and ribbon flux over the entire sky (left column in Figure 8) confirm that the GDF undergoes substantial evolution in the higher energy steps, with evident recovery in the most recent years. In contrast, the evolution of the ribbon is weaker. The GDF increase following the solar wind dynamic pressure increase is the strongest in the nose direction, but it is not visible in the tail direction, in which the GDF continues to decline. Moreover, while the initial maximum of the GDF in the nose is observed in most energy steps in 2010, it is delayed by \(\sim\)2 years in the tail direction, particularly at 1.1 and 1.7 keV. Thus, the recent solar wind dynamic pressure increase may be reflected in the tail direction in the coming years. The north and south poles have the best statistics of the ENA flux observations because they are observed nearly continuously throughout the year. The recent increase is clearly visible in the south and north ecliptic poles but appears \(\sim\)1 year later in the north pole, which is consistent with other studies of the ENA evolution (e.g., Reisenfeld et al. 2016, 2019; McComas et al. 2020) showing that the ENA source in the north is further away than in the south. On the other hand, the starboard and port sides show statistically similar increases in the last two years, although previous studies have hinted at asymmetric evolution of the flanks (Zirnstein et al. 2017).
Figure 8: Grid of time-series plots of the GDF and ribbon flux observed from the entire sky and six regions around the nose, tail, north and south ecliptic pole, starboard, and port directions (left to right). Rows show results for energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV (top to bottom). The ribbon flux is multiplied by 4 to facilitate comparison with the GDF. The bands in this figure show 1\(\sigma\) uncertainties.
The temporal evolution of the ribbon may be tracked from the ribbon-separated maps rotated to the ribbon-centered frame. For comparison between different years, we use the ribbon centers obtained in Section 2.4. The left column in Figure 9 shows polar maps of the ribbon flux averaged over 2009-2011 in the ribbon coordinates. The heliospheric nose lies along the 0\({}^{\circ}\) azimuth angle line. For each map, we plot a circle that approximately follows the peak of the ribbon. We integrate the ribbon profiles over the polar angles to analyze the time evolution of the ribbon intensity. The results are shown in the right column of Figure 9. The heliographic latitude of the ribbon's circle for each energy is shown as the top scale. We combine yearly maps similarly as in Zirnstein et al. (2023) into the following year ranges: 2009-2011, 2012-2013, 2014-2015, 2016-2017, and 2018-2019. This combination of fluxes reduces uncertainties and eases the analysis of the solar cycle evolution. Similar combinations have also been used in several studies for similar purposes of examining temporal ENA variations (e.g., Zirnstein et al., 2017; Dayeh et al., 2019, 2022; Schwadron & McComas, 2019; McComas et al., 2020).
The obtained profiles for the two lowest energy steps (0.7 and 1.1 keV) show little evolution over time. While the profile for the first three years is slightly higher, the later evolution is mainly within their uncertainties. For the energy step 1.7 keV, the ribbon splits into two main regions: the southern region centered near azimuth +60\({}^{\circ}\), and the northern region centered near azimuth -90\({}^{\circ}\). The southern region indicates some evolution, with a minimum in 2016-2017. The northern region evolves more strongly. The first two periods show similar fluxes, but later the flux starts to decrease; the regions closer to the nose appear to decline earlier than those further away from the nose, which suggests that the ribbon source may be further away, and thus the ENA response to solar cycle changes is delayed. The evolution at 2.7 keV is similar, except the southern region shows a clear maximum in 2012-2013. In the highest energy step, the southern region is weaker than the northern region and remains stable over time. We note that the northern region extends farther away from the nose in this energy step. Additionally, all energies show that the ribbon flux is consistently small over time near azimuth of \(\sim\)150\({}^{\circ}\). This region is near the port side, i.e., close to the heliospheric tail.
Figure 9: Time-evolution of the ribbon flux. _Left column_: polar maps of the ribbon flux averaged over 2009-2011. The dashed circle approximately follows the ribbon’s peaks. Lines of heliographic latitudes are shown with green lines. _Right column_: evolution of the polar-angle-integrated ribbon flux as a function of the azimuth. Rows from top to bottom correspond to energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV. The bands in this figure show 1\(\sigma\) uncertainties.
## 4 Discussion

### Power Spectra of ENA Flux Components
Figure 10 presents the power spectrum of the spherical harmonic representations of each component as a function of the spherical harmonic degree obtained from the time-averaged maps for each energy step. The power spectrum is defined for each degree \(\ell\) as
\[P_{\ell}=\frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell}c_{\ell,m}^{2}\,. \tag{18}\]
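A sketch of Equation (18), reusing the coefficient vectors from the Section 2.5 snippets.

```python
def power_spectrum(c):
    """Per-degree power spectrum of a coefficient vector (Equation (18))."""
    return np.array([np.mean(c[degrees == ell] ** 2)
                     for ell in range(l_max + 1)])

P_gdf, P_ribbon = power_spectrum(c_G), power_spectrum(c_R)
ratio = P_ribbon / P_gdf                 # ribbon-to-GDF power per degree
```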
The power spectrum of the GDF is larger than the power spectrum of the ribbon component for \(\ell\leq 2\) in all observed energy steps. Furthermore, in all energy steps except for the highest, the power spectrum of the ribbon is larger for \(3\leq\ell\leq 15\). In the highest energy step, the power spectra of both components are generally comparable over this range. However, there is a significant range where these two components are comparable, most notably for spherical harmonic degrees between 2 and 4. Therefore, it is impossible to separate the ribbon purely based on the selection of the dominant spherical harmonic components. However, the ratio between the power spectrum of the ribbon and GDF is the largest for the degree of 5, exceeding 20 in the three lowest energy steps. Furthermore, the ratio remains high for these energy steps
Figure 10: Power spectra as a function of the spherical harmonic degree of the GDF and ribbon components and their sum (total) for energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV (top to bottom).
for several higher degrees. This shows that our methodology successfully separates two signal sources characterized by different angular scales. Visual inspection of Figures 6 and 7 confirms that one of them is the ribbon identified from the first IBEX observations.
The power spectrum of the GDF decreases more significantly for low degrees, confirming that the GDF can be mostly reconstructed from low-degree spherical harmonics. In all energy steps, the GDF power spectrum drops significantly from \(\ell=0\) to 1, from \(\ell=2\) to 3, and for some energies, also from \(\ell=4\) to 5. The constant spherical harmonic describes the mean ENA flux from the entire sky, thus dominating the spectrum. The second drop explains why in the analysis in Paper I, at least spherical harmonics with \(\ell\leq 2\) were needed to reconstruct the main structures of the GDF. As discussed in Section 2.4, the structures up to \(\ell=4\) in the GDF should be reconstructed globally, while smaller ones can only be analyzed outside the ribbon mask.
### Small-scale Structures in the GDF
We selected the maximum degree of spherical harmonics, \(\ell_{\max}=22\), based on the AIC (Section 2.3) applied to the reconstruction of the total IBEX map, which includes both the GDF and ribbon components. While the regularization term suppresses the higher-degree spherical harmonics, they can reproduce some of the statistical noise visible in the IBEX maps, especially those obtained based on a single year of observations. Therefore, we estimate the maximum degree needed to reconstruct the GDF for these maps to identify possible small-scale structures. For this purpose, we calculate the normalized residual sum of squares for each energy step \(e\):
\[\chi^{2}_{\mathrm{GDF},e}(\ell_{\mathrm{GDF},e})=\sum_{t}\sum_{k}\left(\frac{j_{e,t,k}-\sum_{(\ell m):\ell\leq\ell_{\mathrm{GDF},e}}y_{k,\ell m}\,c_{e,t,\ell m}}{\sigma_{e,t,k}}\right)^{2}. \tag{19}\]
In the above sum, indices \(t\) and \(k\) enumerate years and pixels outside the ribbon mask for which observed flux is available in the original IBEX map. We calculate this sum by truncating the reconstruction from the spherical harmonic coefficients to the maximum degree of \(\ell_{\mathrm{GDF},e}\).
For a normal distribution of the normalized residuals, the sum given in Equation (19) should be approximately equal to the number of data points included in the sum. We chose \(\ell_{\mathrm{GDF},e}\) for each energy step so that the sum is the closest to this number. This criterion provides the degrees of 3, 8, 9, 10, and 11 for energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV, respectively. The lowest degree is needed for the lowest energy step, mainly because uncertainties are the highest in this energy step. It means that global structures smaller than \(180^{\circ}/3=60^{\circ}\) are not statistically significant for this energy step. The scale of structures observed in the GDF for higher energy steps decreases to \(\sim\)20\({}^{\circ}\). This limitation in the angular scale of structures that IBEX observations can resolve is caused by the limited statistics of individual IBEX maps. For the time-combined maps, the above criterion would indicate the maximum degrees of 7, 9, 11, 13, and 16 in the respective energy steps. Nevertheless, studies focusing on the GDF-only maps may use the above limited range of spherical harmonics to reconstruct the IBEX maps. Figure 11 shows a version of Figure 6 in which the maximum degree of spherical harmonics included in the reconstruction is limited. This figure shows that a significant portion of the variation visible in Figure 6 disappeared, which confirms that it represented statistical noise. Still, to estimate the ribbon maps, we need a higher-degree reconstruction of the GDF to provide an equally good representation of small-scale statistical variations and thereby reduce the ribbon flux fluctuations outside of the ribbon mask.
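A sketch of this truncation criterion for a single map; `valid` flags pixels outside the ribbon mask with observed fluxes, as constructed in the Section 2.5 snippet.

```python
def chi2_truncated(ell_gdf):
    cols = degrees <= ell_gdf
    res = Y[:, cols] @ c_G[cols] - j
    return np.sum(v_inv_diag[valid] * res[valid] ** 2)

n_points = np.count_nonzero(valid)
l_gdf_best = min(range(1, l_max + 1),
                 key=lambda ell: abs(chi2_truncated(ell) - n_points))
```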
Figure 11: As Figure 6 except the GDF flux is reconstructed with the spherical harmonics up to the maximum degree derived in Section 4.2.
It is also interesting to inspect possible outliers in the normalized residuals that may allow for identifying smaller features, including possible point sources. Figure 12 shows histograms of normalized residuals obtained from all single-year maps for the maximum degree found above. The histograms follow the normal distribution, shown with the black line, but some discrepancies require further discussion. Nevertheless, none of the residual signals exceed the \(5\sigma\) rule used to avoid accidental discovery in comparisons of many data points (e.g., Lyons 2008).
The distribution appears to indicate a surplus of pixels, compared to the normal distribution, in which the observed flux is smaller than reconstructed, especially for normalized residuals less than about -2 (see green ellipses in Figure 12). At the same time, the histograms are slightly below the normal distribution for normalized residuals between about 2 and 3 (blue ellipses in Figure 12). This effect is likely connected to the estimation of the uncertainty based on the Poisson process. If, for the Poisson process with the true count mean \(x\), the observed number of counts \(y\) is smaller than \(x\), not only is the estimated Poisson process parameter underestimated, but so is its standard deviation \(\sqrt{y}\). On the other hand, if \(y>x\), both the estimated mean and standard deviation are overestimated. Because of this positive correlation, the normalized residuals show an elevated tail for negative normalized residuals.
There are only 8 pixels with positive normalized residuals exceeding \(4\sigma\) (marked with red arrows in Figure 12). We inspect these pixels in the context of neighboring pixels to verify their importance and to check if they may indicate an interesting signal for further analysis. Six of these pixels are from the two highest energy steps, two from the 2017 map in energy step 2.7 keV, and two in each of the 2017 and 2018 maps in energy step 4.3 keV. All these pixels are centered within \(15^{\circ}\) from the ecliptic poles and additionally within \(63^{\circ}\) from the longitude of \(180^{\circ}\) at which the maps are divided between years. The solid angle covered by pixels within the described limits comprises less than \(\sim\)2% of the entire sky. As discussed in Paper I, the spherical harmonic representation does not account for time evolution within each year (i.e., as the Earth and thus IBEX orbit the Sun to fill a yearly map); therefore, these regions may not be correctly represented. Furthermore, because the poles are observed throughout the year, the time changes in the ENA flux during the year result in an abrupt spatial change around the longitude at which the map starts and ends each year because the neighboring strips at this longitude are separated by almost a year. Therefore, these six pixels do not represent actual small-scale structures.
Figure 12: Histograms of normalized residual signal between the original IBEX map and truncated GDF estimation (cf. text). Panels from left to right show energy steps 0.7, 1.1, 1.7, 2.7, and 4.3 keV. The histograms are compared with the normal distribution (solid black line). Green and blue ellipses indicate ranges with systematic deviations from the normal distribution, and red arrows point at outliers discussed in the text.
The first of the two remaining high residual pixels (\(4.24\sigma\)) is a pixel centered at ecliptic \((\lambda,\beta)=(123^{\circ},-39^{\circ})\) in the 2012 map for energy step 0.7 keV showing a residual flux of \(\sim\)92 ENAs cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) keV\({}^{-1}\). Our inspection reveals that there are a few neighboring pixels with positive residual fluxes in the same energy step and that this structure is even slightly stronger in the next year, although with a lower statistical significance. Nevertheless, this energy step shows significant time variation, and the pixel strips observed simultaneously show somewhat increased fluxes, suggesting that the background might be underestimated.
The last pixel indicates the most statistically significant positive residual flux of \(4.83\sigma\). This pixel, centered at ecliptic \((\lambda,\beta)=(93^{\circ},-45^{\circ})\), is in the 2017 map at energy step 1.1 keV. The residual flux is \(\sim\)62 ENAs cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) keV\({}^{-1}\), where the reconstructed flux is only \(\sim\)50 ENAs cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) keV\({}^{-1}\). This residual flux is limited to only this one year, although there is a flux surplus of \(\sim\)98 ENAs cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) keV\({}^{-1}\) in the same direction at energy step 0.7 keV but at lower statistical significance. The higher energy steps do not show a significant residual flux in this direction. In both energy steps, the positive residuals extend to the two neighboring pixels at the same longitude and to the pixel centered at \((99^{\circ},-45^{\circ})\). This outlier may therefore be not just a statistical fluctuation but an indication of a compact source of ENAs.
### IBEX Ribbon's Center
The method used in this analysis to find the IBEX ribbon's center is focused on minimizing the width of the region including the ribbon signal, which is an essential constraint for finding the ribbon mask (Section 2.4). The procedure differs from the two-step fitting technique used in the studies of the ribbon's position (Funsten et al., 2013, 2015; Dayeh et al., 2019). The first fitting is used to find the ribbon's peak position for different azimuthal profiles. In the second fitting, a best-fit circle that follows these peaks is found. Figure 13 compares our method (described in Appendix A) with the previous one (see Appendix in Zirnstein et al., 2023) for the same combined periods discussed in Section 3.
Both techniques reproduce similar patterns of the centers' positions in the observed energy steps. In most cases, the centers from the lowest to highest energy steps are ordered by ecliptic latitude, with the lowest one being closest to the north pole. The ordering along a heliographic meridian relates to the solar wind structure reflected in the IBEX ribbon (Swaczyna et al., 2016). Moreover, in 2012-2015 the centers appear more closely aligned along the meridian, while the alignment is somewhat tilted in earlier and later years. The ribbon in the energy step 4.3 keV was very weak in 2014-2017 (see also Section 3), and thus our approach failed to find the ribbon's center. Nevertheless, the two-step fitting can still find a fit for the 2014-2015 map.
There are two systematic differences in the results obtained with these two methods. First, the centers obtained from the two-step fitting are shifted by about \(+2^{\circ}\) in ecliptic longitude. Moreover, the centers in the highest energy step found in this method are shifted south compared to the method from Appendix A. It is important to note that these two methods define the ribbon differently; thus, this difference does not invalidate either of these methods. However, for future comparisons with models, it is critical to use the same technique for both the modeled and observed fluxes.
## 5 Summary
Spherical harmonic representations of smoothly changing functions defined on a sphere are a helpful tool for representing them with a finite number of coefficients. The spherical harmonics are ordered by their degree, with higher degrees allowing for the representation of smaller-scale features of the function. The ENA flux maps from IBEX are examples of functions expected to vary smoothly over the sky and may be represented by a finite linear combination of spherical harmonics. Our analysis shows that IBEX maps require spherical harmonics up to the maximum degree \(\ell_{\max}=22\) (Section 2.3), which means that one map is represented by the values of \((22+1)^{2}=529\) coefficients of spherical harmonics for degrees \(\ell\leq\ell_{\max}\).
The spherical harmonic coefficients are obtained from the least-squares minimization (Section 2.1) supported with a regularization term, suppressing possible artificial extrema in the data gap regions (Section 2.2). This combination allows for independent analyses of single-year IBEX maps. Additionally, it enables the estimation of the GDF component in the data gaps and ribbon region (Section 2.4). Based on this analysis, we provide the spherical harmonic representation of the total (combined) ENA flux, as well as the GDF and ribbon components and their uncertainty matrices (Section 2.5).
Based on the separated signals, we confirm that the GDF and ribbon evolve differently over the solar cycle (Section 3). The GDF shows apparent enhancement following the solar wind dynamic pressure increase in late 2014 (McComas et al., 2019; Zirnstein et al., 2022). The response observed in the ENA flux is delayed, reflecting the distance to the ENA source region.
Figure 13: Positions of the IBEX ribbon's centers in ecliptic coordinates. The ellipses and points with error bars show the positions obtained using the methodologies presented in Appendix A and by Zirnstein et al. (2023, Appendix), respectively. Panels from the top left to bottom right show results for periods: 2009-2019, 2009-2011, 2012-2013, 2014-2015, 2016-2017, and 2018-2019.
The ribbon flux also evolves over the solar cycle, but it appears to be connected to the evolution of the latitudinal structure of the solar wind over the solar cycle. In contrast to the GDF, the ribbon has not responded strongly to the solar wind dynamic pressure increase.
The degree of a spherical harmonic represents the characteristic spatial scale of structures. Section 4.1 analyzes the power spectra of the ENA flux components as a function of this degree. Low-degree spherical harmonics dominate the GDF, showing that most of the signal varies smoothly over the sky. On the other hand, the ribbon power spectrum is flatter, indicating that a proper representation requires higher degrees of spherical harmonics because the ribbon profile is narrow. While most of the IBEX signal can be captured by the spherical harmonic representation, we found that it is currently limited by temporal changes within each year, which may lead to abrupt changes at the edges of subsequent maps. Additionally, we identified two possible very small ENA sources (see Section 4.2), which require further analysis beyond the scope of this paper. Finally, the new method used to find the ribbon's center in this paper reproduces the most important features of the ribbon geometry, but the centers it finds do not agree with those from the previously used two-step method because the two methods define the ribbon differently (Section 4.3).
This paper serves as a detailed description of the spherical harmonic representation of IBEX maps and the separation of the GDF and ribbon components. Concurrently with this paper, we release the obtained spherical harmonic representations on Zenodo under the Creative Commons Attribution License ([https://doi.org/10.5281/zenodo.7683357](https://doi.org/10.5281/zenodo.7683357)). In most situations, this derivative product release should be preferred over the release provided with Paper I. In this release, we include the spherical harmonic coefficients with their covariance matrices for the total map and the ribbon and GDF components. Additionally, we provide reconstructed maps in the standard IBEX pixelization. However, unlike the uncertainties in the standard IBEX maps, the corresponding uncertainties are highly correlated, and thus the pixels from these maps should not be statistically combined into a larger region. Instead, the procedure described in Appendix B should be applied in these situations. The methodology developed in this paper may be particularly useful for future analyses of ENA maps from the upcoming Interstellar Mapping and Acceleration Probe (IMAP) mission (McComas et al., 2018).
_Acknowledgments_: This material is based upon work supported by the National Aeronautics and Space Administration (NASA) under grant No. 80NSSC21K0582 issued through the Heliophysics Guest Investigators - Open 2020 Program.
## Appendix A Calculating the IBEX Ribbon's Centers
The IBEX ribbon's centers are essential characteristics of the ribbon geometry in different energy steps. Previous methodologies (Funsten et al., 2013, 2015; Dayeh et al., 2019; Zirnstein et al., 2023) used to find the ribbon's centers adopted a two-step fitting approach in which an IBEX map is rotated to coordinates in which the ribbon's center is close to the north pole. In these ribbon-center coordinates, the ribbon profiles for different azimuths are fit using a Gaussian function with the addition of a linear or quadratic function modeling the GDF (or a skew-Gaussian, as in Zirnstein et al., 2021). The peak of the Gaussian describes the ribbon position for azimuths where the ribbon is sufficiently strong. In the second step, a circle is fit to the positions of the ribbon peaks.
Separating the ribbon enables a different approach in which we do not need to assume any specific functional form of the ribbon profile. First, we rotate the ribbon spherical harmonic coefficients into possible ribbon centers using the transformation from real to complex spherical harmonics, to which we apply a Wigner D-matrix obtained for the Euler angles describing the rotation. Subsequently, we return to
the real spherical harmonic representation, which results in the spherical harmonic coefficients in the rotated frame. This methodology can generally be used on any map represented with spherical harmonics.
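The rotation described above can be prototyped with off-the-shelf spherical harmonic tooling. The following is a minimal sketch assuming the `pyshtools` package, whose `SHCoeffs` interface wraps the real-to-complex conversion and Wigner D-matrix rotation internally; the Euler-angle choice and argument layout are illustrative assumptions to be checked against the library documentation, not the authors' implementation.

```python
import pyshtools as pysh

def rotate_to_center(coeffs, lon_deg, lat_deg):
    """Rotate real SH coefficients (shape (2, lmax+1, lmax+1)) so that the
    candidate center (ecliptic lon/lat in degrees) maps to the pole of the
    rotated frame. Angles assume pyshtools' default z-y-z convention."""
    sh = pysh.SHCoeffs.from_array(coeffs)
    alpha, beta, gamma = -lon_deg, -(90.0 - lat_deg), 0.0  # assumed convention
    return sh.rotate(alpha, beta, gamma, degrees=True).to_array()
```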
For each tested center of the ribbon, we calculate the polar angle profile integrated over all azimuth angles in the rotated frame, from which we find the first and second central moments. For a perfectly Gaussian profile, the first moment indicates the ribbon peak position, while the second central moment represents the ribbon width. In our analysis, we want to find the ribbon center for which the ribbon is as narrow as possible to minimize the ribbon flux outside the mask (Section 2.4). Therefore, we seek the ribbon center for which this second central moment is the smallest. The first moment obtained for this center represents the ribbon radius.
In practice, we calculate the second central moment for possible centers on a rectangular grid in ecliptic coordinates, for longitudes from \(190^{\circ}\) to \(250^{\circ}\) with a step of \(3^{\circ}\) and latitudes from \(24^{\circ}\) to \(50^{\circ}\) with a step of \(2^{\circ}\). We chose slightly larger steps for longitudes to have comparable angular distances in the grid. We compute the second central moment on this grid and locate its minimum using bi-cubic interpolation.
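A sketch of this search is given below. It assumes a helper `polar_profile(center)` that returns the azimuthally integrated flux profile \(P(\theta)\) on a fixed polar-angle grid in the frame rotated to `center` (e.g., built on `rotate_to_center` sketched above); that helper, like the variable names, is a placeholder rather than the released code.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.optimize import minimize

lons = np.arange(190.0, 250.0 + 1e-9, 3.0)   # ecliptic longitudes [deg]
lats = np.arange(24.0, 50.0 + 1e-9, 2.0)     # ecliptic latitudes [deg]
theta = np.linspace(0.0, 180.0, 361)         # polar angle grid [deg]

def moments(center):
    p = polar_profile(center)                # assumed helper: P(theta)
    w = p / np.trapz(p, theta)               # normalize to a density
    m1 = np.trapz(theta * w, theta)          # first moment: ribbon radius
    m2 = np.trapz((theta - m1) ** 2 * w, theta)  # second central moment
    return m1, m2

m2_grid = np.array([[moments((lo, la))[1] for la in lats] for lo in lons])

# Bi-cubic interpolation of the moment surface, then minimize it.
spline = RectBivariateSpline(lons, lats, m2_grid, kx=3, ky=3)
i0 = np.argmin(m2_grid)
res = minimize(lambda c: spline(c[0], c[1])[0, 0],
               x0=[lons[i0 // len(lats)], lats[i0 % len(lats)]],
               bounds=[(lons[0], lons[-1]), (lats[0], lats[-1])])
center_est = res.x                            # estimated ribbon center
```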
We estimate the center and radius uncertainties using bootstrapping. Based on the covariance matrix of the ribbon spherical harmonic coefficients, we randomly select 100 sets of these coefficients from a multivariate normal distribution described by the best fit ribbon spherical harmonics and their covariance. We repeat the procedure described above, which gives us 100 possible ribbon centers and radii. From this set, we calculate their covariance matrix representing their uncertainties.
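The bootstrap itself is only a few lines; in this sketch, `c_best` and `cov_c` stand for the best-fit ribbon coefficient vector and its covariance, and `find_center` wraps the grid search above, returning (longitude, latitude, radius). All three names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 coefficient sets drawn from the multivariate normal described by
# the best fit and its covariance matrix.
samples = rng.multivariate_normal(c_best, cov_c, size=100)
results = np.array([find_center(c) for c in samples])  # 100 (lon, lat, radius)
center_cov = np.cov(results, rowvar=False)  # covariance = uncertainty estimate
```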
## Appendix B Average Fluxes over Regions from Spherical Harmonic Coefficients
The spherical harmonic representations of ENA flux maps are provided by lists of coefficients and their covariance matrix. However, in most analyses, we are interested in comparisons of the ENA maps with models over some regions of the sky. For example, the average flux over region \(\Omega_{r}\) is given by the following expression:
\[f_{r}=\frac{\iint_{\Omega_{r}}\sum_{\ell,m}c_{\ell m}Y_{\ell m}( \theta,\phi)d\Omega}{\iint_{\Omega_{r}}d\Omega}=\sum_{\ell,m}c_{\ell m}\underbrace {\frac{\iint_{\Omega_{r}}Y_{\ell m}(\theta,\phi)d\Omega}{\iint_{\Omega_{r}}d \Omega}}_{z_{r,\ell m}}=\mathbf{c}\cdot\mathbf{z}_{r}. \tag{11}\]
We change the order of integration and summation because the integrals are finite. Defining the vector \(\mathbf{z}_{r}=\left\{z_{r,\ell m}\right\}_{(\ell m):\ell\leq\ell_{\max}}\), this summation can be expressed as a dot product with the vector of the coefficients. Consequently, the integral defining \(z_{r,\ell m}\) needs to be calculated only once and can later be applied to multiple maps, e.g., from different years or energy steps. The vectors \(\mathbf{y}_{k}\) (Section 2.1) describing the average values over the IBEX pixels are just a special case of this integral, where the regions correspond to IBEX pixels (see also Paper I). Therefore, the discussion provided below also applies to the reconstructed pixelized maps. These integrals can be calculated either numerically or analytically.
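As an illustration, the \(z_{r,\ell m}\) integrals for a rectangular longitude/latitude region can be evaluated numerically as sketched below. The real spherical harmonics are assembled from scipy's complex ones; the normalization and phase conventions, and the coefficient ordering, are assumptions that must match those of the released coefficients.

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(l, m, theta, phi):
    """Real Y_lm; theta = polar angle, phi = azimuth (radians).
    Note scipy's sph_harm(m, l, azimuth, polar) argument order."""
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, phi, theta).real
    if m < 0:
        return np.sqrt(2.0) * (-1) ** abs(m) * sph_harm(-m, l, phi, theta).imag
    return sph_harm(0, l, phi, theta).real

def z_vector(lmax, theta_lim, phi_lim, n=200):
    """z_{r,lm} for a rectangular region; ordering (l, m) must match c."""
    th = np.linspace(*theta_lim, n)
    ph = np.linspace(*phi_lim, n)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    dA = np.sin(TH)                                  # solid-angle weight
    area = np.trapz(np.trapz(dA, ph), th)
    z = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            y = real_sph_harm(l, m, TH, PH) * dA
            z.append(np.trapz(np.trapz(y, ph), th) / area)
    return np.array(z)                               # average flux: f_r = z @ c
```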
Integrals over rectangular ranges of longitudes and latitudes are straightforward. However, this can be combined with a rotation to any other coordinate system. For example, we calculated the average over circular regions centered at different points in Figure 8. For this purpose, we combine the integration of spherical harmonics over the complete azimuth angle and a polar angle from 0 to \(r\), where \(r\) is the radius of this region with the rotation to the coordinates in which the center of the region coincides with the pole (see Appendix A). A similar procedure may be used to integrate over a ring region or a ring sector.
The vectors \(\mathbf{z}_{r}\) for several regions can be combined into a single matrix \(\mathbf{\mathrm{Z}}=\{\mathbf{z}_{r}\}_{r=1,\ldots,N_{r}}\), where \(N_{r}\) is the number of regions. With this definition, the fluxes in all these regions are:
\[\mathbf{f}=\mathbf{\mathrm{Z}}\mathbf{c}. \tag{12}\]
The covariance matrix of the uncertainties in these regions is calculated from simple propagation of errors:
\[\mathbf{\Sigma}_{f}=\mathbf{\mathrm{ZVZ}}^{\mathrm{T}}. \tag{13}\]
The same expression is used to calculate the covariance matrix for reconstructing the pixelized maps. It is essential to account for possible correlations when averaging over multiple pixels. Therefore, the procedure in this appendix should be followed for further analyses of various regions that combine pixels because the correlations are significant, especially between neighboring pixels.
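In code, Eqs. (12) and (13) reduce to two matrix products. In this sketch, `z_vector(...)` is the helper from above, `regions` is an assumed list of (polar, azimuth) integration limits, and `c` and `V` stand for a map's coefficient vector and covariance matrix.

```python
import numpy as np

# Stack one z-vector per region of interest into Z (N_r x N_coeffs).
Z = np.vstack([z_vector(lmax, tl, pl) for (tl, pl) in regions])
f = Z @ c              # average fluxes over all regions, Eq. (12)
Sigma_f = Z @ V @ Z.T  # their covariance matrix, Eq. (13)
```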
|
2310.18116 | Direct Unsupervised Denoising | Traditional supervised denoisers are trained using pairs of noisy input and
clean target images. They learn to predict a central tendency of the posterior
distribution over possible clean images. When, e.g., trained with the popular
quadratic loss function, the network's output will correspond to the minimum
mean square error (MMSE) estimate. Unsupervised denoisers based on Variational
AutoEncoders (VAEs) have succeeded in achieving state-of-the-art results while
requiring only unpaired noisy data as training input. In contrast to the
traditional supervised approach, unsupervised denoisers do not directly produce
a single prediction, such as the MMSE estimate, but allow us to draw samples
from the posterior distribution of clean solutions corresponding to the noisy
input. To approximate the MMSE estimate during inference, unsupervised methods
have to create and draw a large number of samples - a computationally expensive
process - rendering the approach inapplicable in many situations. Here, we
present an alternative approach that trains a deterministic network alongside
the VAE to directly predict a central tendency. Our method achieves results
that surpass the results achieved by the unsupervised method at a fraction of
the computational cost. | Benjamin Salmon, Alexander Krull | 2023-10-27T13:02:12Z | http://arxiv.org/abs/2310.18116v2 | # Direct Unsupervised Denoising
###### Abstract
Traditional supervised denoisers are trained using pairs of noisy input and clean target images. They learn to predict a central tendency of the posterior distribution over possible clean images. When, _e.g._, trained with the popular quadratic loss function, the network's output will correspond to the minimum mean square error (MMSE) estimate. Unsupervised denoisers based on Variational AutoEncoders (VAEs) have succeeded in achieving state-of-the-art results while requiring only unpaired noisy data as training input. In contrast to the traditional supervised approach, unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate, but allow us to draw samples from the posterior distribution of clean solutions corresponding to the noisy input. To approximate the MMSE estimate during inference, unsupervised methods have to create and draw a large number of samples - a computationally expensive process - rendering the approach inapplicable in many situations. Here, we present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency. Our method achieves results that surpass the results achieved by the unsupervised method at a fraction of the computational cost.
## 1 Introduction
The prevalence of noise in biomedical imaging makes denoising a necessary step for many applications [14]. Deep learning has proven itself to be the most powerful tool for this task, as is evidenced by a growing body of research [27]. Although deep learning-based approaches typically require large amounts of training data, recent advances in unsupervised deep learning [20, 19, 25] have shown that this requirement need not be a barrier to their use. Unlike with supervised deep learning-based denoisers, which are trained with pairs of corresponding noisy and noise-free images, users of unsupervised methods can train their models with the very data they want to denoise.
The performance of unsupervised deep learning-based denoisers is now approaching and even sometimes matching the performance of their supervised counterparts [20, 19, 25], however, these two methods are fundamentally different in the way they do inference. By training a Variational AutoEncoder (VAE) [11], unsupervised methods approximate a posterior distribution over the clean images that could underlie a noisy input image. This distribution will be referred to as the _denoising distribution_. Random samples from the denoising distribution then constitute the infinite possible solutions to a denoising problem. Supervised and self-supervised learning methods, on the other hand, offer a single prediction that compromises between all possible solutions. This is usually a central tendency of the denoising distribution and the specific central tendency that is predicted depends on the loss function used. For example, a supervised method trained with the mean squared error (MSE) loss function will predict the mean, which is also known as the minimum mean squared error (MMSE) estimate. A model trained with the mean absolute error (MAE) loss function will predict the pixel-wise median, which is known as the minimum mean absolute error (MMAE) estimate.
Figure 1: **Our Direct Denoiser outperforms unsupervised VAE-based denoising (HDN) [19], while requiring only a fraction of the computational cost:** In red, the time to draw 1, 10, 100 and 1000 samples from HDN’s learned denoising distribution plotted against the PSNR (higher is better) of the per-pixel mean of these samples. Additionally, in blue, the time to take a single solution from our Direct Denoiser is plotted against its PSNR. These results are from denoising the _Convallaria_ dataset.
While the ability of unsupervised methods to produce diverse solutions can in some circumstances be beneficial for downstream processing [20], users oftentimes require only a single solution such as the MMSE estimate. If they are to approximate this from an unsupervised learning-based denoiser, they must process their image many times and average many possible sampled solutions, leading to a significant computational overhead. For example, the authors of [20, 19, 25] average 100 or 1000 samples per image to obtain their MMSE estimate. Such an approach requires substantial computational effort, and is not likely to be economically and ecologically reasonable for labs regularly analyzing terabytes of data.
This paper presents an alternative route to estimating the central tendencies from an unsupervised denoiser; one that requires noisy images to be processed only once. We do so by training an additional deterministic convolutional neural network (CNN), termed _Direct Denoiser_, that directly predicts MMSE or MMAE solutions and is trained alongside the VAE. It uses noisy training images as input and the sampled predictions from the VAE as training targets. Lacking a probabilistic nature, this network will minimize its MSE or MAE loss function by predicting the mean or pixel-wise median of the denoising distribution. The result is a denoising network with the evaluation times of a supervised approach and the training data requirements of an unsupervised approach.
In summary, we propose an extension to unsupervised deep learning-based denoisers that dramatically reduces inference time by estimating a central tendency of the learned denoising distribution in a single evaluation step. Moreover, we show these estimates to be more accurate than those obtained by averaging even up to 1000 samples from the denoising distribution. Figure 1 shows how much shorter inference time is with our proposed approach, and how much higher the quality of results are.
The remainder of the paper is structured as follows. In Section 2, we give a brief overview of related work, concentrating on different approaches to denoising. In Section 3, we provide a formal introduction to the unsupervised VAE-based denoising approach, which is the foundation of our method. In Section 4, we describe the training of the Direct Denoiser. We evaluate our approach in Section 5, showing that we consistently outperform our baseline at a fraction of the computational cost. Finally, in Sections 6 and 7 we discuss our results and give an outlook on the expected impact of our work and future perspectives.
## 2 Related Work
### Supervised denoising
Traditional supervised deep learning-based methods (_e.g_. [30, 28]) rely on paired training data consisting of corresponding noisy and clean images. These methods view denoising as a regression problem, and usually train a UNet [23] or variants of the architecture to learn a mapping from noisy to clean. The most commonly used loss function for this purpose is the sum of pixel-wise quadratic errors (\(L_{2}\) or MSE), which directs the network to predict the MMSE estimate for the noisy input.
The approach's requirement for clean training images greatly limits its applicability, particularly for scientific imaging applications, where often no clean data can be obtained. In 2018, Lehtinen _et al_. [16] had the insight that training of equivalent quality can be achieved by replacing the clean training image with a second noisy image of the same content; a training method termed _Noise2Noise_. In practice, such image pairs can often be acquired by recording two images in quick succession. By using the \(L_{2}\) loss and assuming that the imaging noise is zero-centered, the network is expected to minimize the loss to its noisy training target by converging to the same MMSE estimate as in supervised training.
While Noise2Noise and traditional supervised methods are state-of-the-art with respect to the quality of their results, their requirement for paired training data makes them inapplicable in many situations. In contrast, our method requires only unpaired noisy data, which is available for any denoising task, making it directly applicable in situations where supervised methods are not.
### Self-supervised denoising
Self-supervised methods have been introduced to enable denoising with unpaired noisy data. Here we focus on _blind-spot_ approaches (_e.g_. [12, 2, 17, 22]), which mask individual pixels in the input image and use them as training targets. These methods rely on the assumption that imaging noise is pixel-wise independent given an underlying signal. By effectively forcing the network to predict each pixel value from its surroundings, blind-spot approaches can learn to denoise images without the need for paired noisy-clean data. Like supervised methods, self-supervised denoisers (when used with \(L_{2}\) loss) predict an MMSE estimate for each pixel, albeit based on less information, since the corresponding input pixel cannot be used during prediction. As a result, the quality of the output can be worse than supervised methods. The blind-spot approach has been improved to reintroduce the lost pixel information during inference [21, 15], achieving improved quality in some situa
tions. In [4], Broaddus _et al_. extended the method to allow for the removal of structured noise.
Our method also does not require paired data, but we do not follow the self-supervised blind-spot paradigm. As a consequence, we do not have to address the loss of pixel information.
### Unsupervised VAE-based denoising
Unsupervised VAE-based denoising methods [20] form the backbone of our method. Like in self-supervised methods, training requires only noisy images. However, their training and inference procedures differ greatly from self-supervised approaches. We discuss this class of methods in detail in Section 3.
### Knowledge distillation
Knowledge distillation [9] is the process of training a smaller _student_ network using a large _teacher_ network or an ensemble [5] of teachers. The goal of this approach is to reduce the computational effort required during inference and enable more efficient employment of a powerful model. Surprisingly, the student model can achieve better results compared to being trained on the data directly. A survey of the topic can be found in [7].
The approach of training our Direct Denoiser with the output of another network can be seen as knowledge distillation. However, in our case the Direct Denoiser is not intended as a smaller replacement of the VAE, but as a model with a faster inference procedure.
## 3 Background
### The denoising task
A noisy observation, \(\mathbf{x}\), of a signal, \(\mathbf{s}\), can be thought of as sampled from an observation likelihood, or _noise model_, \(p_{\text{NM}}(\mathbf{x}|\mathbf{s})\). A noise model describes the random, unwanted variation that is added to a signal when it is recorded. The goal of denoising is to estimate the \(\mathbf{s}\) that parameterized the noise model from which a known \(\mathbf{x}\) was sampled.
### Unsupervised denoising
It was Prakash _et al_. [20] who proposed doing so via variational inference, using a VAE [11] to approximate the posterior distribution \(p(\mathbf{s}|\mathbf{x})\). They improved their approach with a more powerful architecture that could also handle mild forms of structured noise in [19]. Salmon and Krull then presented an alternative approach to tackling structured noise in [25], but it unfortunately cannot yet be applied in realistic settings.
To understand how unsupervised denoising works, we must give a brief explanation of the VAE [11]. For a full introduction, see [6].
Given a tractable prior distribution \(p_{\theta}(\mathbf{z})\) and a likelihood \(p_{\theta}(\mathbf{x}|\mathbf{z})\), the marginal distribution \(p_{\theta}(\mathbf{x})\) could be learnt by minimizing the objective
\[-\log p_{\theta}(\mathbf{x})=-\log\int_{\mathbf{z}}p_{\theta}(\mathbf{x}| \mathbf{z})p_{\theta}(\mathbf{z})d\mathbf{z}. \tag{1}\]
However, this integral is often intractable for high dimensional \(\mathbf{x}\). VAEs instead approximate \(p_{\theta}(\mathbf{x})\) by minimizing the following upper bound,
\[-\log p_{\theta}(\mathbf{x})+D_{KL}[q_{\phi}(\mathbf{z}|\mathbf{x})\parallel p_{\theta}(\mathbf{z}|\mathbf{x})]\] \[=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[-\log p_{\theta}(\mathbf{x}|\mathbf{z})]+D_{KL}[q_{\phi}(\mathbf{z}|\mathbf{x})\parallel p_{\theta}(\mathbf{z})], \tag{2}\]
where \(\theta\) and \(\phi\) are learnable parameters and \(D_{KL}\) is the always non-negative Kullback-Leibler (KL) divergence [13]. Here, an approximate posterior \(q_{\phi}(\mathbf{z}|\mathbf{x})\) is introduced and optimized to diverge as little as possible from the true posterior \(p_{\theta}(\mathbf{z}|\mathbf{x})\).
The authors of DivNoising [20], Hierarchical DivNoising (HDN) [19] and AutoNoise [25] adapt the VAE for denoising by incorporating a known explicit noise model into this objective, directing the decoder of the VAE to map the latent variable \(\mathbf{z}\) to estimates of the signal \(\mathbf{s}\),
\[-\log p_{\theta}(\mathbf{x})+D_{KL}[q_{\phi}(\mathbf{z}|\mathbf{x})\parallel p_{\theta}(\mathbf{z}|\mathbf{x})]\] \[=\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[-\log p_{\text{NM}}(\mathbf{x}|\mathbf{s})]+D_{KL}[q_{\phi}(\mathbf{z}|\mathbf{x})\parallel p_{\theta}(\mathbf{z})], \tag{3}\]
where \(\mathbf{s}=g_{\theta}(\mathbf{z})\).
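To make the objective concrete, the following PyTorch-style sketch evaluates Eq. (3) for a single image, assuming a Gaussian noise model \(p_{\text{NM}}(\mathbf{x}|\mathbf{s})=\mathcal{N}(\mathbf{x};\mathbf{s},\sigma^{2})\) as a stand-in for the pre-trained noise model and a standard normal prior; the encoder/decoder interfaces are assumptions, not the published HDN code.

```python
import torch

def denoising_vae_loss(x, encoder, decoder, sigma):
    # q_phi(z|x): the encoder is assumed to return the mean and log-variance
    # of a diagonal Gaussian over the latent variable z.
    mu, logvar = encoder(x)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    s = decoder(z)  # g_theta(z): estimate of the underlying signal
    # -log p_NM(x|s) for a Gaussian noise model with std sigma (constants dropped)
    nll = 0.5 * ((x - s) ** 2 / sigma ** 2).sum()
    # Closed-form KL[q_phi(z|x) || N(0, I)]
    kl = -0.5 * (1.0 + logvar - mu ** 2 - logvar.exp()).sum()
    return nll + kl
```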
### Inference in unsupervised denoising
After minimizing this new denoising objective, the signal underlying a given \(\mathbf{x}\) is estimated by first encoding \(\mathbf{x}\) with \(q_{\phi}(\mathbf{z}|\mathbf{x})\), sampling a \(\mathbf{z}\) and mapping that sample to an estimate of the signal with \(g_{\theta}(\mathbf{z})\). These solutions are samples from an approximation of the posterior \(p(\mathbf{s}|\mathbf{x})\), which we refer to as the _denoising distribution_.
Each sample from the denoising distribution is unique, allowing users to examine the uncertainty involved in their denoising problem. However, a single consensus solution is often preferred. The authors of [20, 19, 25] chose to calculate the per pixel mean of 100 or 1000 samples, deriving the minimum mean square error (MMSE) estimate of the denoising distribution, to get a consensus solution for measuring denoising performance. Taking so many samples requires many forward passes of the denoiser and incurs a potentially prohibitive computational overhead for large datasets.
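This standard inference procedure amounts to a simple, but costly, sampling loop; a sketch, assuming `vae(x)` draws a fresh sample from the denoising distribution on every call:

```python
import torch

@torch.no_grad()
def mmse_estimate(x, vae, n_samples=1000):
    # Average many sampled solutions to approximate the MMSE estimate;
    # each forward pass is a full evaluation of the network.
    samples = torch.stack([vae(x) for _ in range(n_samples)])
    return samples.mean(dim=0)  # .median(dim=0).values would give the MMAE
```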
Our method extends the high quality denoising performance and minimal training requirements of VAE-based denoisers by allowing them to directly and efficiently produce MMAE and MMSE results without repeated sampling.
## 4 Method
When given samples from a probability distribution, we are often interested in what a representative value of those samples is. In the case of unsupervised denoising, we are interested in a representative image from the denoising distribution. A common value to choose for this is the central tendency of the distribution [29], a point which minimizes some measure of deviation from all of the samples.
For samples from a learned denoising distribution, \(p(\hat{\mathbf{s}}|\mathbf{x})\), over possible solutions \(\hat{\mathbf{s}}\) for a noisy input image \(\mathbf{x}\), this would be
\[\hat{\mathbf{s}}^{*}=\arg\min_{\mathbf{y}}\mathbb{E}_{\hat{\mathbf{s}}| \mathbf{x}}[L(\mathbf{y},\hat{\mathbf{s}})], \tag{4}\]
where \(L\) is some per-pixel loss function. If \(L\) is the \(L_{1}\) loss,
\[L(\mathbf{y},\hat{\mathbf{s}})=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{s}_{i}|, \tag{5}\]
then \(\hat{\mathbf{s}}^{*}\) corresponds to the pixel-wise median of the distribution, _i.e_., the MMAE estimate. Here, \(n\) denotes the number of pixels and \(y_{i}\) and \(\hat{s}_{i}\) denote \(i^{\text{th}}\) pixel values. For the \(L_{2}\) loss,
\[L(\mathbf{y},\hat{\mathbf{s}})=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{s}_{i})^{2}, \tag{6}\]
\(\hat{\mathbf{s}}^{*}\) will be the arithmetic mean, _i.e_., the MMSE.
The authors of [20, 19, 25] estimated \(\hat{\mathbf{s}}^{*}\) using a large number of samples from their denoising distribution. We propose instead training a CNN to directly predict a central tendency.
Let \(h_{\eta}\) be our Direct Denoiser with parameters \(\eta\) and \(p(\hat{\mathbf{s}}|\mathbf{x})\) be a denoising distribution. The following objective,
\[\arg\min_{\eta}\mathbb{E}_{\mathbf{x}}[\mathbb{E}_{\hat{\mathbf{s}}|\mathbf{ x}}[L(h_{\eta}(\mathbf{x}),\hat{\mathbf{s}})]], \tag{7}\]
where \(L\) is either the \(L_{1}\) or \(L_{2}\) loss, would train \(h_{\eta}\) to predict either the pixel-wise median or mean of \(p(\hat{\mathbf{s}}|\mathbf{x})\), respectively. After training an unsupervised denoiser according to [20, 19, 25], we could train our Direct Denoiser with Eq. 7 by sampling noisy images \(\mathbf{x}\) from a training set and then running them through the unsupervised denoiser to obtain possible clean solutions \(\hat{\mathbf{s}}\) from the denoising distribution.
We however find that it is possible to train both models simultaneously. Let \(f_{\theta,\phi}\) represent a VAE with the loss function in Equation 3, where \(\hat{\mathbf{s}}\sim f_{\theta,\phi}(\mathbf{x})\) is a sample from the denoising distribution.
A single training step for simultaneously optimizing an unsupervised denoiser and an accompanying Direct Denoiser is as follows:
1. Pass a noisy training image \(\mathbf{x}\) to the unsupervised denoiser and sample a possible solution \(\hat{\mathbf{s}}\).
2. Update the parameters \((\theta,\phi)\) towards minimizing the loss function in Equation 3.
3. Pass the same \(\mathbf{x}\) to the Direct Denoiser, calculating \(h_{\eta}(\mathbf{x})\).
4. Update the parameters \(\eta\) to minimize \(L(h_{\eta}(\mathbf{x}),\hat{\mathbf{s}})\), where \(L\) is the \(L_{1}\) or \(L_{2}\) loss function.
5. Repeat until convergence.
Figure 2: **Training scheme:** We train our novel _Direct Denoiser_ (blue) alongside a Variational AutoEncoder (VAE) [20, 19]. The processing of data is shown with solid arrows and the backward propagation of gradients required for training is shown with dashed arrows. The VAE encoder takes a noisy image as input and predicts the parameters of a distribution in latent space; a sample is drawn from here and mapped to a possible clean image by the decoder network. The reconstruction loss is computed using a pre-trained noise model. Our Direct Denoiser is trained using noisy images as input and the clean image samples (predicted by the VAE) as target. Since individual samples differ for the same input, there is no unique correct solution for this task. As a consequence, by using an \(L_{2}\) loss, the Direct Denoiser will learn to predict the expected value, _i.e_., the MMSE solution. Using an \(L_{1}\) loss leads to predicting the pixel-wise median. We block gradients from passing through the sampled clean image to prevent the VAE from changing its outputs.
A visual representation of this training scheme can be found in Figure 2.
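The steps above translate into a compact co-training loop. The sketch below assumes `vae(x)` returns a sampled solution together with the loss of Equation 3; the essential detail, matching Figure 2, is the `detach()` call that blocks gradients from flowing through the sampled clean image into the VAE.

```python
import torch

def joint_step(x, vae, direct, opt_vae, opt_direct,
               loss_fn=torch.nn.functional.l1_loss):
    # Steps 1-2: sample a solution and update the VAE with Eq. (3).
    s_hat, vae_loss = vae(x)          # sampled clean image + denoising loss
    opt_vae.zero_grad()
    vae_loss.backward()
    opt_vae.step()
    # Steps 3-4: train the Direct Denoiser on the same input.
    target = s_hat.detach()           # block gradients into the VAE
    pred = direct(x)
    d_loss = loss_fn(pred, target)    # L1 -> pixel-wise median, L2 -> mean
    opt_direct.zero_grad()
    d_loss.backward()
    opt_direct.step()
    return vae_loss.item(), d_loss.item()
```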
## 5 Experiments
Our Direct Denoiser was trained alongside HDN [19], using six datasets of intrinsically noisy microscopy images that come with known ground truth signal. Each dataset can be found in [19], as can details of their size, spatial resolution and train, validation and test splits. Note that for the _Struct. Convallaria_ dataset, we adapted HDN into HDN\({}_{3\text{-}6}\), making it capable of handling structured noise.
**Denoising Performance** To evaluate denoising performance, we compare the Peak Signal-to-Noise Ratio (PSNR) of our Direct Denoiser's direct solutions to the PSNR of HDN's consensus solutions. The consensus solutions were produced by averaging samples of size 1, 10, 100 and 1000, reporting both their per-pixel median and mean. The Direct Denoiser's solutions were reported from a network trained with an \(L_{1}\) loss and a network trained with an \(L_{2}\) loss. Results are in Table 1. Visual results from the same experiment can be seen in Figure 3.
**Inference Times** We also compared inference time to denoising performance. In Figure 1, the total time for HDN to generate 1, 10, 100 and 1000 samples for all 100 images in the _Convallaria_ test set was measured, then plotted against the PSNR of the mean of those samples, averaged over all 100 images. On the same plot, the total time for our Direct Denoiser to produce single solutions for each image is plotted against their average PSNR. Each test image consisted of 512\(\times\)512 pixels.
Using our GPU (an NVIDIA GeForce RTX 3090 Ti), generating a single 512\(\times\)512 solution from HDN's denoising distribution takes 0.076 seconds, using 2207MB of the GPU's memory. Our Direct Denoiser takes 0.029 seconds at 1909MB to do the same. Processing one image with either model uses the full capacity of the GPU's parallelism, so we saw no speed improvements by processing more than one image at a time.
If a consensus solution from HDN with a PSNR approaching that of the Direct Denoiser requires sampling 1000 solutions, inference with the proposed method is \(2621\times\) faster.
**Training Times and Memory Usage** Finally, the additional training time incurred by co-training HDN with the Direct Denoiser was examined. The authors of HDN [19] train their network for 200,000 steps for all datasets, using a batch size of 64 and image patch size of 64\(\times\)64. Using our GPU, training HDN alone takes 0.27 seconds per step for 15 hours total, using 13GB of GPU memory. Training both HDN and the Direct Denoiser takes 0.34 seconds per step for 18.9 hours total, using 15GB of GPU memory. Note that smaller virtual batches can be used as in [19] to reduce memory consumption. For the proposed method to yield a net time saving, the inference time saved must exceed these additional 3.9 hours of training. Using our hardware and inference image resolution, time is saved when the inference test set consists of at least 185 images with \(512\times 512\) resolution.
**Network Architecture and Training** The Direct Denoiser used in these experiments was a UNet [23] with approximately 12 million parameters, while the unsupervised denoiser was the same Hierarchical VAE [26] used in [19] with approximately 7 million parameters. We chose to give our UNet more parameters than the Hierarchical VAE to ensure the former had the capacity to learn the full relationship between noisy images and solutions generated by the latter. This may not have been necessary, and training a Direct Denoiser with a lower computational demand would be an interesting topic for future research.
Our UNet had a depth of four, with a residual block [8] consisting of two convolutions followed by a ReLU activation function [1] at each level. Downsampling was performed by convolutions with a stride of two, and upsampling by nearest neighbor interpolation [24] followed by a single convolution with stride one. All convolutions had a kernel size of 3. The number of filters was 32 at the first level and that number doubled at each subsequent level. Skip connections were merged by concatenating the skipped features with the features from the previous level and passing the two through a residual block.
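For concreteness, the building blocks described above can be sketched in PyTorch as follows; this is our reading of the stated design (we interpret the residual block as conv-ReLU-conv-ReLU with an additive skip), not the released code.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with ReLU activations and an additive skip."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        )
        # 1x1 projection when channel counts differ (an assumption).
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return self.body(x) + self.skip(x)

# Downsampling by a stride-2 convolution (32 -> 64 filters at the first level).
down = nn.Conv2d(32, 64, 3, stride=2, padding=1)
# Upsampling by nearest-neighbor interpolation followed by a stride-1 convolution.
up = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                   nn.Conv2d(64, 32, 3, padding=1))
# Skip connections merge by concatenation, then pass through a residual block.
merge = ResidualBlock(32 + 32, 32)
```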
Training followed the same procedure described in [19], with the only difference being that our Direct Denoiser had its own Adamax optimizer [10] with an initial learning rate of 3e-4 that reduced by a factor of 0.5 when validation loss had plateaued for 10 epochs.
## 6 Discussion
Solutions from our Direct Denoiser consistently scored a higher PSNR than consensus solutions of 1000 samples from HDN. Table 1 shows HDN's PSNRs converging towards our direct prediction result with increased sample size. It seems that solutions from our Direct Denoiser are sometimes equivalent to averaging sample sizes orders of magnitude larger than the largest sample size we used in our experiment. Moreover, looking at the inference times reported in Figure 1, the time required to take such a sample size would be impractical for large datasets.
## 7 Conclusions
We have demonstrated that an extension of the unsupervised denoising approach, the Direct Denoiser, can be used to dramatically speed up inference time, while at the
same time improving performance when compared to the standard inference procedure with up to 1000 sampled images. We believe our approach will become the default way of producing central tendencies from unsupervised denoising models, with the increase in speed potentially allowing easy adoption by the community.
While we have evaluated our method only for MSE and MAE loss functions, we believe the approach could also be used with other loss functions such as _Tukey's biweight loss_[3], which might allow us to find regions of high probability density or even the _maximum a posteriori_ estimate.
Recent work in image restoration has suggested the use of more sophisticated perceptual loss functions (see _e.g._[18]). These types of loss functions would likely only be usable in a supervised setting with clean training data and would be unlikely to work with Noise2Noise or self-supervised methods. However, since the training targets sampled by our VAE are essentially clean images, they should be compatible with different types of complex loss functions, opening the door to using perceptual loss with noisy unpaired data.
|
2304.05966 | EdgeDS: Data Spaces enabled Multi-access Edge Computing | The potential of Edge Computing technologies is yet to be exploited for
multi-domain, multi-party data-driven systems. One aspect that needs to be
tackled for the realization of envisioned open edge Ecosystems, is the secure
and trusted exchange of data services among diverse stakeholders. In this work,
we present a novel approach for integrating mechanisms for trustworthy and
sovereign data exchange, into Multi-access Edge Computing (MEC) environments.
To this end, we introduce an architecture that extends the ETSI MEC
Architectural Framework with artifacts from the International Data Spaces
Reference Architecture Model, accompanied by processes that automatically
enrich Edge Computing applications with data space capabilities in an
as-a-service paradigm. To validate our approach, we implement an open-source
prototype solution and we conduct experiments that showcase its functionality
and scalability. To our knowledge, this is one of the first concrete
architectural specifications for enabling data space features in MEC systems. | Ioannis Kalogeropoulos, Maria Eleftheria Vlontzou, Nikos Psaromanolakis, Eleni Zarogianni, Vasileios Theodorou | 2023-04-12T16:41:21Z | http://arxiv.org/abs/2304.05966v1 | # EdgeDS: Data Spaces enabled Multi-access Edge Computing
###### Abstract
The potential of Edge Computing technologies is yet to be exploited for multi-domain, multi-party data-driven systems. One aspect that needs to be tackled for the realization of envisioned open edge Ecosystems, is the secure and trusted exchange of data services among diverse stakeholders. In this work, we present a novel approach for integrating mechanisms for trustworthy and sovereign data exchange, into Multi-access Edge Computing (MEC) environments. To this end, we introduce an architecture that extends the ETSI MEC Architectural Framework with artifacts from the International Data Spaces Reference Architecture Model, accompanied by processes that automatically enrich Edge Computing applications with data space capabilities in an as-a-service paradigm. To validate our approach, we implement an open-source prototype solution and we conduct experiments that showcase its functionality and scalability. To our knowledge, this is one of the first concrete architectural specifications for enabling data space features in MEC systems.
## I Introduction
Edge Computing, i.e., the Cloud Computing paradigm that brings data processing and data storage in close proximity to--or directly on--the "edge" network nodes of data providers and end users, is increasingly gaining traction. The promise of support for next-generation decentralized applications and services, through unprecedented optimizations in delays and bandwidth usage, has already established Edge Computing as a fundamental pillar of the 5G ecosystem, and it is nowadays being recognized, together with Artificial Intelligence, as a key enabler for future 6G networks.
At the same time, applications are becoming more distributed in nature and the value chain of data services is crossing multiple administrative and business domains to offer competitive advantages to data-driven ecosystem stakeholders. The communications and infrastructures domains appear to follow this trend, by opting for more open and interoperable distributed architectures that can minimize CapEx and OpEx costs and catalyze the rollout and evolution of advanced modern services. Open Radio Access Network (Open RAN) and Service Based Architecture (SBA) of the mobile core are characteristic examples of this new model in the telecommunications domain, indicating that the management, automation and optimization of network and application services will shortly not be performed within isolated administrative environments, but rather in a synergistic operation of collaborating systems.
To showcase the benefits of data-driven collaboration across systems and actors, we employ a motivating use-case example from the Autonomous Driving domain, as illustrated in Fig. 1. We envision different scenarios for autonomous, connected vehicles on a highway, which are powered with advanced safety, traffic-routing and other features delivered by the enactment of data service ecosystems. For instance, we consider the 'see-through' case, in which a vehicle intending to overtake a truck is temporarily provided access to the video stream of a camera-equipped car preceding that truck. Similar scenarios are applicable, such as the 'platooning' of vehicles that can drive in a group-like coordinated manner, sharing sensing information, and the exchange of safety or traffic alerts among vehicles on the highway. A particularly interesting aspect among those scenarios is the heterogeneity of the provided manufacturer technologies, the connectivity modes (e.g., cellular vehicle-to-infrastructure (V2I) connectivity via gNodeBs or access through road-side units (RSU) by the highway operator), the data application providers and operators, the various tiers of distributed applications (e.g., client tier - MEC tier - backend tier), as well as the co-existence of legacy, non-connected vehicles on the highway.
In an attempt to encompass such use cases of interaction at the Network Edge, the ETSI Multi-access Edge Computing (MEC) ISG has introduced a comprehensive Architectural Framework[1] that facilitates not only the life-cycle management of applications (i.e., MEC Applications) that are running on virtualization Edge infrastructures, but also the creation of an ecosystem for the exchange of Edge Services among MEC Applications. Despite a clear definition of functional blocks and workflows for the establishment of service exchange mechanisms, this framework is by-design application-agnostic and does not specify in detail the security and authentication primitives that would allow cross-domain, multi-party collaboration between edge actors. Nevertheless, we argue that the establishment of trustworthy, secure and data-resource-aware data exchange between MEC Applications is pivotal to the opening of fragmented edge computing environments and to unleashing the potential of open, decentralized architectures.
Recently, several "Data Space" initiatives have emerged, specifically focused on the definition of standards and procedures to ensure the reliable, trustworthy and sovereign exchange of data services across organizational boundaries, using well-defined, open interfaces. Although the added value of enhanced data security in Edge Computing Industry 4.0 applications is well identified [2], straightforward and practical instructions on how such a merging of paradigms should take place are still missing.
In this work, we take an important step of incorporating Data Spaces features as native services in MEC systems. To this end, we propose an extension of the ETSI MEC architecture, by introducing an "IDS-Connector-as-a-service"
approach, instilled directly into the MEC mechanisms. We present in detail the architecture to support such features, as well as the workflow steps for the data spaces-enabled interaction of MEC Applications. To our knowledge, our work is the first to offer concrete directives towards the realization of data spaces-enabled Edge Environments and in this direction, we also provide a prototype implementation of our approach as open source code, with the intention to foster more research by the community in this area.
The rest of the paper is organized as follows. Section II provides background information on data spaces, MEC and early approaches on the intersection of Edge Computing and Data Spaces. Subsequently, Section III presents the architectural view of our approach detailing its MEC Platform automation mechanisms. Section IV describes our experiments from the application of our implemented prototype on different scenarios and finally, the paper ends with a discussion of our findings and concluding remarks in Section V and Section VI respectively.
## II Background
### _International Data Spaces_
The notion of a _data space_ is not new. Moving beyond traditional database management systems, data spaces offer enhanced capabilities for browsing through catalogs, local storage and indexing, and advanced search and query mechanisms [3]. Expanding on this notion are the data space initiatives that have recently emerged, such as the International Data Spaces Association (IDSA)1. The main objectives of data spaces involve the secure and trusted data exchange among stakeholders, whilst ensuring data sovereignty and monitoring capabilities for the entire data workflow.
Footnote 1: [https://internationaldataspaces.org/](https://internationaldataspaces.org/)
International Data Spaces (IDS) [4] represent a decentralized data sharing architectural concept, in which data physically remain at their source and are only transferred to an interested party once data exchange requests are instantiated. Data sovereignty and trust are established, since each participant is able to attach usage restrictions to their data and track data transactions through continuous monitoring and logging. Additionally, security is ensured through the identity evaluation of each participant by IDS-certified bodies. Furthermore, IDS offers data processing capabilities through certified services, metadata storage, as well as metadata-query functionalities that enable participants to search for the appropriate data sources and request access to the respective data.
Among the core participants in the IDS ecosystem are the _data provider_ and the _data consumer_. The data provider is the entity that provides access to a data source and attaches the respective usage restrictions, while the data consumer is the participant who can search for appropriate data sources and, after accepting the usage policies set by the provider, obtain access to the data.
A central component of the IDS is the IDS Connector, which enables the data exchange between data providers and consumers. Each of them is represented by a connector, which allows the registration of offered data resources, along with the metadata that describe them. Additionally, the connector facilitates the attachment of usage rules to the data on the provider's side and the usage contract negotiation on the consumer's part, which ultimately leads to the bilateral agreement between involved parties, and ensures the enforcement of data access policies.
Security, being a strategic requirement of the IDS [4], is based on the certification and dynamic monitoring of all participants and technical components (e.g. connectors), as well as the Transport Layer Security (TLS)-protected communication between connectors. Providers and consumers should be successfully certified to participate in the IDS along with their certified IDS connectors. If these conditions are met, a unique IDS-ID is generated and a digital certificate (X.509) is issued for the participant-connector combination, thus enabling the identification, authentication and point-to-point encryption for the communication between connectors. The connector can then be registered at the Dynamic Attribute Provisioning Service (DAPS) component and request a Dynamic Attribute Token (DAT) through which the validity of the connector's self-description is certified. The DAT is included in every outgoing communication message of the connectors, thus ensuring the trustfulness of communication partners at any time.
The IDS framework has recently gained traction and has been identified as a data ecosystem enabler across many industry domains, including the Energy, Manufacturing and Health Care data sectors [5], providing a secure, sovereign and trustworthy framework for data sharing. Noteworthy initiatives, such as the Catena-X project [6] and the Mobility Data Space [7], are exploiting IDS components for the development of their data space ecosystems; however these projects are work in progress and not mature enough to be considered for production.
Fig. 1: Motivating Use Case from the Autonomous Driving domain
### _MEC Architectural Framework_
Multi-Access Edge Computing (MEC), as proposed by ETSI [1], is a highly promising framework that paves the way towards satisfying ultra-low-latency requirements, as well as providing a rich computing environment for value-added services closer to end-users. Specifically, MEC enables the implementation of MEC applications as software-only entities, existing on top of a Virtualisation infrastructure, which is located on or close to the network edge. This framework defines a reference architecture comprised of various entities, acting at a system, host and network level, while remaining generic enough to allow for the development of extensions.
The core functionalities of the MEC architecture are realized at the MEC-host level, which contains the MEC platform and a Virtualisation infrastructure that provides computing, storage and network resources for the MEC applications. The MEC Platform provides an environment where MEC applications can discover, advertise, consume and offer MEC services, while also being responsible for handling traffic rules and DNS configuration. Furthermore, the MEC Platform offers its own MEC services, regarding the management of MEC Application services, location information about the registered Applications, Zones and Access Points, etc. Moreover, MEC applications exist as Virtualised applications on top of the Virtualisation infrastructure provided by the MEC host and are able to communicate with other applications towards consuming or providing services.
The process of discovering and utilizing a MEC service by a MEC App is well defined. Specifically, MEC Applications perform _availability queries_ to the MEC Platform and receive a list of all the available MEC services, along with the necessary information required for their consumption. Subsequently, each MEC App obtains the exposed APIs of the desired MEC service and can access the data provided by that service.
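For illustration, such an availability query could look as follows, assuming an ETSI MEC011-style service registry endpoint on the MEC Platform; the base path, query parameter and response fields are illustrative assumptions rather than a normative API.

```python
import requests

MEC_PLATFORM = "http://mec-platform.example:8080"  # assumed platform address

def discover_services(app_instance_id):
    """Availability query: list the MEC services visible to a MEC App."""
    r = requests.get(f"{MEC_PLATFORM}/mec_service_mgmt/v1/services",
                     params={"app_instance_id": app_instance_id})
    r.raise_for_status()
    # Each entry is assumed to carry the info needed to consume the service,
    # e.g., its name and transport endpoint.
    return r.json()
```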
Several works [8] have proposed extensions of the MEC architecture with complementary technologies and architectures (by modifying or adding new entities or reference points), such as Network Functions Virtualization (NFV)[10], Software Defined Networking (SDN), and Cloud-Radio Access Network (C-RAN)[9], thus expanding MEC capabilities for improved traffic management, task offloading and resource orchestration and virtualization across edge nodes, among others.
### _Data Spaces "on the Edge"_
Several studies have explored the idea of incorporating data spaces into edge computing architectures; however, to the best of our knowledge, none has gone so far as to implement them in practice, or even to utilize a specific reference architecture, such as IDS [4]. Trakadas et al. [11] proposed a decentralized hybrid cloud MEC architecture and highlighted key challenges that emerge in hybrid clouds for data-intensive Internet of Things (IoT) applications, such as issues in privacy and security, which, on a conceptual level, are proposed to be tackled by utilizing data spaces to ensure secure and trusted data exchange and provide distributed identity management. Zeiner et al. [12] highlighted the need for data sharing mechanisms between neighboring edge servers and proposed the concept of time-aware data spaces, as a computing unit for collecting and analyzing data, while also ensuring the validity of the data. Sun et al. [13] designed an IoT data sharing privacy-preserving model that is based on the edge computing service, and establishes a virtual data management service, by using a data space layer for the acquisition, query and analysis of data, which are physically distributed across multiple systems.
Although the aforementioned approaches highlight the advantages of incorporating Data Space functionalities on edge platforms, no specific architectural framework or detailed workflow has been proposed to indicate how this could be realized. Furthermore, none of these approaches propose the integration of specific Data Space frameworks, such as the IDS, or reference Edge architectures, such as ETSI MEC. The key contribution of this work is the integration of IDS into the MEC Architecture and the use of the IDS Connector component for the communication and data exchange among MEC Applications or the MEC Platform, by proposing the concept of the IDS Connector-as-a-Service. With this new functionality, MEC hosts can provide a trusted and secure data sharing environment for the MEC Applications and the MEC Platform, where data sovereignty is preserved by enabling the attachment of strictly defined, uniform access restrictions to the data.
## III The EdgeDS Approach
### _Architecture_
In this work, we suggest the integration of Data Spaces concepts with MEC to exploit their secure and trustworthy data sharing mechanisms. Our introduced architecture is depicted in Fig. 2, where extensions to the original ETSI MEC Architecture are denoted as additional components with highlighted (green) background. Specifically, as shown in Fig. 2, each MEC Application (MEC App) can act as a data provider/consumer, in accordance with the IDS role distinction. By introducing the concept of _IDS Connector-as-a-Service_ among the functionalities provided by the MEC Platform, each application is capable of obtaining its own connector--which exists within the MEC Platform as a MEC Platform-offered service instance--and exchanging data with any application that has a registered connector inside or outside the MEC host.
The desired architecture is able to guarantee the secure transfer of data between two applications, respecting IDS-defined constraints, regardless of the type of devices or the network these devices are connected to. Hence, six distinct use cases can be defined, depending on the type of entities that participate in the data exchange scheme, as well as their location relative to a MEC Host. In particular, the proposed architecture supports the consumption of data services between the MEC Platform and a MEC Application or between two MEC Applications, while any of those exchanging parties can belong to the same or different MEC Host(s). Moreover, data exchange between an External App and a MEC Platform or a MEC Application is also supported.
Fig. 2: Extension of ETSI MEC Architectural Framework with Data Spaces Enablement.
The integration of IDS policies inside the MEC architecture paves the way for a straightforward exchange of data between any two of the aforementioned entities. As proposed in both the MEC and IDS architectures, a list (catalog) of the registered data services is made available. Specifically, the former defines a list of the available services provided by each MEC Application, while the latter proposes one or more catalogues of the available data resources within each connector. Keeping in line with both approaches, a list of all the available MEC Services that are not IDS enabled is maintained, along with all the connectors (i.e., connector service instances) present within the MEC Host. This approach supports the concept of Connector-as-a-Service, providing to MEC Applications an interface of the connectors similar to the one of the regular MEC services. Additionally, the information relevant to the data provided through the connectors is accessible within each connector's catalogue(s).
Furthermore, the proposed integration of IDS into the MEC architecture enables the incorporation of specific IDS-certified _Data Apps_ to the MEC Platform. These applications can be regarded as data processing services (Extract-Transform-Load (ETL), Analytics, ML models, etc.), which can be used in the data exchange workflow and are offered either by the Provider's, the Consumer's or third-party IDS Connectors. Data Apps are made available to the MEC Platform by being registered at a specific IDS component (App Store) within the MEC Platform Service Registry. This allows for data interactions between MEC Applications, focusing on the exchange of data processing results. For instance, with respect to the example from the Autonomous Driving domain described in Section I, an Object Detection IDS-certified Data App could be offered by a third party in the MEC Platform, enabling a MEC Application to send certain image frames and another MEC Application to retrieve the detected objects from those images, rather than the images themselves. Apart from offering rich data processing utilities at the Edge, Data Apps also provide an additional means of privacy preservation, since they enable the exchange of data processing output and aggregated data, instead of raw datasets, among MEC Apps.
In the scenario of two MEC Applications exchanging data, we denote as MEC App2 the application that attempts to receive data and as MEC App1 the one that provides the desired data. After the instantiation of both applications is completed and a certified IDS connector is assigned to each of them (more details in Subsection III-B below), the connectors' information is made available through the MEC Platform's relevant API for service discovery and availability. While data services that are not IDS enabled are registered directly on the MEC Platform, MEC App1 registers the data resources it provides on its own connector. In order to consume the desired data services, MEC App2 receives the information of MEC App1's connector. Subsequently, MEC App2 performs a request to its own connector, encapsulating the information of the other connector, along with the identifiers of the targeted resources. Finally, if the rule policies attached to the contract of the requested resource clear the request, the data transfer from the provider to the consumer proceeds as defined by IDS.
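The consumer-side interaction can be sketched as two calls from MEC App2 to its own connector service instance; the endpoint names and parameters below follow the spirit of IDS connector messaging APIs but are illustrative assumptions, not the exact interface of our prototype.

```python
import requests

OWN_CONNECTOR = "http://connector-app2.mec-platform:8080"  # assumed address

def request_resource(provider_connector_url, resource_id, artifact_id):
    # 1) Ask our own connector to fetch the provider's resource metadata.
    meta = requests.post(f"{OWN_CONNECTOR}/api/ids/description",
                         params={"recipient": provider_connector_url,
                                 "elementId": resource_id})
    meta.raise_for_status()
    # 2) Negotiate the usage contract; if the attached rule policies clear
    #    the request, the artifact is transferred and can be downloaded.
    data = requests.post(f"{OWN_CONNECTOR}/api/ids/contract",
                         params={"recipient": provider_connector_url,
                                 "resourceIds": resource_id,
                                 "artifactIds": artifact_id,
                                 "download": "true"},
                         json=meta.json().get("ids:contractOffer", []))
    data.raise_for_status()
    return data.content
```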
Regarding the other use cases, in the scenario of a MEC Application consuming data services from the MEC Platform of its own host, the process is the same as above, since the application is able to discover the Platform's services through the same APIs, while the relevant connectors will be located within the same host. The process to be followed when the two parties of the transfer belong to different MEC Hosts, however, needs to be extended so that both can access the information regarding the relevant connectors. In particular, as proposed by ETSI [1], they should be able to communicate through the Reference Point Mp3, which is reserved for accessing other MEC Hosts.
Towards incorporating the concept of Data Spaces and taking advantage of their features, we adopt the current version of the IDS reference architecture as proposed by the IDSA, because it is generic enough to encompass diverse scenarios. Moreover, the placement of each connector within the MEC Platform is motivated by the need to restrict connectors to a controlled environment of well-defined security and available resources. By offering the Connector-as-a-Service feature, we automatically augment MEC Applications with Data Space capabilities, refraining from imposing on them any data-spaces-related extension requirements.
### _MEC Platform Automation Mechanisms_
While the life-cycle management (LCM) of data-spaces-enabling services, such as IDS connectors, across the Edge-Cloud continuum is quite challenging and toilsome, and also requires continuous interaction with different orchestration and management entities, this work offers automation in the context of composition, deployment, activation, authentication and authorization, as well as LCM of those services. Moreover, it also ensures scalability, high availability, and security of the system, through the usage of a MEC Platform Manager, which encapsulates the relevant automation features.
To this end, as shown in Fig. 3, through the adoption of the _IDS Connector-as-a-Service_ concept, the MEC Platform Manager is capable of composing, deploying, instantiating and managing the MEC Applications and IDS connectors that are running at the Edge, via its _IDS Connector service management component_. For this purpose, we employ a simple high-level extension to the MEC Information Model (IM), by simply denoting through an optional boolean attribute whether a MEC app shall be data-spaces-enabled, and the rest is handled by our system automatically. In detail, the MEC Platform Manager undertakes a) the optimal deployment of MEC Applications and IDS connectors at the appropriate edge nodes, based on their available resources, b) the registration of IDS connectors as a service into the MEC Platform Service Registry, c) the network configuration and composition, in that it connects an IDS connector with a MEC App if the latter is denoted as data-spaces-enabled, and d) the assignment of a certificate to each IDS connector, during its instantiation, drawn from a pool of available certificates that exist within the MEC Platform, in line with the IDS security requirements analyzed in Section II. For the latter functionality, we assume that each MEC Host is equipped with a set of pre-accredited certificates, as an output of relevant actions
performed by the _ID management_ component of the _MEC Platform Manager_, upon its interactions with centralized or distributed authorization entities. Apart from the LCM automation of those services and Applications, the MEC Platform is responsible for handling all necessary actions for the successful and efficient termination of a MEC App and its associated IDS connector. Hence, the termination step also includes the de-registration of the IDS connector service from the MEC Platform Service Registry, and the release of the associated certificate back to the certification pool.
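The automation steps a)-d) above can be summarized by the following self-contained Python sketch; all names and structures are illustrative assumptions and do not reflect the actual MEC Information Model (step a), optimal placement, is abstracted away).

```python
from dataclasses import dataclass, field

@dataclass
class MecAppDescriptor:
    name: str
    data_spaces_enabled: bool = False  # illustrative optional boolean attribute in the IM

@dataclass
class Platform:
    service_registry: list = field(default_factory=list)
    cert_pool: list = field(default_factory=lambda: ["cert-1", "cert-2"])  # pre-accredited certificates

def instantiate(desc: MecAppDescriptor, platform: Platform) -> dict:
    app = {"name": desc.name, "connector": None}
    if desc.data_spaces_enabled:
        connector = {"app": desc.name, "cert": platform.cert_pool.pop()}  # d) assign a certificate
        platform.service_registry.append(connector)                       # b) register the connector service
        app["connector"] = connector                                      # c) connect app and connector
    return app

def terminate(app: dict, platform: Platform) -> None:
    connector = app["connector"]
    if connector is not None:
        platform.service_registry.remove(connector)    # de-register from the service registry
        platform.cert_pool.append(connector["cert"])   # release the certificate back to the pool
```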
## IV Evaluation
Our prototype implementation of the proposed architecture is depicted in Fig. 3 and includes the integration of the IDS architecture, as implemented by Fraunhofer 2, together with a MEC Platform realisation and \(\pi\)-Edge, which was first introduced in [14]; the latter two were implemented from scratch. Specifically, \(\pi\)-Edge is an orchestration and management platform for automating the LCM of Platform as a Service (PaaS) functions at the edge, and its functionalities were used to instantiate and register the MEC Applications and the connectors. Besides the above technologies, Docker was used to containerize each module, and Kubernetes for their deployment. Moreover, \(\pi\)-Edge encompasses an internal MongoDB database, while each IDS connector has its own PostgreSQL. For our experiments, the Edge infrastructure consists of an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz processor and 32 GB RAM. The code used for the implementation is open and can be found at [https://github.com/jkalogero/EdgeDS](https://github.com/jkalogero/EdgeDS).
Footnote 2: [https://github.com/International-Data-Spaces-Association/DataspaceConnector](https://github.com/International-Data-Spaces-Association/DataspaceConnector)
For evaluating the proposed architecture, experiments were conducted on transferring data of different sizes, ranging from 1 MB to 150 MB, and were inspired by use cases similar to the one depicted in Figure 1. For both experiments, we recorded the time needed for the end-to-end process of instantiating and registering the connectors, creating the IDS resources along with the catalogs, rule policies, contracts and all the necessary IDS entities, fetching MEC App1's (provider) information, requesting the desired resources, and finally downloading the data locally within MEC App2. As a baseline scenario for comparison, we have also conducted the data transfer via direct MEC services.
As seen in Fig. 4, the total time for a complete data transaction in the case of IDS-enabled MEC applications was greater compared to the direct MEC application communication scenario. Specifically, the time required for the IDS services preparation step was three times greater than the instantiation time of MEC Applications in the direct MEC App communication scenario.
With regards to the IDS catalogue configuration step, the required time for the IDS-enabled scenario remained stable independently of the data volume, and equal to 1.37 s, whereas in the case of direct MEC App communication, the required time to register the respective MEC service was two orders of magnitude lower, equal to 0.038 s.
Lastly, the data exchange time increased with the volume of the exchanged data resource, and was also two orders of magnitude greater in the case of IDS-enabled MEC Applications, compared to the direct MEC Application data transfer scenario.
In order to conduct a comprehensive analysis of the scalability potential of the proposed architecture, we carried out a series of experiments to evaluate its capacity to manage a substantial volume of traffic, by subjecting the provider to numerous requests for data exchange. By concurrently generating multiple requests for data services from a single data provider, we were able to observe a significant increase in the pod's CPU utilization, as well as in the time required to complete the data exchange.
In order to tackle the aforementioned issue, we utilized the autoscaling feature supported by the Kubernetes system. Upon conducting the same experiments, we observed the gradual provisioning of multiple additional connector pods to effectively handle the significant workload. Experiments with various intensive workloads resulted in successful data transfers with a limited average CPU utilization, while significantly reducing the time required to complete the transfers.

Fig. 3: Experimental Setup.

Fig. 4: Data exchange for various data sizes.
## V Discussion
During our evaluation, we have successfully deployed the integration of our introduced IDS Connector-as-a-Service approach within the MEC Architecture. The completion time of a data exchange process was used as a key performance indicator in our experiments, measuring the management overhead of introducing data space features to the MEC system. Specifically, the instantiation time in the IDS Connector-as-a-Service scenario was found to be greater, as it entails the preparation of the two IDS connectors and their registration to the MEC Platform service registry, on top of the MEC applications deployment. However, this step only takes place once and therefore does not impose any further delay in the data exchange process in the case of already existing IDS-enabled MEC applications.
In the direct MEC App communication, registration of the MEC service corresponds to the time needed for MEC App1 to register the respective data service to the MEC Platform, whereas for IDS-enabled MEC applications, the configuration time of the IDS catalogue involves the time MEC App1 needs to obtain the service information of Connector 2 from the MEC Platform and use the connector to register the offered data resource's metadata and usage restrictions, which is realized through several API calls. Additional delay could also result from the continuous logging the Connector provides as a live monitoring feature.
In the direct MEC App communication case, the data exchange only depends on the time MEC App2 requires to obtain the service information of MEC App1 from the MEC Platform, as well as to request and receive the data directly from MEC App1. However, for the IDS-enabled scenario, this time corresponds to the retrieval of the two Connectors' service information from the MEC Platform by MEC App2 and the time needed to request Connector 2's available data sources, negotiate the contract agreement based on the usage rules attached to the requested data resource and eventually access the data. Further delays could be caused by the fact that downloaded data are locally stored in the Connector's database as a bytestream and are automatically decoded on the API call, as well as by the continuous logging feature of the Connectors, which, as mentioned above, enhances security and data sovereignty according to the IDS protocol.
## VI Conclusion
In this study, we showcased a novel approach for the integration of IDS Connector components into the ETSI MEC Architecture, in order to address emerging needs for secure, trustworthy and sovereign data exchange. The proposed architecture provides the capability of utilizing and composing Dataspace-enabled MEC Applications, without human involvement.
To corroborate the usability of this proposed framework, we implemented a cloud-native prototypical solution of our architecture and performed an experiment comprising a data exchange use case with varying data sizes, and compared the time needed for a complete data exchange cycle, both in the case of IDS-enabled MEC applications and in the direct MEC App communication scenario. Our findings highlight the feasibility and usability of incorporating an IDS Connector-as-a-Service within a MEC Platform scheme and provide evidence for the scalability potential of such frameworks.
## Acknowledgment
This work is part of the European Union's Horizon Europe research and innovation programme under grant agreement No 101057527 (NextGEM) funded by the European Union. Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
|
2302.08114 | Energy decay for wave equations with a potential and a localized damping | We consider the total energy decay together with L^2-bound of the solution
itself of the Cauchy problem for wave equations with a localized damping and a
short-range potential. We treat it in the one dimensional Euclidean space R. We
adopt a simple multiplier method to study them. In this case, it is essential
that the compactness of the support of the initial data is not assumed. Since
this problem is treated in the whole space, the Poincare and Hardy inequalities
are not available as is developed in the exterior domain case. For compensating
such a lack of useful tools, the potential plays an effective role. As an
application, the global existence of small data solution for a semilinear
problem is provided. | Ryo Ikehata, Xiaoyan Li | 2023-02-16T06:35:23Z | http://arxiv.org/abs/2302.08114v1 | # Energy decay for wave equations with
###### Abstract
We consider the total energy decay together with the \(L^{2}\)-bound of the solution itself of the Cauchy problem for wave equations with a localized damping and a short-range potential. We treat it in the one dimensional Euclidean space \(\mathbf{R}\). We adopt a simple multiplier method to study them. In this case, it is essential that the compactness of the support of the initial data is not assumed. Since this problem is treated in the whole space, the Poincare and Hardy inequalities are not available as is developed in the exterior domain case. For compensating such a lack of useful tools, the potential plays an effective role. As an application, the global existence of the small data solution for a semilinear problem is provided.
Footnote 0: 2000 Mathematics Subject Classification. Primary 35L70; Secondary 35L05, 35B33, 35B40.
## 1 Introduction
We consider the Cauchy problem for wave equations with a potential and a localized damping in one dimensional Euclidean space \(\mathbf{R}\)
\[u_{tt}(t,x)-u_{xx}(t,x)+V(x)u(t,x)+a(x)u_{t}(t,x)=0,\quad(t,x)\in(0,\infty) \times\mathbf{R}, \tag{1.1}\]
\[u(0,x)=u_{0}(x),\ \ u_{t}(0,x)=u_{1}(x),\quad x\in\mathbf{R}, \tag{1.2}\]
where the initial data \([u_{0},u_{1}]\) are taken from the usual energy space for the moment
\[u_{0}\in H^{1}(\mathbf{R}),\quad u_{1}\in L^{2}(\mathbf{R}),\]
and we denote
\[u_{t}=\frac{\partial u}{\partial t},\quad u_{tt}=\frac{\partial^{2}u}{\partial t ^{2}},\quad u_{xx}=\frac{\partial^{2}u}{\partial x^{2}}.\]
Throughout this paper, \(\|\cdot\|_{q}\) stands for the usual \(L^{q}(\mathbf{R})\)-norm. For simplicity of notation, in particular, we use \(\|\cdot\|\) instead of \(\|\cdot\|_{2}\). \(H^{1}(\mathbf{R})\)-norm is denoted by \(\|\cdot\|_{H^{1}}\). The total energy \(E_{u}(t)\) to the solution \(u(t,x)\) of (1.1) is defined by
\[E_{u}(t)=\frac{1}{2}(\|u_{t}(t,\cdot)\|^{2}+\|u_{x}(t,\cdot)\|^{2}+\|\sqrt{V( \cdot)}u(t,\cdot)\|^{2}). \tag{1.3}\]
\(f\in\mathrm{BC}(\mathbf{R})\) implies that \(f\) is continuous and bounded in \(\mathbf{R}\), and \(f\in\mathrm{BC}^{1}(\mathbf{R})\) means \(f,f^{\prime}\in\mathrm{BC}(\mathbf{R})\).
We shall impose the following two assumptions on \(a(x)\):
\((\mathbf{A.1})\,a\in\mathrm{BC}(\mathbf{R})\) and \(a(x)\geq 0\),
\((\mathbf{A.2})\,\)there exist constants \(L>0\) and \(\varepsilon_{1}>0\) such that
\[a(x)\geq\varepsilon_{1},\quad|x|\geq L.\]
Additionally, one assumes the following hypotheses on \(V(x)\):
\((\mathbf{V.1})\,V\in\mathrm{BC}^{1}(\mathbf{R})\), \(V(x)>0\) for all \(x\in\mathbf{R}\),
\((\mathbf{V.2})\,V^{\prime}(x)x\leq 0\) for all \(x\in\mathbf{R}\).
**Remark 1.1**: By \((\mathbf{V.2})\), the potential \(V(x)\) is monotone increasing on \(\mathbf{R}^{-}\) and monotone decreasing on \(\mathbf{R}^{+}\).
Under these conditions, it is known that for each \([u_{0},u_{1}]\in H^{1}(\mathbf{R})\times L^{2}(\mathbf{R})\), the problem (1.1)-(1.2) admits a unique weak solution \(u\in\mathrm{C}([0,\infty);H^{1}(\mathbf{R}))\cap\mathrm{C}^{1}([0,\infty);L^{ 2}(\mathbf{R}))\) (cf. [4]).
Let us mention research background on the equation (1.1).
In the exterior domain case, in [17] the author derives the total energy decay estimate \(E_{u}(t)=\mathcal{O}(t^{-1})\) as \(t\to\infty\) for the mixed problem (without potential terms)
\[u_{tt}(t,x)-\Delta u(t,x)+a(x)u_{t}(t,x)=0,\quad(t,x)\in(0,\infty)\times\Omega, \tag{1.4}\]
\[u(0,x)=u_{0}(x),\ \ u_{t}(0,x)=u_{1}(x),\quad x\in\Omega, \tag{1.5}\]
\[u(t,x)=0,\quad x\in\partial\Omega,\quad t>0, \tag{1.6}\]
where \(\Omega=\mathbf{R}^{n}\setminus\bar{\mathcal{O}}\subset\mathbf{R}^{n}\) is a smooth exterior domain. In [17], the damping \(a(x)\) is effective near infinity (like our assumption (\(\mathbf{A.2}\))) and near a part of the trapping boundary of \(\partial\Omega\). Soon after [17], under an additional condition on the weighted initial data, the author in [6] obtained faster decay estimates such as \(E_{u}(t)=\mathcal{O}(t^{-2})\) and \(\|u(t,\cdot)\|=\mathcal{O}(t^{-1/2})\) (\(t\to\infty\)) for the equation (1.4). However, in [6] the obstacle \(\mathcal{O}\) must be star-shaped relative to some point to erase the influence of trapped rays. Since these results rely on the Poincare and/or Hardy inequalities, only the exterior domain case and the higher dimensional case (\(n\geq 2\)) were treated. So, if one considers the Cauchy problem of (1.4) in the Euclidean space \(\mathbf{R}^{n}\), results similar to [6] can be obtained only in the higher dimensional case \(n\geq 3\). In this sense, getting faster energy decay like \(E_{u}(t)=\mathcal{O}(t^{-2})\) seems completely open for the low dimensional case \(n=1,2\) for the equation (1.4). A generalization of [6] without assuming a star-shaped obstacle \(\mathcal{O}\) was deeply studied in [1]. By assuming that the boundary of the obstacle \(\mathcal{O}\) admits no trapped rays of geometric optics, the generalization of [17] and [6] was also discussed in more detail in [2]. Decay and non-decay properties of the total energy for (1.4) were studied with a logarithmic type (time-space dependent) damping \(a(t,x)\) instead of \(a(x)\) in [13]. On the topic of energy decay for wave equations with asymptotically periodic damping, we refer the readers to [10]. Another generalization of [6] for (1.4) was considered on a noncompact Riemannian manifold in [21]. Additionally, in [22] and [16], the authors have considered the Cauchy and exterior mixed problems for the Klein-Gordon type wave equations with localized dissipations. In this case, capturing the behavior of the \(L^{2}\)-norm of the solution as \(t\to\infty\) is much easier, because the corresponding energy functional itself contains the \(L^{2}\)-norm of the solution. It should be mentioned that a similar problem, motivated by the failure of the Hardy inequality in the one dimensional case, is studied in the paper [20]. We use the help of the potential to compensate for such a failure of the Hardy inequality. Diffusion phenomena in abstract form and their applications to wave equations with variable damping coefficients can be found in [19], where non-degenerate and bounded damping coefficients \(a(x)\) are treated; however, no potential terms are considered.
The purpose of this paper is to consider whether faster energy decay estimates for problem (1.1)-(1.2) can be observed or not with the help of the potential \(V(x)\) in the one dimensional case. The problem itself is by no means trivial in the sense that neither the Hardy inequality nor the Poincare inequality is available. Thus, deriving useful estimates of the solution concerning the two quantities
\[\|u(t,\cdot)\|,\quad\int_{0}^{t}\int_{\mathbf{R}}a(x)|u(s,x)|^{2}dxds\]
are both essential parts of the analysis. As a further difficulty, one must absorb the localized \(L^{2}\)-norm of the solution itself into the total energy in the course of the proof, that is, we have to derive the following relation:
\[\int_{|x|\leq L}|u(t,x)|^{2}dx\leq CE_{u}(t) \tag{1.7}\]
with some \(C>0\). In [6], (1.7) can be derived with the help of the Poincare inequality. In our case, this is no longer available, so one borrows the role of the potential to get such bounds. This causes some restrictions on the shape of \(V(x)\). In a sense, the potential \(V(x)\) compensates for the lack of the Poincare and Hardy inequalities. This idea has its origin in the first author's recent paper [7]. Overall, the whole space case seems much more difficult than the exterior domain case, since fewer useful tools are available. It is important how we treat the localized \(L^{2}\)-norm in terms of the potential, because the damping is not effective near the origin.
Furthermore, as is pointed out recently in [8], in the case of \(V(x)=a(x)=0\) for all \(x\in{\bf R}\) (i.e., free wave case), the \(L^{2}\)-norm of the solution itself to problem (1.1)-(1.2) grows to infinity as \(t\to\infty\)
\[\|u(t,\cdot)\|\sim\sqrt{t}\quad(t\to\infty). \tag{1.8}\]
Thus, as one can easily imagine, if non-trivial \(V(x)\) and \(a(x)\) are considered with a quite large \(L\) (a less strongly effective damping) and a rapidly decaying potential \(V(x)\) (a less strongly effective potential), such a singular property (1.8) naturally affects the boundedness of the \(L^{2}\)-norm of the solution itself. This causes some difficulty in deriving the a priori estimates of solutions. So, the one dimensional case is interesting to study under small effects of the damping and the potential.
To state our results, we introduce the weighted function spaces.
Set
\[w(x):=1+V(x)^{-1}.\]
Then, the weighted \(L^{2}\)-space is defined by
\[L^{2}({\bf R},w):=\{u\in L^{2}({\bf R})\,:\,\int_{{\bf R}}|u(x)|^{2}w(x)dx<+\infty\}\]
equipped with its norm
\[\|u\|_{L^{2}({\bf R},w)}:=\left(\int_{{\bf R}}|u(x)|^{2}w(x)dx\right)^{1/2}.\]
Note that \(w^{-1},w\in L^{1}_{loc}({\bf R})\).
Our new result reads as follows.
**Theorem 1.1**: _Suppose_ (**A.1**)_,_ (**A.2**)_,_ (**V.1**) _and_ (**V.2**) _with a fixed constant \(L>0\). Then, there exists a constant \(C^{*}>0\) such that if \(V(0)<(4C^{*})^{-1}\), the weak solution \(u(t,x)\) to problem_ (1.1)-(1.2) _with initial data \([u_{0},u_{1}]\in(H^{1}({\bf R})\cap L^{2}({\bf R},w))\times L^{2}({\bf R},w)\) satisfies_
\[\|u(t,\cdot)\|\leq CI_{0},\quad E_{u}(t)\leq CI_{0}^{2}(1+t)^{-1}\]
_with some constant \(C>0\), where_
\[I_{0}^{\mu}:=\|u_{0}\|_{H^{1}}^{\mu}+\|u_{1}\|^{\mu}+\|\frac{u_{1}+a(\cdot)u_ {0}}{\sqrt{V(\cdot)}}\|^{\mu},\quad(\mu=1\mbox{ or }2).\]
**Remark 1.2**: \(C^{*}>0\) is a constant closely related with the modified Poincare inequality (see Lemma 2.2). For the condition \(V(0)<1/(4C^{*})\), see (2.11) of the text below.
**Remark 1.3**: The obtained decay rates are slower than those obtained in [12] for the one dimensional case. The singularity appearing in (1.8) for free waves may affect the behavior of the quantity \(\|u(t,\cdot)\|\) (as \(t\to\infty\)) of the solution \(u(t,x)\) to problem (1.1)-(1.2).
**Example 1.** Suppose \(\beta>1\). One can construct a function \(V(x)\) satisfying (**V.1**) and (**V.2**). Indeed, take \(V\in{\rm C}^{1}({\bf R})\) satisfying
\[V(x)=\left\{\begin{array}{ll}\frac{2V_{0}}{L^{\beta}}-\frac{V_{0}}{L^{2 \beta}}|x|^{\beta},&|x|\leq L,\\ &\\ V_{0}|x|^{-\beta},&|x|\geq L,\end{array}\right.\]
where \(V_{0}\) is a positive number. Since \(V(0)=\frac{2V_{0}}{L^{\beta}}\), the smallness of \(V(0)\) assumed in Theorem 1.1 can be realized by choosing small \(V_{0}\) for each fixed \(L>0\). This \(V(x)\) is a short-range potential, and in particular, \(\beta=2\) corresponds to the scale invariant case.
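Indeed, both branches give \(V(\pm L)=V_{0}L^{-\beta}\), and the one-sided derivatives match as well: for \(x>0\),

\[\frac{d}{dx}\Big{(}\frac{2V_{0}}{L^{\beta}}-\frac{V_{0}}{L^{2\beta}}x^{\beta}\Big{)}\Big{|}_{x=L}=-\frac{\beta V_{0}}{L^{\beta+1}}=\frac{d}{dx}\big{(}V_{0}x^{-\beta}\big{)}\Big{|}_{x=L},\]

while \(\beta>1\) guarantees \(V^{\prime}(0)=0\), so \(V\in{\rm C}^{1}({\bf R})\); moreover, \(V^{\prime}(x)x\leq 0\) on both branches, so (**V.2**) holds.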
**Example 2.** One can give another example by \(V(x)=V_{0}e^{-\nu x^{2}}\) with \(\nu>0\) and small \(V_{0}\) determined in Theorem 1.1; indeed, \(V^{\prime}(x)x=-2\nu x^{2}V(x)\leq 0\), so that (**V.2**) is satisfied.
**Remark 1.4**: The author in [15] treats the case \(a(x)=\mbox{constant}>0\) in \({\bf R}\), and the potential satisfies \(V(x)\geq k_{0}(1+|x|)^{-\beta}\) with \(0\leq\beta<1\) (\(k_{0}>0\)); that is, the long-range potential case is considered under a compact support condition on the initial data. Then, the exponential decay of the total energy is obtained; there, a strong effect of the potential is exploited. In contrast, we treat a short-range potential together with a localized damping, so their combined effect is less strong.
**Remark 1.5**: Our next project is to study the Cauchy problem of the equation from a similar point of view:
\[u_{tt}-\Delta u+V(x)u+a(x)u_{t}=0,\]
where \(V(x)={\cal O}(|x|^{-\alpha})\) and \(a(x)={\cal O}(|x|^{-\beta})\) as \(|x|\to\infty\) for some \(\alpha>0\), \(\beta>0\). We treat the equation in the one dimensional whole space. This seems still open (cf. [3], [11]).
This paper is organized as follows. In Section 2, we shall prove Theorem 1.1 by relying on a sophisticated multiplier method which was introduced by the first author. An application to semilinear problem of (1.1) will be presented in Section 3.
## 2 Proof of Theorem 1.1
In this section, we prove Theorem 1.1 by dividing the proof into several lemmas.
Firstly, one prepares the following important lemma, which plays an alternative role of the Poincare inequality.
**Lemma 2.1**: _Set \(V_{L}:=\min\{V(L),V(-L)\}>0\). Suppose_ **(V.1)** _and_ **(V.2)**_. Then, it holds that_
\[\int_{|x|\leq L}|u(t,x)|^{2}dx\leq\frac{2}{V_{L}}E_{u}(t),\quad(t\geq 0),\]
_where \(u(t,x)\) is the solution to problem_ (1.1)-(1.2)_._
_Proof of Lemma 2.1._ Because of **(V.1)** and **(V.2)**, once one notices the relation \(V_{L}\leq V(x)\) for \(|x|\leq L\), it is easy to derive the following inequality
\[\int_{|x|\leq L}|u(t,x)|^{2}dx =\int_{|x|\leq L}\frac{1}{V_{L}}V_{L}|u(t,x)|^{2}dx\] \[\leq\frac{2}{V_{L}}\int_{\bf R}\frac{1}{2}V(x)|u(t,x)|^{2}dx\] \[\leq\frac{2}{V_{L}}E_{u}(t), \tag{2.1}\]
where the definition of \(E_{u}(t)\) is used. \(\Box\)
We additionally prepare the Poincare type inequality in the one dimensional whole space. An essential part of its proof is found in [14, Lemma 2.1].
**Lemma 2.2**: _Let \(L>0\) be a constant. Then, there is a constant \(C^{*}>0\) which depends on \(L\) such that_
\[\int_{|x|\leq L}|w(x)|^{2}dx\leq C^{*}\left(\int_{\bf R}|w_{x}(x)|^{2}dx+\int_ {|x|\geq L}|w(x)|^{2}dx\right)\]
_for \(w\in H^{1}({\bf R})\)._
The first part of the proof of Theorem 1.1 follows [17]. To this end, one first considers the smooth initial data case \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\). Then, the corresponding solution \(u(t,x)\) to problem (1.1)-(1.2) becomes sufficiently smooth to guarantee the integration by parts.
We also prepare the following identities.
**Proposition 2.1**: _Let \(u(t,x)\) be a smooth solution to problem (1.1)-(1.2) with smooth initial data \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\). Then, it holds that_
\[E_{u}(t)+\int_{0}^{t}\int_{\bf R}a(x)|u_{s}(s,x)|^{2}dxds=E_{u}(0), \tag{2.2}\]
\[\frac{d}{dt}(u_{t}(t,\cdot),u(t,\cdot))-\|u_{t}(t,\cdot)\|^{2}+\|u_{x}(t,\cdot )\|^{2}+\|\sqrt{V(\cdot)}u(t,\cdot)\|^{2}+\frac{1}{2}\frac{d}{dt}\int_{\bf R}a (x)|u(t,x)|^{2}dx=0. \tag{2.3}\]
Take
\[\phi(x)=\left\{\begin{array}{ll}\varepsilon_{1},&|x|\leq L,\\ \frac{L\varepsilon_{1}}{r},&|x|\geq L,\end{array}\right.\]
where \(r=|x|\). Note that \(\phi(x)\) is Lipschitz continuous in \({\bf R}\). As in [17], multiplying both sides of (1.1) by \(\phi(x)xu_{x}\) and integrating over \({\bf R}\), by the integration by parts, one finds that
\[\frac{d}{dt}\int_{\bf R}u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x))dx+ \frac{1}{2}\int_{\bf R}(\phi(x)+\phi^{\prime}(x)x)|u_{t}(t,x)|^{2}dx\] \[+\frac{1}{2}\int_{\bf R}(\phi(x)+\phi^{\prime}(x)x)|u_{x}(t,x)|^ {2}dx-\frac{1}{2}\int_{\bf R}(V(x)\phi(x)+V(x)\phi^{\prime}(x)x)|u(t,x)|^{2}dx\] \[-\frac{1}{2}\int_{\bf R}V^{\prime}(x)\phi(x)x|u(t,x)|^{2}dx+\int _{\bf R}a(x)u_{t}(t,x)\phi(x)x\cdot u_{x}(t,x)dx=0, \tag{2.4}\]
where \(\phi^{\prime}(x)=\frac{d\phi}{dx}\), and \(V^{\prime}(x)=\frac{dV}{dx}\). Thus, it follows from (2.2), (2.3) and (2.4) that
\[\frac{d}{dt}\left(\int_{\bf R}u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x) )dx+\alpha(u_{t}(t,\cdot),u(t,\cdot))+\frac{\alpha}{2}\int_{\bf R}a(x)|u(t,x)| ^{2}dx+kE_{u}(t)\right)\] \[+\int_{\bf R}(\frac{\phi(x)+x\phi^{\prime}(x)}{2}-\alpha+\frac{ ka(x)}{2}+\frac{ka(x)}{2})|u_{t}(t,x)|^{2}dx\] \[+\int_{\bf R}(\alpha+\frac{\phi(x)+\phi^{\prime}(x)x}{2})|u_{x}(t,x)|^{2}dx+\frac{1}{2}\int_{\bf R}V(x)\left(2\alpha-\phi^{\prime}(x)x\right)|u (t,x)|^{2}dx\] \[=-\int_{\bf R}a(x)u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x))dx+\frac{1}{ 2}\int_{\bf R}\left(V^{\prime}(x)x+V(x)\right)\phi(x)|u(t,x)|^{2}dx, \tag{2.5}\]
where one has just used two identities (2.2) and (2.3) multiplied by positive parameters \(k>0\) and \(\alpha>0\), respectively.
Now, by **(V.2)** and Lemma 2.2, it follows that
\[\int_{\bf R}\left(V^{\prime}(x)x+V(x)\right)\phi(x)|u(t,x)|^{2}dx\] \[\leq\int_{\bf R}V(x)\phi(x)|u(t,x)|^{2}dx\] \[\leq\int_{|x|\leq L}V(x)\varepsilon_{1}|u(t,x)|^{2}dx+\int_{|x| \geq L}\frac{L\varepsilon_{1}}{|x|}V(x)|u(t,x)|^{2}dx\] \[\leq\varepsilon_{1}V(0)C^{*}\left(\int_{|x|\geq L}|u(t,x)|^{2}dx +\int_{\bf R}|u_{x}(t,x)|^{2}dx\right)+\varepsilon_{1}V_{L}^{{}^{\prime}}\int_ {|x|\geq L}|u(t,x)|^{2}dx\] \[=\varepsilon_{1}V(0)C^{*}\int_{\bf R}|u_{x}(t,x)|^{2}dx+\left(V(0 )C^{*}+V_{L}{}^{\prime}\right)\int_{|x|\geq L}\varepsilon_{1}|u(t,x)|^{2}dx, \tag{2.6}\]
where \(V_{L}^{\prime}:=\max\{V(L),V(-L)\}\). By using **(A.2)**, one has
\[\int_{\bf R}(V^{\prime}(x)x+V(x))\,\phi(x)|u(t,x)|^{2}dx\] \[\leq \varepsilon_{1}V(0)C^{*}\int_{\bf R}|u_{x}(t,x)|^{2}dx+\left(V(0)C ^{*}+{V_{L}}^{\prime}\right)\int_{\bf R}a(x)|u(t,x)|^{2}dx. \tag{2.7}\]
Therefore, (2.5) and (2.7) yield
\[\frac{d}{dt}\left(\int_{\bf R}u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x) )dx+\alpha(u_{t}(t,\cdot),u(t,\cdot))+\frac{\alpha}{2}\int_{\bf R}a(x)|u(t,x)|^ {2}dx+kE_{u}(t)\right)\] \[\quad+\int_{\bf R}\left(\frac{\phi(x)+x\phi^{\prime}(x)}{2}- \alpha+\frac{ka(x)}{2}\right)|u_{t}(t,x)|^{2}dx+\frac{k}{2}\int_{\bf R}a(x)|u_ {t}(t,x)|^{2}dx\] \[\quad+\int_{\bf R}(\alpha+\frac{\phi(x)+\phi^{\prime}(x)x}{2}- \frac{C^{*}\varepsilon_{1}V(0)}{2})|u_{x}(t,x)|^{2}dx+\frac{1}{2}\int_{\bf R} V(x)(2\alpha-\phi^{\prime}(x)x)|u(t,x)|^{2}dx\] \[\leq -\int_{\bf R}a(x)u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x))dx+\frac{V(0 )C^{*}+{V_{L}}^{\prime}}{2}\int_{\bf R}a(x)|u(t,x)|^{2}dx. \tag{2.8}\]
Now, according to **(A.2)**, there exists a number \(\alpha>0\) and a small number \(\varepsilon_{2}>0\) such that
\[\frac{\phi(x)+x\phi^{\prime}(x)}{2}-\alpha+\frac{ka(x)}{2}>\varepsilon_{2}>0, \tag{2.9}\]
\[\alpha+\frac{\phi(x)+x\phi^{\prime}(x)}{2}>\varepsilon_{2}>0 \tag{2.10}\]
with some constant \(\varepsilon_{2}\in(0,\frac{\varepsilon_{1}}{2})\) and \(k\geq 2\). In fact, we can choose
\[\alpha=\frac{\varepsilon_{1}}{4},\quad\varepsilon_{2}=\frac{\varepsilon_{1}} {8}.\]
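Indeed, \(\phi(x)+x\phi^{\prime}(x)=\varepsilon_{1}\) for \(|x|<L\) and \(\phi(x)+x\phi^{\prime}(x)=0\) for \(|x|>L\), so that with these choices and \(k\geq 2\),

\[\frac{\phi(x)+x\phi^{\prime}(x)}{2}-\alpha+\frac{ka(x)}{2}\geq\left\{\begin{array}{ll}\frac{\varepsilon_{1}}{2}-\frac{\varepsilon_{1}}{4}=\frac{\varepsilon_{1}}{4},&|x|<L,\\ \frac{k\varepsilon_{1}}{2}-\frac{\varepsilon_{1}}{4}\geq\frac{3\varepsilon_{1}}{4},&|x|>L,\end{array}\right.\]

by \((\mathbf{A.1})\) and \((\mathbf{A.2})\), while \(\alpha+\frac{\phi(x)+x\phi^{\prime}(x)}{2}\geq\frac{\varepsilon_{1}}{4}\) everywhere; hence (2.9) and (2.10) hold with \(\varepsilon_{2}=\frac{\varepsilon_{1}}{8}\).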
Furthermore, choose \(V(0)>0\) small enough to satisfy
\[\varepsilon_{2}=\frac{\varepsilon_{1}}{8}>\frac{C^{*}\varepsilon_{1}V(0)}{2},\]
that is,
\[0<V(0)<\frac{1}{4C^{*}}. \tag{2.11}\]
It is obvious that (2.10) and (2.11) guarantee the positivity of
\[\gamma_{0}:=2(\varepsilon_{2}-\frac{C^{*}\varepsilon_{1}V(0)}{2})>0.\]
On the one hand, one can obtain the following estimate
\[J_{1}(t):=\left|\int_{\bf R}a(x)u_{t}\phi(x)(x\cdot u_{x})dx\right|\ \leq\frac{L \varepsilon_{1}k}{8}\int_{\bf R}a(x)|u_{t}|^{2}dx+\frac{2L\varepsilon_{1}\|a \|_{\infty}}{k}\int_{\bf R}|u_{x}|^{2}dx. \tag{2.12}\]
Indeed,
\[J_{1}(t) \leq L\int_{|x|\leq L}\frac{\sqrt{k}}{2}\sqrt{a(x)}|u_{t}(t,x)| \varepsilon_{1}\cdot\sqrt{a(x)}|u_{x}(t,x)|\frac{2}{\sqrt{k}}dx\] \[\quad+\int_{|x|\geq L}\frac{\sqrt{k}}{2}\sqrt{a(x)}|u_{t}(t,x)| \varepsilon_{1}L\sqrt{a(x)}|u_{x}(t,x)|\frac{2}{\sqrt{k}}dx\] \[=L\varepsilon_{1}\int_{\bf R}\left(\frac{\sqrt{k}}{2}\sqrt{a(x)}| u_{t}(t,x)|\right)\left(\sqrt{a(x)}|u_{x}(t,x)|\frac{2}{\sqrt{k}}\right)dx\] \[\leq\frac{L\varepsilon_{1}}{2}\int_{\bf R}\frac{k}{4}a(x)|u_{t}(t,x)|^{2}dx+\frac{L\varepsilon_{1}}{2}\int_{\bf R}\frac{4}{k}a(x)|u_{x}(t,x)|^{ 2}dx\]
\[\leq\frac{L\varepsilon_{1}k}{8}\int_{\mathbf{R}}a(x)|u_{t}(t,x)|^{2}dx+\frac{2L\varepsilon_{1}\|a\|_{\infty}}{k}\int_{\mathbf{R}}|u_{x}(t,x)|^{2}dx.\]
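Here and in what follows, \(G_{k}(t)\) denotes the quantity differentiated on the left hand side of (2.5) (equivalently, (2.8)), namely

\[G_{k}(t):=\int_{\bf R}u_{t}(t,x)\phi(x)(x\cdot u_{x}(t,x))dx+\alpha(u_{t}(t,\cdot),u(t,\cdot))+\frac{\alpha}{2}\int_{\bf R}a(x)|u(t,x)|^{2}dx+kE_{u}(t). \tag{2.13}\]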
Therefore, we find that there exists a positive constant
\[\eta_{0}:=\min\{\frac{\varepsilon_{1}}{4},\ P_{0},\ 2\alpha\}\]
depending only on \(\varepsilon_{1}\), \(L>0\) and \(\|a\|_{\infty}\) such that
\[\frac{d}{dt}G_{k}(t)+\eta_{0}E_{u}(t)\leq-\frac{L\varepsilon_{1}k}{8}E_{u}^{ \prime}(t)+\frac{V(0)C^{*}+{V_{L}}^{\prime}}{2}\int_{\mathbf{R}}a(x)|u(t,x)|^ {2}dx. \tag{2.14}\]
Integrating (2.14) over \([0,t]\), one has
\[G_{k}(t)+\eta_{0}\int_{0}^{t}E_{u}(s)ds\] \[\leq G_{k}(0)-\frac{L\varepsilon_{1}k}{8}E_{u}(t)+\frac{L\varepsilon _{1}k}{8}E_{u}(0)+\frac{V(0)C^{*}+{V_{L}}^{\prime}}{2}\int_{0}^{t}\int_{ \mathbf{R}}a(x)|u(s,x)|^{2}dxds\] \[\leq G_{k}(0)+\frac{L\varepsilon_{1}k}{8}E_{u}(0)+\frac{V(0)C^{*}+ {V_{L}}^{\prime}}{2}\int_{0}^{t}\int_{\mathbf{R}}a(x)|u(s,x)|^{2}dxds. \tag{2.15}\]
Here, we note that
\[|G_{k}(0)|\leq C(\|u_{0}\|_{H^{1}}^{2}+\|u_{1}\|^{2}) \tag{2.16}\]
with some \(C>0\). Thus, one can arrive at the following lemma.
**Lemma 2.3**: _Let \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\). Then, for the corresponding smooth solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_
\[G_{k}(t)+\eta_{0}\int_{0}^{t}E_{u}(s)ds\leq C(\|u_{0}\|_{H^{1}}^{2}+\|u_{1}\|^{2 }+\int_{0}^{t}\int_{\bf R}a(x)|u(s,x)|^{2}dxds)\quad(t\geq 0),\]
_with some generous constants \(\eta_{0}>0\) and \(C>0\), provided that \(V(0)>0\) and \(k\geq 2\) are chosen small and large, respectively._
In the next part, let us check the positiveness of \(G_{k}(t)\) defined by (2.13). The proof is similar to [6, Lemma 2.3] except for using Lemma 2.1 instead of the Poincare inequality. Indeed, for \(\varepsilon>0\), it follows from Lemma 2.1 and **(A.2)** that
\[-\alpha(u_{t}(t,\cdot),u(t,\cdot)) \leq\frac{\alpha}{2\varepsilon}\|u_{t}(t,\cdot)\|^{2}+\frac{ \alpha\varepsilon}{2}\|u(t,\cdot)\|^{2}\] \[\leq\frac{\alpha}{\varepsilon}E_{u}(t)+\frac{\alpha\varepsilon}{ 2}\left(\frac{1}{\varepsilon_{1}}\int_{|x|\geq L}a(x)|u(t,x)|^{2}dx+\int_{|x| \leq L}|u(t,x)|^{2}dx\right)\] \[\leq\frac{\alpha}{\varepsilon}E_{u}(t)+\frac{\alpha\varepsilon}{ 2}\left(\frac{1}{\varepsilon_{1}}\int_{|x|\geq L}a(x)|u(t,x)|^{2}dx+\frac{2}{V _{L}}E_{u}(t)\right)\] \[\leq\left(\frac{\alpha}{\varepsilon}+\frac{\alpha\varepsilon}{V_ {L}}\right)E_{u}(t)+\frac{\alpha\varepsilon}{2\varepsilon_{1}}\int_{\bf R}a(x )|u(t,x)|^{2}dx. \tag{2.17}\]
Furthermore, from the definition of the function \(\phi(x)\) one sees
\[-\int_{\bf R}u_{t}(t,x)\phi(x)xu_{x}(t,x)dx\] \[\leq \int_{|x|\geq L}|u_{t}(t,x)|\phi(x)|x||u_{x}(t,x)|dx+\int_{|x| \leq L}|u_{t}(t,x)|\phi(x)|x||u_{x}(t,x)|dx\] \[\leq \int_{|x|\geq L}|u_{t}(t,x)|\frac{L\varepsilon_{1}}{|x|}|x||u_{x} (t,x)|dx+\int_{|x|\leq L}|u_{t}(t,x)|\varepsilon_{1}L|u_{x}(t,x)|dx\] \[= L\varepsilon_{1}\int_{|x|\geq L}|u_{t}(t,x)||u_{x}(t,x)|dx+ \varepsilon_{1}L\int_{|x|\leq L}|u_{t}(t,x)||u_{x}(t,x)|dx\] \[\leq L\varepsilon_{1}\int_{\bf R}|u_{t}(t,x)||u_{x}(t,x)|dx\] \[\leq L\varepsilon_{1}E_{u}(t). \tag{2.18}\]
Thus, (2.17) and (2.18) imply
\[-\alpha(u_{t}(t,\cdot),u(t,\cdot))-\int_{\bf R}u_{t}(t,x)\phi(x)xu _{x}(t,x)dx\] \[\leq \left(\frac{\alpha}{\varepsilon}+\frac{\alpha\varepsilon}{V_{L}} +L\varepsilon_{1}\right)E_{u}(t)+\frac{\alpha\varepsilon}{2\varepsilon_{1}} \int_{\bf R}a(x)|u(t,x)|^{2}dx.\]
Finally, taking \(0<\varepsilon<\varepsilon_{1}\) and choosing \(k\geq 2\) large enough such that
\[\frac{\alpha}{\varepsilon}+\frac{\alpha\varepsilon}{V_{L}}+L\varepsilon_{1}<k,\]
one can arrive at the following lemma.
**Lemma 2.4**: _Let \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\). Then, for the corresponding smooth solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_
\[G_{k}(t)\geq 0\quad(t\geq 0)\]
_for large \(k\gg 1\)._
Now, one prepares the crucial result needed to derive the main estimates of Theorem 1.1. The idea is an application of the method recently developed in [7].
**Lemma 2.5**: _Let \([u_{0},u_{1}]\in C^{\infty}_{0}({\bf R})\times C^{\infty}_{0}({\bf R})\). Then, for the corresponding smooth solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_
\[\|u(t,\cdot)\|^{2}+\int_{0}^{t}\int_{\bf R}a(x)|u(s,x)|^{2}dxds\leq C\left(\|u_ {0}\|^{2}+\int_{\bf R}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx\right)\quad(t \geq 0),\]
_where \(C>0\) is a generous constant._
_Proof of Lemma 2.5._ At first, for the solution \(u(t,x)\) to problem (1.1)-(1.2), one sets
\[v(t,x):=\int_{0}^{t}u(s,x)ds.\]
This simple idea comes from [9], which is a modification of the celebrated Morawetz method. It can be seen that \(v(t,x)\) satisfies
\[v_{tt}(t,x)-v_{xx}(t,x)+V(x)v(t,x)+a(x)v_{t}(t,x)=u_{1}(x)+a(x)u_{0}(x),\quad t >0,\quad x\in{\bf R}, \tag{2.19}\]
\[v(0,x)=0,\quad\;v_{t}(0,x)=u_{0}(x),\quad x\in{\bf R}. \tag{2.20}\]
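Indeed, integrating the equation (1.1) over \([0,t]\) with respect to time and noting that \(v_{tt}=u_{t}\), \(v_{t}=u\) and \(v_{xx}(t,x)=\int_{0}^{t}u_{xx}(s,x)ds\), one obtains

\[v_{tt}(t,x)-u_{1}(x)-v_{xx}(t,x)+V(x)v(t,x)+a(x)\big{(}v_{t}(t,x)-u_{0}(x)\big{)}=0,\]

which is exactly (2.19) with the initial conditions (2.20).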
Multiplying both sides of (2.19) by \(v_{t}\) and integrating it over \([0,t]\times{\bf R}\), it follows from (2.20) that
\[\frac{1}{2}\|v_{t}(t,\cdot)\|^{2}+\frac{1}{2}\|v_{x}(t,\cdot)\|^{2}+\frac{1}{ 2}\int_{\bf R}V(x)|v(t,x)|^{2}dx+\int_{0}^{t}\int_{\bf R}a(x)|v_{s}(s,x)|^{2}dxds\]
\[=\frac{1}{2}\|u_{0}\|^{2}+(u_{1}+a(\cdot)u_{0},v(t,\cdot)),\quad t\geq 0. \tag{2.21}\]
Now, let us estimate the final term of the right hand side of (2.21) in order to absorb it into the left hand side. Indeed, by the Schwarz inequality, one has
\[|(u_{1}+a(\cdot)u_{0},v(t,\cdot))| \leq\int_{\bf R}|u_{1}(x)+a(x)u_{0}(x)||v(t,x)|dx\] \[=\int_{\bf R}\frac{|u_{1}(x)+a(x)u_{0}(x)|}{\sqrt{V(x)}}\left( \sqrt{V(x)}||v(t,x)|\right)dx\] \[\leq\left(\int_{\bf R}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx \right)^{1/2}\left(\int_{\bf R}V(x)|v(t,x)|^{2}dx\right)^{1/2}\] \[\leq\int_{\bf R}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx+\frac{ 1}{4}\int_{\bf R}V(x)|v(t,x)|^{2}dx. \tag{2.22}\]
Thus (2.21) and (2.22) imply the desired estimate
\[\frac{1}{2}\|u(t,\cdot)\|^{2}+\frac{1}{2}\|v_{x}(t,\cdot)\|^{2}+\frac{1}{4} \int_{\bf R}V(x)|v(t,x)|^{2}dx+\int_{0}^{t}\int_{\bf R}a(x)|u(s,x)|^{2}dxds\]
\[\leq\frac{1}{2}\|u_{0}\|^{2}+\int_{\bf R}\frac{|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V (x)}dx,\]
because of \(v_{t}=u\). \(\square\)
Lemmas 2.3, 2.4 and 2.5 imply the following decay estimates of the total energy. The proof is standard (cf. [6, Lemma 2.4]).
**Proposition 2.2**: _Let \([u_{0},u_{1}]\in C^{\infty}_{0}({\bf R})\times C^{\infty}_{0}({\bf R})\). Then, for the corresponding smooth solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_
\[\int_{0}^{t}E_{u}(s)ds \leq C\left(\|u_{0}\|_{H^{1}}^{2}+\|u_{1}\|^{2}+\int_{\bf R}\frac {|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx\right)\quad(t\geq 0),\] \[(1+t)E_{u}(t) \leq C\left(\|u_{0}\|_{H^{1}}^{2}+\|u_{1}\|^{2}+\int_{\bf R}\frac {|u_{1}(x)+a(x)u_{0}(x)|^{2}}{V(x)}dx\right)\quad(t\geq 0),\]
_where \(C>0\) is a generous constant._
As a consequence of Proposition 2.2, one can get the local energy decay result.
**Proposition 2.3**: _Let \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\). Then, for the corresponding smooth solution \(u(t,x)\) to problem (1.1)-(1.2), it holds that_
\[(1+t)\int_{|x|\leq L}|u(t,x)|^{2}dx\leq CI_{0}^{2}\quad(t\geq 0),\]
_where \(C>0\) is a generous constant._
_Proof._ Indeed, it follows from Lemma 2.1 and Proposition 2.2 that
\[(1+t)\int_{|x|\leq L}|u(t,x)|^{2}dx\leq\frac{2}{V_{L}}(1+t)E_{u}(t)\leq CI_{0} ^{2}.\]
This implies the desired estimate. \(\Box\)
Finally, let us prove Theorem 1.1 with the above preparations.
_Proof of Theorem 1.1._ Theorem 1.1 is a direct consequence of Proposition 2.2, Lemma 2.5 and density argument (i.e., cut-off technique and the mollifier method). In fact, one can choose a sequence \([\phi_{n},\psi_{n}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\) such that
\[\|\phi_{n}-u_{0}\|_{L^{2}({\bf R},w)}+\|\phi_{n}^{\prime}-u_{0}^{\prime}\| \to 0\quad(n\to\infty),\]
\[\|\psi_{n}-u_{1}\|_{L^{2}({\bf R},w)}\to 0\quad(n\to\infty),\]
where
\[w(x):=1+V(x)^{-1}.\]
Let \(u^{(n)}(t,x)\) be a corresponding smooth solution to problem (1.1)-(1.2) with initial data \(u_{0}:=\phi_{n}\) and \(u_{1}:=\psi_{n}\). It is easy to obtain the following relations between \(u^{(n)}(t,x)\) and \(u(t,x)\)
\[\sup_{t\in[0,\infty)}\left(\|u_{t}^{(n)}(t,\cdot)-u_{t}(t,\cdot)\|+\|u_{x}^{( n)}(t,\cdot)-u_{x}(t,\cdot)\|+\|\sqrt{V(\cdot)}(u^{(n)}(t,\cdot)-u(t,\cdot))\| \right)\to 0\quad(n\to\infty),\]
\[\sup_{t\in[0,T]}\|u^{(n)}(t,\cdot)-u(t,\cdot)\|\to 0\quad(n\to\infty)\]
for each \(T>0\). Then, it follows from Lemma 2.5 and Proposition 2.2 that
\[(1+t)E_{u^{(n)}}(t)\leq C\left(\|\phi_{n}\|_{H^{1}}^{2}+\|\psi_{n}\|^{2}+\int _{\bf R}\frac{|\psi_{n}(x)+a(x)\phi_{n}(x)|^{2}}{V(x)}dx\right),\quad(t\geq 0),\]
\[\|u^{(n)}(t,\cdot)\|^{2}\leq C\left(\|\phi_{n}\|^{2}+\int_{\bf R}\frac{|\psi_ {n}(x)+a(x)\phi_{n}(x)|^{2}}{V(x)}dx\right),\quad(t\geq 0).\]
Letting \(n\to\infty\), one can get the desired estimates. Note that \(a\in L^{\infty}({\bf R})\), so it does not affect the convergence in these norms. \(\Box\)
**Remark 2.1**: Obviously, Proposition 2.3 is also true for the weak solution \(u(t,x)\) with initial data \([u_{0},u_{1}]\in(H^{1}({\bf R})\cap L^{2}({\bf R},w))\times L^{2}({\bf R},w)\).
## 3 An application to semilinear problem
In this section, we consider the Cauchy problem for semilinear wave equation
\[u_{tt}(t,x)-u_{xx}(t,x)+V(x)u(t,x)+a(x)u_{t}(t,x)=|u(t,x)|^{p},\quad(t,x)\in( 0,\infty)\times{\bf R}, \tag{3.1}\]
\[u(0,x)=u_{0}(x),\ \ u_{t}(0,x)=u_{1}(x),\quad x\in{\bf R}. \tag{3.2}\]
Here, to observe the interplay between the potential and the power of the nonlinearity, as a trial, we fix the form of \(V(x)\) as in Example 1. Indeed, let \(V\in C^{1}({\bf R})\) satisfy
\[V(x)=\left\{\begin{array}{ll}V_{0}|x|^{-\beta},&|x|\geq L,\\ \frac{2V_{0}}{L^{\beta}}-\frac{V_{0}}{L^{2\beta}}|x|^{\beta},&|x|\leq L, \end{array}\right.\]
where \(\beta>1\), \(L>0\) as defined in **(A.2)** and \(V_{0}>0\) is small enough to guarantee the decay estimates of Theorem 1.1. Under these assumptions on \(V(x)\), Theorem 1.1 holds true naturally.
In addition, some important assumptions are imposed on \(p>1\) to derive our main results.
* There exists \(R>L\) such that \[\operatorname{supp}u_{0}\cup\operatorname{supp}u_{1}\subset B_{R}:=\{x:|x| \leq R\}.\]
* The exponent \(p\) satisfies \[p>5+2\beta=:p^{*}(\beta).\]
**Remark 3.1**: When \(\beta=2\), the lower bound exponent \(p^{*}(2)\) to get the global existence of small data solution is equal to \(9\). Note that \(\beta=2\) corresponds to the scale invariant case. As \(\beta\to\infty\) (less strong potential), the power \(p\) must be chosen large enough.
**Remark 3.2**: In [18] a semilinear problem (3.1)-(3.2) with \(a(x)=\text{constant}>0\) and a potential \(V(x)\) satisfying \(V(x)\geq\frac{k_{0}}{(1+|x|)^{\lambda}}\) is considered (\(k_{0}>0\)). There, \(\lambda\in[0,\frac{1}{2})\) (long-range potential) and \(p\geq 5\) can be treated for the \(1\)-D case. So, \(p^{*}(\beta)\) seems to be reasonable. Note that formally one sees \(p^{*}(0)=5\).
**Remark 3.3**: For each \(\beta>1\), to check a blowup result in the case of \(p\in(1,p^{*}(\beta)]\) is still completely open.
We prepare useful tools to get the a priori estimates of the solution to the semilinear problem (3.1)-(3.2).
**Lemma 3.1** ([5], Lemma 2.3): _If \(\theta>1\), there exists a constant \(C_{\theta}>0\) depending only on \(\theta\) such that_
\[\int_{0}^{t}(1+t-s)^{-\frac{1}{2}}(1+s)^{-\theta}ds\leq C_{\theta}(1+t)^{- \frac{1}{2}} \tag{3.3}\]
_for all \(t>0\)._
Based on (3.3) and the decay estimates obtained for the linear problem (1.1)-(1.2), we demonstrate the global existence of the small data solution and the decay property for the semilinear wave equation. Our main result reads as follows.
**Theorem 3.1**: _Let \(\beta>1\). Under the assumptions_ **(B.1)**_,_ **(B.2)** _and Theorem 1.1, there exists \(\delta>0\) such that if \([u_{0},u_{1}]\in(H^{1}({\bf R})\cap L^{2}({\bf R},w))\times L^{2}({\bf R},w)\) satisfies \(I_{0}<\delta\), the semilinear problem (3.1)-(3.2) admits a unique global solution \(u\in\mathrm{C}([0,+\infty);H^{1}({\bf R}))\cap\mathrm{C}^{1}([0,+\infty);L^{ 2}({\bf R}))\) satisfying_
\[\|u(t,\cdot)\|\leq CI_{0}, \tag{3.4}\]
\[\|u_{t}(t,\cdot)\|+\|u_{x}(t,\cdot)\|+\|\sqrt{V(\cdot)}u(t,\cdot)\|\leq CI_{0 }(1+t)^{-\frac{1}{2}}, \tag{3.5}\]
_where_
\[I_{0}:=\|u_{0}\|_{H^{1}}+\|u_{1}\|+\|\frac{u_{1}+a(\cdot)u_{0}}{\sqrt{V(\cdot )}}\|.\]
_Proof._ By standard semigroup theory, the semilinear problem (3.1)-(3.2) can be rewritten as
\[U(t)=S(t)U_{0}+\int_{0}^{t}S(t-s)F(s)ds, \tag{3.6}\]
where \(U(t)=[u(t,\cdot),u_{t}(t,\cdot)]^{T}\), \(U(0)=[u_{0},u_{1}]^{T}\), \(F(s)=[0,|u(s,\cdot)|^{p}]^{T}\) and \(S(t)\) denotes the semigroup corresponding to the linear problem.
For convenience, we introduce the following notation
\[\|U(t)\|_{E}=\|u_{t}(t,\cdot)\|+\|u_{x}(t,\cdot)\|+\|\sqrt{V(\cdot)}u(t,\cdot)\|.\]
It follows from the assumption \(p>1\) that there exists a unique mild solution \(u\in\mathrm{C}([0,T);H^{1}(\mathbf{R}))\cap\mathrm{C}^{1}([0,T);L^{2}(\mathbf{R}))\) for some \(T>0\). To show the global existence, it is sufficient to establish a priori estimates for the solution and the energy on the interval of existence.
We proceed with our argument on the basis of [17] (see also [5]).
Adopting Theorem 1.1, one obtains
\[\|U(t)\|_{E}\leq CI_{0}(1+t)^{-\frac{1}{2}}+C\int_{0}^{t}(1+t-s)^{-\frac{1}{2}} \big{(}\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\|+\||u(s,\cdot)|^{p}\|\big{)}. \tag{3.7}\]
Next we restrict our attention to the estimates for \(\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\|\) and \(\||u(s,\cdot)|^{p}\|\).
By continuity, let us assume that there exists \(M>0\) such that
\[\|U(t)\|_{E}\leq MI_{0}(1+t)^{-\frac{1}{2}},\quad t\in[0,T), \tag{3.8}\]
and
\[\|u(t,\cdot)\|\leq MI_{0},\quad t\in[0,T). \tag{3.9}\]
In view of (3.8) and (3.9), one cannot exclude the existence of times \(t^{\prime},t^{\prime\prime}\in(0,T)\) such that
\[\|U(t^{\prime})\|_{E}=MI_{0}(1+t^{\prime})^{-\frac{1}{2}},\quad\|u(t^{\prime \prime},\cdot)\|=MI_{0}.\]
Note that (3.8) and (3.9) are realized if we take large \(M>0\) to satisfy
\[\|U(0)\|_{E}<MI_{0},\quad\|u_{0}\|<MI_{0}.\]
By assumption (**B.1**), one has
\[\mathrm{supp}\ u(s,\cdot)\subset B_{R+s},\qquad s\in[0,T).\]
It follows from the definition of \(V(x)\) and the Gagliardo-Nirenberg inequality that
\[\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\|^{2} \leq V(L)^{-1}\int_{|x|\leq L}|u(s,x)|^{2p}dx+V_{0}^{-1}\int_{|x| \geq L}|x|^{\beta}|u(s,x)|^{2p}dx\] \[\leq 2\big{(}V_{0}^{-1}L^{\beta}+V_{0}^{-1}(R+s)^{\beta}\big{)} \int_{\bf R}|u(s,x)|^{2p}dx\] \[\leq 2V_{0}^{-1}\big{(}L^{\beta}+(R+s)^{\beta}\big{)}\|u(s,\cdot) \|^{2p}_{2p}\] \[\leq CV_{0}^{-1}\big{(}L^{\beta}+(R+s)^{\beta}\big{)}\|u(s,\cdot) \|^{2(1-\theta)p}\|u_{x}(s,\cdot)\|^{2\theta p},\]
where
\[\theta=\frac{p-1}{2p}.\]
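This value of \(\theta\) is dictated by the scaling of the one dimensional Gagliardo-Nirenberg inequality \(\|u\|_{2p}\leq C\|u\|^{1-\theta}\|u_{x}\|^{\theta}\), namely

\[\frac{1}{2p}=\frac{1-\theta}{2}+\theta\Big{(}\frac{1}{2}-1\Big{)}=\frac{1}{2}-\theta,\]

which gives \(\theta=\frac{1}{2}-\frac{1}{2p}=\frac{p-1}{2p}\in(0,1)\).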
Therefore
\[\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\|\leq CV_{0}^{-\frac{1}{2}}(L^{ \frac{\beta}{2}}+(R+s)^{\frac{\beta}{2}})\|u(s,\cdot)\|^{(1-\theta)p}\|u_{x}( s,\cdot)\|^{\theta p}. \tag{3.10}\]
Substituting (3.8) and (3.9) into (3.10) yields
\[\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\| \leq CV_{0}^{-\frac{1}{2}}M^{p}I_{0}^{p}\big{(}L^{\frac{\beta}{2} }+(R+s)^{\frac{\beta}{2}}\big{)}(1+s)^{-\frac{\theta p}{2}}\] \[\leq CV_{0}^{-\frac{1}{2}}M^{p}I_{0}^{p}\big{(}L^{\frac{\beta}{2} }+(R+s)^{\frac{\beta}{2}}\big{)}(1+s)^{-\frac{p-1}{4}}. \tag{3.11}\]
Similarly, according to (3.8), (3.9) and Gagliardo-Nirenberg inequality, one has
\[\||u(s,\cdot)|^{p}\| =\|u(s,\cdot)\|_{2p}^{p}\] \[\leq C\|u(s,\cdot)\|^{(1-\theta)p}\|u_{x}(s,\cdot)\|^{\theta p}\] \[\leq CM^{p}I_{0}^{p}(1+s)^{-\frac{p-1}{4}}. \tag{3.12}\]
Substituting (3.11) and (3.12) into (3.7) leads to
\[\|U(t)\|_{E} \leq CI_{0}(1+t)^{-\frac{1}{2}}+C\int_{0}^{t}(1+t-s)^{-\frac{1}{2 }}\big{(}\|u(s,\cdot)\|_{2p}^{p}+\|\frac{1}{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\| \big{)}ds\] \[\leq CI_{0}(1+t)^{-\frac{1}{2}}+CM^{p}I_{0}^{p}\int_{0}^{t}(1+t- s)^{-\frac{1}{2}}(1+s)^{-\frac{p-1}{4}}ds\] \[\quad+CV_{0}^{-\frac{1}{2}}M^{p}I_{0}^{p}\int_{0}^{t}(1+t-s)^{- \frac{1}{2}}\big{(}(R+s)^{\frac{\beta}{2}}+L^{\frac{\beta}{2}}\big{)}(1+s)^{- \frac{p-1}{4}}ds\] \[\leq CI_{0}(1+t)^{-\frac{1}{2}}+CM^{p}I_{0}^{p}(V_{0}^{-\frac{1} {2}}R^{\frac{\beta}{2}}+1)\int_{0}^{t}(1+t-s)^{-\frac{1}{2}}(1+s)^{-\frac{p-1 }{4}+\frac{\beta}{2}}ds \tag{3.13}\]
for all \(t\in[0,T)\).
Let
\[\gamma:=\frac{p-1}{4}-\frac{\beta}{2}.\]
By assumption (**B.2**), we see \(\gamma>1\); indeed, \(p>5+2\beta\) gives \(\frac{p-1}{4}>1+\frac{\beta}{2}\). Using (3.3), one has
\[\|U(t)\|_{E}\leq \big{(}CI_{0}+CM^{p}I_{0}^{p}(V_{0}^{-\frac{1}{2}}R^{\frac{\beta} {2}}+1)\big{)}(1+t)^{-\frac{1}{2}}\] \[= CI_{0}\big{(}1+M^{p}I_{0}^{p-1}(V_{0}^{-\frac{1}{2}}R^{\frac{ \beta}{2}}+1)\big{)}(1+t)^{-\frac{1}{2}}\] \[= I_{0}Q_{0}(I_{0},M,R,V_{0})(1+t)^{-\frac{1}{2}} \tag{3.14}\]
for all \(t\in[0,T)\), where
\[Q_{0}(I_{0},M,R,V_{0})=C\big{(}1+M^{p}I_{0}^{p-1}(V_{0}^{-\frac{1}{2}}R^{\frac {\beta}{2}}+1)\big{)}.\]
Next, we derive \(L^{2}\)-bound for the local solution to problem (3.1)-(3.2).
In fact, one has from Theorem 1.1 and (3.6) that
\[\|u(t,\cdot)\|\leq CI_{0}+C\int_{0}^{t}\big{(}\|u(s,\cdot)\|_{2p}^{p}+\|\frac{1 }{\sqrt{V(\cdot)}}|u(s,\cdot)|^{p}\|\big{)}ds. \tag{3.15}\]
Substituting (3.11) and (3.12) into (3.15) and proceeding with similar arguments to the estimates for \(\|U(t)\|_{E}\), one has
\[\|u(t,\cdot)\|\leq I_{0}Q_{0}(I_{0},M,R,V_{0})\]
for all \(t\in[0,T)\).
Take \(M>0\) further to satisfy \(M>C\), and \(I_{0}\) small enough such that
\[CM^{p}I_{0}^{p-1}(V_{0}^{-\frac{1}{2}}R^{\frac{\beta}{2}}+1)<M-C. \tag{3.16}\]
By choosing
\[\delta:=\left(\frac{M-C}{CM^{p}(V_{0}^{-\frac{1}{2}}R^{\frac{\beta}{2}}+1)} \right)^{\frac{1}{p-1}},\]
and if \(I_{0}<\delta\), then it follows from (3.16) that
\[Q_{0}(I_{0},M,R,V_{0})<M.\]
Therefore, we see that
\[\|U(t)\|_{E}<MI_{0}(1+t)^{-\frac{1}{2}}\quad\text{in}\quad[0,T), \tag{3.17}\]
\[\|u(t,\cdot)\|<MI_{0}\quad\text{in}\quad[0,T). \tag{3.18}\]
This contradicts the existence of the times \(t^{\prime}\) and \(t^{\prime\prime}\) above, and so (3.17) and (3.18) are true in \([0,T)\). This shows that the local solution can be extended globally in time and that the estimates (3.17) and (3.18) for the solution \(u(t,x)\) hold true for all \(t\geq 0\), which completes the proof. \(\Box\)
_Acknowledgement._ The work of the first author (R. IKEHATA) was supported in part by Grant-in-Aid for Scientific Research (C) 22540193 of JSPS.
|
2301.11115 | Hybrid Protection of Digital FIR Filters | A digital Finite Impulse Response (FIR) filter is a ubiquitous block in
digital signal processing applications and its behavior is determined by its
coefficients. To protect filter coefficients from an adversary, efficient
obfuscation techniques have been proposed, either by hiding them behind decoys
or replacing them by key bits. In this article, we initially introduce a query
attack that can discover the secret key of such obfuscated FIR filters, which
could not be broken by existing prominent attacks. Then, we propose a first of
its kind hybrid technique, including both hardware obfuscation and logic
locking using a point function for the protection of parallel direct and
transposed forms of digital FIR filters. Experimental results show that the
hybrid protection technique can lead to FIR filters with higher security while
maintaining the hardware complexity competitive or superior to those locked by
prominent logic locking methods. It is also shown that the protected multiplier
blocks and FIR filters are resilient to existing attacks. The results on
different forms and realizations of FIR filters show that the parallel direct
form FIR filter has a promising potential for a secure design. | Levent Aksoy, Quang-Linh Nguyen, Felipe Almeida, Jaan Raik, Marie-Lise Flottes, Sophie Dupuis, Samuel Pagliarini | 2023-01-26T14:09:17Z | http://arxiv.org/abs/2301.11115v1 | # Hybrid Protection of Digital FIR Filters
###### Abstract
A digital Finite Impulse Response (FIR) filter is a ubiquitous block in digital signal processing applications and its behavior is determined by its coefficients. To protect filter coefficients from an adversary, efficient obfuscation techniques have been proposed, either by hiding them behind decoys or replacing them by key bits. In this article, we initially introduce a query attack that can discover the secret key of such obfuscated FIR filters, which could not be broken by existing prominent attacks. Then, we propose a first of its kind hybrid technique, including both hardware obfuscation and logic locking using a point function for the protection of parallel direct and transposed forms of digital FIR filters. Experimental results show that the hybrid protection technique can lead to FIR filters with higher security while maintaining the hardware complexity competitive or superior to those locked by prominent logic locking methods. It is also shown that the protected multiplier blocks and FIR filters are resilient to existing attacks. The results on different forms and realizations of FIR filters show that the parallel direct form FIR filter has a promising potential for a secure design.
hardware obfuscation, logic locking, oracle-less and oracle-guided attacks, constant multiplications, FIR filters, direct and transposed forms.
## I Introduction
Due to the increase in the design complexity of Integrated Circuits (ICs) and the rising costs of chip fabrication at advanced technology nodes, the IC supply chain has become heavily specialized and globalized [1]. Design houses have been combining their Intellectual Properties (IPs) with many others purchased from third-parties and resorting to _untrusted_ foundries for fabrication. Although such globalization reduces the overall cost of producing an IC, it leads to serious security threats - especially for IPs - such as piracy, overuse, modification, and reverse engineering [2]. Over the years, IP protection has received a significant amount of interest and efficient methods, including watermarking [3], digital rights management [4], metering [5], and hardware obfuscation [6], have been introduced. Among these techniques, only hardware obfuscation can prevent IP theft, while the others are useful to prove the IP owner and reveal the IP owner's rights during a litigation process. Hardware obfuscation aims to make the design less clear and hard to understand for an adversary, by hiding the design content using structural transformations, locking the design functionality using additional logic with key bits, and exploiting camouflaged gates [6].
Digital filtering is frequently used in Digital Signal Processing (DSP) applications and Finite Impulse Response (FIR) filters are generally preferred due to their stability and linear phase property [7]. Since filter coefficients determine the filter behavior, they are actually an IP and need protection from reverse engineering by an adversary. Although there exist many efficient high-level and behavioral obfuscation methods proposed for protecting IPs [8, 9, 10, 11, 12, 13], digital FIR filters require specialized obfuscation techniques, since they should behave according to their specifications, such as passband and stopband frequencies and ripples [14]. However, there exist only a limited number of techniques proposed to obfuscate DSP circuits and especially, digital filters [15, 16, 17, 18]. The technique of [15] generates the desired filter and also its obfuscated versions, grouped in two categories as meaningful and unmeaningful in terms of filter behavior, using high-level transformations, and combines these realizations using a key-based finite state machine and a reconfigurator. To make the reverse engineering of coefficients harder for an end-user, adding input and output noises was proposed in [16]. Recently, we introduced a hardware obfuscation technique that hides the filter coefficients behind decoys [17, 18]. In [17], decoys can be selected based on their Hamming distance to reduce the hardware complexity or chosen randomly to increase the corruption at the filter output. Since an obfuscated FIR filter may still generate the desired behavior under a wrong key in [17], decoys are selected in such a way that the obfuscated filter presents the desired behavior only when the secret key is provided in [18]. To do so, the lower and upper bounds of each filter coefficient are found and decoys are selected beyond these bounds. In [17, 18], the folded design of an FIR filter is considered as a case study and its Time-Multiplexed Constant Multiplication (TMCM) block is obfuscated at Register-Transfer Level (RTL).
In this article, we initially introduce the query attack, which can discover the original filter coefficients hidden behind decoys [17, 18] or replaced by key bits [9]. Then, we propose a hybrid technique, which includes both hardware obfuscation and logic locking, for the protection of digital FIR filters. To do so, first, we describe a defense technique that obfuscates the multiplier blocks of parallel direct and transposed forms of an FIR filter, i.e., Constant Array Vector Multiplication (CAVM) and Multiple Constant Multiplication (MCM), respectively, using decoys. We also present their hardware-efficient realizations with and without multipliers. Second, we enhance this
obfuscation technique by locking the obfuscated design using a point function to make the protected design resilient to well-known attacks and to thwart the query attack from determining the secret key. The hybrid protection technique works at RTL and can be easily adapted to any application including constant multiplications, such as image and video processing and neural networks. The main contributions of this article are as follows:
* Query attack developed for breaking designs generated by constant obfuscation techniques;
* Secure hybrid technique, consisting of hardware obfuscation and logic locking, developed for the protection of FIR filters with different forms and realizations;
* Comprehensive results on obfuscation and logic locking of FIR filters in terms of hardware complexity, attack resiliency, and filter behavior.
Experimental results clearly show that the proposed hybrid protection technique leads to FIR filter designs with higher security and competitive hardware complexity when compared to previously proposed hardware obfuscation and logic locking methods. As an interesting outcome of this work, we show that the parallel direct form filter has better resiliency properties than other FIR filter forms and realizations.
The remainder of this article is organized as follows: Section II presents background concepts. The query attack is described in Section III and the hybrid protection method is introduced in Section IV. Experimental results are presented in Section V. Further discussions on how other techniques may identify the original filter coefficients are given in Section VI. Finally, Section VII concludes the article.
## II Background
This section initially presents frequently used notations and then gives details on digital FIR filters and multiplierless constant multiplications. Finally, it summarizes related work.
### _Notations_
Table I presents notations of important parameters used in the description of obfuscation and logic locking techniques.
### _Digital FIR Filters_
The FIR filter output \(Y(j)\) is given as \(\sum_{i=0}^{n-1}c_{i}\cdot X(j-i)\), where \(n\) is the filter length, \(c_{i}\) is the \(i^{th}\) filter coefficient, and \(X(j-i)\) is the \(i^{th}\) previous filter input with \(0\leq i\leq n-1\). Fig. 1 shows the parallel and folded realizations of an FIR filter. Note that the filter output is obtained in a single clock cycle in a parallel design, as shown in the direct and transposed forms in Figs. 1(a)-(b). On the other hand, the folded realization leads to a design with the least hardware complexity, since the common operations are re-used. However, it requires \(n\) clock cycles to compute the filter output, as shown in Fig. 1(c).
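As a behavioral illustration, the direct-form relation \(Y(j)=\sum_{i=0}^{n-1}c_{i}\cdot X(j-i)\) can be evaluated in a few lines of Python (a plain software model assuming zero-valued inputs before time 0, not a hardware description):

```python
def fir_direct(coeffs, x):
    """y[j] = sum_i c_i * x[j - i], with x[j - i] taken as 0 for j - i < 0."""
    n = len(coeffs)
    return [sum(coeffs[i] * x[j - i] for i in range(n) if j - i >= 0)
            for j in range(len(x))]

# e.g., a length-3 filter applied to a short input sequence
print(fir_direct([1, 2, 1], [1, 0, 0, 1]))  # [1, 2, 1, 1]
```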
### _Multiplierless Design of Constant Multiplications_
Multiplication of constant(s) by variable(s) is a ubiquitous and crucial operation in many DSP applications. Among others presented in [24], the CAVM, MCM, and TMCM blocks can be used in the design of a filter, as shown in Fig. 1. They are defined as follows (see the sketch after this list):
1. The _CAVM operation_ implements the multiplication of a \(1\times n\) constant array \(C\) by an \(n\times 1\) input vector \(X\), i.e., \(Y=\sum_{i}c_{i}X_{i}\) with \(1\leq i\leq n\).
2. The _MCM operation_ computes the multiplication of a set of \(n\) constants \(C\) by a single variable \(X\), i.e., \(Y_{i}=c_{i}X\) with \(1\leq i\leq n\).
3. The _TMCM operation_ realizes the multiplication of a constant selected from a set of \(n\) constants \(C\) by a single variable \(X\) at a time, i.e., \(Y=c_{i}X\) with \(1\leq i\leq n\).
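A minimal behavioral sketch of these three operations in Python (using plain multiplications, i.e., before any shift-adds optimization) is:

```python
def cavm(c, x):     # Y = sum_i c_i * X_i : one output from n inputs
    return sum(ci * xi for ci, xi in zip(c, x))

def mcm(c, x):      # Y_i = c_i * X : n outputs from one input
    return [ci * x for ci in c]

def tmcm(c, x, i):  # Y = c_i * X : one selected product at a time
    return c[i] * x

c = [57, 81]
assert cavm(c, [2, 3]) == 57 * 2 + 81 * 3
assert mcm(c, 5) == [285, 405]
assert tmcm(c, 5, 1) == 405
```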
Since the constants are determined beforehand, these constant multiplications can be realized using addition, subtraction, and shift operations under the shift-adds architecture. Note that parallel shifts can be implemented virtually for free in hardware using only wires. A straightforward shift-adds design technique, called the Digit-Based Recoding (DBR) [25], can realize constant multiplications in two steps: i) define the constants under a particular number representation, e.g., binary; ii) for the nonzero digits in the representation of constants, shift the input variables according to digit positions and add/subtract the shifted variables with respect to digit values. Furthermore, the number of operations can be reduced by maximizing the sharing of common subexpressions among constant multiplications [20, 21, 22, 26, 27].

Fig. 1: Designs of an FIR filter: (a) parallel direct form; (b) parallel transposed form; (c) folded transposed form, where the counter counts from 0 to \(n-1\).
As a simple example, consider the CAVM, MCM, and TMCM blocks realizing constant multiplications, where \(C\) includes \(57=(111001)_{bin}\) and \(81=(1010001)_{bin}\). These constant multiplications are shown in Fig. 2. Note that the adder/subtractor shown in Fig. 2(h) behaves as an adder or a subtractor when its select input is 0 or 1, respectively. Observe from Figs. 2(b)-(c) and (e)-(f) that the sharing of common subexpressions can lead to a significant reduction under the shift-adds architecture in terms of the number of operations with respect to the DBR method.
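The DBR decomposition of these two constants can be checked with the short sketch below (shifts and adds only; the binary forms are the ones quoted above):

```
def dbr_multiply(c, x):
    # Digit-based recoding under binary: shift x by each nonzero bit
    # position of c and add the shifted terms (shifts are free wires).
    y, pos = 0, 0
    while c:
        if c & 1:
            y += x << pos
        c >>= 1
        pos += 1
    return y

x = 11
assert dbr_multiply(57, x) == 57 * x  # 57 = (111001)_bin -> (x<<5)+(x<<4)+(x<<3)+x
assert dbr_multiply(81, x) == 81 * x  # 81 = (1010001)_bin -> (x<<6)+(x<<4)+x
```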
### _Related Work_
Hardware obfuscation can take place at different stages in the IC design flow, e.g., high-level synthesis [11], RTL [9], gate-level [28], and layout level [29]. In hardware obfuscation, locking the design functionality is a common practice. Fig. 3 presents conventional logic locking applied at gate-level in the IC design flow. Note that after the layout of the locked netlist is shipped to the foundry without revealing the secret key, the locked IC is produced and delivered back to the design house. Then, values of the secret key are stored in a tamper-proof memory and the functional IC is sent to the market.
#### II-D1 Defenses
Earlier logic locking methods have been applied at gate-level. After the introduction of the concept of Random Logic Locking (RLL) using xor/xnor gates in [28], many works focused on different types of key logic, such as and/or, multiplexors (MUXes), and look-up tables, taking into account the hardware complexity of the locked circuit [30]. However, the satisfiability (SAT)-based attack [31] overcame all the defenses existing at that time. To thwart the SAT-based attack and its variants, circuits have been locked using a point function that forces these attacks to explore an exponential number of queries [32, 33, 34, 35, 36]. Moreover, the obfuscation of a locked design is considered in [37].
However, as mentioned in [8], at a higher level in the IC design flow, the selection of critical blocks of the design to be obfuscated gets easier, the exploration of tradeoffs between overhead and attack resiliency becomes more efficient, and the optimization of the obfuscated design is more effective. Recently, high-level and behavioral obfuscation techniques have been presented in [8, 9, 10, 11, 12]. For digital FIR filters, which include a large number of constants, the filter coefficients are obfuscated by replacing their bits with key bits in [9, 11].
We note that our proposed hybrid protection technique works at one level higher than the gate-level, i.e., at RTL, as also shown in Fig. 3.
#### II-D2 Attacks
In logic locking, there are generally two threat models, namely oracle-less (OL) and oracle-guided (OG). In the OL threat model, only the gate-level netlist of the locked circuit is available to an adversary. In the OG threat model, it is assumed that an adversary can also obtain the functional IC programmed with the secret key from the market and use it as an oracle to apply inputs and observe outputs. Hence, in this model, the adversary has both the netlist of the locked circuit and the functional IC.
Under the OL threat model, due to the limited information available to the adversary, patterns in the structure of the locked netlist are studied using statistical analysis, Automated Test Pattern Generation (ATPG), and machine learning [38, 39, 40, 41]. Structural attacks, which identify and remove the logic inserted by a logic locking method, are proposed in [42, 43, 44].
Under the OG threat model, the ATPG-based attack of [45] leverages testing principles, such as justification and sensitization, while finding the secret key. The SAT-based attack [31] iteratively finds Distinguishing Input Patterns (DIPs) that rule out wrong keys and achieves decryption as shown in Algorithm 1. It generates two locked circuits with the same inputs (\(X\)) but two different keys (\(K_{1}\) and \(K_{2}\)), described in a Conjunctive Normal Form (CNF) formula in a SAT problem (line 2). Then, it finds a DIP, which generates different outputs on these circuits, using a SAT solver (line 4) and computes the output based on the found DIP using the oracle (line 5). It adds the Boolean equations including key bits into the SAT problem, which are obtained after inserting the values of these inputs and outputs into these circuits (line 6). This process is iterated until the SAT problem becomes unsatisfiable (line 3), meaning that there exists no DIP to distinguish wrong keys from the secret key. Finally, it determines the secret key as the one found in the last iteration (line 8).
Fig. 2: Realizations of the CAVM (a-c), MCM (d-f), and TMCM (g-h) blocks including constants 57 and 81: (a) using multipliers; (b) the DBR method [19]; (c) the method of [20]; (d) using multipliers; (e) the DBR method [19]; (f) the method of [21]; (g) using a multiplier; (h) the method of [22].
Fig. 3: Conventional logic locking and proposed hybrid protection technique in the IC design flow (adapted from [23]).
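As an illustration of the DIP loop of Algorithm 1, the sketch below emulates the attack on a toy locked circuit; brute-force enumeration over a 2-bit key space stands in for the SAT solver, and the circuit, the key gates, and the secret key are hypothetical:

```
from itertools import product

def f(x):                     # hypothetical original function: (x0 AND x1) OR x2
    return (x[0] & x[1]) | x[2]

K_SECRET = (0, 0)             # known only inside the oracle

def lc(x, k):                 # locked netlist with two XOR key gates
    return ((x[0] ^ k[0]) & x[1]) | (x[2] ^ k[1])

def oracle(x):                # functional IC programmed with the secret key
    return lc(x, K_SECRET)

inputs = list(product((0, 1), repeat=3))
assert all(oracle(x) == f(x) for x in inputs)  # correct under K_SECRET
keys = set(product((0, 1), repeat=2))          # surviving key candidates

while True:
    # A DIP is an input on which two surviving keys produce different outputs.
    dip = next((x for x in inputs if len({lc(x, k) for k in keys}) > 1), None)
    if dip is None:
        break                 # no DIP left (the unsatisfiable case, line 3)
    y = oracle(dip)           # query the oracle (line 5)
    keys = {k for k in keys if lc(dip, k) == y}   # add key constraints (line 6)

print(keys)                   # {(0, 0)}: only the secret key survives
```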
In a similar fashion, the SAT-based attack of [46] eliminates at least 2 DIPs in a single iteration. A Satisfiability Modulo Theory (SMT) solver is used instead of a SAT solver, providing more flexibility while encoding the problem [47, 48]. The so-called approximate attack of [49] aims for approximate functional recovery. The SAT-based attack of [50] achieves sequential deobfuscation using dynamic simplifications of key conditions. The attack of [51] discovers the vulnerabilities of the SAT-resilient logic locking methods of [32, 33]. In [52], a generic framework is developed to attack compound locking techniques. A security diagnosis tool, which can evaluate the structural vulnerability of a design locked by a provably secure logic locking technique, is introduced in [53].
## III The Query Attack
The SAT-based attack [31] presented in Algorithm 1 guarantees that the found values of **all** key bits are equal to those of the secret key. To do so, it may use a large number of queries that are required to eliminate all the wrong keys. On the other hand, our query attack proves that the found value of a **single** key bit is equal to that of the associated one in the secret key. To do so, it uses a small number of queries that make each key bit observable at a primary output. Hence, it slightly increases the SAT problem size when compared to the SAT-based attack. Thus, it can easily cope with circuits including a large number of gates and key bits [54] and logic structures, such as a multiplier and a tree of AND gates [31], which the SAT-based attack generally finds hard to handle. In this section, we initially describe the proposed query attack and then present its results on obfuscated designs.
```
Inputs: Locked circuit \(LC\) and oracle.
Output: Proven values of the secret key \(\mathbf{K}\).
 1: \(Q=find\_queries(LC)\)
 2: \(F=LC(X,K,Y)\)
 3: for \(i:=1\) to \(2p\) do
 4:     \(Y_{i}:=oracle(Q_{i})\)
 5:     \(F:=F\wedge LC(Q_{i},K,Y_{i})\)
 6: \(K:=sat\_assignment_{K}(F)\)
 7: for \(i:=0\) to \(p-1\) do
 8:     if \(unsat[F\wedge\overline{K_{i}}]\) then
 9:         \(\mathbf{K_{i}}=K_{i}\)
10: for \(i:=0\) to \(p-2\) do
11:     for \(j:=i+1\) to \(p-1\) do
12:         if \(undefined(\mathbf{K_{i}})\) and \(undefined(\mathbf{K_{j}})\) then
13:             if \(unsat[F\wedge(K_{i}\neq K_{j})]\) then
14:                 \(\mathbf{K_{i}}=\mathbf{K_{j}}\)
15:             else if \(unsat[F\wedge(K_{i}\neq\overline{K_{j}})]\) then
16:                 \(\mathbf{K_{i}}=\overline{\mathbf{K_{j}}}\)
```
**Algorithm 2** The query attack
### _Description_
Our proposed OG SAT-based query attack is described in Algorithm 2. It initially finds queries using two strategies (line 1). In the first one, an ATPG tool is used to find the test patterns for the stuck-at-fault of each key bit on the locked circuit and the values of the related primary inputs are stored as queries. The aim of this strategy is to find input patterns that can propagate each key bit to a primary output, making it observable. In the second one, queries are obtained randomly. The aim of this strategy is to find input patterns that may make multiple key bits observable at primary outputs. In our experiments, we generate a total of \(2p\) queries, where \(p\) denotes the total number of key bits.
Then, the locked circuit is described in a CNF formula \(\mathbb{F}\) by expressing each gate in its CNF (line 2). For each query (lines 3-5), it is applied to the oracle and the values of primary outputs are obtained (line 4). Then, the related input and output values are assigned to the associated nets in the locked circuit, the constant values of these nets are propagated, and the Boolean equations including key bits are derived in a CNF formula and added into \(\mathbb{F}\) (line 5).
After all the queries are considered, the SAT problem \(\mathbb{F}\) is solved using a SAT solver and the values of key bits are determined (line 6). Note that the locked circuit with the found values of key bits behaves exactly the same as the oracle under the given queries, but not necessarily under all possible input values. Hence, the found key is not guaranteed to be the secret key.
However, the found value of a key bit can be proven correct by using the concept of _proof by contradiction_. To do so, for each key bit (lines 7-9), the complement of its found value is added into \(\mathbb{F}\) and the SAT solver is run. If there exists no solution to \(\mathbb{F}\), i.e., the SAT problem is unsatisfiable, the value of the related key bit in the secret key is proven to be the one in the found solution.
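This proof-by-contradiction step can be illustrated on the same kind of toy circuit (again with enumeration standing in for the SAT solver; the circuit is hypothetical, and the queries are fixed here for reproducibility, whereas the real attack obtains them from ATPG stuck-at patterns and random generation):

```
from itertools import product

def lc(x, k):   # hypothetical locked circuit with two XOR key gates
    return ((x[0] ^ k[0]) & x[1]) | (x[2] ^ k[1])

K_SECRET = (0, 0)
def oracle(x):
    return lc(x, K_SECRET)

queries = [(0, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
observations = [(q, oracle(q)) for q in queries]

def consistent(k):   # stand-in for the SAT check on F
    return all(lc(q, k) == y for q, y in observations)

p = 2
found = next(k for k in product((0, 1), repeat=p) if consistent(k))

# Proof by contradiction (lines 7-9): key bit i is proven if no consistent
# key assignment exists once its found value is complemented.
proven = [not any(consistent(k) for k in product((0, 1), repeat=p)
                  if k[i] != found[i]) for i in range(p)]
print(found, proven)   # (0, 0) [True, True]
```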
As an example, consider the majority circuit in Fig. 4(a) and suppose that it is locked using xor/xnor gates as given in Fig. 4(b). Assume that a query is found as \(x_{1}x_{2}x_{3}=000\) and thus, the value of its output \(y\) is obtained as 0 using the oracle. After propagating these values on the locked circuit, a Boolean equation \(\overline{k_{0}}\lor k_{1}=0\), i.e., \(k_{0}\wedge\overline{k_{1}}\) in CNF, is obtained as shown in Fig. 4(c). In the SAT solution, the key bit values are found as \(k_{1}k_{0}=01\). Note also that these are the proven key values, since a SAT solver guarantees that there exists no solution to the SAT problem \(\mathbb{F}\) when it is extended by either the constraint \(k_{0}=0\), i.e., \(\overline{k_{0}}\) in CNF, or \(k_{1}=1\), i.e., \(k_{1}\) in CNF, due to a conflict with the found Boolean equation, i.e., \(k_{0}\land\overline{k_{1}}\) in CNF.
We note that the query attack is also capable of proving if the value of a key bit, \(k_{i}\), is equal to the value of another key bit, \(k_{j}\), or its opposite (lines 10-16). To do so, we extend the SAT problem with \(k_{i}\neq k_{j}\), i.e., \((k_{i}\lor k_{j})\land(\overline{k_{i}}\lor\overline{k_{j}})\) in CNF, and \(k_{i}\neq\overline{k_{j}}\), i.e., \((k_{i}\lor\overline{k_{j}})\land(\overline{k_{i}}\lor k_{j})\) in CNF, respectively, where \(i\neq j\) and \(0\leq i,j\leq p-1\). We run the SAT solver and check if the SAT problem is unsatisfiable. In this case, relations between two key bits are found independent of their values.
Returning to our majority circuit, consider another locked version of it given in Fig. 4(d). Assume that a query is again found as \(x_{1}x_{2}x_{3}=000\) and hence, the output \(y\) is computed as 0. Thus, after the propagation of input and output values as shown in Fig. 4(e), a Boolean equation \(\overline{k}_{0}\oplus k_{1}=0\), i.e., \((k_{0}\lor k_{1})\land(\overline{k_{0}}\lor\overline{k_{1}})\) in CNF, is found. In the SAT solution, the key bit values are found as \(k_{1}k_{0}=10\). Although the actual values of the key bits could not be proven, it is found that \(k_{0}\) and \(k_{1}\) have opposite values, since the SAT problem becomes unsatisfiable after it is extended with the Boolean equation \(k_{0}\neq\overline{k_{1}}\). Hence, the values of key bits \(k_{1}k_{0}=10\) or \(k_{1}k_{0}=01\) in the locked design lead to the original majority circuit.
### _Results_
First, three FIR filters with a large number of coefficients and a large bit-width of filter input and coefficients were used to demonstrate the performance of the query attack. They were taken from [55]. Table II presents their details, where \(n\) and \(mbw\) are the number and maximum bit-width of coefficients, respectively. The folded realization of these filters was considered. Table II presents details on their TMCM blocks, where _#in_ and _#out_ are respectively their number of inputs and outputs when the bit-width of the input variable, i.e., \(ibw\), is set to 32. These TMCM blocks were obfuscated using decoys under the architecture including MUXes and a multiplier [17], denoted as tmcm-mul, and also obfuscated by replacing constants with key bits [9], denoted as tmcm-crk.
Table III presents the number of key bits \(p\) and the results of the query attack along with the SAT- and ATPG-based attacks taken from [56]. In this table, _time_ denotes the run-time of an attack in seconds and _prv_ stands for the number of key bits whose values are proven by the query attack. Also, _OoT_ indicates that an attack could not find a solution due to the time limit, which was set to 2 days. The attacks were run on a computing cluster including Intel Xeon processing units at 2.4 GHz with 40 cores and 96 GB memory. The query attack was developed in Perl and equipped with the ATPG tool Atalanta [57] and the SAT solver CaDiCaL [58]. It is available at _https://github.com/Centre-for-Hardware-Security/_.
Observe from Table III that the query attack can easily find the secret key of obfuscated designs while it is hard for the well-known attacks to find a solution. The main reason is that the TMCM block includes a multiplier block, where one of its inputs is the 32-bit input variable, and the SAT-based attack is not effective on designs including a multiplier, as mentioned in [31]. However, the query attack needs only a small number of queries, which are sufficient to determine the value of each key bit, and little computational effort.
Second, we generated a total of 112 FIR filters, where \(n\) ranges between 16 and 127 when _mbw_ was set to 12, to find the impact of the number of constants and key bits on the performance of the query attack. Again, the folded design of these filters was considered and _ibw_ was set to 32. The TMCM blocks were obfuscated using \(2^{\lfloor log_{2}n\rfloor+1}\) key bits under the tmcm-mul and tmcm-crk architectures. The SAT-based [31] and query attacks were run on these obfuscated TMCM blocks, where the time limit was set to 2 days. Fig. 5 presents the run-time of these attacks.
Fig. 4: Examples on the query attack: (a) majority circuit; (b)-(c) a locked majority circuit; (d)-(e) another locked majority circuit.
Fig. 5: Run-time of attacks on obfuscated TMCM blocks.
Observe from Fig. 5 that as \(n\) and \(p\) increase, the run-time of the query attack increases slightly. Note that while the query attack can find the secret key of each instance, the SAT-based attack can find a solution on 39 and 43 instances under the tmcm-mul and tmcm-crk architectures, respectively. Observe that the query attack runs faster than the SAT-based attack on these instances. Note that Section V presents more results of the query attack on different multiplier blocks obfuscated and locked by different techniques.
## IV Proposed Hybrid Protection Technique
This section initially presents the obfuscation technique used to hide filter coefficients behind decoys in the CAVM and MCM blocks of parallel direct and transposed forms of FIR filters (cf. Section IV-A and Section IV-B, respectively). Then, it describes the logic locking method using a point function described at RTL (cf. Section IV-C). Finally, it introduces the hybrid protection technique including both of these methods (cf. Section IV-D).
The original constants can be obfuscated using decoys as described in [17]. The motivation behind such obfuscation is that the use of decoys enables us to control the tradeoff between hardware complexity, output corruption, and filter behavior [17, 18] when compared to logic locking. The obfuscation technique using decoys requires two main steps: i) given the number of key bits, determine decoys for each original constant; ii) realize the obfuscated design, where original constants are hidden behind decoys using MUXes and key bits. The selection of decoys for the original constants is done as shown in Algorithm 3. In its _AssignDecoy_ function (line 7), decoy selection can be done based on a given criterion, namely hardware complexity, output corruption, and filter behavior. In these criteria, decoys are chosen to be unique to increase the obfuscation.
```
Inputs: Original constants \(C=\{c_{1},c_{2},\ldots,c_{n}\}\) and \(v\) key bits.
Output: Decoy set \(D\).
 1: \(noi=0\)  \(\triangleright\) Number of iterations
 2: \(nok=0\)  \(\triangleright\) Number of used key bits
 3: \(D=\emptyset\)  \(\triangleright\) Set of \(n\) decoy constant arrays
 4: while \(nok<v\) do
 5:     \(nd=2^{noi}\)  \(\triangleright\) Number of decoys to be assigned
 6:     for \(i=1\) to \(n\) do
 7:         \(D_{i}=AssignDecoy(D_{i},c_{i},nd)\)
 8:         \(nok=nok+1\)
 9:         if \(nok==v\) then
10:             break
11:     \(noi=noi+1\)
```
**Algorithm 3** Selection of decoys for original constants
### _Hardware Obfuscation of the CAVM Block_
Given \(1\times n\) original constant array \(C=[c_{1},c_{2},\ldots,c_{n}]\) and the number of key bits for obfuscation, i.e., \(v\), let \(D\) denote a set of \(n\) decoy constant arrays, i.e., \(D=\{[d_{1}^{1},\ldots,d_{1}^{nd_{1}}],[d_{2}^{1},\ldots,d_{2}^{nd_{2}}],\ldots,[d_{n}^{1},\ldots,d_{n}^{nd_{n}}]\}\), where \(nd_{i}\) is the number of decoy constants selected for the \(i^{th}\) original constant determined based on a given criterion with \(1\leq i\leq n\). Then, the set \(R\), which includes each original constant and its decoys, i.e., \(R_{i}=c_{i}\cup D_{i}=[c_{i},d_{i}^{1},\ldots,d_{i}^{nd_{i}}]\) with \(1\leq i\leq n\), is formed. Let \(r_{i,j}\) denote the \(j^{th}\) constant in \(R_{i}\) with \(1\leq i\leq n\) and \(1\leq j\leq nd_{i}+1\). Thus, the straightforward realization of the obfuscated CAVM block is given in Fig. 6(a). Note that the key bits determined for each constant, i.e., \(kc_{i}\), have the size of \(\lceil log_{2}(nd_{i}+1)\rceil\) with \(1\leq i\leq n\). The secret key, which is formed as the concatenation of these key bits, is determined based on the location of the original constant in the constant array \(R_{i}\).
Note that the size of a multiplier given in Fig. 6(a) is related to the bit-width of the original constant and its decoy(s). Hence, to reduce the hardware complexity of the straightforward design, the size of constants, which are inputs of MUXes, can be decreased. To do so, we implement a CAVM block, where each entry of its constant array \(S\) is an element of each \(R\) array, i.e., \(S=[s_{1},s_{2},\ldots,s_{n}]\) with \(s_{i}\in R_{i}=[c_{i},d_{1}^{1},\ldots,d_{i}^{nd_{i}}]\) and \(1\leq i\leq n\). Then, the original constant and its decoys at inputs of each MUX are computed as \(T_{i}=R_{i}-s_{i}\) with \(1\leq i\leq n\). Fig. 6(b) presents the obfuscated design under the proposed architecture called cavm-mul. Note that the CAVM block realizes \(s_{1}X_{1}+s_{2}X_{2}+\ldots+s_{n}X_{n}\) and is implemented under the shift-adds architecture using the algorithm of [20]. The constants to be in \(S\) are decided based on the hardware complexity of the CAVM block and the size of multipliers. This problem is formulated as a 0-1 Integer Linear Programming (ILP) problem.
To further reduce the hardware complexity of the design in Fig. 6(b), each multiplier with a MUX, which represents a TMCM block, is realized under the shift-adds architecture using the algorithm of [22]. Fig. 6(c) presents the obfuscated design under the proposed architecture called cavm-sa.
Fig. 6: Realizations of the obfuscation of the CAVM block using decoys: (a) straightforward design; (b) cavm-mul; (c) cavm-sa.
Fig. 7: Realizations of the obfuscation of the CAVM block including constants 57 and 81: (a) straightforward design; (b) cavm-mul; (c) cavm-sa.
Returning to our example in Fig. 2 with \(C=[57,81]\) and assuming that the number of key bits is 3, the set \(D\), which includes decoys for each constant, is found as \(D=\{[61,59,56],[80]\}\) based on the hardware complexity criterion. Thus, the set \(R\) is formed as \(R=\{[61,59,56,57],[81,80]\}\). The straightforward realization of the obfuscated CAVM block using decoys is shown in Fig. 7(a), where the secret key is \(\mathbf{K}=k_{2}k_{1}k_{0}=011\). Under the cavm-mul and cavm-sa architectures, the constant array \(S\) is determined as \(S=[56,80]\). Thus, the set \(T\) is formed as \(\{[5,3,0,1],[1,0]\}\), leading to multipliers with smaller sizes when compared to those given in Fig. 7(a). The realization of the obfuscated CAVM design under the cavm-mul architecture is given in Fig. 7(b). Furthermore, Fig. 7(c) presents the shift-adds realization of the TMCM blocks implementing the constant multiplications, including those in the set \(T\), under the cavm-sa architecture.
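The bookkeeping of this example can be reproduced with the following sketch (illustrative only; the decoy values are the ones quoted above rather than recomputed from a hardware-cost criterion):

```
from math import ceil, log2

C = [57, 81]                        # original coefficients
R = [[61, 59, 56, 57], [81, 80]]    # originals hidden among the decoys above
S = [56, 80]                        # constants realized in the shared CAVM block
T = [[r - s for r in Ri] for Ri, s in zip(R, S)]
print(T)                            # [[5, 3, 0, 1], [1, 0]], as in the text

# Key bits per constant and the secret key: position of the original in R_i.
kc = [format(Ri.index(c), '0%db' % ceil(log2(len(Ri)))) for Ri, c in zip(R, C)]
print(kc, kc[1] + kc[0])            # ['11', '0'] -> K = k2 k1 k0 = 011

# Under the correct selections the obfuscated output equals the original one.
X, sel = [3, 4], [3, 0]
y = sum(s * x for s, x in zip(S, X)) + sum(Ti[i] * x
                                           for Ti, i, x in zip(T, sel, X))
assert y == sum(c * x for c, x in zip(C, X))   # 57*3 + 81*4 = 495
```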
In addition to the obfuscation using decoys on the CAVM block, we also implemented the constant obfuscation technique used in the ASSURE tool [9]. Given the number of key bits, constants in the original CAVM block are replaced by key bits under the architecture called cavm-crk.
### _Hardware Obfuscation of the MCM Block_
Similarly, the MCM block can also be obfuscated using decoys. After decoys selected for each original constant are found based on the given criterion, and the set \(R\) is determined, the straightforward realization of the obfuscated design can be obtained as illustrated in Fig. 8(a). Moreover, the size of multipliers can be reduced by determining the set of constants \(S\) from the set \(R\); the set \(T\) is computed accordingly, as described in Section IV-A. The multiplications of constants in the set \(S\) by the variable \(X\), i.e., \(s_{1}X,s_{2}X,\ldots,s_{n}X\), are realized in an MCM block, which is implemented under the shift-adds architecture using the algorithm of [21]. Fig. 8(b) presents the obfuscated design under the proposed architecture called mcm-mul. Furthermore, the multiplierless realization of the design obfuscated under the mcm-mul architecture can be obtained by realizing the TMCM block under the shift-adds architecture using the algorithm of [22]. Fig. 8(c) shows the obfuscated design under the proposed architecture called mcm-sa.
Returning to our example, Fig. 9 presents the straightforward realization of the obfuscated MCM block and its designs under the mcm-mul and mcm-sa architectures.
In addition to the obfuscation using decoys, constants in the original MCM block are replaced by key bits under the architecture called mcm-crk.
### _Logic Locking with a Point Function_
As shown in Sections III and V, the constant obfuscation techniques are vulnerable to the SAT-based attack and its variants. The motivation behind locking the obfuscated design with a point function is to make it resilient to these techniques. In order to increase the number of DIPs to be explored in a SAT-based attack, one can lock primary outputs of a multiplier block using a point function\({}^{1}\) at RTL, as done at gate-level in [32, 33, 34, 23].
Footnote 1: A one-point function is a Boolean function that evaluates to 1 at exactly one input pattern.
Suppose that a Boolean function \(f:\mathbb{B}^{q}\rightarrow\mathbb{B}\) is locked using a one-point function with \(w\) key bits, where \(w\leq q\), leading to a locked Boolean function \(g:\mathbb{B}^{w+q}\rightarrow\mathbb{B}\), and let \(\mathbf{K}\) denote the secret key. Then, \(f(X)=g(X,\mathbf{K})\) under all possible input values. Fig. 10(a) shows the behavior of the locked function \(g\) under each possible key value when \(q=w=3\) and \(k_{2}k_{1}k_{0}=011\) is the secret key. In this figure, \(K^{i}\) stands for the assignment of the value \(i\) in binary to key bits, i.e., \(k_{w-1}\ldots k_{1}k_{0}=(i)_{bin}\) with \(0\leq i\leq 2^{w}-1\). Also, the value of logic 0 (1) under each possible key value denotes that the locked function \(g\) is (not) equal to the original function \(f\). Note that the locked function under the secret key, i.e., \(\mathbf{K}=K^{3}\) highlighted in our example, always generates the same output as the original function for every input pattern. Observe from Fig. 10(a) that each input pattern eliminates at most one wrong key, leading to an exponential number of DIPs to find the secret key, i.e., \(2^{w}\). Moreover, such a one-point function can be relaxed to increase the corruption at a primary output. For example, Fig. 10(b) presents the behavior of the locked function \(g\), where each input pattern can eliminate at most 2 wrong keys. Observe that the exponential number of tries to find the secret key is still valid, i.e., \(2^{w-1}\) in this case. Furthermore, multiple primary outputs can be locked using point functions with different key bits.
Fig. 8: Realizations of the obfuscation of the MCM block using decoys: (a) straightforward design; (b) mcm-mul; (c) mcm-sa.
Fig. 9: Realizations of the obfuscation of the MCM block including constants 57 and 81: (a) straightforward design; (b) mcm-mul; (c) mcm-sa.
Fig. 10: (a) Behavior of a Boolean function locked by one-point function; (b) behavior of a Boolean function locked by relaxed one-point function.
Listing 1 presents the Verilog code snippet, which describes logic locking using the one-point function at RTL. In this code, \(X\) is an array of primary inputs and \(K\) is an equally sized array of key bits. Moreover, the relaxed one-point function can also be described at RTL as shown in the same listing. In its code, \(cv\) stands for the corruption value, which denotes the maximum number of wrong keys that can be distinguished by a single input pattern.
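The behavior described above can be modeled as follows (a minimal Python sketch, not the article's Verilog; the comparator-range reading of \(cv\) and the cancellation structure are our interpretation of Listing 1 and Fig. 10, so treat them as assumptions):

```
K_SECRET = 0b011   # hypothetical 3-bit secret key (q = w = 3, as in Fig. 10(a))

def locked(f, x, k, cv=0):
    # The output of f is flipped whenever the comparator fires for exactly
    # one of (x, k) and (x, K_SECRET); under k = K_SECRET the flips cancel.
    fire = lambda key: key <= x <= key + cv   # range widens with cv
    return f(x) ^ (fire(k) ^ fire(K_SECRET))

f = lambda x: x & 1   # hypothetical original single-output function

# Under the secret key the locked function equals f on every input pattern;
# under a wrong key, each input pattern rules out at most cv + 1 wrong keys.
assert all(locked(f, x, K_SECRET) == f(x) for x in range(8))
print([x for x in range(8) if locked(f, x, 0b000) != f(x)])   # [0, 3]
```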
We note that since the point function is described at RTL, the synthesis tool shapes its circuit based on the given synthesis parameters. Thus, its realization does not have a regular structure like logic locking methods of [33, 34, 36].
### _Combination of Obfuscation and Logic Locking_
The hybrid protection technique includes obfuscation and logic locking using a point function described in previous subsections. Initially, the obfuscation technique using decoys is applied and then, the obfuscated design is locked using a point function. Fig. 11 illustrates the hybrid protection technique, where \(X\) and \(Y\) denote the inputs and outputs of the original design and \(K^{obf}\) and \(K^{ll}\) stand for the key bits used for obfuscation and locking, respectively. Additionally, to thwart the structural attacks, the key bits used for logic locking, \(K^{ll}\), are hidden among the key bits used for obfuscation, \(K^{obf}\), using xor/xnor gates. In this scheme, an xor (xnor) gate, which has \(k_{i}^{ll}\) and \(k_{j}^{obf}\) as inputs, is generated if the value of \(k_{j}^{obf}\) in the secret key is equal to logic 0 (1) value, where \(0\leq i\leq w-1\) and \(0\leq j\leq v-1\). Then, the output of this gate is connected to the net, which would be driven by \(k_{i}^{ll}\). Moreover, to thwart the query attack, each \(k_{i}^{obf}\) is hidden among another \(k_{j}^{obf}\) using an xor/xnor gate, where \(i\neq j\) and \(0\leq i,j\leq v-1\). By doing so, each key bit is observed with other key bits at a primary output, making it harder for the query attack to prove the value of the related key bit.
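The xor/xnor hiding of the locking key bits can be modeled with the sketch below (illustrative; the \(i\)-to-\(j\) pairing of key bits is hypothetical):

```
def hidden_net(k_ll, k_obf, k_obf_secret):
    # The gate type is fixed at design time by the secret obfuscation bit:
    # XOR if that bit is 0, XNOR if it is 1; the gate output drives the net
    # that would otherwise be driven directly by the locking key bit.
    if k_obf_secret == 0:
        return k_ll ^ k_obf            # XOR gate
    return (k_ll ^ k_obf) ^ 1          # XNOR gate

# Under the correct obfuscation key the locking bit passes through unchanged;
# under a wrong one it is inverted, structurally coupling the two key sets.
for k_ll in (0, 1):
    for secret in (0, 1):
        assert hidden_net(k_ll, secret, secret) == k_ll
        assert hidden_net(k_ll, 1 - secret, secret) == 1 - k_ll
```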
We developed a Computer-Aided Design (CAD) tool to automate the design and verification process for the obfuscation and locking of the CAVM, MCM, and TMCM blocks and the parallel direct and transposed form and folded FIR filters. The CAD tool takes the filter coefficients, the number of key bits, the design architecture, and other design parameters as inputs and generates the description of the obfuscated design in Verilog, the testbench for verification, and synthesis and simulation scripts. Note that designs are described in a behavioral fashion at RTL.
## V Experimental Results
In this section, we introduce the gate-level synthesis results of multiplier blocks protected by the proposed hybrid technique, obfuscated by previously proposed techniques, and locked by prominent logic locking methods. We also provide the results of well-known logic locking attacks and the proposed query attack on these designs. Furthermore, we present the results of obfuscated and locked FIR filters and also, introduce the results of prominent attacks on these designs. Finally, we explore the impact of parameters used in the point function on the hardware complexity and resiliency to the SAT-based attack and present gate-level synthesis results of the obfuscated and locked CAVM block of the direct form filter, which has promising security properties.
### _Results of Obfuscated and Locked Multiplier Blocks_
Based on our experimental observations, similar outcomes have been observed on FIR filters with a different number of coefficients and different bit-widths of filter input and coefficients when comparing design architectures, obfuscation and locking techniques, and attacks. Hence, in this experiment, a single FIR filter with a small number of coefficients and a small bit-width of filter input and coefficients was used to reveal the effectiveness of obfuscation and locking techniques.
Fig. 11: Proposed hybrid protection technique.
Table IV presents the details of this filter taken from [55], where _#in_ and _#out_ are respectively the number of inputs and outputs of the multiplier blocks when \(ibw\) is 16.
Table V presents the synthesis results of the CAVM, MCM, and TMCM blocks of the FIR filter obfuscated by previously proposed methods [9, 17] and protected by the proposed hybrid technique. Note that the tmcm-sa architecture denotes the TMCM block obfuscated using decoys under the shift-adds architecture. Logic synthesis was performed by Cadence Genus using a commercial 65 nm cell library with the aim of area optimization. For this aim, a very high virtual clock period value, i.e., 80 ns, was used. The encrypted designs were validated by simulation using 10,000 randomly generated inputs, where the switching activity data of each node in the design were collected and stored in a Switching Activity Interchange Format (SAIF) file, which is later used by the synthesis tool while computing the power dissipation. For obfuscation, 32 key bits were used. There were 16 key bits for locking using the one-point function. Thus, a total of 48 key bits were used in designs protected by the hybrid technique. In this table, _area_, _delay_, and _power_ stand for the total area in \(\mu m^{2}\), delay in the critical path in \(ps\), and total power dissipation in \(\mu W\), respectively. This table also presents the results of OG attacks, namely the SAT- and ATPG-based attacks, the approximate AppSAT attack taken from [56], and the DoubleDIP attack taken from [59], as well as the OL SCOPE attack taken from [60]. For the SCOPE attack, _cdk_ and _dk_ denote the number of correctly deciphered key bits and the number of deciphered key bits, respectively. The time limit given to the attacks was 2 days. In this table, designs whose secret key has not been discovered by the given attacks are highlighted.
#### V-A1 Comments on Hardware Complexity
Observe from Table V that the hybrid protection technique increases the hardware complexity when compared to the obfuscation techniques simply due to the inclusion of the point function and logic for the obfuscation of key bits. Note that the increase of area in the CAVM, MCM, and TMCM blocks reaches up to 1.7%, 2%, and 13.8%, respectively. The obfuscation and hybrid protection of the CAVM and MCM blocks under the proposed architectures, i.e., cavm-mul, cavm-sa, mcm-mul, and mcm-sa, lead to designs with less area when compared to those realized under the cavm-crk and mcm-crk architectures. Note that such a decrease reaches up to 17.6% and 18% in the CAVM and MCM blocks, respectively. This is simply because the proposed techniques exploit common subexpressions shared in constant multiplications. On the other hand, the obfuscation and hybrid protection of the TMCM blocks under the tmcm-mul and tmcm-crk architectures lead to designs with less area with respect to those realized under the tmcm-sa architecture. Note that such a decrease reaches up to 24.3%. This is because a single multiplier is replaced by a large number of addition and subtraction operations under the tmcm-sa architecture. It is also observed that the minimum achievable delay values in the critical path of obfuscated and protected multiplier blocks are very close to each other, meaning that the inclusion of the point function and logic for the obfuscation of key bits does not have a significant impact while realizing the design with the smallest delay.
#### V-A2 Comments on Attack Resiliency
Observe also from Table V that while the OG attacks can easily discover the secret key on the obfuscated designs, the OL attack can decipher all the key bits with high accuracy, except for the CAVM design obfuscated under the cavm-crk architecture. On the other hand, none of these attacks can break the defense built by the hybrid protection technique. Note that the designs protected by the hybrid technique were also attacked using Fa-SAT [52] and the Valkyrie tool [53], but without any success, due to the combination of both obfuscation and locking.
Moreover, these multiplier blocks under an architecture including a multiplier are locked by prominent logic locking methods, namely RLL [28] and the SAT-resilient methods of AntiSAT [33], SARLock [32], SFLL [23], CASLock [34], and SKGLock [36]. In this case, the multiplier block described at RTL is initially synthesized to obtain its gate-level netlist; then, this netlist is locked. Note that while the RLL, AntiSAT, and SFLL methods were applied using the NEOS tool [61], the script for the SARLock method was provided by P. Subramanyan, and we implemented the CASLock and SKGLock methods. In the RLL method, 32 key bits are used, the same as in the obfuscation techniques presented in Table V. As in the hybrid protection method shown in Table V, there are a total of 48 key bits in the combination of RLL and a SAT-resilient method, while 16 key bits are designated to the SAT-resilient method. Note that due to the locking nature of AntiSAT, CASLock, and SKGLock, they require twice the number of designated key bits. Hence, a total of 32 key bits are used in these methods. Table VI presents the results of locked multiplier blocks. The locked designs whose secret key has not been discovered by the given attacks are also highlighted.
#### V-A3 Comments on Hardware Complexity
Observe from Table VI that the SAT-resilient methods combined with RLL lead to designs with hardware complexity very close to each other. When compared to the results of the hybrid protection technique given in Table V under the architectures using multiplier(s), the locked CAVM and MCM blocks have larger and smaller area, respectively, and the locked TMCM blocks have competitive area. A locked MCM block has less hardware complexity than an obfuscated or protected MCM block because the logic locking is applied after the common subexpressions are exploited by the synthesis tool.
#### V-A4 Comments on Attack Resiliency
Also, observe from Table VI that the secret key of designs locked by RLL, RLL+SARLock, and RLL+SFLL\({}^{2}\) can be found by the given attacks. Note also that the MCM block locked by RLL+CASLock could be broken by AppSAT. While the query attack is also capable of proving the values of most of the RLL key bits in all logic locking methods and the extra SKGLock key bits, the SCOPE attack can predict the values of some key bits of designs locked by RLL+SARLock and RLL+SKGLock with high accuracy.
Footnote 2: Confirmed by the developer of the SFLL method that when a small number of key bits are used and their values are biased towards all logic 0s or 1s in the SFLL method, the exponential growth in the number of iterations in the SAT-based attack is no longer valid.
### _Results of the Obfuscated and Locked FIR Filters_
Table VII presents the synthesis results of parallel direct and transposed form and folded FIR filters, whose CAVM, MCM, and TMCM blocks are obfuscated by the previously proposed techniques [9, 17] and the hybrid protection technique, respectively. It also shows the synthesis results of FIR filters locked by prominent logic locking methods. It introduces the results of attacks that can be applied to sequential circuits, namely the OG KC2 attack, which was taken from [61], and the OL SCOPE attack. In this table, _failed_ denotes that the found solution of the KC2 attack is actually a wrong key, verified through simulation.
#### V-B1 Comments on Hardware Complexity
Observe from Table VII that the direct form filter has less area and consumes less power, but has a higher delay when compared to the transposed form filter. On the other hand, the folded design has the smallest area, but the filter output is computed in 30 clock cycles, increasing the latency and energy consumption. The conclusions drawn from the gate-level synthesis results on the obfuscated and locked multiplier blocks given in Tables V-VI are also valid for the obfuscated and locked FIR filters. However, due to the registers in the FIR design, the overhead on the overall FIR filter design gets smaller. Note that the proposed hybrid technique achieves the maximum area reduction with respect to the logic locking methods on the parallel direct form FIR filters, i.e., 4.4%, obtained when the FIR filter under the cavm-sa architecture is compared to the FIR filter locked by RLL+SKGLock.
#### V-B2 Comments on Attack Resiliency
Observe also from Table VII that the KC2 attack is capable of discovering the secret key of the FIR filters obfuscated using previously proposed techniques, except for the direct form filters, but it is not successful on the FIR filters protected by the proposed hybrid technique. It can also find the secret key of the filters locked by RLL, except for the direct form filter, but fails on the filters locked by both RLL and a SAT-resilient method. Similarly, the SCOPE attack generally deciphers all the key bits on the obfuscated FIR filters with high accuracy, but it can only decipher a small number of key bits of the FIR filters protected by the hybrid technique. However, it is capable of deciphering more key bits on the locked FIR filters when compared to its results on the locked multiplier blocks. This is largely due to the resynthesis of the FIR filter including the locked multiplier block. Note that the proposed hybrid technique increases the area and power dissipation of FIR filters when compared to the previously proposed obfuscation techniques [9, 17] in order to increase their resiliency to the existing attacks.
### _Analysis on the Point Function_
To find the impact of the point function and its parameters in the hybrid protection technique on the hardware complexity, the number of iterations taken in the SAT-based attack [31], and the run-time of the SAT-based attack [31], we used the TMCM block of our FIR filter under the tmcm-mul architecture. Again, the TMCM block is obfuscated using decoys with 32 key bits when \(ibw\) is 16. In logic locking with the point function, the number of key bits, i.e., \(w\), is determined to be between 8 and 16, the corruption value, i.e., \(cv\), is set to be between 0 and 2, and a single primary output is locked. Fig. 12 presents the impact of point function parameters on the area of the protected TMCM block and the number of iterations and run-time of the SAT-based attack.
Observe from Fig. 12(a) that as the number of key bits used in logic locking, \(w\), increases, the area of the TMCM block protected using the hybrid technique increases slightly. Note that as the corruption value, \(cv\), increases, the design area increases simply due to the increased range of the comparator logic given in Listing 1. Also, observe from Figs. 12(b)-(c) that as \(w\) increases, the number of iterations and run-time of the SAT-based attack increase. An exponential growth in the number of iterations and run-time can be observed until \(w\) is 15. As can be seen from Fig. 12(c), for the 15- and 16-bit keys in the point function, the SAT-based attack cannot find the secret key within the time limit, i.e., 2 days, denoted by the red line. Thus, the number of iterations given in Fig. 12(b) for these numbers of key bits is the one obtained within the time limit. Note also that in all TMCM designs locked by the point function with the given parameters, the number of iterations grows exponentially with \(w\); it decreases as \(cv\) is increased, but the growth remains exponential.
To find the impact of locking an obfuscated design using a point function on hardware complexity and attack resiliency, we used the same 112 FIR filters presented in Section III-B, where \(n\) ranges between 16 and 127. In our experiments, the TMCM blocks of folded FIR filters were obfuscated using \(2^{\lfloor log_{2}n\rfloor+1}\) key bits under the tmcm-mul architecture when \(ibw\) was set to 16. For the point function, 16 key bits were used. Fig. 13 presents the run-time of the SAT-based attack on obfuscated and protected TMCM blocks when its time limit was 2 days.
Observe from Fig. 13 that locking an obfuscated design using a point function increases the SAT-based attack resiliency significantly. Note that the SAT-based attack can find a solution to all obfuscated TMCM blocks except one. However, the average area, delay, and power dissipation of the protected designs are increased by 10.7%, 7.1%, and 9.6%, respectively, when compared to those of the obfuscated designs.
Fig. 12: Impact of point function parameters: (a) area; (b) number of iterations; (c) run-time.
### _Analysis on the Direct Form FIR Filter_
Among the parallel designs of FIR filters, the direct form is a good candidate to be used in a secure implementation for several reasons, based on the results obtained in this work. First, as shown in Table VII, its obfuscated hardware complexity in terms of area and power dissipation is significantly smaller than that of the transposed form filter. Second, it includes a large number of chained multiplication operations, which makes it hard for the SAT-based attack and its variants to discover the secret key. Third, the CAVM block of the direct form filter has a larger number of inputs than the MCM block of the transposed form filter, which enables an increase in the number of key bits in the point function, improving the resiliency of the design protected by the hybrid technique as shown in Figs. 12(b)-(c). This last observation is also true when compared to the TMCM block used in the folded FIR filter design.
To find the impact of the number of coefficients on the hardware complexity of the protected CAVM block of an FIR filter, 10 filters were taken from [55], where \(n\) ranges between 25 and 121 and \(mbw\) is between 9 and 15. In the design of CAVM blocks, \(ibw\) is set to 16. In the hybrid protection technique, these CAVM blocks were obfuscated using decoys with \(n\) key bits and locked using the point function with \(ibw\) key bits under the cavm-mul architecture using a total of \(n+ibw\) key bits. These protected designs are also compared with those locked by both RLL and CASLock, where the number of RLL and CASLock key bits is \(n-ibw\) and \(2*ibw\), respectively. This logic locking method was chosen because it generally generates a locked multiplier block with a small area as shown in Table VI. Table VIII presents the gate-level synthesis results of the CAVM designs protected by the hybrid technique and locked by both RLL and CASLock.
Observe from Table VIII that as the number of coefficients, \(n\), increases, the hardware complexity of the protected and locked CAVM blocks generally increases. The hybrid protection technique generally leads to a design with a smaller area when compared to the RLL+CASLock method, where the gain reaches up to 32.6%. Note that on filters _Shi11A_ and _Maskell07_, the RLL+CASLock method leads to a locked design with a smaller area, since the bit-width of coefficients is small, enabling the synthesis tool to optimize the logic.
To find the impact of obfuscation techniques on the filter behavior, the Zero-Phase Frequency Response (ZPFR) of the FIR filter _Nielsen89_ is obtained when the secret key and 100 randomly generated wrong keys are applied. Fig. 14 presents ZPFRs of FIR filters protected by the hybrid technique and locked by RLL+CASLock.
Observe from Fig. 14 that both obfuscation techniques may lead to a filter behavior different from the original one when a random wrong key is applied. While the filter behavior of the protected design under a wrong key is meaningful but outside the desired filter specification, the locked design exhibits a meaningless behavior under a wrong key due to the logic related to the RLL key bits. Thus, the hybrid protection technique may make the adversary believe that the filter behavior under the wrong key is actually the desired one.
Fig. 13: Run-time of the SAT-based attack on TMCM blocks.
Fig. 14: Behavior of the FIR filter _Nielsen89_: (a) protected by the hybrid technique; (b) locked by RLL+CASLock.
## VI Discussion
Other than the logic locking attacks used in this article, there exist Reverse Engineering (RE) and Side-Channel Analysis (SCA) techniques that can identify the filter coefficients in an obfuscated design. In [18], a machine learning tool that can determine the decoy selection method used in an obfuscated design was developed. The same work also proposed an RE technique that can identify filter coefficients hidden among decoys determined based on a decoy selection method. It was shown that if more than one decoy is used to obfuscate a filter coefficient, where the Hamming distance between each decoy and filter coefficient is 1, then the coefficient can be identified. In other cases, the RE technique was not capable of identifying original filter coefficients.
To the best of our knowledge, there exists no SCA technique proposed specifically to identify the original filter coefficients in an obfuscated filter design. The challenge for such a technique would be to understand how the synthesis tools embed the constants, i.e., filter coefficients and decoys, into the gate-level design using efficient methods, which optimize the hardware complexity of constant multiplications. This procedure would almost entail reverse engineering the algorithms used by the synthesis tools. In this case, it will be hard to reveal the filter coefficients from the power dissipation or delay values obtained from the obfuscated design since those data come from a logic combining both filter coefficients and decoys. Studying SCA and its efficiency to overcome obfuscation methods remains a formidable path for future research.
## VII Conclusions
This article focused on the obfuscation of digital FIR filters. Initially, it showed that the techniques previously proposed for the obfuscation of FIR filters are vulnerable to our SAT-based query attack, which applies several queries and proves that the found key bit value is the actual value of the related key bit in the secret key. Then, to secure an FIR filter design, it proposed the hybrid protection technique, which includes both obfuscation and locking with a point function. The proposed technique is applied to parallel direct and transposed forms of an FIR filter and its folded implementation. Experimental results clearly showed that the hybrid protection technique is competitive to prominent logic locking techniques in terms of hardware complexity and leads to obfuscated designs resilient to well-known attacks. It is also shown that the direct form FIR filter is a good candidate for secure filter implementation.
## Acknowledgment
The authors would like to thank Nimisha Limaye and Satwik Patnaik for running our obfuscated designs on their tools and Mohammad Yasin, Leon Li, and Christian Pilato for fruitful discussions. The attacks were carried out in the High Performance Computing Centre of TalTech.
|
2307.02581 | Two-component breather solution of the nonlinear wave equation | A nonlinear wave equation that describes different nonlinear effects in
various fields of research was considered. In two particular cases, this
equation was reduced to the Sine-Gordon equation and the Born-Infeld equation.
Using the slowly varying envelope approximation and the generalized
perturbative reduction method, the nonlinear wave equation was transformed to
coupled nonlinear Schrodinger equations for auxiliary functions. An explicit
analytical solution of a nonlinear wave equation in the form of a two-breather
molecule was obtained. One breather oscillated with the sum, and the other with
the difference of frequencies and wave numbers. The obtained solution coincides
with the solutions of the two-breather molecule found in a number of well-known
equations from different areas of physics. It is shown that in a particular
case of the small amplitude waves, a solution in the form of a two-breather
molecule for the nonlinear Klein-Gordon equation coincides with the vector 0\pi
pulse of the self-induced transparency which is presented under less stringent
conditions compared to the same solution of this equation obtained earlier. | G. T. Adamashvili | 2023-07-05T18:26:55Z | http://arxiv.org/abs/2307.02581v1 | # Two-component breather solution of the nonlinear wave equation
###### Abstract
A nonlinear wave equation which describes different nonlinear effects in various fields of research was considered. In two particular cases, this equation was reduced to the Sine-Gordon equation and the Born-Infeld equation. Using the slowly varying envelope approximation and the generalized perturbative reduction method, the nonlinear wave equation was transformed to coupled nonlinear Schrodinger equations for auxiliary functions. An explicit analytical solution of the nonlinear wave equation in the form of a two-breather molecule was obtained. One breather oscillated with the sum, and the other with the difference, of the frequencies and wave numbers. The obtained solution coincides with the solutions of the two-breather molecule found in a number of well-known equations from different areas of physics. It is shown that in the particular case of small amplitude waves, a solution in the form of a two-breather molecule for the nonlinear Klein-Gordon equation coincides with the vector \(0\pi\) pulse of self-induced transparency, which is presented under less stringent conditions compared to the same solution of this equation obtained earlier.
_Keywords:_ Generalized perturbation reduction method, Two-breather molecule, Born-Infeld equation, Sine-Gordon equation, Nonlinear Klein-Gordon equation.
pacs: 05.45.Yv, 02.30.Jr, 52.35.Mw
## I Introduction
In the propagation of nonlinear waves, one of the most important and interesting phenomena is the formation of nonlinear solitary waves of stationary form. Although these waves have been studied and observed for a very long time, interest in them does not fade, because they express the most fundamental property of strongly nonequilibrium states of nonlinear systems: the parameters characterizing such states remain unchanged. Nonlinear solitary waves occur in various fields of physics, for waves of completely different nature and for different physical quantities characterizing them, for example, the strength of the electric field of optical waves, the deformation tensor for sound waves, or the displacement of the surface from the undisturbed state for water waves; nevertheless, their properties are identical. The nonlinear solitary waves which arise in different nonlinear physical phenomena in various systems are described by means of various nonlinear partial differential equations. These equations include the Korteweg-de Vries equation, the Boussinesq equation, the Hirota equation, the nonlinear Schrodinger equation, the Benjamin-Bona-Mahony equation, the Bloch-Maxwell system of equations, and many others [1-9].
Among the nonlinear partial differential equations which have solutions in the form of nonlinear solitary waves, one can single out the well-known Sine-Gordon equation and the Born-Infeld equation. Each of these equations describes nonlinear solitary waves in different areas of research. For example, the Sine-Gordon equation describes the geometry of surfaces with constant negative Gaussian curvature, the Josephson junction, dislocations in crystals, waves in ferromagnets associated with the rotation of magnetization, the properties of elementary particles, and nonlinear phenomena in various fields of physics: optics, acoustics, plasma physics, semiconductor quantum dots, graphene, optical and acoustical metamaterials, and others [1-4]. The Born-Infeld equation is used to describe the interaction of electromagnetic waves in nonlinear electrodynamics, various phenomena in field theory, the theory of strings, and some atomic experiments [1, 10-14]. These equations have been studied in sufficient detail by various mathematical methods and their solutions are well known.
Although each of these two equations describes a fairly wide range of quite different phenomena in various fields of physics, both are nonlinear Maxwell wave equations, differing only in their nonlinear terms. Consequently, a natural question arises: is it possible to describe all the above effects connected with these two equations in a unified way, by means of a single more general equation?
In order to answer this question we consider the following more general nonlinear wave equation
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=- \alpha_{0}^{2}\sin U-A(\frac{\partial U}{\partial t})^{2}\frac{\partial^{2}U} {\partial z^{2}}-\sigma(\frac{\partial U}{\partial z})^{2}\frac{\partial^{2}U }{\partial t^{2}}+B\frac{\partial U}{\partial t}\frac{\partial U}{\partial z }\frac{\partial^{2}U}{\partial t\partial z}, \tag{1}\]
or in the dimensionless form
\[\frac{\partial^{2}U}{\partial t^{2}}-\frac{\partial^{2}U}{\partial z^{2}}=- \sin U-(\frac{\partial U}{\partial t})^{2}\frac{\partial^{2}U}{\partial z^{2} }-(\frac{\partial U}{\partial z})^{2}\frac{\partial^{2}U}{\partial t^{2}}+2 \frac{\partial U}{\partial t}\frac{\partial U}{\partial z}\frac{\partial^{2} U}{\partial t\partial z},\]
where \(U(z,t)\) is a real function of the space coordinate \(z\) and time \(t\) and represents the wave profile, while \(\alpha_{0}^{2},\;A,\;B,\;C,\) and \(\sigma\) are real constants.
In particular cases, Eq.(1) reduces to the Sine-Gordon equation and the Born-Infeld equation. Indeed, when the condition \(A=B=\sigma=0\) is fulfilled, Eq.(1) is transformed into the Sine-Gordon equation
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=- \alpha_{0}^{2}\sin U. \tag{2}\]
This nonlinear equation was analyzed by means of the inverse scattering transform [2, 4], which makes it possible to obtain the complete solution of Eq.(2) in the form of nonlinear solitary waves.
Among nonlinear solitary waves, waves of relatively small amplitude, for which
\[U\ll 1 \tag{3}\]
are considered quite often. Under the condition Eq.(3), the Sine-Gordon equation (2) is reduced to the nonlinear Klein-Gordon equation [1]
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=- \alpha_{0}^{2}U+\frac{\alpha_{0}^{2}}{6}U^{3}-{\cal O}(U^{5}). \tag{4}\]
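This reduction follows from the Maclaurin expansion of the sine term (a standard step, written out here for clarity):

\[\sin U=U-\frac{U^{3}}{6}+{\cal O}(U^{5}),\qquad-\alpha_{0}^{2}\sin U=-\alpha_{0}^{2}U+\frac{\alpha_{0}^{2}}{6}U^{3}-{\cal O}(U^{5}).\]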
In the case when the condition \(\alpha_{0}^{2}=0\) is fulfilled, Eq.(1) is reduced to the Born-Infeld equation
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=-A (\frac{\partial U}{\partial t})^{2}\frac{\partial^{2}U}{\partial z^{2}}- \sigma(\frac{\partial U}{\partial z})^{2}\frac{\partial^{2}U}{\partial t^{2} }+B\frac{\partial U}{\partial t}\frac{\partial U}{\partial z}\frac{\partial ^{2}U}{\partial t\partial z}. \tag{5}\]
This is a nonlinear modification of the Maxwell wave equation, which includes the so-called Born-Infeld nonlinearity.
A \(2+1\)-dimensional (two space coordinates and time) version of the Born-Infeld equation has also been investigated [15].
Under the condition Eq.(3), the nonlinear wave equation (1) is reduced to the form
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=- A(\frac{\partial U}{\partial t})^{2}\frac{\partial^{2}U}{\partial z^{2}}- \sigma(\frac{\partial U}{\partial z})^{2}\frac{\partial^{2}U}{\partial t^{2} }+B\frac{\partial U}{\partial t}\frac{\partial U}{\partial z}\frac{\partial ^{2}U}{\partial t\partial z}-\alpha_{0}^{2}U+\frac{\alpha_{0}^{2}}{6}U^{3}-{ \cal O}(U^{5}). \tag{6}\]
When considering nonlinear solitary waves, one distinguishes two different types: single-component (scalar) and two-component (vector) solitary nonlinear waves. These waves can be formed both in different and in the same physical system; their properties and formation conditions are different. A single-component wave propagates in a medium without changing its profile. A two-component nonlinear wave is completely different: it is a bound state of two wave packets having the same speeds and directions of propagation and, in the general case, different oscillation frequencies. These wave packets can have either parallel or mutually perpendicular polarizations, for example, for waveguide modes. Such a two-component wave is a vector breather or a two-breather molecule. In the process of propagation, the breathers that make up the molecule interact with each other and exchange energy.
Some effects cannot be described using single-component nonlinear waves, and it becomes necessary to use the concept of a breather molecule [16-18]. The most striking example is the effect of self-induced transparency, one of whose main pulses is precisely a special type of two-breather molecule: two coupled breathers with the same polarization, one of which oscillates with the sum, and the other with the difference, of the frequencies and wave numbers. Such a two-component solitary nonlinear wave was first considered in the nonlinear coherent interaction of an optical wave with a system of resonant atoms and is called a vector \(0\pi\) pulse of self-induced transparency [19-22].
Various mathematical approaches are used to analyze the properties of solitary nonlinear waves. Among them, the perturbative reduction method for the analysis of single-component nonlinear waves is encountered very often [23, 24]. This method uses one complex auxiliary function and two real parameters, which is enough to study single-component nonlinear waves but not enough to study two-component ones. To study two-component nonlinear waves, a generalized perturbative reduction method was developed (see, for instance, [9, 19-22] and references therein), which uses two complex functions and eight real parameters, sufficient for the analysis of two-component nonlinear waves.
In the theory of self-induced transparency, the role of the second derivatives of the electric field strength of the wave with respect to the spatial coordinate and time in the Maxwell wave equation was studied using various mathematical methods. In particular, using the perturbative reduction method, it was found that the second derivatives lead only to small quantitative corrections to the values of the parameters of the pulses of self-induced transparency (see, for instance, [25, 26]).
Using the generalized perturbative reduction method, a qualitatively new result was obtained in the theory of self-induced transparency. Namely, it was shown that the second derivatives of the electric field strength of the pulse with respect to the spatial coordinate and time in the Maxwell wave equation play the determining role for an adequate study of the effect of self-induced transparency. Taking the second-derivative terms in the Maxwell wave equation into account made it possible to prove that one of the main pulses of self-induced transparency is the vector \(0\pi\) pulse (two-breather molecule), and not the single-component scalar \(0\pi\) pulse, as was previously believed. The reason is that, in earlier studies of self-induced transparency, the second derivatives of the electric field strength of the pulse with respect to the spatial coordinate and time in the Maxwell wave equation were neglected, or their influence on the formation of the pulses of self-induced transparency was investigated using the standard perturbative reduction method or other mathematical methods intended for the analysis of single-component waves [19-22, 25-34].
Although the vector \(0\pi\) pulse of self-induced transparency, which is a special type of two-breather molecule, was first noticed for optical nonlinear waves, later, using the generalized perturbative reduction method, the same two-breather molecules were discovered for waves of a different nature described by other nonlinear partial differential equations, such as the Boussinesq-type equation, the Benjamin-Bona-Mahony equation, and the Hirota equation [7, 9, 35-37].
A two-breather molecule in which one breather oscillates with the sum, and the second with the difference, of the frequencies and wave numbers qualitatively changes the physical picture of self-induced transparency in a way that cannot be described within the framework of single-component nonlinear waves. Moreover, such a two-breather molecule arises in completely different areas of physics for functions of a different nature. This allows us to conclude that the existence of this type of two-breather molecule, i.e. of the vector \(0\pi\) pulse of self-induced transparency, expresses a rather general property of matter. Thus, its further study for the new equation (6) is relevant and reasonable.
The main purpose of this work is to use the generalized perturbative reduction method to obtain a solution to the nonlinear wave equation (6) in the form of the two-breather molecule, in which one breather oscillates with the sum and the second with the difference in frequencies and wave numbers.
The rest of this paper is organized as follows: In Section II, we consider the nonlinear wave equation (6) for slowly varying complex envelope functions. Using the generalized perturbative reduction method, we transform the obtained equation into coupled nonlinear Schrodinger equations for auxiliary functions. In Section III, we present the explicit analytical solution of the nonlinear wave equation (6) in the form of a two-breather molecule, Eq.(24). Finally, in Section IV, we discuss the obtained results.
## II The generalized perturbative reduction method
We considered a solitary nonlinear wave with carrier frequency \(\omega\) and wave number \(k\) propagating along the positive \(z\)-axis. In the study of solitary nonlinear waves, the pulse width \(T\) is of great importance. We considered pulses whose width satisfies the condition \(T>>1/\omega\), i.e. the pulse duration is much longer than the inverse frequency of the carrier wave. For such a wave, we can use the slowly varying envelope approximation and, for the real function \(U\), use the expansion [5, 38]
\[U(z,t)=\sum_{l=\pm 1}\hat{u}_{l}(z,t)Z_{l}, \tag{7}\]
where \(Z_{l}=e^{il(kz-\omega t)}\) is the exponential fast oscillating function, and \(\hat{u}_{l}\) is the slowly varying complex envelope function, which satisfies inequalities
\[\left|\frac{\partial\hat{u}_{l}}{\partial t}\right|\ll\omega|\hat{u}_{l}|, \hskip 28.452756pt\left|\frac{\partial\hat{u}_{l}}{\partial z}\right|\ll k| \hat{u}_{l}|.\]
For the reality of \(U\), it is supposed that the expression \(\hat{u}_{+1}=\hat{u}_{-1}^{*}\) is valid.
The nonlinear wave equation (6) can be presented as
\[\frac{\partial^{2}U}{\partial t^{2}}-C\frac{\partial^{2}U}{\partial z^{2}}=- \alpha_{0}^{2}U+N(U), \tag{8}\]
where
\[N(U)=-A(\frac{\partial U}{\partial t})^{2}\frac{\partial^{2}U}{\partial z^{2} }-\sigma(\frac{\partial U}{\partial z})^{2}\frac{\partial^{2}U}{\partial t^{ 2}}+B\frac{\partial U}{\partial t}\frac{\partial U}{\partial z}\frac{ \partial^{2}U}{\partial t\partial z}+\frac{\alpha_{0}^{2}}{6}U^{3}-\mathcal{ O}(U^{5}) \tag{9}\]
is the nonlinear term of the equation.
Substituting Eq.(7) into (8), we obtained the following dispersion relation
\[\omega^{2}=Ck^{2}+\alpha_{0}^{2}, \tag{10}\]
and the rest of the equation (8) in the form
\[\sum_{l=\pm 1}Z_{l}(-2il\omega\frac{\partial\hat{u}_{l}}{\partial t}-2ilkC\frac{\partial\hat{u}_{l}}{\partial z}+\frac{\partial^{2}\hat{u}_{l}}{\partial t^{2}}-C\frac{\partial^{2}\hat{u}_{l}}{\partial z^{2}})=N(\hat{u}_{l}), \tag{11}\]
where \(N(\hat{u}_{l})\) is the nonlinear part of Eq.(9), expressed by means of the complex envelope function \(\hat{u}_{l}\).
In order to consider the two-component nonlinear wave solution of Eq.(6), we used the generalized perturbative reduction method developed in Refs.[9, 19-22], which made it possible to transform the nonlinear wave equation (11) for the function \(\hat{u}_{l}\) to the coupled nonlinear Schrodinger equations for auxiliary functions.
Following this method, the complex envelope function \(\hat{u}_{l}\) can be represented as
\[\hat{u}_{l}(z,t)=\sum_{\alpha=1}^{\infty}\sum_{n=-\infty}^{+\infty}\varepsilon ^{\alpha}Y_{l,n}f_{l,n}^{(\alpha)}(\zeta_{l,n},\tau), \tag{12}\]
where \(\varepsilon\) is a small parameter,
\[Y_{l,n}=e^{in(Q_{l,n}z-\Omega_{l,n}t)},\ \ \ \zeta_{l,n}=\varepsilon Q_{l,n}(z-v_{g_{l,n}}t),\]
\[\tau=\varepsilon^{2}t,\ \ \ v_{g_{l,n}}=\frac{\partial\Omega_{l,n}}{\partial Q_{l,n }}.\]
It is assumed that the complex auxiliary functions \(f_{l,n}^{(\alpha)}\) and oscillating real parameters \(\Omega_{l,n}\) and \(Q_{l,n}\) satisfy the inequalities for any \(l\) and \(n\)
\[\omega\gg\Omega_{l,n},\ \ k\gg Q_{l,n},\]
\[\left|\frac{\partial f_{l,n}^{(\alpha)}}{\partial t}\right|\ll\Omega_{l,n}\left|f_{l,n}^{(\alpha)}\right|,\ \left|\frac{\partial f_{l,n}^{(\alpha)}}{\partial z}\right|\ll Q_{l,n}\left|f_{l,n}^{(\alpha)}\right|.\]
Substituting Eq.(12) into Eq.(11), we obtained the nonlinear wave equation for auxiliary function \(f_{l,n}^{(\alpha)}\) in the following form:
\[\sum_{l=\pm 1}\sum_{\alpha=1}^{\infty}\sum_{n=\pm 1}\varepsilon^{\alpha}Z_{l}Y_{l,n}[W_{l,n}+\varepsilon J_{l,n}-\varepsilon^{2}ilh_{l,n}\frac{\partial}{ \partial\tau}-\varepsilon^{2}Q^{2}H_{l,n}\frac{\partial^{2}}{\partial \zeta^{2}}+O(\varepsilon^{3})]f_{l,n}^{(\alpha)}=N(\hat{u}_{l}), \tag{13}\]
where
\[W_{l,n}=-2nl\omega\Omega_{l,n}+2nlkQ_{l,n}C-\Omega_{l,n}^{2}+CQ_{l,n}^{2},\]
\[J_{l,n}=2iQ_{l,n}[l\omega v_{g_{l,n}}-lkC+n\Omega_{l,n}v_{g_{l,n}}-CnQ_{l,n}],\]
\[h_{l,n}=2(\omega+ln\Omega_{l,n}),\]
\[H_{l,n}=C-v_{g_{l,n}}^{2}. \tag{14}\]
Following the standard procedure and equating to zero the terms with the same powers of \(\varepsilon\) in Eq.(13), we obtained a series of equations. In the first order of \(\varepsilon\), we found a connection between the parameters \(\Omega_{l,n}\) and \(Q_{l,n}\). When
\[2(CkQ_{\pm 1,\pm 1}-\omega\Omega_{\pm 1,\pm 1})-\Omega_{\pm 1,\pm 1}^{2}+CQ_{\pm 1,\pm 1}^{2}=0, \tag{15}\]
then \(f_{\pm 1,\pm 1}^{(1)}\neq 0\) and when
\[2(CkQ_{\pm 1,\mp 1}-\omega\Omega_{\pm 1,\mp 1})+\Omega_{\pm 1,\mp 1}^{2}-CQ_{\pm 1,\mp 1}^{2}=0, \tag{16}\]
then \(f_{\pm 1,\mp 1}^{(1)}\neq 0\).
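As a short consistency check, which we add here because it follows directly from Eqs.(10) and (15), the relation (15) for the branch \(l=n=+1\) can be solved explicitly for \(\Omega_{+1,+1}\). Rewriting Eq.(15) as \((\Omega_{+1,+1}+\omega)^{2}=\omega^{2}+2CkQ_{+1,+1}+CQ_{+1,+1}^{2}\) and using the dispersion relation (10), one finds

\[\Omega_{+1,+1}=-\omega+\sqrt{C(k+Q_{+1,+1})^{2}+\alpha_{0}^{2}},\]

and differentiating this expression with respect to \(Q_{+1,+1}\) reproduces the group velocity expression obtained below in Eq.(17) for \(l=n=+1\).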
In the second order of \(\varepsilon\), from Eq.(13) we obtained the equations \(J_{\pm 1,\pm 1}=J_{\pm 1,\mp 1}=0\), which, together with Eq.(14), yield the expression
\[v_{g_{l,n}}=C\frac{k+lnQ_{l,n}}{\omega+ln\Omega_{l,n}}. \tag{17}\]
Next, we considered the nonlinear term \(N(\hat{u}_{l})\) of the nonlinear wave equation (13). Substituting Eqs.(12) and (7) into Eq.(9) for the nonlinear term of the nonlinear wave equation, we obtained
\[\varepsilon^{3}\;Z_{+1}[(\tilde{q}_{+}|f_{+1,+1}^{(1)}|^{2}+\tilde{r}_{+}|f_{+ 1,-1}^{(1)}|^{2})f_{+1,+1}^{(1)}Y_{+1,+1}+(\tilde{q}_{-}|f_{+1,-1}^{(1)}|^{2} +\tilde{r}_{-}|f_{+1,+1}^{(1)}|^{2})Y_{+1,-1}f_{+1,-1}^{(1)}] \tag{18}\]
plus analogous terms proportional to \(Z_{-1}\). In Eq.(18) we use the notations
\[\tilde{q}_{\pm}=\frac{\alpha_{0}^{2}}{2}+\mathfrak{q}_{\pm},\]
\[\tilde{r}_{\pm}=\alpha_{0}^{2}+\mathfrak{r}_{\pm}, \tag{19}\]
where
\[\mathfrak{q}_{\pm}=(A+\sigma-B)(\omega\pm\Omega_{\pm})^{2}(k\pm Q_{\pm})^{2},\]
\[\mathfrak{r}_{\pm}=2[A(k\pm Q_{\pm})^{2}(\omega\mp\Omega_{\mp})^{2}+\sigma( \omega\pm\Omega_{\pm})^{2}(k\mp Q_{\mp})^{2}-B(\omega+\Omega_{+})(\omega- \Omega_{-})(k+Q_{+})(k-Q_{-})].\]
From Eqs.(13) and (18), in the third order of \(\varepsilon\), we obtained the system of nonlinear equations
\[i\frac{\partial f_{+1,+1}^{(1)}}{\partial\tau}+Q_{+}^{2}\frac{H_{+1,+1}}{h_{ +1,+1}}\frac{\partial^{2}f_{+1,+1}^{(1)}}{\partial\zeta_{+1,+1}^{2}}+(\frac{ \tilde{q}_{+}}{h_{+1,+1}}|f_{+1,+1}^{(1)}|^{2}+\frac{\tilde{r}_{+}}{h_{+1,+1} }|f_{+1,-1}^{(1)}|^{2})f_{+1,+1}^{(1)}=0,\]
\[i\frac{\partial f_{+1,-1}^{(1)}}{\partial\tau}+Q_{-}^{2}\frac{H_{+1,-1}}{h_{+1,-1}}\frac{\partial^{2}f_{+1,-1}^{(1)}}{\partial\zeta_{+1,-1}^{2}}+(\frac{ \tilde{q}_{-}}{h_{+1,-1}}|f_{+1,-1}^{(1)}|^{2}+\frac{\tilde{r}_{-}}{h_{+1,-1} }|f_{+1,+1}^{(1)}|^{2})f_{+1,-1}^{(1)}=0. \tag{20}\]
## III Two-Breather molecule of the nonlinear wave equation
After transforming back to the space coordinate \(z\) and time \(t\), from Eqs.(20), we obtained the coupled nonlinear Schrodinger equations for the auxiliary functions \(\Lambda_{\pm}=\varepsilon f_{+1,\pm 1}^{(1)}\) in the following form:
\[i(\frac{\partial\Lambda_{\pm}}{\partial t}+v_{\pm}\frac{\partial\Lambda_{\pm }}{\partial z})+p_{\pm}\frac{\partial^{2}\Lambda_{\pm}}{\partial z^{2}}+q_{ \pm}|\Lambda_{\pm}|^{2}\Lambda_{\pm}+r_{\pm}|\Lambda_{\mp}|^{2}\Lambda_{\pm}=0, \tag{21}\]
where
\[v_{\pm}=v_{g_{+1,\pm 1}}=C\frac{k\pm Q_{\pm}}{\omega\pm\Omega_{\pm}},\ \ \ \ \ \ \ \ \ \ p_{\pm}=\frac{C-v_{\pm}^{2}}{2(\omega\pm\Omega_{\pm})},\]
\[q_{\pm}=\frac{\tilde{q}_{\pm}}{2(\omega\pm\Omega_{\pm})},\ \ \ \ \ \ \ \ \ \ \ \ \ \ r_{\pm}=\frac{\tilde{r}_{\pm}}{2(\omega\pm\Omega_{\pm})},\]
\[\Omega_{+}=\Omega_{+1,+1}=\Omega_{-1,-1},\hskip 56.905512pt\Omega_{-}=\Omega_{+1,-1}=\Omega_{-1,+1},\]
\[Q_{+}=Q_{+1,+1}=Q_{-1,-1},\hskip 56.905512ptQ_{-}=Q_{+1,-1}=Q_{-1,+1}. \tag{22}\]
The solution of Eq.(21) is given by [19, 20, 22]
\[\Lambda_{\pm}=\frac{A_{\pm}}{bT}Sech(\frac{t-\frac{z}{V_{0}}}{T})e^{i(k_{\pm}z -\omega_{\pm}t)}, \tag{23}\]
where \(A_{\pm},\ k_{\pm}\) and \(\omega_{\pm}\) are real constants, and \(V_{0}\) is the velocity of the nonlinear wave. We assume that \(k_{\pm}<<Q_{\pm}\) and \(\omega_{\pm}<<\Omega_{\pm}\).
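Eqs.(21) are not integrated numerically in this paper, but a reader who wants to experiment with them can do so directly. Below is a minimal split-step Fourier sketch (in Python) for a system of the form (21), written with \(t\) as the evolution variable and started from a sech profile as in Eq.(23); all numerical parameter values are illustrative placeholders and are not taken from the present analysis.

```python
import numpy as np

# Minimal split-step Fourier integrator for the coupled NLS system (21):
#   i(dL/dt + v dL/dz) + p d2L/dz2 + (q|L|^2 + r|L'|^2) L = 0.
# All parameter values below are illustrative placeholders.

L_box, N = 200.0, 1024
z = np.linspace(-L_box/2, L_box/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L_box/N)

v, p = (0.9, 0.9), (0.5, 0.5)   # v_pm, p_pm (equal speeds: bound state)
q, r = (1.0, 1.0), (0.4, 0.4)   # self- and cross-interaction coefficients

V0, T = 0.9, 5.0                # velocity and width of the sech pulse, cf. Eq.(23)
Lam = [1.0/np.cosh(z/(V0*T)) + 0j for _ in range(2)]

dt, n_steps = 1e-3, 2000
lin = [np.exp(-1j*(v[i]*k + p[i]*k**2)*dt) for i in range(2)]

for _ in range(n_steps):
    # nonlinear step: exact point-wise phase rotation (|Lam| is unchanged)
    ph0 = np.exp(1j*(q[0]*np.abs(Lam[0])**2 + r[0]*np.abs(Lam[1])**2)*dt)
    ph1 = np.exp(1j*(q[1]*np.abs(Lam[1])**2 + r[1]*np.abs(Lam[0])**2)*dt)
    Lam[0], Lam[1] = Lam[0]*ph0, Lam[1]*ph1
    # linear step: advection and dispersion, treated exactly in Fourier space
    Lam = [np.fft.ifft(np.fft.fft(Lam[i])*lin[i]) for i in range(2)]

print("norms:", [float(np.trapz(np.abs(f)**2, z)) for f in Lam])
```

The split is exact for each substep, so the norms of both components are conserved up to the splitting error of order \(dt\).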
Combining Eqs.(7), (12) and (23), we obtained the two-breather molecule solution to the nonlinear wave equation (6) in the following form:
\[U(z,t)=\mathfrak{A}Sech(\frac{t-\frac{z}{V_{0}}}{T})\{\cos[(k+Q_{+}+k_{+})z-( \omega+\Omega_{+}+\omega_{+})t]\]
\[+(\frac{p_{-}q_{+}-p_{+}r_{-}}{p_{+}q_{-}-p_{-}r_{+}})^{\frac{1}{2}}\cos[(k- Q_{-}+k_{-})z-(\omega-\Omega_{-}+\omega_{-})t]\}, \tag{24}\]
where \(\mathfrak{A}\) is the amplitude of the nonlinear pulse. The expressions for the parameters \(k_{\pm}\) and \(\omega_{\pm}\) are given by
\[k_{\pm}=\frac{V_{0}-v_{\pm}}{2p_{\pm}},\]
\[\omega_{+}=\frac{p_{+}}{p_{-}}\omega_{-}+\frac{V_{0}^{2}(p_{-}^{2}-p_{+}^{2})+ v_{-}^{2}p_{+}^{2}-v_{+}^{2}p_{-}^{2}}{4p_{+}p_{-}^{2}}. \tag{25}\]
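To visualize the structure of the solution (24), its profile can be evaluated directly. The short sketch below uses placeholder numerical values for all parameters (they are not derived from Eqs.(15)-(25)) and simply displays the beat of the two carriers under a common sech envelope.

```python
import numpy as np

# Illustrative evaluation of the two-breather profile Eq.(24) at t = 0.
# All numerical values are placeholders chosen for visualization only.
A_amp, T, V0 = 1.0, 20.0, 0.8
k_c, w_c = 5.0, 5.2                        # carrier k and omega
Qp, Qm, Wp, Wm = 0.4, 0.3, 0.45, 0.35      # Q_pm and Omega_pm
kp, km, wp, wm = 0.02, 0.015, 0.02, 0.015  # small corrections k_pm, omega_pm
ratio = 0.8                                # stands in for the square-root ratio in Eq.(24)

z, t = np.linspace(-200.0, 200.0, 4000), 0.0
env = A_amp/np.cosh((t - z/V0)/T)
U = env*(np.cos((k_c + Qp + kp)*z - (w_c + Wp + wp)*t)
         + ratio*np.cos((k_c - Qm + km)*z - (w_c - Wm + wm)*t))
```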
## IV Conclusion
We investigated the two-breather molecule solution of the nonlinear wave equation (6) when the slowly varying envelope approximation Eq. (7) was valid. The nonlinear pulse with the width \(T>>\Omega_{\pm}^{-1}>>\omega^{-1}\) was considered.
Using the generalized perturbative reduction method Eq.(12), Eq.(6) is transformed into the coupled nonlinear Schrodinger equations (21) for the functions \(\Lambda_{\pm}\). As a result, the two-component nonlinear pulse oscillating with the sum \(\omega+\Omega_{+}\ (k+Q_{+})\) and difference \(\omega-\Omega_{-}\ (k-Q_{-})\) of the frequencies (wave numbers), Eq.(24) (the two-breather molecule), was obtained. The dispersion relation and the relations between the oscillating parameters \(\Omega_{\pm}\) and \(Q_{\pm}\) were determined from Eqs.(10), (15) and (16). The parameters of the nonlinear pulse were determined from Eqs.(17), (19), (22) and (25).
Eq.(6) can describe all effects that can be treated by means of the nonlinear Klein-Gordon equation (4) and the Born-Infeld equation (5) separately. Eq.(24) is also a solution of each of these equations, and all obtained results are valid for them.
In the special case when \(\alpha_{0}^{2}=0\), Eq.(6) is reduced to the Born-Infeld equation Eq.(5). The dispersion law Eq.(10) for the Born-Infeld equation is reduced to the form \(\omega^{2}=Ck^{2}\), and the parameters of the wave have the form \(\tilde{q}_{\pm}=\mathfrak{q}_{\pm}\) and \(\tilde{r}_{\pm}=\mathfrak{r}_{\pm}\).
In the second special case when the condition \(A=B=\sigma=0\) is fulfilled, Eq.(6) is reduced to the nonlinear Klein-Gordon equation Eq.(4). The dispersion law Eq.(10) is valid, and the parameters of the wave have the form \(\tilde{q}_{\pm}=\frac{\alpha_{0}^{2}}{2}\) and \(\tilde{r}_{\pm}=\alpha_{0}^{2}\).
Although the Sine-Gordon equation (2) and the nonlinear Klein-Gordon equation (4) arise in a number of physical fields and in applied mathematics, there is one more very important effect, self-induced transparency, a simplified version of which is based on the Sine-Gordon equation, where the function \(U\) is the optical pulse envelope [38]. Indeed, the nonlinear coherent interaction of an optical pulse with resonant optical atoms is governed by the Bloch-Maxwell equations [27; 38]. When the Rabi frequency of the wave is real and the longitudinal and transverse relaxations are ignored, these equations reduce to the Sine-Gordon equation [1-4], and for small-amplitude waves with \(U<<1\), the Sine-Gordon equation reduces to the nonlinear Klein-Gordon equation (4).
Ref.[19] shows that, under the conditions of self-induced transparency, Eq.(4) has a two-component vector \(0\pi\) pulse solution. One component oscillates with the sum \(w+\Omega_{+}\ (\kappa+Q_{+})\), and the other with the difference \(w-\Omega_{-}\ (\kappa-Q_{-})\), of the frequencies (wave numbers) in the region of the parameters \(w\) and \(\kappa\), which are two or three orders of magnitude lower than the carrier wave frequency \(\omega\) and wave number \(k\). In other words, the ratios \(\omega/w\) and \(k/\kappa\) are of order \(10^{2}\div 10^{3}\). In this case, the conditions of the nonlinear wave existence
\[\omega>>w>>\Omega_{\pm}>>T^{-1},\qquad\qquad k>>\kappa>>Q_{\pm}>>(V_{0}T)^{-1} \tag{26}\]
are fulfilled. Here \(w\), \(\kappa\), \(\Omega_{\pm}\) and \(Q_{\pm}\) are the different oscillating parameters. The phase modulation is neglected.
When the phase modulation is taken into account, the character of the wave process changes significantly. As shown in Refs.[20-22], the two-component vector \(0\pi\) pulse solution of the Bloch-Maxwell equations then oscillates with the sum and difference of frequencies in the region of the carrier wave frequency and wave number of the optical pulse. In this case, the conditions for the formation of the two-component vector \(0\pi\) pulse have the form
\[\omega>>\Omega_{\pm}>>T^{-1},\qquad\qquad\qquad k>>Q_{\pm}>>(V_{0}T)^{-1}, \tag{27}\]
which is considerably weaker than the condition Eq.(26) of Ref.[19].
In the special case when the condition \(A=B=\sigma=0\) is fulfilled, Eq.(6) is transformed into the nonlinear Klein-Gordon equation (4), which has a solution in the form of Eq.(24) under the condition Eq.(27). When the function \(U\) is the area of the optical pulse envelope, the two-breather molecule Eq.(24) coincides with the vector \(0\pi\) pulse of self-induced transparency [19-22].
We found that Eq.(24) is a solution of Eqs.(6) and (4) under the condition Eq.(27), as was obtained earlier in the case of the Bloch-Maxwell equations, but here without the phase modulation. In other words, in the present work we obtained the solution in the form of the two-breather molecule, or the vector \(0\pi\) pulse of self-induced transparency, Eq.(24), for Eq.(4) under the less stringent condition Eq.(27), compared to the solution of this equation obtained earlier in Ref.[19] under the condition Eq.(26).
The considered solution in the form of a two-breather molecule, in which one of the breathers oscillates with the sum and the second with the difference of frequencies, Eq.(24), shows that this nonlinear wave occurs in many completely different areas of research for various nonlinear partial differential equations. Therefore, it can be concluded that the existence of such a nonlinear solitary wave expresses a rather general property of matter, just as is the case for the soliton or the breather.
We considered the nonlinear wave equation (1) (Eq.(6)), which unifies the Sine-Gordon equation (2) (the nonlinear Klein-Gordon equation (4)) and the Born-Infeld equation (5). A similar approach was used earlier for the Hirota equation, which in one particular case reduces to the scalar nonlinear Schrodinger equation and in another transforms into the complex modified Korteweg-de Vries equation. Notably, there is a difference: unlike the nonlinear wave equations (1) and (6), where the function \(U(z,t)\) satisfying these equations is a real function, in the cases of the Hirota equation, the scalar nonlinear Schrodinger equation, and the complex modified Korteweg-de Vries equation the solutions were found to be complex functions [7, 39, 40].
|
2301.08444 | Extraction of the frequency moments of spectral densities from
imaginary-time correlation function data | We introduce an exact framework to compute the positive frequency moments
$M^{(\alpha)}(\mathbf{q})=\braket{\omega^\alpha}$ of different dynamic
properties from imaginary-time quantum Monte Carlo data. As a practical
example, we obtain the first five moments of the dynamic structure factor
$S(\mathbf{q},\omega)$ of the uniform electron gas at the electronic Fermi
temperature based on \emph{ab initio} path integral Monte Carlo simulations. We
find excellent agreement with known sum rules for $\alpha=1,3$, and, to our
knowledge, present the first results for $\alpha=2,4,5$. Our idea can be
straightforwardly generalized to other dynamic properties such as the
single-particle spectral function $A(\mathbf{q},\omega)$, and will be useful
for a number of applications, including the study of ultracold atoms, exotic
warm dense matter, and condensed matter systems. | Tobias Dornheim, Damar C. Wicaksono, Juan E. Suarez-Cardona, Panagiotis Tolias, Maximilian Böhme, Zhandos Moldabekov, Michael Hecht, Jan Vorberger | 2023-01-20T06:52:13Z | http://arxiv.org/abs/2301.08444v1 | Extraction of the frequency moments of spectral densities from imaginary-time correlation function data
###### Abstract
We introduce an exact framework to compute the positive frequency moments \(M^{(\alpha)}(\mathbf{q})=\langle\omega^{\alpha}\rangle\) of different dynamic properties from imaginary-time quantum Monte Carlo data. As a practical example, we obtain the first five moments of the dynamic structure factor \(S(\mathbf{q},\omega)\) of the uniform electron gas at the electronic Fermi temperature based on _ab initio_ path integral Monte Carlo simulations. We find excellent agreement with known sum rules for \(\alpha=1,3\), and, to our knowledge, present the first results for \(\alpha=2,4,5\). Our idea can be straightforwardly generalized to other dynamic properties such as the single-particle spectral function \(A(\mathbf{q},\omega)\), and will be useful for a number of applications, including the study of ultracold atoms, exotic warm dense matter, and condensed matter systems.
## I Introduction
The accurate understanding of interacting quantum many-body systems constitutes a highly active frontier in physics, quantum chemistry, and related fields. Current challenges include the understanding of the energy loss dynamics of a projectile in a medium [1; 2], photoionization processes in atoms and molecules [3; 4], and energy relaxation towards a state of equilibrium [5; 6]. The accurate description of such nonequilibrium dynamics constitutes a most formidable challenge [7; 8]. Indeed, as of yet there exists no reliable method that is applicable to all systems and parameters of interest. Instead, one usually introduces approximations with respect to the coupling strength.
In thermodynamic equilibrium, different variants of the _ab initio_ quantum Monte Carlo (QMC) paradigm [9] are, in principle, capable of exactly taking into account the full complex interplay between nonideality (i.e., coupling) and quantum effects. Moreover, the widely used path integral Monte Carlo (PIMC) method [10; 11; 12] allows one to further include thermal excitations without any approximation. Unfortunately, by construction, most QMC methods are limited to the imaginary time domain and, thus, cannot be used in a direct way to compute dynamic properties of interest. On the other hand, many imaginary-time correlation functions (ITCF) [13; 14] are connected to a dynamic spectral function via an integral expression. For example, the dynamic structure factor (DSF) \(S(\mathbf{q},\omega)\) is connected to the imaginary-time density-density correlation function via a two-sided Laplace transform
\[F(\mathbf{q},\tau)=\int_{-\infty}^{\infty}\mathrm{d}\omega\ S(\mathbf{q}, \omega)\ e^{-\tau\omega}=:\mathcal{L}\left[S(\mathbf{q},\omega)\right]\, \tag{1}\]
with \(-i\hbar\tau\in-i\hbar[0,\beta]\) the imaginary time argument. In practice, the LHS of Eq. (1) is known with high accuracy from _ab initio_ QMC simulations [15; 16; 17; 18; 19; 20; 21; 22; 23]; the task at hand is thus to numerically invert Eq. (1) to obtain \(S(\mathbf{q},\omega)\). This so-called _analytic continuation_ is ubiquitous within different fields of physics, including the study of ultracold atoms [15; 16; 19; 21; 22; 24] and exotic warm dense matter [25; 26; 18]. In particular, it is of high importance within condensed matter physics [27; 28; 29] and constitutes an important ingredient to dynamical mean-field theory simulations [30; 31]. Yet, the analytic continuation constitutes a notoriously difficult problem [32; 33]. Indeed, it is ill-posed with respect to the Monte Carlo error bars of \(F(\mathbf{q},\tau)\) and subject to a number of practical instabilities.
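While the inversion of Eq. (1) is ill-posed, the forward map itself is trivial to evaluate numerically. As a minimal sketch (in Python, with a Gaussian model spectrum that serves only as a placeholder for \(S(\mathbf{q},\omega)\)), the ITCF on a uniform \(\tau\)-grid follows from a simple quadrature:

```python
import numpy as np

# Forward evaluation of Eq. (1): F(q,tau) from a model S(q,omega).
# The Gaussian model below is a placeholder, not the UEG spectrum.
beta, P = 1.0, 200
tau = np.linspace(0.0, beta, P + 1)

w = np.linspace(-60.0, 120.0, 20000)      # frequency grid
w0, sigma = 2.0, 1.5
S = np.exp(-(w - w0)**2/(2*sigma**2))     # model S(q,omega) at one fixed q
S /= np.trapz(S, w)                       # normalize, M^(0) = 1

# two-sided Laplace transform: F(q,tau) = int dw S(q,w) exp(-tau*w)
F = np.array([np.trapz(S*np.exp(-t*w), w) for t in tau])
```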
Due to the pressing need for an accurate dynamic description of interacting quantum many-body systems, a number of methods have been suggested to deal with the above problem. For example, maximum entropy methods [34; 17; 35] are based on Bayes' theorem and have been successfully applied in different contexts. Yet, the thus reconstructed spectral properties might be biased by the prior model function, although improvements over the original idea are continually being developed [17]. A second line of thought is based on averaging over a large number of noisy random trial solutions [15; 19; 27; 36], which includes the genetic inversion by falsification of theories (GIFT) method by Vitali and co-workers [15; 37] and the stochastic optimization method [38] introduced by Mishchenko _et al._[27]. While being computationally more expensive, such methods have the advantage that no prior information about the spectrum of interest is required. Finally, we mention the sparse-modelling technique by Otsuki _et al._[39; 40; 41], which is capable of efficiently filtering out the relevant information from the noisy QMC input data.
Despite the aforementioned considerable methodological advances, a direct analytic continuation based only on Eq. (1) is often insufficient to capture all physical features [16; 18]. Therefore, one must consider additional information such as the frequency moments
\[M_{\rm S}^{(\alpha)}({\bf q})=\langle\omega^{\alpha}\rangle_{S}=\int_{-\infty}^{ \infty}{\rm d}\omega\ S({\bf q},\omega)\ \omega^{\alpha} \tag{2}\]
to further constrain the analytic continuation. Hitherto, four moments have been known for interacting quantum systems: the normalization \(M_{\rm S}^{(0)}({\bf q})=S({\bf q})\) that is given by the static structure factor, the inverse moment \(M_{\rm S}^{(-1)}({\bf q})\) that is determined by the imaginary-time version of the fluctuation-dissipation theorem [23], and the cases \(\alpha=1,3\) that can be evaluated from commutator expressions, known as sum rules [42]. In fact, it is possible to reconstruct the DSF from its moments \(M_{\rm S}^{(\alpha)}({\bf q})\), which is known as the _Hamburger problem_ in the literature [43]. This formalism has been successfully utilized by Tkachenko and co-workers to estimate the dynamic structure factor of a number of classical and quantum systems [44; 45; 46]. Yet, it is clear that the lack of accurate data for the even moments (except \(\alpha=0\)) constitutes a substantial bottleneck both with respect to the Hamburger problem, and to constrain the traditional analytic continuation based on Eq. (1).
In this work, we overcome this fundamental limitation by introducing a new exact approach to estimate the positive integer (even and odd) frequency moments based on imaginary-time QMC data. As a practical example, we consider the uniform electron gas (UEG) [47; 48; 49], also known as jellium or quantum one-component plasma in the literature, at the electronic Fermi temperature [50]\(\Theta=k_{\rm B}T/E_{\rm F}=1\), where \(T\) is the temperature and \(E_{\rm F}\) is the usual Fermi energy. This system has attracted considerable interest over the last decade [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66] due to its fundamental importance for so-called warm dense matter [67; 68; 69; 70], an exotic state that naturally occurs in astrophysical objects like giant planet interiors [71; 72] and is realized in experiments for example in the context of inertial confinement fusion [73; 74].
In particular, we find good agreement with the sum rules for \(\alpha=1\) and \(\alpha=3\), and present the first results for \(M_{\rm S}^{(2)}({\bf q})\), \(M_{\rm S}^{(4)}({\bf q})\), and \(M_{\rm S}^{(5)}({\bf q})\) over a wide range of wave numbers. Our results can directly be used as input for an improved prediction of \(S({\bf q},\omega)\) via different methods, and constitute a valuable benchmark for the development of new dynamic simulation methods and approximations. Moreover, our idea can be straightforwardly generalized to other dynamic properties such as the single-particle spectral function \(A({\bf q},\omega)\), as we demonstrate in Sec. IV. Therefore, we expect it to be of use in a number of fields including the study of ultracold atoms, warm dense matter, and condensed matter physics. Finally, we note that the extraction of physical properties from imaginary-time correlation functions is interesting in its own right, and has given important insights both from a theoretical perspective [75; 23], and also for the interpretation of X-ray Thomson scattering experiments [76; 77].
The paper is organized as follows: In Sec. II, we introduce the required theoretical background, including a brief introduction to the PIMC estimation of the ITCF (II.1), its connection to the frequency moments of the dynamic structure factor (II.2), the frequency moment sum rules (II.3), and some basic relations from linear response theory (II.4). Sec. III is devoted to the presentation of our numerical results, starting with an overview of the UEG model in Sec. III.1 and a discussion of the polynomial fitting procedure in Sec. III.2. In Sec. III.3, we present our new data for the first six (i.e., \(\alpha=0,\ldots,5\)) frequency moments of the DSF and compare them to various theoretical estimates. The paper is concluded by a summary and outlook in Sec. IV.
## II Theory
We assume Hartree atomic units throughout this work.
### Path integral Monte Carlo
It is well established that different QMC methods [9] allow for the highly accurate computation of different imaginary-time correlation functions. In this work, we focus on the path integral Monte Carlo approach, which operates at finite temperatures \(T\) and gives one straightforward access [13] to both the Matsubara Green function [16; 78] and also the imaginary-time density-density correlation function that is explored in this work, and that is defined as
\[F({\bf q},\tau)=\left\langle\hat{n}({\bf q},\tau)\hat{n}(-{\bf q},0)\right\rangle _{0}. \tag{3}\]
Since a detailed introduction to the PIMC method has been presented elsewhere [10; 47], we here restrict ourselves to a brief discussion of its main features that are of relevance for the evaluation of Eq. (3). The basic idea behind PIMC is the celebrated classical isomorphism [79], where the complex, nonideal quantum many-body system of interest is mapped onto a classical ensemble of interacting ring polymers. In particular, each quantum particle is represented by an entire path of particle coordinates on each of the \(P\) imaginary-time slices, that are separated by an interval of \(\epsilon=\beta/P\).
This is schematically illustrated in Fig. 1, where we show an example configuration of \(N=4\) electrons in the \(x\)-\(\tau\)-plane. The basic idea of PIMC is to randomly generate a Markov chain of such path configurations that are taken into account according to their appropriate configuration weight; this can be done efficiently based on modern implementations of the Metropolis algorithm [81]. An additional detail originates from the indistinguishability of fermions and bosons under the exchange of particle coordinates, which requires extending the usual partition function by a summation over all possible permutations. In the path-integral picture, such permutations manifest as so-called exchange-cycles [82], which are trajectories with more than a single particle in it; see the two intermediate paths in Fig. 1 that form such a cycle.
While the sampling of all possible permutation topologies within a PIMC simulation is not trivial, it can be efficiently accomplished via the worm algorithm introduced in Refs. [78; 83].
A particular challenge is given by the PIMC simulation of quantum degenerate Fermi systems, as the sign of the respective contribution to the total partition function alternates with each pair exchange. This is the root cause of the notorious fermion sign problem, which leads to an exponential increase in the required compute time with important system parameters such as the temperature \(T\) or the system size \(N\); a topical review on the sign problem in PIMC has been presented in Refs. [84; 80]. On the one hand, the sign problem can be formally avoided by imposing a nodal restriction on the thermal density matrix. Yet, this simplification comes at the cost of an uncontrolled approximation in practice [85], as the exact nodal structure of an interacting quantum many-body system is not known. On the other hand, such a nodal restriction would destroy the imaginary-time translation invariance in any case, which prevents the straightforward estimation of imaginary-time correlation functions. Therefore, we perform direct PIMC simulations in this work that are computationally demanding due to the sign problem, but exact within the given Monte Carlo error bars.
In addition, the direct PIMC method allows for a straightforward estimation of \(F(\mathbf{q},\tau)\) via Eq. (3) in terms of the correlated evaluation of the density in reciprocal space on different imaginary-time slices; see the dashed green lines in Fig. 1.
### Dynamic structure factor and frequency moments
The dynamic structure factor \(S(\mathbf{q},\omega)\) constitutes the central property in scattering experiments and is given by the Fourier transform of the intermediate scattering function \(F(\mathbf{q},t)\)[86],
\[S(\mathbf{q},\omega)=\int_{-\infty}^{\infty}\mathrm{d}t\ e^{i\omega t}F( \mathbf{q},t)\, \tag{4}\]
with the latter being defined as
\[F(\mathbf{q},t)=\left\langle\hat{n}(\mathbf{q},t)\hat{n}(-\mathbf{q},0) \right\rangle_{0}. \tag{5}\]
Naturally, the direct evaluation of Eqs. (4) and (5) requires the availability of dynamic simulations; this is relatively straightforward for classical systems e.g. via molecular dynamics (MD) simulations [87; 88], but constitutes a most formidable challenge for interacting quantum many-body systems. Similarly, the numerical inversion of Eq. (1) to compute \(S(\mathbf{q},\omega)\) based on QMC results for the ITCF \(F(\mathbf{q},\tau)\) constitutes an ill-posed problem as it has been explained in the introduction. In lieu of the full DSF \(S(\mathbf{q},\omega)\), we will show here how one can still obtain dynamic properties of the given system of interest in the form of the frequency moments \(M_{\mathrm{S}}^{(\alpha)}(\mathbf{q})\) defined in Eq. (2) above.
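For classical systems, the route via Eqs. (4) and (5) is indeed direct: with \(F(\mathbf{q},t)\) real and even in \(t\), the DSF reduces to a cosine transform that can be evaluated from time-series data. A minimal sketch follows, where the damped-oscillator input is toy data standing in for actual MD output:

```python
import numpy as np

# Classical route of Eqs. (4)-(5): for real, even F(q,t),
#   S(q,w) = 2 * int_0^infty dt cos(w t) F(q,t).
# The damped oscillator below is toy data, not an MD result.
t = np.linspace(0.0, 40.0, 4000)
F_t = np.exp(-0.05*t)*np.cos(2.0*t)

w = np.linspace(0.0, 5.0, 500)
S_w = 2.0*np.trapz(F_t[None, :]*np.cos(np.outer(w, t)), t, axis=1)
# S_w peaks near the oscillator frequency w = 2
```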
Let us start by considering the derivative of the ITCF with respect to the imaginary-time argument \(\tau\)
\[\frac{\partial^{\alpha}}{\partial\tau^{\alpha}}F(\mathbf{q},\tau)=(-1)^{\alpha }\int\limits_{-\infty}^{\infty}d\omega\,\omega^{\alpha}e^{-\tau\omega}S( \mathbf{q},\omega). \tag{6}\]
In particular, the \(\tau\)-derivative at the origin is given by
\[\frac{\partial^{\alpha}}{\partial\tau^{\alpha}}F(\mathbf{q},\tau)\Big{|}_{ \tau=0}=(-1)^{\alpha}\int\limits_{-\infty}^{\infty}d\omega\,\omega^{\alpha}S( \mathbf{q},\omega)\, \tag{7}\]
thereby giving one direct access to all positive frequency moments of the DSF without the need for an explicit analytic continuation. While being formally exact, we note that the numerical differentiation inherent to Eq. (7) is cumbersome in practice and exclusively relies on high-quality information around \(\tau=0\). A better alternative is to perform a Taylor expansion of \(F(\mathbf{q},\tau)\) around \(\tau=0\),
\[F(\mathbf{q},\tau) =\sum_{\alpha=0}^{\infty}\left\{\frac{1}{\alpha!}\left.\frac{ \partial^{\alpha}F(\mathbf{q},\tau)}{\partial\tau^{\alpha}}\right|_{\tau=0} \tau^{\alpha}\right\} \tag{8}\] \[=\sum_{\alpha=0}^{\infty}c_{\alpha}(\mathbf{q})\tau^{\alpha}. \tag{9}\]
Figure 1: Schematic illustration of the PIMC method and the estimation of the ITCF \(F(\mathbf{q},\tau)\). Shown is a path-integral configuration with \(N=4\) electrons in the \(\tau\)-\(x\)-plane. The single pair exchange of the two paths in the center leads to a negative contribution in the case of fermions, thereby contributing to the notorious fermion sign problem [80]. The dotted green lines illustrate the correlated evaluation of the density in reciprocal space at different imaginary-time arguments for the estimation of \(F(\mathbf{q},\tau)\) via Eq. (3). The yellow Gaussian on the RHS illustrates the kinetic contribution to the thermal density matrix, which effectively connects beads on adjacent imaginary-time slices via a harmonic spring potential. Taken from Ref. [23] with the permission of the authors.
In practice, we truncate Eq. (9) at a finite degree \(\alpha_{\rm max}\) and perform a corresponding polynomial fit to our PIMC data for \(F({\bf q},\tau)\). Combining Eqs. (7) and (9), we thus obtain the frequency moments of the DSF as
\[M_{\rm S}^{(\alpha)}({\bf q})=\left(-1\right)^{\alpha}\alpha!\;c_{\alpha}({\bf q })\;. \tag{10}\]
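In code, the procedure of Eqs. (9) and (10) amounts to a single polynomial fit. A minimal sketch (continuing the synthetic example above; the truncation degree is illustrative and in practice has to be chosen from the data, cf. Appendix A):

```python
import numpy as np
from math import factorial

def moments_from_itcf(tau, F_tau, a_max):
    """Fit Eq. (9) and return M^(alpha), alpha = 0, ..., a_max, via Eq. (10)."""
    # np.polyfit returns the highest power first; flip so c[a] multiplies tau**a
    c = np.polyfit(tau, F_tau, deg=a_max)[::-1]
    return np.array([(-1)**a*factorial(a)*c[a] for a in range(a_max + 1)])

M = moments_from_itcf(tau, F, a_max=6)
# M[0] is the normalization M^(0); for the Gaussian model above, M[1] -> w0
```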
### Frequency moment sum rules
In what follows, we briefly summarize the established literature background on the DSF frequency moment sum rules. From a practical perspective, their main utility lies in the computation of the various \(\langle\omega^{\alpha}\rangle_{S}\) purely on the basis of equilibrium expectation values and, therefore, without the need for dynamic simulations or of an analytic continuation.
The _inverse frequency moment_ is easily calculated by combining the static limit of the Kramers-Kronig relation for the real part of the density response function (see also Sec. II.4 below) with the fluctuation-dissipation theorem and the detailed balance condition. It is given by [89; 90]
\[\langle\omega^{-1}\rangle_{S}=-\frac{\chi({\mathbf{q}})}{2n}\,, \tag{11}\]
where \(\chi({\mathbf{q}})\equiv\chi({\mathbf{q}},0)\) is the static density response function that is a real quantity and \(n=N/{\cal V}\) is the number density with \({\cal V}=L^{3}\) the volume of the PIMC simulation cell. Given its origin, it is no surprise that it also directly follows from the ITCF via the imaginary time version of the fluctuation-dissipation theorem [23; 91]
\[\chi({\mathbf{q}})=-n\int_{0}^{\beta}d\tau F({\mathbf{q}},\tau)\,, \tag{12}\]
leading to the equivalent relation
\[\langle\omega^{-1}\rangle_{S}=\frac{1}{2}\int_{0}^{\beta}d\tau F({\mathbf{q}}, \tau)\,. \tag{13}\]
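Numerically, Eqs. (11) and (13) require nothing beyond a quadrature of the ITCF; a short sketch, with \(F\) and \(\tau\) as in the snippets above and a placeholder value for the density \(n\):

```python
import numpy as np

n = 1.0                              # placeholder density
M_inv = 0.5*np.trapz(F, tau)         # <omega^{-1}>_S, Eq. (13)
chi_q = -2.0*n*M_inv                 # static density response, Eq. (11)
```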
The _zero frequency moment_ is the normalization of the DSF and directly emerges from the static structure factor (SSF) definition \(S({\mathbf{q}})=\langle\hat{n}({\mathbf{q}})\hat{n}(-{\mathbf{q}})\rangle_{0}=F({\mathbf{q}},0)\). It simply reads [92; 49; 88]
\[\langle\omega^{0}\rangle_{S}=S({\mathbf{q}})\,. \tag{14}\]
Recall that the combination of the zero frequency moment with the fluctuation-dissipation theorem is the major building block of all schemes of the self-consistent dielectric formalism and the reason that it has been dubbed as self-consistent [93; 94; 47; 95].
Odd frequency moments of the imaginary part of the density response function \(\langle\omega^{2m+1}\rangle_{\rm Im\chi}\) can be expressed as the statistical averages of equal-time commutators at equilibrium [94]. This general result stems from the high-frequency expansion of the Kramers-Kronig relation and the short-time expansion of the standard definition \(\chi({\mathbf{q}},t)=-{\rm i}{\rm H}(t)\langle[\hat{n}({\mathbf{q}},t),\hat{n}(-\mathbf{q},0)]\rangle_{0}\), where \({\rm H}(.)\) is the Heaviside step function [92; 96]. Repeated application of the Heisenberg equation of motion converts the arbitrary-order time derivatives into iterated commutators, reminiscent of those emerging in the Baker-Campbell-Hausdorff formula, with the number of Hamiltonian nests coinciding with the order of the frequency moment [49]. The connection between \(\langle\omega^{2m+1}\rangle_{\rm Im\chi}\) and \(\langle\omega^{2m+1}\rangle_{S}\) is naturally established by the fluctuation-dissipation theorem and it reads as
\[\langle\omega^{2m+1}\rangle_{S}=-\frac{1}{2\pi n}\langle\omega^{2m+1}\rangle_ {\rm Im\chi}\,. \tag{15}\]
The _first frequency moment_, the universal f-sum rule, expresses particle number conservation [97] and is given by [98; 99; 90; 49]
\[\langle\omega^{1}\rangle_{S}=\frac{q^{2}}{2}\,. \tag{16}\]
The _third frequency moment_, the cubic sum rule, involves the static structure factor (or the pair correlation function) and is given by [92; 99; 49]
\[\langle\omega^{3}\rangle_{S} = \frac{q^{2}}{2}\left\{\frac{q^{4}}{4}+2q^{2}K+4\pi n+\right. \tag{17}\] \[\left.\frac{4\pi}{{\cal V}}\sum_{{\mathbf{k}}\neq{\mathbf{q}},0}\left( \frac{{\mathbf{q}}\cdot{\mathbf{k}}}{qk}\right)^{2}\left[S({\mathbf{q}}-{\mathbf{k}})-S({\mathbf{k }})\right]\right\}\,,\]
where \(K=\langle\sum_{i}\hat{p}_{i}^{2}/2\rangle_{0}\) is the total kinetic energy. The first two terms are kinetic in nature and the remaining terms are of pair-interaction nature, with the third term being the Hartree contribution. In the literature, an equivalent form is also encountered that involves a symmetrized Coulomb pair interaction rather than the symmetrized SSF [100; 101; 102; 103; 104; 105; 42]. It is worth noting that the third frequency moment is directly connected to the high-frequency limit of the dynamic local field correction (LFC) \(G({\mathbf{q}},\omega)\)[101; 103]. This has been exploited for the construction of a static LFC functional of the SSF in the generalized random phase approximation of Pathak & Vashishta [104] and for the construction of a dynamic LFC functional of the SSF in the dielectric scheme of Utsumi & Ichimaru [105]. To our knowledge, the nested commutator formula has not yet been utilized for the computation of higher-order odd-frequency moments. Higher moments have only been reported within the classical limit [106; 107; 108; 109], where the commutators are replaced with Poisson brackets and the calculations simplify considerably after the application of Yvon's theorem [88]. Nevertheless, there should be a correspondence between the frequency order of the moments and the correlation order of the static distribution functions. For instance, the fifth frequency moment must involve the triplet structure factor \(S^{(3)}({\mathbf{q}},{\mathbf{q}}^{\prime})=\langle\hat{n}({\mathbf{q}})\hat{n}({\mathbf{q}}^{\prime})\hat{n}(-{\mathbf{q}}-{\mathbf{q}}^{\prime})\rangle_{0}\) in reciprocal space or the ternary correlation function \(g^{(3)}({\mathbf{r}},{\mathbf{r}}^{\prime})\) in real space [100].
Furthermore, it is important to point out that the even moments of the imaginary part of the density response
function \(\langle\omega^{2m}\rangle_{\rm Im\chi}\) are zero given the odd frequency parity of \({\rm Im}\chi(\mathbf{q},\omega)\)[49]. Consequently, the even moments of the DSF \(\langle\omega^{2m}\rangle_{S}\) cannot be reduced to the equilibrium expectation value of an equal-time commutator. Finally, we point out that in the classical limit \(\beta\hbar\omega\ll 1\), the fluctuation-dissipation theorem establishes a different correspondence rule, this time between the even DSF moments and the odd \({\rm Im}\chi\) moments, that reads as [108; 109; 110; 111]
\[\langle\omega^{2m}\rangle_{S}^{\rm cl}=-\frac{1}{\pi n\beta}\langle\omega^{2m- 1}\rangle_{\rm Im\chi}^{\rm cl}\,, \tag{18}\]
and the detailed balance condition collapses to the even frequency parity of the DSF, which implies that the odd DSF moments identically vanish, _i.e._[109; 110; 111]
\[\langle\omega^{2m+1}\rangle_{S}^{\rm cl}=0\,. \tag{19}\]
### Linear response theory
An alternative route towards the dynamic structure factor comes from linear response theory [49; 112]. More specifically, the well-known fluctuation-dissipation theorem relates the imaginary part of the dynamic linear density response function \(\chi(\mathbf{q},\omega)\) to the DSF,
\[S(\mathbf{q},\omega)=-\frac{{\rm Im}\chi(\mathbf{q},\omega)}{\pi n(1-e^{-\beta \omega})}. \tag{20}\]
Here \(\chi(\mathbf{q},\omega)\) describes the response of a given system to an external perturbation of wave vector \(\mathbf{q}\) and frequency \(\omega\), see Ref. [112] for a recent comprehensive discussion. It is more convenient to utilize the following exact expression for \(\chi(\mathbf{q},\omega)\)[96]
\[\chi(\mathbf{q},\omega)=\frac{\chi_{0}(\mathbf{q},\omega)}{1-\frac{4\pi}{q^{ 2}}\left[1-G(\mathbf{q},\omega)\right]\chi_{0}(\mathbf{q},\omega)}\, \tag{21}\]
where \(\chi_{0}(\mathbf{q},\omega)\) denotes the Lindhard function which describes the density response of a noninteracting Fermi gas and can be easily evaluated in practice [49]. The complete wave-vector and frequency-resolved information about electronic exchange-correlation effects is encoded into the dynamic local field correction \(G(\mathbf{q},\omega)\), which is formally equivalent to the exchange-correlation kernel \(K_{\rm xc}(\mathbf{q},\omega)\) known from time-dependent density functional theory simulations [113]. While the exact \(G(\mathbf{q},\omega)\) is generally unknown, Eqs. (20) and (21) allow one to compute various approximations to \(S(\mathbf{q},\omega)\). For example, when employing \(G(\mathbf{q},\omega)\equiv 0\), there is no polarization field and the random phase approximation (RPA) emerges, which constitutes a mean-field description [7]. Or, when assuming that \(G(\mathbf{q},\omega)\equiv 1\), the polarization field exactly cancels out the mean field and the non-interacting density response \(\chi(\mathbf{q},\omega)\equiv\chi_{0}(\mathbf{q},\omega)\) is retrieved. A more sophisticated and accurate approach is given by the _static approximation_ \(G(\mathbf{q},\omega)\equiv G(\mathbf{q},0)\), which has been shown [18; 25] to give highly accurate results for \(S(\mathbf{q},\omega)\) in the regime of metallic densities \(r_{s}\lesssim 4\). In practice, the _static approximation_ can be readily evaluated either using the neural-net representation of \(G(\mathbf{q},0;r_{s},\Theta)\) from Ref. [114], or employing the analytic representation of \(G(\mathbf{q},0;r_{s},\Theta)\) from Ref. [115].
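In practice, Eq. (21) with a static kernel is a one-line construction once a Lindhard routine and a static LFC are available. The sketch below assumes user-supplied callables `chi0(q, w)` and `G0(q)` (for instance, wrappers around the representations of Refs. [114; 115]), which are not provided here:

```python
import numpy as np

def chi_static_approx(q, w, chi0, G0):
    """Eq. (21) with G(q,w) -> G(q,0); chi0 and G0 are assumed callables."""
    x0 = chi0(q, w)                                   # complex Lindhard response
    return x0/(1.0 - 4.0*np.pi/q**2*(1.0 - G0(q))*x0)

# The RPA is recovered by passing G0 = lambda q: 0.0
```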
In the context of the present work, the main utility of Eqs. (20) and (21) is a) to generate realistic synthetic data that can be used to verify our idea, and b) to generate approximate reference data for the frequency moments that we extract from our PIMC data for the UEG.
## III Results
All PIMC results that are presented in this work have been obtained using the extended ensemble approach introduced in Ref. [116], which is a canonical adaption of the worm algorithm by Boninsegni _et al._[78; 83]. More specifically, we employ a primitive factorization scheme with \(P=200\) imaginary-time slices, and the convergence with \(P\) has been carefully checked. Furthermore, we have carried out simulations with \(N=34\) unpolarized electrons, and finite-size effects are known to be small in this regime [26; 55; 117].
### Uniform electron gas model
The UEG [47; 48; 49], also known as _jellium_, is the quantum version of the classical one-component plasma [87; 88; 118; 89] and constitutes one of the most fundamental model systems in physics and related disciplines. From a theoretical perspective, it is convenient to characterize the UEG in terms of a few reduced parameters [50]. The density parameter serves as the quantum coupling parameter of the UEG and is defined as the ratio of the Wigner-Seitz radius to the Bohr radius, \(r_{s}=d/a_{\rm B}\). In the limit of \(r_{s}\to 0\) (i.e., high density), the UEG becomes an ideal Fermi gas as the ratio of potential to kinetic energy vanishes proportionally to \(r_{s}\) in this regime. Conversely, the UEG becomes a strongly coupled electron liquid [18; 61; 65] for \(r_{s}\gtrsim 10\), which gives rise to a number of interesting physical phenomena such as the roton minimum in the spectrum of density fluctuations [119; 66]. In addition, the degeneracy temperature \(\Theta=k_{\rm B}T/E_{\rm F}\) indicates the degree of quantum degeneracy, with \(\Theta\ll 1\) being fully degenerate and \(\Theta\gg 1\) being semi-classical [63]. In principle, a third parameter is given by the spin-polarization \(\xi=(N^{\uparrow}-N^{\downarrow})/N\) with \(N^{\uparrow}\) and \(N^{\downarrow}\) being the number of electrons with majority and minority spin-orientation. In the present work, we restrict ourselves to the fully unpolarized, i.e., the paramagnetic case with \(N^{\uparrow}=N^{\downarrow}\) and \(\xi=0\). For completeness, we note that finite values of \(0\leq\xi\leq 1\) (see also Refs. [47; 52; 56; 58; 120; 121; 122; 123; 124]) are relevant for various applications such as spin-density functional theory calculations in quantum chemistry [125].
In the ground-state limit with \(\Theta=0\), the UEG constitutes a simple model for the conduction electrons in alkali metals [90]. Moreover, the accurate parametrization of its properties [124; 126; 127; 128] based on highly accurate ground-state QMC simulations [129; 130; 131; 132] has facilitated many applications including the arguably unrivaled success of density functional theory with respect to the description of real materials [132]. In this work, we consider the case of \(\Theta=1\) and \(r_{s}\sim 1\), which is commonly referred to as _warm dense matter_ (WDM) in the literature [67; 47; 68]. These extreme conditions naturally occur in astrophysical objects such as giant planet interiors [69; 72] and can be realized in the laboratory for example in inertial confinement fusion experiments [73; 74]. From a theoretical perspective, the accurate description of WDM is notoriously challenging due to the intriguingly intricate interplay of Coulomb coupling with strong thermal excitations and quantum degeneracy effects such as Pauli blocking and diffraction [67; 68]. Therefore, first accurate results for the warm dense UEG [133; 47] have become available only recently based on different thermal QMC methods [57; 58; 134; 135; 136; 137; 138; 139]. In addition, we consider the case of \(\Theta=1\) and \(r_{s}=10\), which is located at the margin of the electron liquid regime. These conditions are particularly interesting as they exhibit a wealth of interesting phenomena, including the aforementioned _roton minimum_ in the dynamic structure factor [25; 18] which has been explained only recently [66].
### Canonical representation of the ITCF
In Fig. 2, we demonstrate the proposed fitting procedure for the UEG at \(r_{s}=10\) and \(\Theta=1\), and for the wave number \(q=0.63q_{\text{F}}\). The red circles in panel a) have been computed by obtaining \(S(\mathbf{q},\omega)\) within the _static approximation_ (see Sec. II.4), and subsequently evaluating the two-sided Laplace transform Eq. (1) on a realistic \(\tau\)-grid (\(P=200\)). The solid black curve depicts a corresponding canonical fit according to Eq. (9). Empirically, we find a maximum significant order of \(\alpha_{\text{max}}=10\) for the polynomial expansion for this example, see Appendix A for more details about the fitting procedure. For completeness, we note that \(F(\mathbf{q},\tau)\) is symmetric around \(\tau=\beta/2\), i.e., \(F(\mathbf{q},\tau)=F(\mathbf{q},\beta-\tau)\). In practice, even though the information about the ITCF for \(\beta/2<\tau\leq\beta\) is technically redundant, it is still strongly beneficial to fit Eq. (9) over the entire \(\tau\)-range as the symmetry condition is not automatically incorporated into the canonical representation of the polynomial. Therefore, the range \(\beta/2<\tau\leq\beta\) significantly helps to determine the coefficients \(c_{\alpha}(\mathbf{q})\) and in this way to get more reliable information in particular about the higher frequency moments \(M_{\text{S}}^{(\alpha)}(\mathbf{q})\). We further note that the detailed discussion of the physical behavior of \(F(\mathbf{q},\tau)\) is beyond the scope of the present work and has been presented in the recent Refs. [23; 75].
In Fig. 3, we show the corresponding frequency moments that have been obtained from the canonical fitting
Figure 2: Illustration of the fitting procedure for the UEG at \(r_{s}=10\) and \(\Theta=1\) for \(q=0.63q_{\text{F}}\). a) Synthetic data for \(F(\mathbf{q},\tau)\) within the _static approximation_ (red) and corresponding canonical fit of order \(\alpha_{\text{max}}=10\) (black). b) PIMC data for \(F(\mathbf{q},\tau)\) (green) and corresponding fit with \(\alpha_{\text{max}}=6\); c) magnified segment showing the PIMC error bars.
Figure 3: Comparison of synthetic frequency moments (lines) computed directly from \(S(\mathbf{q},\omega)\) via Eq. (2) for the ideal Fermi gas (yellow) and within the _static approximation_ (red) to results that have been extracted from the coefficients of canonical fits to the ITCF (circles) via Eq. (10).
coefficients via Eq. (10) as a function of the wave number \(q\). The panels a)-f) show the orders \(\alpha=0,\ldots,5\). The solid red and dashed yellow curves show exact reference data that have been directly computed via Eq. (2) from synthetic results for \(S(\mathbf{q},\omega)\) within the _static approximation_ and for the ideal Fermi gas model, respectively. The corresponding circles show the frequency moments that have been extracted from the ITCF via Eq. (10). Clearly, the proposed extraction of the \(M_{\rm S}^{(\alpha)}(\mathbf{q})\) from canonical fits to the ITCF works exceptionally well in all cases, and over the entire relevant range of wave numbers. This constitutes a strong empirical verification of our method and serves as an important benchmark for the following analysis of PIMC results, for which reliable benchmark data exist only for a subset of moments. In particular, this analysis of synthetic results demonstrates that even the extraction of the fifth moment \(M_{S}^{(5)}(\mathbf{q})\) is, in principle, possible.
In Fig. 2b), we show the canonical fitting to our PIMC data for \(F(\mathbf{q},\tau)\) for the same conditions as in panel a). In this case, the input data for the ITCF are afflicted with statistical error bars, see the magnified segment shown in panel c). Therefore, our fitting procedure gives access to a smaller number of polynomial coefficients compared to the synthetic data from the _static approximation_, and we find \(\alpha_{\text{max}}=6\) in this case.
### Frequency moments
Let us begin our analysis of the frequency moments extracted from PIMC results for the ITCF with a discussion of \(M_{S}^{(0)}(\mathbf{q})=S(\mathbf{q})=F(\mathbf{q},0)\) shown in Fig. 4. In the following, all results have been obtained for \(\Theta=1\), and panels a) and b) show results for \(r_{s}=10\) and \(r_{s}=4\). From a physical perspective, these cases correspond to an electron liquid [18] that exhibits interesting effects such as the roton feature in the dispersion [66], and to a metallic density that can be realized either in experiments with e.g. sodium [139] or in hydrogen jets [140]. The green squares show our direct PIMC results for \(S(\mathbf{q})\) and are in perfect agreement with the zero-order fitting coefficient \(c_{0}(\mathbf{q})\) for both densities and over the entire range of wave numbers. As a reference, we also include synthetic data computed via Eqs. (20) and (21) within the RPA (dashed blue), the _static approximation_ (solid red), and for the ideal Fermi gas (dotted yellow). Overall, the _static approximation_ that has been evaluated using the neural-net representation from Ref. [114] exhibits the highest degree of accuracy, as expected. The RPA and the ideal Fermi gas model are substantially less accurate and can easily be distinguished both from the exact PIMC results and from the extracted fitting coefficients.
In Fig. 5, we repeat this analysis for the first moment \(M_{S}^{(1)}(\mathbf{q})\), with the solid green curve depicting the exact f-sum rule, Eq. (16). Clearly, all data sets are in perfect agreement with the latter, including all synthetic curves. As an alternative route to the polynomial expansion Eq. (9), one might also attempt to numerically evaluate the first derivative of the ITCF with respect to \(\tau\) on the given PIMC \(\tau\)-grid,
\[\left.\frac{\partial F(\mathbf{q},\tau)}{\partial\tau}\right|_{\tau=0}\approx \frac{F(\mathbf{q},\epsilon)-F(\mathbf{q},0)}{\epsilon}\,. \tag{22}\]
The results are shown as the yellow circles in Fig. 6a). Evidently, the numerical derivative only agrees with the exact f-sum rule for \(q\lesssim 3q_{\text{F}}\), but becomes increasingly inaccurate in the limit of large \(q\). This observation can be directly traced back to the behavior of the ITCF, as it is illustrated in Fig. 6b). Specifically, the ITCF becomes increasingly steep for large \(q\). For \(q=0.63q_{\text{F}}\) (red crosses), the ITCF is comparably flat, and the corresponding evaluation of Eq. (22) is accurate. In contrast, we find a very sharp \(\tau\)-decay around \(\tau=0\) for \(q=4.56q_{\text{F}}\) (green stars), and the available \(\tau\)-grid in the PIMC simulation is not
Figure 4: The zero frequency moment \(M_{S}^{(0)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Green squares: PIMC reference data for the static structure factor \(S(\mathbf{q})=F(\mathbf{q},0)\); black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
sufficient to accurately estimate the first derivative directly. At the same time, we stress that the proposed polynomial fit of \(F(\mathbf{q},\tau)\) [Eq. (9)] completely overcomes this issue and, therefore, constitutes the preferable option.
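To make this comparison concrete, the following minimal sketch contrasts the two routes to the frequency moments. It is written under two stated assumptions: that Eq. (9) is the truncated Taylor expansion \(F(\mathbf{q},\tau)\approx\sum_{\alpha}c_{\alpha}\tau^{\alpha}\), and that Eq. (10) reads \(M_{S}^{(\alpha)}(\mathbf{q})=(-1)^{\alpha}\,\alpha!\,c_{\alpha}(\mathbf{q})\), as follows from differentiating Eq. (1) at \(\tau=0\). The naive `np.polyfit` merely stands in for the more stable Lagrange-basis regression of Appendix A that is used in practice.

```python
import numpy as np
from math import factorial

def moments_from_fit(tau, F, alpha_max):
    """Map the Taylor coefficients of a polynomial fit of F(q, tau) to
    frequency moments, assuming M^(a) = (-1)^a * a! * c_a."""
    c = np.polyfit(tau, F, deg=alpha_max)[::-1]   # c[0], ..., c[alpha_max]
    return np.array([(-1.0) ** a * factorial(a) * c[a]
                     for a in range(alpha_max + 1)])

def m1_forward_difference(tau, F):
    """The approximate derivative of Eq. (22), assuming tau[0] = 0; accurate
    only while F(q, tau) is flat around tau = 0, i.e., for small q."""
    return -(F[1] - F[0]) / (tau[1] - tau[0])
```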
Next, we consider the second moment \(M_{S}^{(2)}(\mathbf{q})\) shown in Fig. 7. In this case, no exact reference data are available either from a sum-rule or from another source. At the same time, the RPA and _static approximation_ are in close agreement with each other and also closely agree with the extracted moments for both densities. In contrast, the reference data computed from the ideal Fermi gas model exhibit significant deviations for small wave numbers. Lastly, we point out that the extracted moments for the higher density of \(r_{s}=4\) exhibit small yet visible fluctuations in \(M_{S}^{(2)}(\mathbf{q})\) for \(q\gtrsim 3q_{\rm F}\), while no such fluctuations are visible to the naked eye for \(r_{s}=10\). This is a direct consequence of the increased statistical uncertainty in the PIMC data, which, in turn, is due to the more severe fermion sign problem at \(r_{s}=4\) [80]. At the same time, we find that the observed fluctuations in \(M_{S}^{(2)}(\mathbf{q})\) are well captured by the corresponding error bars, the calculation of which is explained in more detail in Appendix A.
A particularly interesting frequency moment of the DSF is given by \(M_{S}^{(3)}(\mathbf{q})\), as it is directly connected to the high-frequency limit of the local field correction [101]. The corresponding results are shown in Fig. 8, with the green squares being reference data computed from the cubic sum rule, Eq. (17), using PIMC data for the kinetic energy \(K\) and the static structure factor \(S(\mathbf{q})\). Overall, the latter is in close agreement with the RPA and _static approximation_ data sets over the entire \(q\)-range for
Figure 6: Panel a) shows the first frequency moment \(M_{S}^{(1)}(\mathbf{q})\) for the UEG at \(\Theta=1\) and \(r_{s}=10\), with the yellow circles having being evaluated from the approximate derivative Eq. (22); b) \(\tau\)-dependence of PIMC results for the ITCF for \(q=0.63q_{\rm F}\) (red crosses) and \(q=4.56q_{\rm F}\) (green stars).
Figure 5: The first frequency moment \(M_{S}^{(1)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Green line: f-sum rule, Eq. (16); black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
both densities; the ideal Fermi gas model again deviates for small \(q\). For \(r_{s}=10\), the proposed canonical fitting method gives accurate results over four orders of magnitude in \(M_{S}^{(3)}(\mathbf{q})\) and is in good agreement with the sum-rule reference data. For \(r_{s}=4\), the agreement is noticeably worse. This is likely a consequence of the larger statistical errors in the PIMC results for the ITCF, which are, however, only captured by the error bars of the extracted moments for \(q\gtrsim q_{\mathrm{F}}\). At the same time, it is important to note that the results for the cubic sum rule, too, are not carved in stone and might be subject to a small bias, e.g., due to the discrete sum in Eq. (17) that only becomes a continuous integral in the thermodynamic limit (i.e., \(N\to\infty\)).
Let us proceed to the fourth moment \(M_{S}^{(4)}(\mathbf{q})\) shown in Fig. 9. In this case, we find that our method is still capable of accurately resolving \(M_{S}^{(4)}(\mathbf{q})\) over five orders of magnitude for \(r_{s}=10\), whereas the quality is noticeably worse for \(r_{s}=4\). Still, we obtain valuable insights into the correct qualitative behavior even at the higher density.
Finally, we analyze \(M_{S}^{(5)}(\mathbf{q})\) in Fig. 10. In this case, both the synthetic data and our extracted values span seven orders of magnitude in the depicted relevant range of wave numbers. For \(r_{s}=10\), we obtain reasonable results for all \(q\), although noticeable fluctuations do appear in the extracted moments. For \(r_{s}=4\), the quality is significantly worse, as is to be expected, and the fluctuations are not fully captured by the error bars.
Figure 8: The third frequency moment \(M_{S}^{(3)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Green squares: cubic sum rule, Eq. (17); black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
Figure 7: The second frequency moment \(M_{S}^{(2)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
## IV Summary and outlook
In this work, we have presented a new, formally exact method to extract all positive integer frequency moments of dynamic properties from imaginary-time correlation functions. As a practical example, we have investigated the DSF \(S(\mathbf{q},\omega)\) of the UEG, which is directly connected to the ITCF \(F(\mathbf{q},\tau)\) via the two-sided Laplace transform Eq. (1). We have demonstrated that the frequency moments \(M_{S}^{(\alpha)}(\mathbf{q})\) directly correspond to the fitting coefficients from a polynomial fit to PIMC data for the ITCF in the canonical representation. In practice, we find good agreement between our newly extracted results and the existing sum rules for \(\alpha=0,1,3\). In addition, we have presented, to our knowledge, the first data for the cases of \(\alpha=2,4,5\). These results are interesting in their own right and will serve as a valuable benchmark for future developments such as the derivation of the \(M_{S}^{(5)}(\mathbf{q})\) sum-rule featuring static three-body correlation functions, or the construction of novel DSF approximation schemes. From a physical perspective, we observe that only \(M_{S}^{(0)}(\mathbf{q})\) (and also \(M_{S}^{(-1)}(\mathbf{q})\), see e.g. Ref. [112]) exhibits a pronounced structure with respect to the wave number \(q\), whereas the cases of \(\alpha=1,\dots,5\) are strictly monotonic. This constitutes a nontrivial finding that deserves to be explored in more depth in future works.
We are convinced that our work opens up enticing opportunities for impactful future research in a gamut of research fields, including the study of ultracold atoms [16; 22; 24], warm dense matter [18; 25; 141], as well as condensed matter physics [27; 31]. For example, the accurate knowledge of different \(M_{S}^{(\alpha)}(\mathbf{q})\) is directly useful to further constrain the analytic continuation from the imaginary-time domain to real frequencies [15; 16].
Figure 10: The fifth frequency moment \(M_{S}^{(5)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
Figure 9: The fourth frequency moment \(M_{S}^{(4)}(\mathbf{q})\) for the UEG at \(\Theta=1\) for a) \(r_{s}=10\) and b) \(r_{s}=4\). Black crosses: moments extracted via Eq. (10); dashed blue, solid red, and dotted yellow: reference data within RPA, _static approximation_, and for the ideal Fermi gas computed from synthetic \(S(\mathbf{q},\omega)\) directly via Eq. (2).
Moreover, the frequency moments are the key input for the method of moments [43], which constitutes a promising route for the direct calculation of dynamic properties based on static QMC simulation data without the need for an explicit numerical inversion of Eq. (1). The corresponding recent results for the warm dense UEG based on the odd moments already look promising [46], and it is likely that the incorporation of the hitherto unknown moments of \(\alpha=2,4,5\) would lead to further improvement.
In addition to its value for quantum many-body theory, our approach for the study of the frequency moments of the DSF is also of direct practical use for the interpretation of XRTS experiments of matter under extreme conditions. In particular, the measured XRTS intensity signal is given by the convolution of the DSF with the combined probe and instrument function \(R(\omega)\)[142],
\[I(\mathbf{q},\omega)=S(\mathbf{q},\omega)\circledast R(\omega)\,. \tag{23}\]
In practice, XRTS thus does not give one direct access to the DSF (and its frequency moments \(M_{\alpha}\)) as the deconvolution is typically rendered highly unstable by the inevitable noise in the experimental measurement. This restriction does not pose an obstacle in the Laplace domain, where one can make use of the well-known convolution theorem, which, in combination with Eqs. (1) and (23), gives [76, 23]
\[F(\mathbf{q},\tau)=\frac{\mathcal{L}\left[I(\mathbf{q},\omega)\right]}{ \mathcal{L}\left[R(\omega)\right]}. \tag{24}\]
Since, in addition to the actual intensity \(I(\mathbf{q},\omega)\), the source and instrument function is often known with high accuracy, e.g., from additional source monitoring as employed at modern X-ray free-electron laser facilities [143], the evaluation of the RHS of Eq. (24) gives one access to the ITCF \(F(\mathbf{q},\tau)\) of the probed system. Therefore, our new framework for the estimation of the frequency moments \(M_{S}^{(\alpha)}(\mathbf{q})\) is also directly useful for the interpretation of XRTS experiments on real materials.
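As an illustration of this route, the sketch below evaluates Eq. (24) by simple trapezoidal quadrature. It assumes that \(I(\mathbf{q},\omega)\) and \(R(\omega)\) are sampled on a common, sufficiently wide \(\omega\)-grid, and it ignores the careful noise treatment that a real XRTS analysis requires.

```python
import numpy as np

def two_sided_laplace(omega, f, tau):
    """L[f](tau) = int dw f(w) exp(-tau * w), via trapezoidal quadrature."""
    return np.array([np.trapz(f * np.exp(-t * omega), omega) for t in tau])

def itcf_from_xrts(omega, intensity, instrument, tau):
    """Eq. (24): the deconvolution that is unstable in the frequency domain
    becomes a stable pointwise division in the Laplace domain."""
    return (two_sided_laplace(omega, intensity, tau)
            / two_sided_laplace(omega, instrument, tau))
```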
Finally, we stress that our idea is not limited to the DSF and the corresponding ITCF \(F(\mathbf{q},\tau)\) and can easily be extended to other dynamic properties. For example, the Matsubara Green function \(G_{\mathrm{M}}(\mathbf{q},\tau)\) [see Ref. [78] for an accessible discussion] is connected to the single-particle spectral function \(A(\mathbf{q},\omega)\) via the relation [144, 16]
\[G_{\mathrm{M}}(\mathbf{q},\tau)=\int_{-\infty}^{\infty}\frac{\mathrm{d}\omega }{2\pi}\frac{e^{-\tau\omega}}{1\pm e^{-\beta\omega}}A(\mathbf{q},\omega)\, \tag{25}\]
with the \(\pm\) in the denominator corresponding to fermions and bosons, respectively. Single-particle excitations of the system are most visible in \(A(\mathbf{q},\omega)\) as peaks; plasmons as well as other quasi-particles leave their signatures in the spectral function. An integration over the momenta produces the density of states from the spectral function [49]. It is easy to see that the frequency moments of \(A(\mathbf{q},\omega)\), here denoted as \(M_{A}^{(\alpha)}(\mathbf{q})\), can be obtained from \(G_{\mathrm{M}}(\mathbf{q},\tau)\) via
\[M_{A}^{(\alpha)}(\mathbf{q})=(-1)^{\alpha}\,2\pi\left\{\left.\frac{\partial^{\alpha}G_{\mathrm{M}}(\mathbf{q},\tau)}{\partial\tau^{\alpha}}\right|_{\tau=0}\pm\left.\frac{\partial^{\alpha}G_{\mathrm{M}}(\mathbf{q},\tau)}{\partial\tau^{\alpha}}\right|_{\tau=\beta}\right\}\,. \tag{26}\]
In contrast to the DSF, the frequency moments of the single-particle spectral function thus require evaluation of the derivatives around both \(\tau=0\) and \(\tau=\beta\). This makes intuitive sense as the Matsubara Green function does not have a symmetry relation around \(\tau=\beta/2\) such as \(F(\mathbf{q},\tau)\), for which both derivatives would be equal up to a sign change. The practical evaluation of Eq. (26) thus requires polynomial expansions around both boundary values of \(\tau\), which does not pose an obstacle.
## Appendix A Methodology of the fitting scheme
Classic one-dimensional polynomial interpolation goes back to Newton, Lagrange, and others; see, e.g., Ref. [145]. Its generalization to regression tasks was mainly proposed and developed by Gauss, Markov, and Gergonne [146, 147] and remains omnipresent in mathematics and computing to this day. As shown in Ref. [148], however, there are theoretical and practical limits when it comes to fitting functions sampled on equidistant data nodes or grids. In this context, the term "over-fitting" is often used to refer to Runge's phenomenon, a classic problem in applied mathematics [149, 150, 151].
In Ref. [152], the problem is addressed even in multiple dimensions, and, based on the results in Refs. [153, 154, 155, 156], the implementations are condensed into the open-source package minterpy[157]. In contrast to naively fitting functions with respect to the canonical polynomial basis \(1,x,x^{2},\ldots,x^{n}\), minterpy rests on Lagrange polynomials
\[l_{i}(x)=\prod_{j\neq i}^{n}\frac{x-q_{j}}{q_{i}-q_{j}}\,,\quad l_{i}(q_{j})= \delta_{ij}\,,0\leq i,j\leq n\]
being located in the Chebyshev-Lobatto nodes [158]
\[q_{i}\in\mathrm{Cheb}_{n}=\left\{\cos\left(\frac{i\pi}{n}\right):0\leq i\leq n \right\}\,.\]
Fitting a function \(f:[-1,1]\longrightarrow\mathbb{R}\), sampled in (equidistant) data points \(P=\{p_{1},\ldots,p_{m}\}\subseteq[-1,1]\), \(F=(f(p_{1}),\ldots,f(p_{m}))\in\mathbb{R}^{m}\), \(m\in\mathbb{N}\), is realised by solving a classic least-squares problem
\[C=\mathrm{argmin}_{X\in\mathbb{R}^{n+1}}\|RX-F\|^{2}\,,\]
where \(R=(r_{k,i})_{1\leq k\leq m,1\leq i\leq n+1}\in\mathbb{R}^{m\times n+1}\), with \(r_{k,i}=l_{i}(p_{k})\), denotes the regression matrix. Once the coefficients \(C=(c_{0},\ldots,c_{n})\) are computed, the polynomial
\(Q_{f,n}\) of degree \(n\in\mathbb{N}\) fitting the function \(f\) is given by
\[f(x)\approx Q_{f,n}(x)=\sum_{i=0}^{n}c_{i}l_{i}(x)\,.\]
This Lagrange-regression scheme turns out to maintain stability for high polynomial degrees and shows more approximation power, suppressing Runge's phenomenon, than regression with respect to the canonical basis [152; 155; 158]. minterpy includes a domain re-scaling routine and a basis transformation that enables a numerically stable transformation of the Lagrange coefficients to the canonical coefficients \(D\in\mathbb{R}^{n+1}\)
\[Q_{f,n}=\sum_{i=0}^{n}c_{i}l_{i}(x)=\sum_{i=0}^{n}d_{i}x^{i}\,,\quad D=(d_{0}, \ldots,d_{n})\,,\]
fitting the initial data.
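A minimal sketch of this Lagrange-regression scheme is given below. It assumes the data have already been re-scaled to \([-1,1]\) and omits the transformation to canonical coefficients; for production use, the full minterpy implementation should be preferred.

```python
import numpy as np

def cheb_lobatto(n):
    """Chebyshev-Lobatto nodes q_i = cos(i * pi / n), i = 0, ..., n."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def lagrange_matrix(p, nodes):
    """Regression matrix r_{k,i} = l_i(p_k) for the Lagrange basis."""
    R = np.ones((len(p), len(nodes)))
    for i, qi in enumerate(nodes):
        for j, qj in enumerate(nodes):
            if j != i:
                R[:, i] *= (p - qj) / (qi - qj)
    return R

def lagrange_regression(p, F, n):
    """Least-squares Lagrange coefficients C for samples (p, F) on [-1, 1]."""
    R = lagrange_matrix(np.asarray(p), cheb_lobatto(n))
    C, *_ = np.linalg.lstsq(R, np.asarray(F), rcond=None)
    return C
```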
For determining the maximum degree \(\alpha_{\text{max}}\) used for the truncation in Eq. (9), we apply a Monte-Carlo cross-validation strategy. In this strategy, a large proportion of the dataset (90%) is randomly sampled and used to fit a polynomial of a given degree, while the rest of the dataset (10%) is used to compute the maximum absolute error. Furthermore, to obtain a more robust estimate of the maximum degree, we randomly and uniformly perturb \(F\) within the range of its PIMC error bars. This procedure is then repeated multiple times (250), each time giving an estimate of the maximum polynomial degree for the given dataset split. We pick \(\alpha_{\text{max}}\) as the polynomial degree that both minimizes the maximum absolute error and appears most often over the many repetitions.
Once \(\alpha_{\text{max}}\) has been determined, we estimate the error associated with the polynomial coefficients by fitting polynomials of the same degree many times (1000) using the whole dataset. As before, \(F\) is also randomly and uniformly perturbed within the range of its error estimate. The standard deviation of the polynomial coefficients over many repetitions represents the error estimate of the coefficients.
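Both procedures can be sketched as follows, again with a plain canonical fit standing in for the minterpy Lagrange regression, a slightly simplified selection rule (majority vote over the per-split winners), and the repetition counts quoted above (250 cross-validation repetitions, 1000 refits for the coefficient errors); `tau`, `F`, and `F_err` are assumed to be NumPy arrays.

```python
import numpy as np

def select_alpha_max(tau, F, F_err, degrees, n_rep=250, holdout=0.1):
    """Monte-Carlo cross-validation: fit on a random 90% of the perturbed
    data, score the maximum absolute error on the held-out 10%, and
    return the degree that wins most often."""
    rng = np.random.default_rng()
    votes = {d: 0 for d in degrees}
    for _ in range(n_rep):
        Fp = F + rng.uniform(-F_err, F_err)        # perturb within error bars
        idx = rng.permutation(len(tau))
        n_test = max(1, int(holdout * len(tau)))
        test, train = idx[:n_test], idx[n_test:]
        errs = {d: np.max(np.abs(np.polyval(np.polyfit(tau[train], Fp[train], d),
                                            tau[test]) - Fp[test]))
                for d in degrees}
        votes[min(errs, key=errs.get)] += 1
    return max(votes, key=votes.get)

def coefficient_errors(tau, F, F_err, alpha_max, n_rep=1000):
    """Refit the full perturbed dataset repeatedly; the standard deviation
    of each coefficient over the repetitions is its error estimate."""
    rng = np.random.default_rng()
    coefs = [np.polyfit(tau, F + rng.uniform(-F_err, F_err), alpha_max)
             for _ in range(n_rep)]
    return np.std(coefs, axis=0)
```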
In Fig. 11, we show the truncated polynomial degree \(\alpha_{\text{max}}\) over the entire considered range of wave numbers \(q\). Overall, we find that \(\alpha_{\text{max}}\) tends to increase with \(q\), as \(F(\mathbf{q},\tau)\) exhibits more curvature for large wave numbers. These trends are similar for \(r_{s}=10\) (red squares) and \(r_{s}=4\) (green stars), although \(\alpha_{\text{max}}\) tends to be slightly lower in the latter case.
###### Acknowledgements.
This work was partially supported by the Center for Advanced Systems Understanding (CASUS), which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon state government out of the State budget approved by the Saxon State Parliament. The PIMC calculations were partly carried out at the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) under grant shp00026 and on a Bull Cluster at the Center for Information Services and High Performance Computing (ZIH) at Technische Universität Dresden.
|
2310.00981 | Using Reinforcement Learning to Optimize Responses in Care Processes: A
Case Study on Aggression Incidents | Previous studies have used prescriptive process monitoring to find actionable
policies in business processes and conducted case studies in similar domains,
such as the loan application process and the traffic fine process. However,
care processes tend to be more dynamic and complex. For example, at any stage
of a care process, a multitude of actions is possible. In this paper, we follow
the reinforcement approach and train a Markov decision process using event data
from a care process. The goal was to find optimal policies for staff members
when clients are displaying any type of aggressive behavior. We used the
reinforcement learning algorithms Q-learning and SARSA to find optimal
policies. Results showed that the policies derived from these algorithms are
similar to the most frequent actions currently used but provide the staff
members with a few more options in certain situations. | Bart J. Verhoef, Xixi Lu | 2023-10-02T08:43:29Z | http://arxiv.org/abs/2310.00981v1 | Using Reinforcement Learning to Optimize Responses in Care Processes: A Case Study on Aggression Incidents
###### Abstract
Previous studies have used prescriptive process monitoring to find actionable policies in business processes and conducted case studies in similar domains, such as the loan application process and the traffic fine process. However, care processes tend to be more dynamic and complex. For example, at any stage of a care process, a multitude of actions is possible. In this paper, we follow the reinforcement approach and train a Markov decision process using event data from a care process. The goal was to find optimal policies for staff members when clients are displaying any type of aggressive behavior. We used the reinforcement learning algorithms Q-learning and SARSA to find optimal policies. Results showed that the policies derived from these algorithms are similar to the most frequent actions currently used but provide the staff members with a few more options in certain situations.
Keywords: prescriptive process mining, reinforcement learning, Markov decision process, process optimization, process mining.
## 1 Introduction
_Prescriptive_ process monitoring focuses on analyzing process execution data to not only predict the future behavior of a process but also provide actionable recommendations or interventions to optimize the process [1; 2; 3]. It goes beyond _descriptive_ or _predictive process monitoring_ by actively suggesting specific actions or decisions for improving process performance, compliance, or efficiency. Considering the decision points in business processes, the ability to offer specific guidance to users regarding optimal actions is crucial, as it can lead to improved decision-making and efficiency.
One prominent approach is to use reinforcement learning, which learns online by interacting with an environment to adapt and improve its recommendations over time. The environments can be learned and built using the historical execution traces and the feedback they received. While reinforcement learning methods have been applied in business processes, healthcare processes exhibit distinct characteristics and present new challenges for these techniques [4], such as dynamic workflows, diverse stakeholders, and patient safety considerations. In particular, patients may exhibit very diverse statuses, and a wide range of
actions is possible at any stage. Moreover, each patient may react differently to these actions. These challenges may cause RL methods not to converge or not be able to improve the current policies. In such dynamic settings, it is worth investigating the validity and effectiveness of the RL approaches.
In this paper, we focus on the healthcare domain and, in particular, on the process of actions and responses to aggression incidents by clients with intellectual impairments in community care facilities [5]. Being in aggressive situations can have a severe impact on staff members since there is a mediation effect between experiencing aggressive behavior from clients and burnout through fear of assault [6]. This means that experiencing aggressive behavior leads to fear of assault, which in turn leads to burnout. It also has a negative impact on the clients themselves because aggressive behavior can lead to more aggressive behavior [7]. Therefore, learning the optimal way to act during aggression incidents helps de-escalate the incidents and reduce their negative impact.
Previous studies have analyzed the aggression incidents of such clients within Dutch residential care facilities using a process mining approach [8] or by mining potential causal patterns [9; 10; 11]. This meant that insights into the use of actions and their effects could be made visible, showing which actions had a negative and which actions had a positive outcome in each situation. However, this approach can only provide recommendations for a single incident and does not take consecutive incidents and their consequences into account.
In this paper, we investigate the use of prescriptive process monitoring, inspired by [12], particularly reinforcement learning techniques, for this healthcare process, in which the optimal policies of the best possible action in a given situation (or state) can be determined. First, we train a _Markov Decision Process_ (MDP) from the aggression incident log [10]. Second, we apply reinforcement learning techniques, aiming to find optimal policies for staff members to minimize aggressive incidents by clients with intellectual impairments. We use the model-free, value-based control algorithms: Q-learning and SARSA. The reason for choosing these methods, rather than the Monte Carlo methods used in [12], stems from their practical advantage of achieving earlier convergence on stochastic processes [13].
The structure of the paper is as follows. Section 2 discusses the related work. Then we explain the methods in Section 3, including the description of the data set and the design of the environment. Section 4 presents the results, and Section 5 discusses the results. Section 6 concludes the paper.
## 2 Related work
Research in prescriptive process monitoring has emerged in the past couple of years, mainly with a focus on business processes. Fahrenkrog-Petersen et al. [1] used it to build a framework that parameterizes a cost model to assess the cost-benefit trade-off of generating alarms. Bozorgi et al. [14] researched it in the context of reducing the cycle time of general supply chain processes. Both use supervised learning methods instead of reinforcement learning methods and
predict a threshold value that, when exceeded, recommends an action. The algorithms themselves do not make a recommendation; only predictions are made, and based on the predictions, a user-defined action is recommended.
Weinzierl et al. [15] also made this remark and proposed an alternative approach to prescriptive process monitoring in which there is a learning and a recommendation phase, in which the recommendation gives the next best action to take. Branchi et al. [12] used prescriptive process monitoring with Monte Carlo methods to determine the best actions to lend out loans and ensure most traffic fines are paid. The Monte Carlo methods are valid algorithms, although TD methods such as Q-learning and SARSA tend to converge earlier on stochastic processes in practice [13]. In this paper, we use Q-learning and SARSA to find optimal policies.
## 3 Methodology
This section describes the methods used in the research. First, we describe the data set. We then explain the preprocessing steps and the way the environment is built. Finally, we discuss the evaluation measures used.
### Data set
The data set is from a Dutch residential care organization with several facilities. The event data contains 21,384 reported aggression incidents from 1,115 clients with intellectual impairments. The data has been anonymized for privacy reasons. The incidents were reported by staff members between the 1st of January 2015 and the 31st of December 2017. The event data includes attributes such as the date of the incident, a pseudonymized client ID, the type of aggression, the countermeasure that the staff took, and the type of persons involved (such as family, staff members, and other clients). A simplified example of the event data is listed in Table 1.
In the event data, four types of aggression are reported, which are _verbal aggression_ (va), _physical aggression against people_ (pp), _physical aggression against objects_ (po), and _self-injurious behavior_ (sib). Eight distinct countermeasures are reported by the staff members: _talk to the client_, _hold with force_, _no measure taken_, _seclusion_, _send to another room_, _distract client_, _terminate contact_, and _start preventive measures_.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Pseudonym client** & **Date of incident** & **Aggression type** & **Involved** & **Measures** \\ \hline ab45 & 05/01/2016 & va & family & talk to client \\ \hline ab45 & 06/01/2016 & pp & client & none \\ \hline lz12 & 06/01/2015 & sib & unknown & seclusion \\ \hline lz12 & 18/01/2015 & po & client & none \\ \hline \end{tabular}
\end{table}
Table 1: A snippet of the incident data where the last column describes the countermeasures taken by staff members to stop the aggression.
### Data cleaning and preprocessing
To use reinforcement learning with this dataset, we preprocess the data. We follow the same steps as in [10]. First, we add the type of the next aggression incident as an attribute of the current event, in order to create triples that contain the type of the current aggression, the countermeasure taken by a staff member, and the type of the next aggression. The aim is to use the aggression types as the _states_ a client is in and use the countermeasures as _actions_. Such a triplet describes a transition from one state to the next state after taking an action.
In the second step, we group incidents into _episodes_. According to a behavioral expert at the care organization [10], an _episode_ is a sequence of incidents by the same client that occurred after each other, where the time between incidents is less than or equal to nine days. Following this domain knowledge, we segment the sequences of incidents into episodes. When two consecutive incidents \(e_{i}\) and \(e_{i+1}\) of a client are more than nine days apart, we insert a _Tau_ after \(e_{i}\) as the final state of an episode. The incident \(e_{i+1}\) is the start state of the next episode. An overview of the approach is shown in Figure 1.
We assign each episode a unique ID. The episodes that do not end in a _Tau_ state are considered incomplete and are therefore filtered out. We obtained a total of 8,800 episodes after this filter, consisting of 19,848 incidents. In addition, the episodes whose incidents miss values in the measures column are removed; these are incidents in which the staff member did not report the measures they had taken. Applying this filter reduced the number of episodes to 8,013, consisting of 15,464 incidents. Finally, we decided to remove the most infrequent action, 'preventive measures started', due to its ambiguity and to reduce the search space. Any episode that contains this action was removed, resulting in 14,676 incidents and 7,812 episodes for training the final MDP. In Table 2, a simplified example of the preprocessed log is listed.
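As a concrete sketch of the segmentation step described above, the following pandas snippet derives episode IDs and next states; the column names (`client`, `date`, `aggression`) are placeholders for the attributes of Table 1.

```python
import pandas as pd

def add_episodes(df, gap_days=9):
    """Segment each client's incidents into episodes: a new episode starts
    whenever two consecutive incidents are more than `gap_days` apart."""
    df = df.sort_values(["client", "date"]).copy()
    gap = df.groupby("client")["date"].diff() > pd.Timedelta(days=gap_days)
    new_client = df["client"] != df["client"].shift()
    df["episode_id"] = (gap | new_client).cumsum()
    # Next aggression type within the episode; 'Tau' closes an episode.
    df["next_state"] = (df.groupby("episode_id")["aggression"]
                          .shift(-1).fillna("Tau"))
    return df
```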
### Building the environment
Now that the data is cleaned and preprocessed, we use it to build a finite MDP. For this, we need the five-tuple consisting of the states, actions, transition probabilities, rewards, and discount factor [13]. The discount factor is a hyperparameter that can be tuned; therefore, we later perform hyperparameter tuning to determine the discount factor for the agent.
Figure 1: Preprocessing pipeline used to get enriched and clean data
We describe the MDP using the standard formalization in [13] as follows:
* \(\mathcal{S}\) = {va, po, sib, pp, Tau}, i.e., the set of states;
* \(\mathcal{A}\) = {talk to the client, no measure taken, seclusion, holding with force, send to another room, distract client, terminate contact}, i.e., the set of actions;
* \(\mathcal{P}\), which is the probability of going from one state to the next based on the action. This is determined using the following function \[P(s,a,s^{\prime})=\frac{\text{Number of times $a$ leads to $s^{\prime}$}}{\text{Number of times $a$ is chosen in state $s$}}\] (1)
* \(\mathcal{R}:\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{Z}\), which is the reward function. We defined the reward function based on the literature in assessing the severity of the action and the state [16]. The reward (penalty) for each individual action or state is listed in Table 3.
Another design choice has been made regarding the calculation of the transition probabilities. In the data set, multiple actions could be recorded for each incident. For this paper, we decided to consider only the most frequent action as the transition from one state to the next, in order to limit the number of possible actions and to avoid having too many infrequent ones. Also, the reward function was designed based on the severity of the action and the state, as indicated in the existing literature on aggression [16]. The reward function was deliberately kept simple so that the results can be communicated more easily to the experts. A subgraph of the environment can be seen in Figure 2.
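A maximum-likelihood estimate of Eq. (1) from the preprocessed data can be sketched as follows, where `triples` is assumed to be a list of `(state, action, next_state)` tuples extracted from the episodes.

```python
from collections import Counter, defaultdict

def estimate_transitions(triples):
    """P(s, a, s') = N(s, a, s') / N(s, a), as in Eq. (1)."""
    counts = Counter(triples)             # N(s, a, s')
    totals = defaultdict(int)             # N(s, a)
    for (s, a, s2), n in counts.items():
        totals[(s, a)] += n
    return {(s, a, s2): n / totals[(s, a)]
            for (s, a, s2), n in counts.items()}
```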
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Pseudonym client** & **Aggression type** & **Measures** & **Next aggression type** & **Episode Id** \\ \hline ab45 & va & talk to client & pp & 1 \\ \hline ab45 & pp & none & Tau & 1 \\ \hline lz12 & sib & secluded & Tau & 2 \\ \hline lz12 & po & none & Tau & 3 \\ \hline \end{tabular}
\end{table}
Table 2: A simplified example of the preprocessed event data
\begin{table}
\begin{tabular}{|l|c|} \hline
**Action or state** & **Reward** \\ \hline Tau & 1 \\ \hline Verbal Aggression (va) & 0 \\ \hline Physical Aggression against objects (po) & -1 \\ \hline Self-injurious behavior (sib) & -3 \\ \hline Physical Aggression against people (pp) & -4 \\ \hline Client distracted, Contact terminated, Send to other room & -1 \\ \hline Hold with force, Seclusion & -2 \\ \hline Other actions & 0 \\ \hline \end{tabular}
\end{table}
Table 3: The rewards (penalty) assigned for each action or state, based on the severity of the action and the state [16]. When an agent takes an action and ends in the follow-up state, the combination of the action and state is used to compute the reward.
### Training the agents
We used the following parameters in the tuning: the learning rate \(\alpha\in[0,1]\), the discount factor \(\gamma\in[0,1]\), and the amount of exploration \(\epsilon\in[0,1]\), all of which have an impact on the training of the agents and therefore on the results. The best parameters are obtained experimentally by hyperparameter tuning, using the best average reward over 100 runs, each consisting of 2000 episodes, as the objective. The search spaces are \(\alpha\in[0.1,0.2,0.3,0.4,0.5]\), \(\gamma\in[0.2,0.4,0.6,0.8,1.0]\) and \(\epsilon\in[0.1,0.2,0.3,0.4,0.5]\). First, \(\gamma\) was tuned while keeping \(\alpha\) and \(\epsilon\) at 0.1. Then, \(\epsilon\) was tuned using the optimal \(\gamma\) value and \(\alpha=0.1\), and finally \(\alpha\) was tuned using the optimal \(\gamma\) and \(\epsilon\) values. Each parameter setting was evaluated over ten different runs to obtain a fair average. The resulting hyperparameter values for both Q-learning and SARSA are \(\alpha=0.2\), \(\gamma=0.2\) and \(\epsilon=0.1\).
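The two tabular update rules can be sketched as follows; `env` stands for a hypothetical Gym-style wrapper around the estimated transition probabilities and the reward function of Table 3, and the defaults are the tuned hyperparameters above. Entries of `Q` for the terminal state _Tau_ are never updated and stay at zero, so the bootstrap target reduces to the immediate reward at the end of an episode.

```python
import random
from collections import defaultdict

def eps_greedy(Q, s, actions, eps):
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def q_learning_episode(env, Q, actions, alpha=0.2, gamma=0.2, eps=0.1):
    """Off-policy TD control; Q is a defaultdict(float) over (state, action)."""
    s = env.reset()
    while s != "Tau":
        a = eps_greedy(Q, s, actions, eps)
        s2, r = env.step(a)                  # samples s' from P(s, a, .)
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

def sarsa_episode(env, Q, actions, alpha=0.2, gamma=0.2, eps=0.1):
    """On-policy variant: the bootstrap uses the action actually chosen next."""
    s = env.reset()
    a = eps_greedy(Q, s, actions, eps)
    while s != "Tau":
        s2, r = env.step(a)
        a2 = eps_greedy(Q, s2, actions, eps)
        Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
        s, a = s2, a2

# Usage sketch: Q = defaultdict(float), then call *_episode repeatedly.
```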
### Evaluation of policies
We evaluate the agents both quantitatively, by comparing the average rewards, and qualitatively, by discussing the policies. For the quantitative evaluation, we compute the average reward for the best-trained agent using Q-learning and the best-trained agent using SARSA. These are then compared with the average reward of taking random actions and the average reward with the current policy. The current policy has been derived as the most frequent action taken in a state. The current policy is "talking to the client" when they display verbal
Figure 2: A subgraph of the MDP, depicting the current state of self-injurious behavior (SIB), a sample of actions that can be chosen, and a sample of transitions. \(P\) is the probability of going to that state, and \(R\) is the reward associated with that action and next state.
aggression (va), physical aggression against people (pp), and physical aggression against objects (po). For the state of self-injurious behavior (sib) "no action" is the most frequently used action. For the qualitative evaluation, we discuss the results by looking at the most frequent variants for each agent and comparing these variants with the ones of the current policy.
## 4 Results
In this section, we first present the results regarding the rewards. Next, we discuss the results qualitatively, presenting the optimal policy and the variants. We used two baselines to compare the results against: (1) taking random actions and (2) taking the most frequent action at each state as the policy for the agent. The data set was shared under an NDA and is thus not publicly available. The code and the MDPs used in this paper are available online1, and can be used to reproduce the results.
Footnote 1: [https://git.science.uu.nl/6601723/ppm-aggressive-incidents](https://git.science.uu.nl/6601723/ppm-aggressive-incidents)
### Quantitative results
In this section, the average reward per policy is described and evaluated. It is listed in Table 4.
We run each policy for 10,000 runs, each consisting of 100 episodes, resulting in 1,000,000 episodes in total. To test whether the differences between the policies are significant, we performed a one-way ANOVA on the data from SARSA, Q-learning, and the current policy, using the stats.f_oneway function of the Scipy library for Python 3. The p-value was \(7.719e-26\), which is smaller than 0.05; therefore, we can reject the null hypothesis that the groups have the same mean. This means there is a significant difference between the policies, but not between which ones. Therefore, we use Tukey's honestly significant difference test as a post-hoc test, via pairwise_tukeyhsd from the Python 3 library statsmodels. This yields three comparisons: Q-learning against the current policy, SARSA against the current policy, and Q-learning against SARSA. All three null hypotheses were rejected, meaning the average rewards of the algorithms differ significantly from one another.
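The test can be reproduced along the following lines; the reward arrays below are dummy stand-ins (with the means of Table 4 and an invented spread) for the per-run average rewards collected above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Dummy stand-ins for the per-run average rewards (one value per run).
rng = np.random.default_rng(0)
rewards_q = rng.normal(-1.127, 0.05, 10_000)
rewards_sarsa = rng.normal(-1.168, 0.05, 10_000)
rewards_freq = rng.normal(-1.105, 0.05, 10_000)

_, p_value = stats.f_oneway(rewards_q, rewards_sarsa, rewards_freq)

scores = np.concatenate([rewards_q, rewards_sarsa, rewards_freq])
labels = (["Q-learning"] * len(rewards_q) + ["SARSA"] * len(rewards_sarsa)
          + ["most-frequent"] * len(rewards_freq))
print(p_value, pairwise_tukeyhsd(scores, labels, alpha=0.05), sep="\n")
```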
\begin{table}
\begin{tabular}{|c|c|} \hline
**Policy** & **Average reward** \\ \hline Random & -3.783 \\ \hline Most frequent action & -1.105 \\ \hline Q-learning & -1.127 \\ \hline SARSA & -1.168 \\ \hline \end{tabular}
\end{table}
Table 4: Average reward per policy based on 10,000 runs, each with 100 episodes.
### Qualitative results
This section describes the qualitative results where we show the derived policies and the most common variants of episodes per agent. The derived policies can be found in Table 5, where the action taken at each state for each policy can be found.
The five most common variants with their frequencies for each of the agents can be found in Tables 6, 7, and 8. In the tables, each variant is a distinct episode of tuples, where the first element of the tuple is the _current state_, the second element is the _action_ taken, and the last element is the _next state_ after the action. If the state is _Tau_, the episode is ended; otherwise, another action is taken.
The tables show that, for all policies, the four most frequent variants of episodes consist of a single action: for each state, taking that action leads immediately to Tau regardless of the policy. Accordingly, most of the episodes ended after only one action had been taken. When we take a closer look at the current policy, the Q-learning policy, and the SARSA policy, we see that most variants are the same
\begin{table}
\begin{tabular}{|c|c|} \hline
**Path** & **Frequency** \\ \hline (va, Talk with client, Tau) & 14454 \\ \hline (sib, No measure, Tau) & 13987 \\ \hline (po, Talk with client, Tau) & 13100 \\ \hline (pp, Talk with client, Tau) & 12769 \\ \hline (pp, Talk with client, pp)(pp, Talk with client, Tau) & 4454 \\ \hline \end{tabular}
\end{table}
Table 6: Five most common variants when using the current policy (most frequent actions).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Policy** & **VA** & **SIB** & **PP** & **PO** \\ \hline Most frequent action & talk with client & no measure & talk with client & talk with client \\ \hline Q-learning & talk with client & no measure & talk with client & no measure \\ \hline SARSA & no measure & no measure & talk with client & talk with client \\ \hline \end{tabular}
\end{table}
Table 5: Derived policies for Q-learning and SARSA together with the most frequent actions taken on the 10,000 runs, each with 100 episodes.
with only two differences: (1) in the physical aggression against objects state (po), the "no measure" action is suggested by Q-learning; (2) in the verbal aggression state (va), the "no measure" action is suggested by SARSA (see Table 5).
### Additional analysis
Motivated by the results obtained over all episodes, we decided to perform an additional analysis on a subset of the data. We kept the episodes with a length of at least three incidents and performed the same experiment as on the whole dataset. This subset contained 6,687 incidents over 1,360 episodes. By taking only the episodes of at least three incidents, we focus on the clients who display more severe behavior, which is precisely what we want to reduce in the first place. We again used Q-learning and SARSA as described above and compared them to taking random actions and to the current policy, which in this case was "talk with client" in every state. The hyperparameter tuning was done as described in Section 3.4, resulting in the best-performing Q-learning agent and best-performing SARSA agent both having \(\alpha=0.1\), \(\gamma=0.2\) and \(\epsilon=0.1\). In the remaining parts of the additional analysis, we present the quantitative and qualitative results.
#### 4.3.1 Additional analysis quantitative results
In this section, the average reward per policy is shown and can be found in Table 9.
The same statistical tests as on the whole dataset were performed. The p-value of the one-way ANOVA was \(2.917e-12\), which is smaller than \(0.05\); therefore, we can reject the null hypothesis. We again use Tukey's honestly significant difference test as a post-hoc test, which makes three comparisons. It rejected two out of the three null hypotheses. With a p-adj value of \(0.6833\), it did not reject the hypothesis that the rewards from Q-learning and SARSA have the same mean. When taking a
\begin{table}
\begin{tabular}{|c|c|} \hline
**Path** & **Frequency** \\ \hline (va, No measure, Tau) & 14926 \\ \hline (sib, No measure, Tau) & 14025 \\ \hline (pp, Talk with client, Tau) & 13079 \\ \hline (po, Talk with client, Tau) & 12971 \\ \hline (sib, No measure, sib)(sib, No measure, Tau) & 4313 \\ \hline \end{tabular}
\end{table}
Table 8: Five most common variants when using the policy derived by SARSA.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Policy** & **Average reward** \\ \hline Random & -11.925 \\ \hline Most frequent action & -7.342 \\ \hline Q-learning & -7.266 \\ \hline SARSA & -7.275 \\ \hline \end{tabular}
\end{table}
Table 9: Average reward per policy based on 10,000 runs, each with 100 episodes.
look at Table 10, we can see that they have learned exactly the same policy, so this result was expected.
#### 4.3.2 Additional analysis qualitative results
We list the derived policies with the current policy and compare the most common variants taken between a random baseline, the most frequent actions taken, and the two policies derived by the agents.
The derived policies can be found in Table 10, where the action taken at each state for each policy can be found. The Q-learning and SARSA agent learned that "talking to a client" was the best option when the state is verbal aggression (va), physical aggression against people (pp), or physical aggression against objects (po), and "no measure" when the state is self-injury behavior (sib).
The five most common variants with their frequencies for the current policy and the RL agents can be found in Tables 11 and 12. Both the Q-learning agent and SARSA agent learned the same policy. One noticeable difference between the frequent episodes of the policies of the RL agents and the most frequent policy is that the second most frequent episode of self-injurious behavior is added in Tables 11 and 12.
When comparing the variants of the current policy with those of the RL agents' learned policy, the variants differ significantly in the frequencies of the self-injurious behavior (sib) state. The frequencies of the single-incident episodes for this state are similar between the RL agents and the current policy (4901 vs. 4772). When the episodes consist of two incidents in the self-injurious behavior (sib) state, however, the frequency of such episodes is much higher under the RL agents' learned policy than under the current policy (2720 vs. 1776), meaning that "no measure" leads to Tau faster than "talk with client" in this case.
## 5 Discussion
The results indicate that the current policy and the RL-derived policies reach similar conclusions. The current policy performs slightly better than the RL agents when considering all episodes, but the RL agents provide staff members with additional options without having a significant negative impact on rewards. When considering the selected subset of the episodes, the RL agents slightly outperform the current policy, offering an alternative choice.
In both cases, the staff member can choose to talk to the client or take no action. Although the RL algorithms exhibit slight variations in performance compared to the current policy, the policies derived do not significantly differ. This alignment is reasonable considering the reward function used, which penalizes all actions except "no measure" and "talk with client". These options align with the least disruptive impact on both the client and staff member, as indicated by previous studies.
However, it is important to note that the models may oversimplify the real situation, and further factors such as location, time, and individuals involved have not been included. Collecting relevant data and consulting behavioral experts could enhance future research in this field. For example, it is possible to learn the time distribution until the next incident and use this in the reward function.
Additionally, practical relevance should be acknowledged, as staff members face challenges in assessing situations and may need to use force in certain cases. Future research may aim to provide insights tailored to specific clients or client groups. Combining reinforcement learning and process mining in prescriptive process monitoring shows promise but requires careful consideration of data availability and exploration limitations.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Path** & **Frequency** \\ \hline (va, Talk with client, Tau) & 5207 \\ \hline (sib, No measure, Tau) & 4901 \\ \hline (po, Talk with client, Tau) & 4661 \\ \hline (pp, Talk with client, Tau) & 4620 \\ \hline (pp, Talk with client, pp)(pp, Talk with client, Tau) & 2749 \\ \hline (sib, No measure, sib)(sib, No measure, Tau) & 2720 \\ \hline \end{tabular}
\end{table}
Table 12: Five most common variants by Q-learning for 1,000,000 episodes, plus the most common variant when considering SIB.
## 6 Conclusion
This paper presents the application of reinforcement learning (RL) to optimize response policies in healthcare processes, specifically addressing aggressive incidents in care settings. The research aims to investigate the validity of RL in healthcare and the ability to find optimal response policies for staff members towards such incidents. The results have shown that RL algorithms can find such an optimal policy, which consists of taking no measures or talking with the client depending on the state. The policies are very similar to the current policy, i.e., the most frequent action taken by staff members.
Despite the simplicity of the MDP, the results show that prescriptive process monitoring can be used in the healthcare domain. Interestingly, it may be more beneficial to use these techniques in more complex situations rather than in simple ones. However, further research is necessary to validate this finding.
For future work, one may refine the environment by extending the MDP with more fine-grained states and actions. Such research should be multidisciplinary, so that the environment can be built in closer collaboration with experts in the field of aggressive behavior and with staff members who work with clients daily. The results can then also be validated by the experts or staff to help them make better decisions; their input is therefore crucial.
#### 6.0.1 Acknowledgement
This research was supported by the NWO TACTICS project (628.011.004) and Lunet in the Netherlands. We would like to thank the experts from the Lunet for their assistance. We also thank Dr. Shihan Wang and Dr. Ronald Poppe for the invaluable discussions.
|
2305.15684 | Perturbation-based Self-supervised Attention for Attention Bias in Text
Classification | In text classification, the traditional attention mechanisms usually focus
too much on frequent words, and need extensive labeled data in order to learn.
This paper proposes a perturbation-based self-supervised attention approach to
guide attention learning without any annotation overhead. Specifically, we add
as much noise as possible to all the words in the sentence without changing
their semantics and predictions. We hypothesize that words that tolerate more
noise are less significant, and we can use this information to refine the
attention distribution. Experimental results on three text classification tasks
show that our approach can significantly improve the performance of current
attention-based models, and is more effective than existing self-supervised
methods. We also provide a visualization analysis to verify the effectiveness
of our approach. | Huawen Feng, Zhenxi Lin, Qianli Ma | 2023-05-25T03:18:18Z | http://arxiv.org/abs/2305.15684v1 | # Perturbation-based Self-supervised Attention for Attention Bias in Text Classification
###### Abstract
In text classification, the traditional attention mechanisms usually focus too much on frequent words, and need extensive labeled data in order to learn. This paper proposes a perturbation-based self-supervised attention approach to guide attention learning without any annotation overhead. Specifically, we add as much noise as possible to all the words in the sentence without changing their semantics and predictions. We hypothesize that words that tolerate more noise are less significant, and we can use this information to refine the attention distribution. Experimental results on three text classification tasks show that our approach can significantly improve the performance of current attention-based models, and is more effective than existing self-supervised methods. We also provide a visualization analysis to verify the effectiveness of our approach.
Attention bias, perturbation, self-supervised learning, text classification.
## I Introduction
Attention mechanisms [1, 2, 3] play an essential role in Natural Language Processing (NLP) and have been shown to be effective in various text classification tasks, such as sentiment analysis [4, 5, 6], document classification [7] and natural language inference [8]. They achieve significant performance gains, and can be used to provide insights into the inner workings of the model. Generally, the attention learning procedure is conditioned on access to large amounts of training data without additional supervision information.
Although current attention mechanisms have achieved remarkable performance, several problems remain unsolved. First, learning a good attention distribution without spurious correlations for neural networks requires large volumes of informative labeled data [9, 10]. As described in the work of Wallace et al. [11], after inserting 50 poison examples with the name "_James Bond_" into its training set, a sentiment model will frequently predict a positive label whenever the input contains this name, even though there is no correlation between the name and the prediction. Second, attention mechanisms are prone to focus on high-frequency words with sentiment polarities and assign relatively high weights to them [12, 13, 5], while higher frequency does not imply greater importance.
Especially when there is an adversative relation in a text, some high-frequency words with strong sentiment valence need to be selectively ignored based on the context of the whole text. In such cases, these words mislead the model because the truly important words do not get enough attention. The sentences in Figure 1 illustrate this problem. In most training sentences, as shown in the first four rows, "_better_" and "_free_" appear with positive sentiment, which makes the attention mechanism accustomed to attaching great importance to them and relating them to positive predictions. However, the two words are used ironically in the fifth sentence, and the model pays the most attention to them while the critical word - "_leave_" - is not attended to, resulting in an incorrect prediction. Based on these observations, there is reason to believe that the attention mechanisms could be improved for text classification.
To tackle this problem, the most direct solution is to add human supervision collected by manual annotation [14, 10, 15] or special instruments [9, 16, 17, 18] (e.g., eye-tracking) to provide an inductive bias for attention. These approaches are costly, the labeling is entirely subjective, and there is often high variance between annotators. In particular, Sen et al. [19] point out that there is a huge difference between machine and human attention and that it is difficult to map human attention to machine attention.
Another flexible solution is to measure attribution scores, i.e., how much each token in a text contributes to the final prediction, to approximate an importance distribution as an attention supervision signal [20, 21, 5, 6]. Generally, the attribution scores are obtained by masking each token one by one to generate counterfactual examples, reflecting the difference in the softmax probability of the model after masking each token. These approaches have little or no additional annotation overhead and augment supervision information from the training corpus to refine the attention distribution. Despite their success, masking schemes can give rise to an out-of-distribution (OOD) problem [22, 23, 24]. That is, the generated counterfactuals deviate from the training data distribution of the target model, resulting in an overestimation of the contribution of unimportant tokens. The OOD problem induced by existing masking schemes makes it difficult to identify whether high-scoring tokens contribute significantly to the prediction. Furthermore, most of them are limited to generating uniform attention weights for the selected important words. Obviously, the contribution of different important words to the model should also be different according to the context, e.g., the word _leave_ should have a higher attention weight than _better_ and _free_ for the fifth sentence in Figure 1.
Some efforts reveal that the output of neural networks can be theoretically guaranteed to be invariant under input perturbations of a certain magnitude, through establishing the concept of the maximum safety radius [25; 26] or minimum disturbance rejection [27]. In simple terms, these approaches evaluate the minimum distance of the nearest perturbed text in the embedding space that is classified differently from the original text. Inspired by this work, we propose a novel perturbation-based self-supervised attention learning method without any additional annotation overhead for text classification. Specifically, we design an attention supervision mining mechanism called Word-based Concurrent Perturbation (WBCP), which effectively calculates an explainable word-level importance distribution for the input text. Concretely, WBCP tries to concurrently add as much noise as possible to perturb each word embedding of the input, while ensuring that the semantics of the input and the classification outcome are not changed. Under this condition, the words that tolerate more noise are less important and the ones sensitive to noise deserve more attention. We can use the permissible perturbation amplitude as a measure of the importance of a word, where a small amplitude indicates that minor perturbations of that word can have a significant influence on the semantic understanding of the input text and easily lead to prediction error.
According to the inverse distribution of the perturbation amplitude, we can get sample-specific attention supervision information. Later, we use this supervision information to refine the attention distribution of the target model and iteratively update it. Notably, our method is model-agnostic and can be applied to any attention-based neural network. It generates attention supervision signals in a self-supervised manner to improve text classification performance without any manual labeling, and it incorporates Perturbation-based Self-supervised Attention (PBSA) to avoid the OOD problem caused by the masking scheme. In addition, it can also generate special attention supervision weights adaptively for each sample based on the perturbation amplitude, rather than allocating them uniformly.
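To make the idea concrete, a minimal PyTorch sketch of the perturbation step is shown below. It is an approximation under stated assumptions rather than the exact WBCP objective: `model` is a hypothetical callable mapping one embedded sentence of shape `(seq_len, dim)` to class logits, `label` is the (scalar) predicted class, the noise is reparameterized Gaussian with one learnable scale per word, and the trade-off weight `beta` is a free parameter.

```python
import torch
import torch.nn.functional as F

def wbcp_importance(model, embeddings, label, steps=30, lr=1e-2, beta=1.0):
    """Learn the largest per-word noise scales that keep the prediction
    unchanged; words that tolerate less noise receive higher importance."""
    sigma = torch.full((embeddings.size(0), 1), 0.1, requires_grad=True)
    opt = torch.optim.Adam([sigma], lr=lr)
    for _ in range(steps):
        noise = torch.randn_like(embeddings) * sigma       # reparameterization
        logits = model(embeddings + noise)
        keep = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
        # maximize the tolerated noise while keeping the prediction fixed
        loss = keep - beta * sigma.abs().clamp_min(1e-4).log().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    amplitude = sigma.detach().abs().squeeze(-1)
    # inverse distribution of the perturbation amplitude as supervision
    return torch.softmax(-amplitude, dim=0)
```

The returned distribution can then serve as the supervision target for the model's attention weights, e.g., via an auxiliary divergence term added to the classification loss.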
In summary, the contributions of this paper are as follows:
(1) Through analysis of current methods, we point out the disadvantages and drawbacks of current attention mechanisms for text classification.
(2) We propose a simple yet effective approach to automatically mine the attribution scores for the input text, and use it as supervision information to guide the learning of attention weights of target models.
(3) We apply our approach to various text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Extensive experiments and visualization analysis show the effectiveness of the proposed method in improving both model prediction accuracy and robustness.
(4) Theoretically, our algorithm can be applied to any model with an attention mechanism, but it is impossible to compare against all of them. Considering this, we conduct our experiments on several typical baselines (LSTM, BERT [28], DEBERTA [29], ELECTRA [30], Memory Net [31], etc.) to justify the effectiveness of our method. Notably, we also compare our algorithm with other advanced attention self-supervision methods (PGAS [32], AWAS [5], SANA [6]).
## II Related work
Work related to our method can be categorized into three types: Introducing human attention; using external resources or tools; and using self-supervision.
**Introducing human attention** Adding human supervision to attention has been shown to effectively alleviate attention bias and improve model prediction accuracy on a range of tasks [14, 15, 16, 17, 18]. In general, the annotators need to explicitly highlight the important words or rationales [14, 10, 15] for the given sample. Obviously, the annotation is very labor-intensive and expensive in real-world scenarios, so an alternative is to use implicit signals such as eye gaze [9, 16, 17, 18]. For these methods, it is expected that the model can generate similar attention to human supervision. However, human recognition and model reasoning processes may be inconsistent [33], and aligning the two is challenging [19].
**Using external resources or tools** With the development of NLP, many corpora and tools, such as Dependency Tree and
Fig. 1: The attention visualization for five sentences. The "A/B"-style tags before each row mean that the model's prediction is A and the label is B. The first four sentences are selected from the training sets as representatives containing the high-frequency words "better" (yellow box) and "free" (green box). The last sentence, which includes both of these words, is selected from the test sets; it typifies the distribution of attention weights when some words in a sentence appear frequently in the corpus but are unimportant to the current prediction.
Synonym Dictionary, have been created to obtain a deeper understanding of words and sentences. Therefore, some methods [34, 35, 36, 37] that generate attention supervision information from existing corpora and tools have emerged. For example, Nguyen et al. [36] introduce attention supervision information based on important words selected by semantic word lists and dependency trees. Similarly, Zhao et al. [37] first train the model on document-level sentiment classification and then transfer the attention knowledge to fine-grained, aspect-level sentiment classification. Hu et al. [38] introduce the representation of the tree structure into attention computations. However, these methods still rely on annotations from parsers or external resources, and their performance depends heavily on the quality of the parser.
**Self-supervised attention learning** Currently, self-supervised attention learning frameworks [20, 21, 5, 6, 32] have become the mainstream approach because they do not require additional annotation overhead. They usually mask or erase each token one by one and quantify the difference in the model's predictions after masking each token, to approximate an importance distribution that serves as attention supervision information. For example, Tang et al. [5] divide the words in sentences into an active set and a misleading set by progressively masking each word with respect to the maximum attention weight, and augment them to make the model focus on the active context words. Similarly, Choi et al. [6] adopt the masking method to find the unimportant words and gradually reduce their weights. These methods use a self-supervised paradigm to mine important words, which can greatly reduce the annotation cost and improve the robustness of the model. Nevertheless, the masking scheme they follow suffers from the OOD problem: the counterfactuals generated by the mask operation deviate from the original training distribution, which easily leads to the overestimation of unimportant words. In addition, the above methods usually assign the same weight to the extracted important words, whereas in our view different words should contribute differently to the classification.
## III Proposed method
In this section, we propose a Perturbation-based Self-supervised Attention (PBSA) mechanism to enhance the attention learning process and provide a good inductive bias. We first design a Word-based Concurrent Perturbation (WBCP) to automatically mine the attribution score for each word and use this as a measure of its degree of importance. Then we use the measure mentioned above to compute a word-level importance distribution as supervision information. Finally, we describe how to use the supervision information to refine the attention mechanism of the target model, improving the accuracy and robustness of text classification tasks.
### _Word-based Concurrent Perturbation_
Our design rests on the following assumption: under the premise of trying not to change the semantics of the input text, unimportant words can withstand more perturbation than significant ones. Specifically, a little noise on keywords can lead to dramatic changes in the final results, while greater noise on unimportant words will not easily change the results. Therefore, we can estimate the importance distribution of the words according to the maximum amount of noise they can tolerate. To be specific, we try to concurrently add as much noise as possible to perturb each word embedding without changing the latent representations (e.g., the hidden
Fig. 2: The diagram of WBCP. The left part of the figure corresponds to the last term of Eq. (2), illustrating the process of adding Gaussian noise to each word. The right part corresponds to the first two terms of Eq. (2), indicating the constraint of trying not to change the semantics and predictions after the noise is introduced.
states for classification) of the text and the prediction result. The above process can be optimized according to the maximum entropy principle.
Given a sentence consisting of \(m\) words \(s=\{w_{1},w_{2},...,w_{m}\}\), we map it into a sequence of \(n\) token embeddings \(\mathbf{X}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\). WBCP (Word-based Concurrent Perturbation) operates on the embedding of each token \(\mathbf{X}\) rather than on each word of \(s\): one word can be tokenized into several parts, and different parts have different influences on the representation. Accordingly, in our experiments the perturbation is added to each token generated by the tokenizer, so each token has its own \(\sigma_{i}\) (maximum safety radius). For ease of explanation and comprehension, we take traditional embeddings where \(m=n\) (each word has exactly one embedding, e.g., word2vec, GloVe, and so on) as the example in Figure 2 and Section III-A. We assume that the noise on word embeddings obeys a Gaussian distribution \(\mathbf{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{1}\right)\) and let \(\widetilde{\mathbf{x}}_{i}=\mathbf{x}_{i}+\mathbf{\epsilon}_{i}\) denote an input with noise \(\mathbf{\epsilon}_{i}\). We use \(\mathbf{h}\), \(\mathbf{y}\) and \(\widetilde{\mathbf{h}}\), \(\widetilde{\mathbf{y}}\) to denote the hidden state for classification and the prediction of a pre-trained model without and with noise, respectively. We can then write the loss function of WBCP as follows:
\[\mathcal{L}_{WBCP}=||\widetilde{\mathbf{h}}-\mathbf{h}||_{2}^{2}+||\widetilde{\mathbf{y}}-\mathbf{y}||_{2}^{2}-\lambda\sum\nolimits_{i=1}^{n}H(\mathbf{\epsilon}_{i})|_{\mathbf{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{1}\right)}, \tag{1}\]
where \(\lambda\) is a hyperparameter that balances the strength of noise.
The first and second terms of Eq. (1) minimize the squared Euclidean distance between the two hidden states and between the two predictions, respectively, to quantify the change of information [39]. The first term maintains the latent representations to prevent modification of the text semantics, and the second term prevents excessive perturbations from causing the model to mispredict. The last term maximizes the entropy \(H(\mathbf{\epsilon}_{i})|_{\mathbf{\epsilon}_{i}\sim\mathcal{N}\left(\mathbf{0},\mathbf{\Sigma}_{i}=\sigma_{i}^{2}\mathbf{1}\right)}\) to encourage adding as much noise as possible to each word embedding. We can simplify the maximum entropy of the Gaussian distribution as follows:
\[\begin{split}\text{Maximize}\,(H(\mathbf{\epsilon}_{i}))&=\text{Maximize}\Big(-\int p(\mathbf{\epsilon}_{i})\ln p(\mathbf{\epsilon}_{i})\,d\mathbf{\epsilon}_{i}\Big)\\ &=\text{Maximize}\Big(\tfrac{1}{2}\big(\ln(2\pi\sigma_{i}^{2})+1\big)\Big)\\ &=\text{Maximize}\Big(\ln 2\,\big(\tfrac{1}{2}\log(2\pi e)+\log\sigma_{i}\big)\Big)\\ &=\text{Maximize}\,(\log\sigma_{i})\end{split}\]
Finally, we can use this simplification to rewrite our final objective function, where maximizing the entropy of each \(\mathbf{\epsilon}_{i}\) becomes minimizing \(-\lambda\sum_{i}\log(\sigma_{i})\):

\[\mathcal{L}_{WBCP}=||\widetilde{\mathbf{h}}-\mathbf{h}||_{2}^{2}+||\widetilde{\mathbf{y}}-\mathbf{y}||_{2}^{2}-\lambda\sum\nolimits_{i=1}^{n}\log(\sigma_{i}) \tag{2}\]
The illustration of WBCP is given in Figure 2. After fixing the parameters of the pre-trained model, the only learnable parameters \(\sigma=\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) can be regarded as the perturbation radii, which are positively associated with the perturbation amplitude. Specifically, the larger the \(\sigma_{i}\) found by WBCP, the larger the noise \(\mathbf{\epsilon}_{i}\) is likely to be, the more noise is added to \(\mathbf{x}_{i}\), and the less important that token is. As shown in the figure, \(\sigma_{2}>\sigma_{1}>\sigma_{4}>\sigma_{3}\). According to the analysis above, \(\mathbf{w}_{2}\) (_a_) is the least important word and \(\mathbf{w}_{3}\) (_nice_) is the most significant one, since \(\mathbf{x}_{2}\) can tolerate the most noise while \(\mathbf{x}_{3}\) can hardly stand any perturbation.
During the training stage of WBCP, \(\sigma\) is first initialized from a normal distribution and then normalized by the standard deviation of the sentence embeddings before generating noise. We set the number of epochs to \(500\) for most datasets; most perturbation models converge within \(200\) steps, but we choose more epochs since the time cost is acceptable. IMDB is an exception because of its large training and test sets, so we set its epochs to \(300\). As for the optimizer, we select AdamW with a learning rate of 0.01.
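The following is a minimal PyTorch sketch of this per-sample optimization. It assumes a frozen classifier `model` whose forward pass on an embedded input returns the pooled hidden state and the prediction; this interface and all names are our own illustration rather than released code.

```
import torch

def wbcp_loss(model, embeddings, log_sigma, lam=0.1):
    """WBCP objective (Eq. 2): perturb each token embedding with Gaussian
    noise of learnable scale sigma_i, keep the hidden state and prediction
    close to the clean run, and reward larger noise via the entropy term."""
    with torch.no_grad():
        h, y = model(embeddings)                 # clean hidden state / prediction
    sigma = log_sigma.exp()                      # keeps each sigma_i positive
    noise = sigma.unsqueeze(-1) * torch.randn_like(embeddings)  # reparameterized sample
    h_tilde, y_tilde = model(embeddings + noise)
    semantic_term = (h_tilde - h).pow(2).sum()   # ||h~ - h||^2
    prediction_term = (y_tilde - y).pow(2).sum() # ||y~ - y||^2
    return semantic_term + prediction_term - lam * log_sigma.sum()

# Per-sample optimization as described above: only log_sigma is trainable.
# embeddings: (seq_len, dim) tensor for one sentence; model parameters stay fixed.
# log_sigma = torch.zeros(seq_len, requires_grad=True)
# opt = torch.optim.AdamW([log_sigma], lr=0.01)
# for _ in range(500):
#     opt.zero_grad(); wbcp_loss(model, embeddings, log_sigma).backward(); opt.step()
```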
### _Attention supervision_
We obtain the \(\sigma\)s, i.e., the perturbation magnitudes, by optimizing Eq. (2) on the pre-trained model. If a word embedding \(\mathbf{x}_{i}\) can tolerate more noise without impacting the semantics of the input text, \(\sigma_{i}\) will be larger, which means the corresponding word is less important. Conversely, a small \(\sigma_{i}\) indicates that slight perturbations of the word embedding \(\mathbf{x}_{i}\) will lead to semantic drift and may affect the classification result. We can therefore use the perturbation magnitude to compute a word-level importance distribution as attention supervision information, as shown below:
\[\alpha_{i}^{\prime}=1-\frac{\sigma_{i}}{\text{max}_{j}\{\sigma_{j}\}} \tag{3}\]
\[\widetilde{\mathbf{\alpha}}=\text{Softmax}(\mathbf{\alpha^{\prime}})\]
It is worth noting that our method generates sample-specific attention supervision, where the weight of each word is quantified according to the perturbation magnitude, instead of using the same importance weight for all words [5, 6]. Also, the quantification occurs in the embedding space rather than replacing the token with a predefined value, thus avoiding the OOD problem caused by masking schemes.
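Under the same interface assumptions as the sketch above, converting the learned scales into the supervision target of Eq. (3) takes only a few lines:

```
import torch

def attention_supervision(sigma):
    """Eq. (3): tokens that tolerate little noise (small sigma_i)
    receive a high target attention weight."""
    alpha = 1.0 - sigma / sigma.max()
    return torch.softmax(alpha, dim=-1)
```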
```
Input: training dataset \(D\), attention-based model \(f(\cdot,\theta)\), the number of iterations \(T\).
Pre-train model \(f(\cdot,\theta)\) on \(D\) and update \(\theta\) using Adam.
for \(t=1,\dots,T\) do
    Fix \(\theta\), and minimize the WBCP objective function by Eq. (2) using Adam.
    Obtain the perturbation amplitude \(\sigma\) for each sample in \(D\).
    Calculate the attention supervision \(\widetilde{\alpha}\) by Eq. (3) for each sample in \(D\).
    Re-train the model on \(D\) with the attention supervision \(\widetilde{\alpha}\) by Eq. (4) and update \(\theta\) using Adam.
end for
```
**Algorithm 1** Perturbation-based self-supervised attention
### _Perturbation-based Self-supervised Attention_
We do not use \(\widetilde{\mathbf{\alpha}}\) to generate a new attention distribution to replace the original one \(\mathbf{\alpha}\). Rather, we use it as a supervision target for the attention weights. We want the attention
supervision to make the model notice more words that have an influence on the output. In this way, some low-frequency context words with great importance that would normally be ignored can be discovered by attention learning. In this section, we describe how to exploit the supervision information \(\widetilde{\alpha}\) to guide the learning of model attention strengths.
Our method is shown in Algorithm 1. We first pre-train an attention-based model \(f(\cdot,\theta)\) on the classification dataset \(D\). We then fix the model parameters \(\theta\) and minimize the WBCP objective in Eq. (2) to obtain the perturbation amplitude \(\sigma\) for each sample, which is then used to compute the attention supervision \(\widetilde{\alpha}\) via Eq. (3). We then retrain the model using \(\widetilde{\alpha}\) to guide the attention distribution \(\alpha\) produced by the model. The above process can be iterated \(T\) times to capture the importance distribution more accurately. The training objective function with attention supervision \(\widetilde{\alpha}\) is defined as follows:
\[\mathcal{L}_{cls}=\frac{1}{M}\sum\nolimits_{m=1}^{M}\left[-\hat{y}_{m}\log y_{m}+\gamma\,\text{KL}(\widetilde{\alpha}_{m}||\alpha_{m})\right], \tag{4}\]

where \(M\) is the number of samples, \(\gamma\) is a hyperparameter that controls the strength of attention supervision, and \(\hat{y}_{m}\) and \(y_{m}\) are the ground-truth label and predicted output for the \(m\)-th sample, respectively. The first term is the cross-entropy loss for classification, and the second term is the Kullback-Leibler divergence between the attention supervision \(\widetilde{\alpha}_{m}\) and the attention distribution \(\alpha_{m}\) produced by the model for the \(m\)-th sample.
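A minimal sketch of this re-training objective, assuming the model exposes a normalized attention distribution alongside its logits (again, the names are illustrative):

```
import torch.nn.functional as F

def supervised_loss(logits, labels, attn, attn_target, gamma=1.0, eps=1e-12):
    """Eq. (4): task cross-entropy plus KL(attn_target || attn), pulling the
    model's attention towards the WBCP supervision signal."""
    ce = F.cross_entropy(logits, labels)
    # F.kl_div expects the model distribution in log-space as input and the
    # target distribution as probabilities.
    kl = F.kl_div((attn + eps).log(), attn_target, reduction="batchmean")
    return ce + gamma * kl
```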
It is worth noting that our method requires extra computation, but the time cost is usually acceptable because nearly the entire process can run in parallel. The analysis is presented in Appendix A.
## IV Experiments
We evaluate PBSA on several text classification tasks, including sentence classification, document categorization, and aspect-level sentiment analysis. Experimental results demonstrate that PBSA consistently enhances the performance and robustness of various attention-based baselines, and outperforms several strong models that follow the self-supervised attention learning paradigm. Furthermore, a visualization analysis confirms that our method generates high-quality attention for the target tasks. We aim to answer the following questions:
**RQ1:** Does PBSA improve model accuracy?
**RQ2:** Is PBSA more effective than other approaches?
**RQ3:** How do hyperparameters affect the results?
**RQ4:** How does PBSA work?
### _Datasets and Baselines_
The statistics of the widely studied datasets used by the different tasks are listed in Table I. These datasets cover different topics, such as movie reviews, customer reviews, social reviews, and question types. In particular, since there is no standard partition of MR, CR, SUBJ, and MPQA, we split them 7:1:2 to obtain the training, validation, and test sets. For the aspect-level tasks, we remove the instances with conflicting sentiment labels in Laptop and Restaurant, as implemented in [49].
As for hyperparameters, we use a grid search to find the optimal values of \(\gamma\) and \(T\) for each dataset, from the sets \(\gamma\in\{0.05,0.1,1.0,2.0,10,100\}\) and \(T\in\{1,2,3,4\}\). We use the Adam optimizer with a learning rate of 0.001 and a batch size of 64.
We use Att-BiLSTM, Memory Network, BERT, DEBERTA, ELECTRA, Att-BERT, BERTABSA, Att-BERTABSA as baselines and explain the details about them in Appendix B.
The setup of hyperparameters for Att-BiLSTM and Memory Net is listed in Table II. To make a fair comparison with other algorithms, we set our hyperparameters to the same values as theirs.
### **RQ1: Sentence-level and Document-level Classification**
To verify that PBSA can improve the performance of attention-based models, in this section we use the classic Att-BiLSTM [52] and the pre-trained models BERT [28], DEBERTA [29], and ELECTRA [30] as baselines. It is worth noting that Transformers use multi-layer, multi-head attention, so selecting a suitable head as the supervision target is difficult [32]. How to effectively combine this multi-layer, multi-head attention with our method is therefore an interesting and valuable question.
Previous researchers have yet to find a good way to apply their methods to Transformers; we make some explorations in this direction, which is also one of our contributions. We explore two simple strategies to combine our approach with Transformers: 1) we add a scaled dot-product attention layer to the output of BERT to derive a fixed-size sentence representation for classification, and call this model Att-BERT for short; 2) we also try a simple but effective way to combine the internal multi-head attention in Transformers with our method: specifically, we average the multi-head attention of all the layers and compress the attention matrix into a vector to be guided by our mechanism.
Table III reports the experimental results on the seven datasets for sentence classification and document categorization. We observe that our method consistently improves the accuracy of the baselines on all the datasets. The average accuracies of our approach with the five baselines across the seven datasets are 83.65, 90.86, 92.55, 92.43, and 94.06, improvements of 1.44%, 0.45%, 0.83%, 0.66%, and 0.66% over the baselines (82.21, 90.41, 91.71, 91.82, and 93.44). These results demonstrate that our approach delivers significant performance improvements over the baselines. They also indicate that, without any supervision information, current models limit the potential of attention mechanisms. PBSA can mine the potentially important words and then guide the attention mechanism of the model towards a good inductive bias.
However, we find the improvements on pre-trained models are relatively marginal compared with smaller models like Att-BiLSTM. This phenomenon indicates that pre-training on large corpora relieves attention bias to some extent, which is further verified in Section IV-D. Moreover, we find that the size of the pre-trained model also impacts the performance of PBSA: we conduct experiments on BERT-small and ELECTRA-small (shown in Table VII), and PBSA gains greater improvements under the same settings. To sum up, attention bias may be more likely to appear in smaller models and
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline
**Task** & **Dataset** & **Class** & **AvgLen** & **Train** & **Test** \\ \hline
\multirow{6}{*}{Sentence Classification} & SST2 [40] & 2 & 19 & 6,920 & 1,821 \\
 & TREC [41] & 6 & 10 & 5,452 & 500 \\
 & MR [42] & 2 & 19 & 10,662 & – \\
 & CR [43] & 2 & 19 & 3,775 & – \\
 & SUBJ [44] & 2 & 23 & 10,000 & – \\
 & MPQA [45] & 2 & 3 & 10,606 & – \\ \hline
Document Categorization & IMDB [46] & 2 & 280 & 25,000 & 25,000 \\ \hline
\multirow{3}{*}{Aspect-based Sentiment Analysis} & REST [47] & 3 & 16 & 3,591 & 1,121 \\
 & LAPTOP [47] & 3 & 17 & 2,292 & 639 \\
 & TWITTER [48] & 3 & 19 & 6,248 & 692 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Statistics of the datasets used by the different tasks.
TABLE II: Setup for Att-BiLSTM and Memory Net.
smaller-scale datasets, and the gains from PBSA are correspondingly more significant in these scenarios.
### **RQ1: Aspect-level Sentiment Analysis**
To further verify the effectiveness of our approach, we apply PBSA to MN [31, 50], BERTABSA [53], and Att-BERTABSA [32]. Both BERTABSA and Att-BERTABSA are typical and simple ways to apply BERT to aspect-level classification tasks; the difference is that BERTABSA directly uses the hidden states of the aspect words for classification, while Att-BERTABSA adds an attention layer to the output of BERT. To show that our method truly improves the results, we only use the most critical parts of each model, without any other tricks or mechanisms (e.g., the gating mechanism). We conduct experiments on three benchmark datasets of aspect-based sentiment analysis, and PBSA outperforms all the baselines on all datasets in both accuracy and Macro-F1. As shown in Table V, compared with the other tasks, PBSA yields a more significant improvement on these small-scale datasets, indicating that the original attention lacks a good inductive bias due to limited labeled data. With the help of PBSA, the robustness of the model can be improved effectively.
### **RQ1: Performance under Different Sample Ratios**
To verify the performance of our approach on low-resource tasks, we conduct experiments with different sample ratios. We draw sample sets from the original datasets with sample ratio \(\in\{0.001,0.005,0.01,0.05,0.1\}\), and measure the accuracy of BERT and BERT+PBSA on these sample sets.
As shown in Figure 4, the performances of BERT and BERT+PBSA have the same trend. As the accuracy of BERT increases, the accuracy of BERT+PBSA increases and vice versa. As explained in Section III-C, the attention supervision information is obtained based on the pre-trained model, whose performance has a direct influence on the quality of the attention
supervision information and further affects the results of re-training. That may explain the strong correlation between the performance of BERT and BERT+PBSA.
The improvement is more prominent when the ratio is in the middle range (sample ratio \(\in(0.005,0.05)\)). As discussed above, when the ratio is small, the pre-trained model performs poorly, which results in uninformative attention supervision and further limits the performance of
Fig. 4: Accuracy as a function of the sample ratio. Each triangle and circle corresponds to the accuracy of BERT and BERT+PBSA, respectively, under the given sample ratio.
Fig. 3: The visualization results of several samples from the SST2 test set.
PBSA. As the sample ratio increases, the original model performs better, the quality of the attention supervision information is enhanced, and PBSA improves the model even more. However, the improvement is not without limit: once the sample ratio exceeds a certain value, the phenomenon of attention bias is no longer evident, and the improvement shrinks. This may be because BERT is pre-trained on a large-scale corpus, so when we fine-tune it, its attention already fits well on these larger sample sets, leaving the original model scant room for improvement.
To sum up, the distribution of the attention parameters is not stable enough when the data are limited or the model size is small, and it can be refined by PBSA. The magnitude of PBSA's improvement is closely related to the performance of the baseline.
### **RQ2: Comparison with other methods**
On the tasks listed above, we compare our method with other advanced self-supervised attention learning approaches. SANA [6] generates counterfactuals by a masking scheme and measures the difference in the softmax probability of the model between the counterfactual and the original sample as an indicator of important words. AWAS [5] and PGAS [32] progressively mask the word with the largest attention weight or partial gradient. Most of these works do not release their key code and experiment only on certain tasks, so we compare our algorithm directly against their best published results on the respective tasks. To make a fair comparison, we use the same word embeddings and the same hidden-size settings to reproduce their baselines, as listed in Table II.
On the document-level and sentence-level tasks (Table IV), PBSA is superior to SANA by 1.11% and 1.37%, which verifies that the word-based concurrent perturbation can mine the importance distribution of words more accurately than the masking scheme. On the aspect-level task (Table VI), compared with AWAS and PGAS, our method improves the model more. As we mentioned in the Introduction (Section I), our method can generate word-specific attention supervision while others treat the important words equally without discrimination. We speculate that this may be one of the main reasons for our improvement.
### **RQ2: Comparison with human intuition methods**
From the perspective of human intuition, gradient-based methods and leave-one-out methods are usually used to improve the interpretability of a model. Current self-supervised attention learning methods are mostly based on word masking, which can be seen as a variant of the leave-one-out approach. We also try using a gradient-based method [51] to generate supervision information. As shown in Table III and Table V, the gradient-based method performs poorly on most of the datasets, especially the aspect-level ones. These results demonstrate that although gradient-based methods can improve the interpretability of a model, they do not necessarily improve its performance. In contrast, our method enhances interpretability while also improving performance.
### **RQ3: Hyperparameter sensitivity**
As shown in Figure 5, our method achieves the best results on REST and TWITTER when \(T=2\) and \(T=1\), respectively. As \(T\) increases, the performance first improves and then degrades due to over-fitting, and it does not change sharply with further increases of \(T\) once the best result is reached. In practice, we find that a single iteration already achieves promising results. The hyperparameter \(\lambda\) controls the perturbation degree of WBCP; when \(\lambda\) is too large, performance deteriorates because too much noise is injected. In all of our experiments, we set \(\lambda\) to \(0.1\). The hyperparameter \(\gamma\) controls the strength of attention supervision; when \(\gamma\) is too large, it overly penalizes the misalignment between the model attention and the perturbation-based attention, which may hurt the model's internal reasoning process.
Compared with \(\gamma\), \(\lambda\) has less effect on the results when its value changes slightly, but we cannot remove \(-\lambda\sum_{i=1}^{n}\log(\sigma_{i})\) from our loss function. Without this term, the model would try not to add any noise to \(x\) at all, and PBSA would obtain a meaningless supervision distribution that varies dramatically for the same sentence each time (whereas the distribution is supposed to be essentially unchanged for the same sentence). On the other hand, the results are more sensitive to \(\gamma\), which determines whether the models can reach their peak results.
### **RQ4: Visualization analysis**
In this section, we select several attention visualizations from the SST2 test set to explain how PBSA works. As shown in Figure 3, PBSA **makes the model pay more attention to important but low-frequency words, reduces the focus on high-frequency words that do not affect the results, increases the difference in weight between words with conflicting meanings, and increases sensitivity to adversative relations in sentences.**
**Pay more attention to important but low-frequency words.** Some words have important effects on the results, but if they do not appear frequently enough, the traditional attention mechanism may not pay enough attention to them. As shown in Figure 3-(1), the word _drowsy_ has an important influence on the emotional polarity of the review. However, it is a low-frequency word in the corpus, so the attention mechanism does not allocate enough weight to it, resulting in a classification error. After training with PBSA, the model assigns enough weight to _drowsy_, which turns the result from false to correct.
**Reduce the focus on high-frequency words that do not affect the results.** In the baseline, some high-frequency words that carry no emotional polarity usually receive high weights, while some important words that should have been focused on are ignored. As Figure 3-(2) shows, _romantic_ and _doesn't_ are words with strong emotional polarity. However, the baseline assigns greater weight to other high-frequency words (e.g., _between_) with no emotional polarity, and thus ignores _romantic_ and _doesn't_, which results in misclassification. After training with PBSA, the model reduces its focus on _between_, the weight allocated to the significant words increases correspondingly, and the result is corrected.
**Increase the difference in weight between words with conflicting meanings.** As shown in Figure 3-(3), the baseline focuses on too many words: _horror_, _revenge_, _perfect_, _relentless_, _torture_, and so on. All of these words may be important, but their meanings conflict, which interferes with the classification task: the model is confused because it does not know how to make a prediction from so many emotional words. After training with PBSA, the difference in the weights of the emotional words becomes larger, which leads to the right result. Note that the entropy of the attention distribution does not necessarily decrease, because PBSA keeps the attention on important words while diluting the distribution over the other words.
**Be more sensitive to adversative relations in sentences.** If there are adversative conjunctions (e.g., _but_, _however_, and so on) in a sentence, it is likely to express two opposite emotions before and after the conjunction. In such cases the model needs to sense the change of emotional polarity keenly, and should accordingly assign higher weights to these adversative conjunctions. Judging from our results, the original attention mechanism unfortunately tends to ignore these conjunctions, since they seem to have no outward effect on the results. As Figure 3-(4) and Figure 3-(5) show, the baseline ignores the word _but_, resulting in errors. After training with PBSA, the model pays more attention to _but_, so that the emotions both before and after the adversative conjunction are taken into consideration.
## V Conclusions and future work
In this paper, we propose a novel self-supervised attention learning method based on word-based concurrent perturbation. The algorithm adds as much noise as possible to each word in the sentence under the premise of unchanged semantics, so as to mine supervision information that guides attention learning. Our experiments demonstrate that our method achieves significant performance improvements over the baselines on several text classification tasks. Moreover, we use several visualization examples to interpret how our method guides the internal reasoning process of models.
It is worth noting that we combine our method with Transformers, which most previous attention-guiding methods did not attempt. Our strategies may not be the best ways to apply our algorithm to Transformers, but they nonetheless demonstrate the effectiveness of the proposed method. In the future, we will look for more appropriate and effective strategies and incorporate our algorithm into other NLP tasks.
|
2306.02870 | On "Scientific Debt" in NLP: A Case for More Rigour in Language Model
Pre-Training Research | This evidence-based position paper critiques current research practices
within the language model pre-training literature. Despite rapid recent
progress afforded by increasingly better pre-trained language models (PLMs),
current PLM research practices often conflate different possible sources of
model improvement, without conducting proper ablation studies and principled
comparisons between different models under comparable conditions. These
practices (i) leave us ill-equipped to understand which pre-training approaches
should be used under what circumstances; (ii) impede reproducibility and credit
assignment; and (iii) render it difficult to understand: "How exactly does each
factor contribute to the progress that we have today?" We provide a case in
point by revisiting the success of BERT over its baselines, ELMo and GPT-1, and
demonstrate how -- under comparable conditions where the baselines are tuned to
a similar extent -- these baselines (and even-simpler variants thereof) can, in
fact, achieve competitive or better performance than BERT. These findings
demonstrate how disentangling different factors of model improvements can lead
to valuable new insights. We conclude with recommendations for how to encourage
and incentivize this line of work, and accelerate progress towards a better and
more systematic understanding of what factors drive the progress of our
foundation models today. | Made Nindyatama Nityasya, Haryo Akbarianto Wibowo, Alham Fikri Aji, Genta Indra Winata, Radityo Eko Prasojo, Phil Blunsom, Adhiguna Kuncoro | 2023-06-05T13:43:50Z | http://arxiv.org/abs/2306.02870v1 | # On "Scientific Debt" in NLP: A Case for More Rigour in Language Model Pre-Training Research
###### Abstract
This evidence-based position paper critiques current research practices within the language model pre-training literature. Despite rapid recent progress afforded by increasingly better pre-trained language models (PLMs), current PLM research practices often conflate different possible sources of model improvement, without conducting proper ablation studies and principled comparisons between different models under comparable conditions. These practices (i) leave us ill-equipped to understand which pre-training approaches should be used under what circumstances; (ii) impede reproducibility and credit assignment; and (iii) render it difficult to understand: "_How exactly does each factor contribute to the progress that we have today?_" We provide a case in point by revisiting the success of BERT over its baselines, ELMo and GPT-1, and demonstrate how -- under comparable conditions where the baselines are tuned to a similar extent -- these baselines (and even-simpler variants thereof) can, in fact, achieve competitive or better performance than BERT. These findings demonstrate how disentangling different factors of model improvements can lead to valuable new insights. We conclude with recommendations for how to encourage and incentivize this line of work, and accelerate progress towards a better and more systematic understanding of what factors drive the progress of our foundation models today.
## 1 Introduction
In recent years, language models that are pre-trained on large amounts of data have become the foundation models (Bommasani et al., 2021) for achieving state-of-the-art results on many NLP tasks (Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2020; Liu et al., 2019, _inter alia_), and performed well at novel tasks through demonstrations (Brown et al., 2020). Hence, given the vast potential of pre-trained language models (PLMs), substantial effort has since been dedicated into developing the next state-of-the-art PLM, whether through new objective functions (Devlin et al., 2019; Yang et al., 2019; Clark et al., 2020; Raffel et al., 2020; Tay et al., 2022, _inter alia_), increasing model size (Kaplan et al., 2020; Brown et al., 2020; Chowdhery et al., 2022), better data filtering (Lee et al., 2022), or making a better use of the compute budget by training longer (Hoffmann et al., 2022).
To a large extent, this rapid progress is made possible by a strong emphasis on designing better foundation models and achieving new state-of-the-art results. In pursuit of this goal, each new PLM often leverages all possible means for improving performance: Indeed, it is often the case that a new state-of-the-art PLM paper not only proposed a new pre-training loss function as its key novelty, but was also trained on more data, used a larger model size, and benefited from the latest proven hyper-parameter settings and tricks of the trade compared to earlier PLMs. We argue, however, that progress through these practices also comes at a cost: As each new PLM differs from earlier one in _multiple_ dimensions at once, it has become increasingly harder to (i) disentangle how these different components contribute to the model performance and progress that we observe today; (ii) understand which approaches work well under what circumstances; (iii) distill generalizable patterns and understand how well each result would transfer to new tasks, datasets, and settings (_e.g.,_ in low-resource data and compute scenarios); and (iv) replicate prior results and conduct the appropriate credit assignment for the techniques and prior work that are most responsible for our progress today.
Much like how _technical debt_ often arises when developing new software at breakneck speed,1 we
propose the term "**scientific debt**" to refer to the issues above that arise due to these PLM research practices. In this evidence-based position paper, we argue that scientific progress in NLP should strike a delicate balance between achieving the best performance on various benchmarks and leaderboards -- an area where great progress has been made in recent years -- and also on understanding: _How exactly does each different component affect the PLM performance that we observe today?_ While this question is difficult to answer in light of the current PLM research practices that conflate different sources of model performance, we encourage the community to dedicate more effort into disentangling the performance gains from these interacting factors. Doing so would pave the way for achieving more progress in the future, in a way that is more scientifically rigorous, generalizable, reproducible, and _well-grounded_ in a better understanding of how well each approach works under different settings.
Footnote 1: [https://www.cai.com/](https://www.cai.com/)
We begin by motivating the importance of disentangling multiple possible factors of model improvement through an analogy with medicine, and discuss their parallels for PLM research (§2). We then provide empirical evidence by revisiting the success of BERT Devlin et al. (2019), and demonstrate that prior PLMs like ELMo Peters et al. (2018) and GPT-1 Radford et al. (2018) can, in fact, achieve nearly the same performance (\(\sim 1\%\) difference in aggregate GLUE) as BERT under _comparable_ experimental conditions (§3). These findings serve to (i) further our understanding of the effectiveness of the masked language modelling loss compared to prior approaches, under comparable experimental conditions; and (ii) provide an example for how such work can yield valuable new insights regarding which approaches should be used under what conditions. We then conclude with several key recommendations, lessons learnt, and calls for change that would encourage and facilitate this line of work in the future, and accelerate our progress towards resolving the scientific debt that arises due to current PLM research practices (§4).
## 2 Analogy with Medicine and Parallels with PLM Research
Consider the following analogy with clinical trials in medicine. A drug trial showing that drug A (taken 10 times a day) works better than drug B (taken only twice a day) would raise a few critical questions: Would drug A still work just as well if it is taken at a lower dose? Can we get the same results by increasing the dose of drug B? How well would each drug work under comparable conditions, and are there any particular trade-offs (_e.g.,_ at a lower dose, drug A works better than drug B; at a higher dose, drug B works better)? Answering these questions -- which requires disentangling the effects of the drug dose regimen -- would yield valuable insights, and facilitate more informed decisions over which drugs to produce at scale, and which drugs should be offered to which patients.
**Parallels with PLM research.** It is straightforward to see a parallel between this (flawed) drug trial setup and the way PLM research in NLP is done today. As each new PLM differs from earlier ones in multiple dimensions, it is increasingly hard to disentangle how much each component (_e.g.,_ objective function, model size, pre-training data amount) contributes to performance. This leaves us ill-equipped to answer questions like:
* How well would earlier PLMs in the literature work if we augment them with the latest techniques, such as using the latest hyper-parameter settings or training them on more data? Would they match the performance of newer PLMs?
* To what extent should we attribute each PLM's performance to its key novelty (_e.g.,_ the _bidirectional_ masked language modelling loss for BERT), as opposed to other factors like the size of the model or the size of its training data?
* Which pre-training approaches should we use in settings where efficiency considerations are paramount, such as for low-resource languages with limited amounts of monolingual data or under compute resource constraints?2
**Reasons for the scientific debt.** While answering the questions above would drive better-informed progress in the field, in practice there are multiple barriers to doing so; to some extent, these barriers account for why this scientific debt arises in the first place. These include (i) variations in the choice of hyper-parameters and experimental settings, which can result in a large variance in model performance (Bouthillier et al., 2019; Dodge et al., 2020); (ii) the proprietary and opaque nature of many PLMs and their training datasets -- especially large-scale ones -- rendering standardization difficult; (iii) the increasing computational costs of large-scale PLMs, which increase the cost of running multiple pre-training experiments (_e.g.,_ with different model sizes or training data); and (iv) a strong emphasis in the field on achieving state-of-the-art results -- indirectly creating an incentive to spend all of one's compute, time, and effort on achieving the best results, albeit sometimes at the expense of scientific rigour, performing rigorous experiments under comparable conditions, tuning the baselines, and disentangling different possible sources of model improvements. We revisit these barriers, and outline our recommendations, in §4.
**The costs and benefits of scientific debt, and why we should address it.** In practice, the community goes into scientific debt because there are certain benefits to doing so. Indeed, by rapidly sharing and publishing new progress and state-of-the-art models -- even when we do _not yet_ fully understand how each factor contributes to the final performance of the model -- the community is able to share, use, and build on exciting findings (_e.g.,_ more accurate and faster models, etc.) much more quickly at a time of rapid progress. Yet on the other hand, accumulating too much scientific debt -- without a good plan to address it and eventually pay it off -- also carries an important risk: the lack of fair comparisons and proper ablations can lead the community down the wrong path, make sub-optimal choices, and waste precious community time, effort, and computational resources in the wrong direction. Paying off this scientific debt would crucially (i) enable the community to direct our collective efforts and resources into the research directions that matter the most for improving model performance, (ii) help us understand what factors enable PLMs' remarkable success today, and (iii) better illuminate the trade-offs between different approaches under various types of settings.
**Large variability in current PLMs.** To illustrate the extent of this issue, we summarize several key design choices behind some well-known PLMs in Table 3 (Appendix A), revealing a large _variability_ in the key design choices (_e.g.,_ model size, training data corpus and size, subword pre-processing algorithm, pre-training task, etc.) behind each PLM. Some common patterns include scaling the model while also using different, often larger pre-training data, as well as using different training regimes altogether. As each design choice impacts model performance in different ways (Sennrich and Zhang, 2019; Jiao et al., 2019) -- combined with the fact that not all prior work conducted thorough ablations to understand how each component affects overall performance -- it has become increasingly difficult to understand _why_ a PLM outperforms the baselines, _which_ design choices should be used under what settings, and _how much_ of the improvement can be attributed to each work's novelty.
## 3 Empirical Evidence: The Case of BERT
Having motivated the importance of better disentangling the impact of different design choices in PLM research, we conduct experiments in pursuit of this goal. These experiments serve to (i) further our understanding of the effectiveness of the masked language modelling _objective_ (Devlin et al., 2019), compared to alternative approaches under comparable conditions; (ii) demonstrate how these experiments can yield new insights; and (iii) form the basis for our recommendations and lessons learnt for accelerating progress in this line of work.
At the time of its release, BERT (Devlin et al., 2019) attracted a lot of attention by virtue of its strong performance on many tasks, outperforming earlier PLMs like ELMo (Peters et al., 2018) and GPT-1 (Radford et al., 2018) by substantial margins. At its core, BERT combines the following:
* Language model (LM) pre-training on large amounts of unlabelled data (Dai and Le, 2015; Peters et al., 2018; Radford et al., 2018; Howard and Ruder, 2018).
* Conducting _whole-model fine-tuning_ (Radford et al., 2018) on each downstream task, as opposed to only using the resulting contextual
word representations (_i.e.,_ the neural model's frozen hidden state vectors) as features for a downstream task, as was done in the case of ELMo.
* Like the GPT-1 model, using a Transformers Vaswani et al. (2017) architectural backbone, as opposed to (bidirectional) LSTMs Hochreiter and Schmidhuber (1997) in the case of ELMo.
* Using a novel _bidirectional_ masked language modelling loss, which predicts the identity of each masked token by attending to _both_ its left and right context,3 rather than only the left/right context as in GPT-1 and ELMo pre-training.
Footnote 3: The bidirectional attention used within BERT is enabled by the use of Transformer architectures, which — unlike LSTM models — have no inherent directionality constraints.
Based on the different components of BERT above, it combines previously known techniques with a key novelty: the masked language modelling objective, which -- unlike prior approaches like ELMo and GPT-1 -- enables it to leverage and fuse bidirectional context at pre-training. Nevertheless, BERT differs from these prior approaches on many _other_ factors (_e.g.,_ the pre-training data corpus and size, model size, using Transformers vs. LSTMs, length of the training cycle, tokenizer, etc.). This renders a principled comparison difficult, and makes it hard to _isolate_ the importance of the masked language modelling objective from other factors that also affect model performance. We therefore ask:
* To what extent can we attribute BERT's superior performance over its GPT-1 and ELMo baselines to its key novelty (_i.e.,_ the bidirectional masked LM loss), as opposed to other design choices?
* Can the baseline models achieve similar performance with BERT, if we augment them with a similar set of design choices that the BERT model used (_e.g.,_ whole model fine-tuning, using Transformers as opposed to LSTMs, etc.)?
* Can we come up with simpler variants of the baseline models, which can approximate the performance of more sophisticated approaches?
* How exactly would the findings change in the case where pre-training efficiency considerations are paramount (_e.g.,_ where there is a more limited amount of pre-training compute available)?
### Experimental Setup
We aim to isolate the importance of BERT's bidirectional masked LM objective, in comparison to two prior baselines: ELMo and GPT-1, under _comparable_ experimental conditions. We compare our experimental setup with each model's original pre-training configuration in Table 4 (Appendix B).
**Training data.** For all three models, we use the original BERT pre-training data Devlin et al. (2019), containing a combination of Wikipedia4 and BookCorpus Zhu et al. (2015). This dataset -- which is larger than either the ELMo or GPT-1 pre-training dataset5 -- consists of \(\sim 3.3\)B words.
Footnote 4: All model and data license information is in Appendix E.
Footnote 5: As pre-training data size and quality have been shown to be an important factor of LM success Liu et al. (2019); Hoffmann et al. (2022), we hypothesize that BERT’s larger pre-training data is an important factor behind its success compared to prior approaches — independently of the masked LM objective.
**Model.** We use a Transformer backbone for all three models, which has been shown to outperform LSTMs, and is more amenable to scaling to larger training datasets. Concretely, we use a BERT-Base architectural backbone, as implemented on HuggingFace (Wolf et al., 2020), with \(\sim 110\) million parameters. Whereas the underlying model architectures are identical across all models, the pre-training objective function is naturally tailored to each approach, _e.g.,_ masked LM for BERT, causal / left-to-right LM for GPT-1, and two independent causal LMs for ELMo: one operating in a left-to-right fashion, and another operating right-to-left.
Footnote 6: By using the same codebase for all three models, we eliminate confounds arising from minor technical differences, such as whether or not to use segment embeddings (as BERT does), what kind of positional encoding schemes are used, and what vocabulary size and subword preprocessing algorithms are used.
The implementation of the different pre-training objectives requires two changes. First, when applicable, we update the "input mask" function on each attention layer (_e.g.,_ using a standard causal attention mask in the case of GPT-1 to enforce a left-to-right directionality constraint). Second, we change the loss function to reflect each pre-training objective. For instance, we predict each next word conditional on its _left_ context for GPT-1; for BERT, we predict the \(\sim 15\%\) masked words conditional on the (slightly corrupted) _bidirectional_ context. One key difference is that our BERT implementation excludes the next-sentence prediction (NSP) pre-training loss, in accordance with the findings of Liu et al. (2019).6 All other
variables (_e.g._, dataset, hyper-parameter choices, etc.) are kept identical across all models to facilitate a fair comparison.
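As a rough illustration of these two changes (a sketch under our own naming, not the actual training code), the snippet below contrasts the causal input mask and next-token targets used for the GPT-1 rerun with the \(\sim 15\)% masked-token targets used for BERT:

```
import torch

def causal_attention_mask(seq_len):
    """GPT-1-style input mask: position i may only attend to positions <= i."""
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

def causal_lm_targets(input_ids):
    """Left-to-right LM: every token predicts its right neighbour,
    so 100% of the tokens in a batch provide supervision."""
    return input_ids[:, :-1], input_ids[:, 1:]

def masked_lm_targets(input_ids, mask_token_id, ignore_index=-100, p=0.15):
    """BERT-style masked LM: ~15% of tokens are masked and predicted from
    bidirectional context; unmasked positions contribute no loss.
    (The full BERT recipe further splits masked positions into
    [MASK]/random/kept and skips special tokens; omitted for brevity.)"""
    targets = input_ids.clone()
    masked = torch.rand_like(input_ids, dtype=torch.float) < p
    targets[~masked] = ignore_index      # loss is computed on masked tokens only
    corrupted = input_ids.clone()
    corrupted[masked] = mask_token_id
    return corrupted, targets
```

This contrast also foreshadows the efficiency finding reported below: the causal objective supervises every token, whereas the masked objective supervises only a small fraction per batch.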
**Pre-processing.** We use the same WordPiece tokenization as the original BERT model. We follow the procedure of Liu et al. (2019) for sampling sequences for each batch, where the input is constructed by repeatedly sampling multiple sentences until we reach the maximum sequence length of 512, while respecting document boundaries.
**Fine-tuning.** We use whole-model fine-tuning (Radford et al., 2018) for all models, which works better than the feature-based contextual word embedding approach of the original ELMo. As is standard practice, we take the top-layer contextual embedding of the [CLS] token to represent the whole sequence when fine-tuning BERT; for the left-to-right GPT-1 rerun, we take the top-layer contextual embedding of the _last token_, where the model has observed the entire sequence.7 For each GLUE task, we run a grid search over 7 fine-tuning learning rates, 2 batch sizes, and 3 random seeds (Appendix B). We submit the best-performing model on the validation set to the GLUE leaderboard.
Footnote 7: By the same logic, we take the top-layer contextual embedding of the _first_ token to represent the whole sequence instead when fine-tuning the right-to-left GPT-1.
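The fine-tuning sweep described above amounts to a plain grid search. In the sketch below the grid values are placeholders (the paper defers its exact grids to Appendix B), and `fine_tune_and_evaluate` is an assumed helper that fine-tunes one configuration and returns its validation accuracy:

```
from itertools import product

# Placeholder grids: 7 learning rates x 2 batch sizes x 3 seeds = 42 runs
# per {model, task}; the actual values are listed in Appendix B.
learning_rates = [5e-6, 1e-5, 2e-5, 3e-5, 5e-5, 7e-5, 1e-4]
batch_sizes = [16, 32]
seeds = [0, 1, 2]

best = None
for lr, bs, seed in product(learning_rates, batch_sizes, seeds):
    val_acc = fine_tune_and_evaluate(lr=lr, batch_size=bs, seed=seed)
    if best is None or val_acc > best[0]:
        best = (val_acc, lr, bs, seed)   # best validation model goes to the leaderboard
```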
**ELMo rerun.** Following Peters et al. (2018), we pre-train ELMo by independently pre-training two separate, causal / unidirectional models: a left-to-right one and a right-to-left one.8 At fine-tuning, Peters et al. (2018) combined the output layers of the left-to-right and right-to-left models, and used that combination as the representation of the whole sequence, based on which the fine-tuning cross-entropy loss is then calculated. In contrast, we employ a slight modification of ELMo where we first calculate the probability of each downstream task label under the left-to-right and right-to-left models, denoted as \(p_{\mathbf{\theta}}^{\text{L2R}}(y\,|\,\mathbf{x})\) and \(p_{\mathbf{\psi}}^{\text{R2L}}(y\,|\,\mathbf{x})\), respectively; \(y\) denotes the downstream task label for a sequence \(\mathbf{x}\), while \(\mathbf{\theta}\) and \(\mathbf{\psi}\) denote the parameters of the left-to-right and right-to-left models, respectively. The aim of ELMo fine-tuning is to find fine-tuned (FT) ELMo parameters \(\{\mathbf{\theta}_{\text{FT}}^{\star},\mathbf{\psi}_{\text{FT}}^{\star}\}\):
Footnote 8: Note that there is no need for cross-model communication when pre-training the left-to-right and right-to-left ELMo models; hence this ELMo pre-training stage can be done completely in parallel, _e.g._, on two completely different machines.
\[\mathbf{\theta}_{\text{FT}}^{\star},\mathbf{\psi}_{\text{FT}}^{\star}\stackrel{{\text{def}}}{{=}}\operatorname*{arg\,min}_{\mathbf{\theta},\mathbf{\psi}}\sum_{(\mathbf{x},y)\,\in\,D}-\log\left(\lambda\,p_{\mathbf{\theta}}^{\text{L2R}}(y\,|\,\mathbf{x})+(1-\lambda)\,p_{\mathbf{\psi}}^{\text{R2L}}(y\,|\,\mathbf{x})\right).\]
We simply set the interpolation coefficient \(\lambda=0.5\) for all tasks. We find this variant of ELMo to perform better than the original ELMo formulation by a small margin (\(\sim 0.5\)% in aggregate GLUE validation performance) in our preliminary experiments.
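A minimal PyTorch sketch of this interpolated fine-tuning loss, assuming each unidirectional model produces classification logits for a batch (the function name is ours):

```
import torch

def elmo_ft_loss(logits_l2r, logits_r2l, labels, lam=0.5):
    """Jointly fine-tune the two unidirectional models by interpolating
    their label probabilities before taking the negative log-likelihood."""
    p = lam * torch.softmax(logits_l2r, dim=-1) \
        + (1.0 - lam) * torch.softmax(logits_r2l, dim=-1)
    return -torch.log(p.gather(1, labels.unsqueeze(1))).mean()
```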
**Hyper-parameters.** We summarize the pre-training and fine-tuning hyper-parameters for each model in Appendix B. We spend a similar amount of compute on tuning the hyper-parameters of each model, facilitating a fair comparison across models.
**Evaluation Tasks.** We evaluate each model on the GLUE classification tasks (Wang et al., 2018), as done in the original BERT paper. We leave other evaluation benchmarks like SuperGLUE (Wang et al., 2019) to future work; note that, due to BERT's bidirectional nature, conducting generative evaluation that requires sampling text from BERT is non-trivial (Wang and Cho, 2019; Goyal et al., 2022).
**Compute.** Pre-training each model took 5 days with 8 V100 GPUs, while it took roughly a day to run GLUE fine-tuning with 8 GPUs (42 fine-tuning hyper-parameter configurations for each {model, task}, §3.1). All in all, we used 8700 GPU hours for pre-training and 1100 GPU hours for fine-tuning all models.
### Empirical Findings
We summarize the GLUE test set results in Table 1, based on which we make four observations.
* Under comparable conditions, the test GLUE performance of our variant of ELMo is \(76.8\)%, representing a relatively small 1.2% gap with the standard BERT rerun (\(78\)%). This represents a vast, \(>6\)% improvement over the original ELMo's reported result of \(70.3\)%; we attribute this gap to the use of the larger, BERT-equivalent pre-training data, a Transformer backbone, and whole-model fine-tuning. We also see a small gain for our GPT-1 rerun -- an improvement we attribute to a larger and better pre-training dataset than that of the original model. Hence, we conclude that most of the substantial \(>8\)% gap between the original BERT and ELMo results can, in fact, be attributed to using Transformers rather than LSTMs, conducting whole-model fine-tuning, and using larger pre-training data, rather than to BERT's bidirectional masked LM loss in and of itself. Altogether, this result reaffirms how augmenting the baselines with a similar set of techniques and advances as later-generation models can substantially improve their performance, and yield results that are close to those of more recent models (Melis et al., 2018; Merity, 2019; Lei, 2021, _inter alia_).
* However, there remains a larger gap between the causal / unidirectional PLMs and BERT; this holds for both the left-to-right (\(2.6\)% worse than BERT) and the right-to-left (\(3.4\)% worse) models. Note, however, that this \(2.6\)% gap under comparable conditions is smaller than that implied by the originally reported GPT-1 result, which had a \(>4\%\) gap with the original BERT -- a result we attribute to the smaller dataset used to pre-train the original GPT-1. These findings further emphasize the importance of comparing the baselines and more recent PLMs under comparable conditions.
* Although the left-to-right and right-to-left models still lag behind BERT, a simple _ensemble_ of two independently pre-trained and fine-tuned left-to-right and right-to-left models can nevertheless approach BERT's and ELMo's performance (76.3% for the ensemble, 78% and 76.8% for BERT and ELMo reruns, respectively). Remarkably, we do not observe the same gains when ensembling two left-to-right models from different random seeds, suggesting that ensembling unidirectional models with different directionalities is crucial for performance. All in all, these results highlight the need to explore _simpler baselines_, which can approximate the performance of more sophisticated approaches.9
Footnote 9: Ensembling independently pre-trained and fine-tuned left-to-right and right-to-left models is a _late-fusion_ approach, without the need for ELMo's _joint_ fine-tuning stage.
* As shown in Table 2, when efficiency considerations are paramount (_i.e.,_ where each model is only pre-trained for 200k steps, and not the full 1M), the performance gap between the left-to-right GPT-1 rerun and BERT nearly vanishes (\(0.5\)% gap, as opposed to a \(2.6\)% gap in the 1M-pre-training-steps setup). We attribute this to the fact that BERT only uses \(\sim 15\)% of the tokens in a batch as the masked LM target, whereas the left-to-right LM can leverage all 100% of the tokens as pre-training supervision in a similar fashion as Electra (Clark et al., 2020), hence resulting in more efficient learning. Remarkably, despite its simplicity, an ensemble of independently-pre-trained-and-fine-tuned left-to-right and right-to-left models **outperforms** the BERT model by \(1.1\)% in this efficient learning scenario. This finding (i) suggests that approaches that work best in the high-data/compute scenario may not necessarily transfer to cases where efficiency considerations are paramount; and (ii) highlights the need to train, evaluate, and ultimately _benchmark_ models in efficient learning scenarios, such as in languages where monolingual data are not abundant or in cases where there is only a limited amount of compute available for pre-training.
**Validation set results with error bars.** To preserve test set integrity, we only submitted the single-best validation model to the test set (Table 1). In Appendix D, we report the validation set performance that includes error bars over three different random seeds, which broadly show the same trend.
## 4 Paying Off the Scientific Debt: Recommendations and Lessons Learnt
We proceed to outline several key recommendations and lessons learnt for encouraging, incentivizing, and accelerating progress in this line of work.
**Establish standard, publicly available pre-training corpora at multiple data scales.** As seen in §3, the size and quality of the pre-training data is an important driver behind model performance (Liu et al., 2019; Hoffmann et al., 2022), which makes a rigorous comparison between different PLMs difficult. Hence, our first recommendation is to establish standard pre-training corpora that are publicly available.10 We further recommend releasing the pre-training corpora under multiple data scales, as approaches that work best under strict compute or data resource requirements may be different from the case where there is a large amount of compute and data available (§3; Clark et al., 2020; Treviso et al., 2022). Note that this does _not_ mean that we are discouraging the use of non-standard or even-larger corpora than those that are publicly available. On the contrary, researchers _should_ continue to push the boundaries of what is possible by training on more, better quality, and more recent data. In such cases, we recommend researchers to _also_ release versions of their models that are trained on the standard pre-training corpora -- above and beyond the version trained on proprietary & large-scale data that would presumably be necessary to achieve a new state-of-the-art -- in order to facilitate a fair and principled comparison with prior work. We encourage the community to _continually_ release new standardized pre-training datasets as time passes to avoid the effect of pre-training data staleness (Lazaridou et al., 2021).
**Explicitly delineate the different types of contributions behind each work, including both the key novelty and engineering contributions.** We recommend that PLM research explicitly state the key novelty behind each work (_e.g.,_ the bidirectional masked LM loss for BERT), delineate and explicitly state other contributions (including engineering ones) and design choices that can impact performance, and outline how these differ from prior work (_e.g.,_ better model partitioning for training larger models on multiple devices, better filtering of the training data, more extensive hyper-parameter tuning, etc.). Combined with strong baselines and extensive ablations (see below), this will enable us to better understand _how much_ of the performance gains can be attributed to each factor, including the key novelty behind the approach.
**Invest comparable effort into tuning both the baselines _and_ the newly-proposed models.** In practice, many of the contributions (_e.g.,_ better hyper-parameters, better data, etc.) would also be applicable to the baselines. We recommend each PLM work to discern which of their design choices can _also_ be applied to the baselines, and apply those techniques in order to create stronger baselines that may nevertheless rival the performance of more recent models (§3; Melis et al., 2018; Lei, 2021).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & CoLA & MNLI(-m) & MRPC & QNLI & QQP & RTE & SST-2 & STS-B & Avg \\ \hline \hline BERT & 52.1 & 84.6 & 88.9 & 90.5 & 71.2 & 66.4 & 93.5 & 85.8 & 79.1 \\ BERT Large & 60.5 & 86.7 & 89.3 & 92.7 & 72.1 & 70.1 & 94.9 & 86.5 & 81.6 \\ \hline GPT-1 & 45.4 & 82.1 & 82.3 & 87.4 & 70.3 & 56.0 & 91.3 & 80.0 & 74.4 \\ \hline BiLSTM + ELMo + Attn & 36.0 & 76.4 & 84.9 & 79.8 & 64.8 & 56.8 & 90.4 & 73.3 & 70.3 \\ \hline \hline \multicolumn{10}{c}{**Our replication with proper controls \& comparable experimental conditions**} \\ \hline \hline BERT & 50.8 & 84.5 & 89.0 & 90.5 & 71.0 & 61.0 & 93.1 & 84.4 & 78.0 \\ \hline Comparable GPT-1 Rerun - L2R & 41.6 & 87.4 & 84.7 & 86.6 & 68.8 & 62.9 & 91.8 & 79.3 & 75.4 \\ Comparable GPT-1 Rerun - R2L & 42.5 & 82.0 & 85.5 & 88.3 & 69.1 & 57.6 & 92.8 & 79.1 & 74.6 \\ \hline \hline Comparable ELMo-variant Rerun & 46.8 & 83.6 & 85.8 & 89.9 & 70.8 & 61.9 & 93.1 & 82.1 & 76.8 \\ Ensemble of Comparable GPT-1: L2R + R2L & 45.1 & 83.7 & 85.8 & 88.9 & 70.8 & 62.4 & 92.9 & 81.0 & 76.3 \\ Ensemble of Comparable GPT-1: L2R + L2R & 42.4 & 83.5 & 85.1 & 87.8 & 70.0 & 63.1 & 93.1 & 79.9 & 75.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: GLUE **test** results. We use F1 scores for MRPC and QQP, Matthew’s Correlation for CoLA, SpearmanR for STS-B, and accuracy for the rest; all models are pre-trained with the same batch size & compute (1M steps).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Model & CoLA & MNLI(-m) & MRPC & QNLI & QQP & RTE & SST-2 & STS-B & Avg \\ \hline \hline BERT & 43.8 & 80.9 & 86.4 & 87.9 & 69.3 & 59.3 & 90.0 & 80.4 & 74.8 \\ \hline Comparable GPT-1 Rerun: L2R & 43.5 & 80.3 & 84.1 & 86.3 & 68.4 & 63.0 & 91.0 & 77.8 & 74.3 \\ Comparable GPT-1 Rerun: R2L & 36.2 & 80.6 & 82.4 & 88.2 & 68.7 & 53.7 & 93.0 & 77.8 & 72.6 \\ \hline Ensemble of GPT-1 Rerun: L2R + R2L & 45.1 & 82.6 & 84.4 & 88.3 & 70.8 & 62.9 & 93.5 & 79.9 & 75.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: GLUE test set results using the pre-trained model, after training for 200,000 steps followed by fine-tuning.
**More extensive ablation studies.** When proposing multiple contributions at once (as many PLM papers do), we recommend conducting as many ablation studies as is feasible to isolate the impact of each component under comparable conditions. In light of recent trends where models -- including open-sourced ones -- are publicly released without technical reports or papers that outline technical details regarding model evaluation and benchmarking (Taori et al., 2023; Chiang et al., 2023), we argue that our recommendation for conducting more thorough evaluations is even more critical.
**Better credit assignment is needed.** As shown in Table 1, the vast gap between BERT and ELMo can nearly be bridged by using (i) the same (larger) pre-training data, (ii) Transformer architectures, and (iii) whole model fine-tuning; all of which were already used and proposed by the GPT-1 model. As these techniques account for a more significant chunk of the performance difference than the bidirectional masked LM loss, disentangling each factor's contribution thus provides an opportunity to conduct better credit assignment in the field.
**Strike a balance between pushing the state-of-the-art and advancing our scientific understanding.** In some sense, recent rapid progress is made possible by a strong emphasis on building the next state-of-the-art PLMs and foundation models, although it comes at a cost of understanding -- from the scientific point of view -- where the performance improvements are coming from, and which techniques work best under what circumstances. We argue that _both_ lines of work -- one that pushes the state-of-the-art at breakneck speed and through all available means, and another that aims to resolve the scientific and technical debt by disentangling the impact of multiple factors of model improvements (which we argue is still currently underrepresented in the field) -- should be conducted, encouraged, and rewarded within the field. We outline two concrete recommendations for striking a better balance between the two lines of work. First, public release of PLMs or their downstream applications should be _promptly_ accompanied by a technical description of the model, ideally in the form of a technical report or a scientific paper.11 This would enable the community to better understand the key component behind these models' success, allow future work to replicate the results, and promptly disentangle the different components behind model improvements. Second, we as a community should not necessarily expect _both_ types of contributions under the same paper. Just like how the cleaning up of technical debt happens _after_ the initial code has been written, it is often the case that prior work that resolves the scientific debt through principled comparisons was only conducted after substantial progress in advancing the state-of-the-art (often through all available means for improving model performance) had been made. We should, however, encourage the community to conduct this kind of understanding-oriented work promptly after major milestones or exciting results.
Footnote 11: In some cases, the relevant technical descriptions were not promptly released following model release, such as the community project BLOOM (Scao et al., 2022) or ChatGPT (whose blog post does not cover much technical detail).
**Reward and encourage a line of work that focuses on understanding (not just those that make a new state-of-the-art), even when they are imperfect.** The current, rapid pace of the field provides an incentive to spend one's (finite) computational resources and effort for building the next state-of-the-art, albeit at the expense of scientific rigour and principled comparisons. Given a finite amount of compute, there is arguably more incentive in tuning one's proposed approach through all possible means (_e.g.,_ using larger datasets and larger models, training for longer, etc.), topping the leaderboards, and publishing the paper, even if this leaves no computational resources to tune the baselines and conduct rigorous ablations. Furthermore, the rapidly increasing cost of training ever-larger PLMs means that any principled comparisons are most likely imperfect (§7) -- _e.g.,_ how do our findings in §3 change with models that are trained for longer, like RoBERTa? Or with encoder-decoder models like T5? Or in other languages? Indeed, our experiments in §3 are fairly narrow in scope, involving only three non-recent models (BERT, ELMo, GPT-1) and a training dataset that is small by today's standards. Yet due to the rigorous hyper-parameter tuning of all three models, conducting these principled comparisons required an enormous amount of compute resources -- equivalent to training 10 BERTs from scratch. This cost would have been even higher with the inclusion of more models, languages, and larger datasets. On this point, we remark that doing such principled comparisons -- even when they are limited in scope and done on smaller models -- _still_ contributes towards paying off the scientific debt, better understanding where our current progress is coming from, and deriving valuable insights that can contribute to the development of next generation PLMs. We additionally call on those in our community who serve as reviewers to recognize and reward these types of research contributions, which are _complementary_ (if perhaps equally important) to a parallel line of work that pushes the state-of-the-art in PLM research through all possible means.
**We need more comprehensive PLM scaling laws.** Our experiments and recommendations still leave a major open question: How can we scale these kinds of investigations to much larger PLMs, which are much more computationally expensive? To that end, **scaling laws** (Kaplan et al., 2020; Hoffmann et al., 2022) provide an account of how PLM performance changes with respect to different factors, allowing us to accurately extrapolate that a PLM with X parameters and Y training steps should achieve a perplexity of Z. However, we argue that current scaling laws are still overly narrow in scope: Concretely, existing scaling laws often only apply to decoder-only / unidirectional PLMs, and only provide an account of how their performance changes with respect to (i) model size and (ii) the number of training tokens. We call on the community to develop more comprehensive scaling laws that take into account and characterize how other factors impact LM performance and downstream behavior, including how model performance and behavior change with respect to the choice of the objective function and model hyper-parameters, and the quality of the pre-training data. The existence of such scaling laws -- which can happen by pooling community data on various PLM pre-training runs and their corresponding perplexity and downstream performance -- would allow other researchers to accurately _extrapolate_ how their findings would generalize to other PLM model sizes, objective functions, etc. Most importantly, comprehensive scaling laws can disentangle and _quantify_ how these different factors contribute to determine the final model performance under various experimental conditions.
**How conducting rigorous experiments and ablation studies can lead to new state-of-the-art results.** Lastly, we argue that conducting rigorous experiments and ablation studies for paying off the scientific debt should _not_ necessarily come at the expense of achieving a new state-of-the-art. In contrast, doing so can be a key ingredient for building the next state-of-the-art PLMs. In 2020, Kaplan et al. (2020) proposed a seminal scaling law that showed how larger PLMs are more sample-efficient, and that one should always increase model size when increasing the pre-training compute budget, leading the community to develop ever-larger PLMs in response (Rae et al., 2021; Smith et al., 2022, _inter alia_). Nevertheless, subsequent rigorous experiments from Hoffmann et al. (2022) demonstrated that the optimal pre-training compute allocation should, in fact, _also_ be scaled in another dimension: The amount of pre-training data that the model is trained on. This insight was then used to build smaller, more efficient, and cheaper-to-run PLMs that, at the time of their release, achieved new state-of-the-art results that outperformed much larger PLMs that were undertrained in comparison. Going forward, we conjecture that rigorous experiments and ablation studies that look at factors _above and beyond_ model size and data quantity, such as the _quality_ of the pre-training data, the exact hyper-parameters, the pre-training objective, etc., will not only be useful to understand how these factors improve performance and thus pay off the scientific debt, but also form a key ingredient for building the next generation of better PLMs.
## 5 Related Work
A number of prior works have made progress in disentangling the impact of different language modelling pre-training objectives by conducting principled ablation studies under comparable experimental conditions (Dong et al., 2019; Raffel et al., 2020; Tay et al., 2022; Artetxe et al., 2022, _inter alia_). However, some of the recently released models do not provide any technical details on how they are trained, such as ChatGPT12 and GPT-4 (Bubeck et al., 2023). We discuss these in an extended related work section (Appendix C), but briefly remark on how our findings complement theirs. First, we revisit and augment ELMo -- which incorporates a degree of bidirectionality
at fine-tuning (albeit not at pre-training) -- with Transformers and whole model fine-tuning, facilitating a fair comparison with BERT. We show that the resulting ELMo achieves competitive performance with BERT on GLUE; to our knowledge, no such ELMo baseline -- or an even simpler ensemble of a left-to-right and right-to-left PLM -- was explored in prior work.
While our work shares several similarities with Melis et al. (2018), our work differs by virtue of being a position paper that focuses on an important issue in the field (_i.e.,_ the lack of fair comparisons between past PLMs), and charts the way forward for mitigating this issue. Our experiments in this work mostly aim to provide an example of this issue in action, and form the basis for some of the lessons learnt and recommendations that we outline in §4. Unlike Melis et al. (2018), our experiments are not designed to achieve new state-of-the-art results. Moreover, above and beyond our empirical contributions, we outline key recommendations that would encourage and incentivize this line of work; we hope that these recommendations would be adopted by the broader community, with the aim of accelerating progress towards resolving the scientific debt in foundation model research.
## 6 Conclusion
Recent rapid progress within the PLM literature has led to tremendous advances within NLP. Despite this progress, current PLM research practices that change multiple different things at once -- often without proper ablation studies and conducting principled comparisons that disentangle the impact of different components -- have introduced certain issues that we call "scientific debt". Through experiments that disentangle the contribution of BERT's bidirectional masked LM objective through principled comparison with prior work, we demonstrate how asking "_which factors contribute the most to the model performance that we observe today?_" can lead to valuable new insights, including the existence of simple yet stronger-than-expected and more efficient baselines. We outlined several recommendations that would encourage and incentivize this line of work that aims to better understand how each factor contributes to the rapid progress of our PLMs today, and better address the ongoing issue of accumulating scientific debt within our current PLM research literature.
## 7 Limitations
Our work has the following limitations.
**Comparisons with more recent models.** In §3, we conducted a principled comparison between BERT, ELMo, and GPT-1 under comparable experimental conditions. This comparison notably excludes more recent models that benefit from more parameters, larger training data, or different loss functions, such as RoBERTa, Electra, and T5. Due to the even-higher cost of pre-training these more recent models, we leave a principled comparison that includes these models to future work, although we identified the development of more comprehensive PLM scaling laws as a promising future research direction that would allow us to extrapolate how our findings would generalize to different pre-training data sizes, objective functions, etc. (§4).
**Interaction between different factors.** In §3, we have conducted a principled comparison by varying only the pre-training objective function and the length of model training, whilst keeping all the other variables constant. In practice, however, the exact choice of these different control variables (_e.g.,_ what positional encodings to use, how we pre-process the data, etc.) can _interact_ and affect the findings in a material way. It is conceivable -- and rather likely -- that our findings on the performance gap between BERT, ELMo, and GPT-1 may change under different experimental settings.
**Simulated efficient learning scenario.** Our efficient learning scenario in §3 constitutes a simulated one, where we artificially limit the number of updates to 200,000 steps (as opposed to 1M steps in the full setting). We leave the extension to more realistic efficient learning scenarios, such as in languages where there is only a limited amount of monolingual data, or where there is a hard limit on what pre-training computational resources we can use (_e.g.,_ 1 GPU for 3 days), to future work.
**Extension to multi-lingual settings.** Our experiments are thus far conducted only in English. We leave the extension to other languages -- including low-resource languages with only a limited amount of monolingual data as a realistic and necessary benchmark of efficient learning -- to future work.
**The increasing prevalence of closed-source / proprietary PLMs.** Despite our recommendations and calls for change, we acknowledge the fact that recent PLM trends have shifted more towards proprietary and closed-source models -- a development we attribute to the rapidly increasing commercialization potential of this technology. Under this trend, very little is known about how each PLM is developed, as the vast majority of the technical details (_e.g.,_ the amount and source of the pre-training data, the data filtering strategy, the size and hyper-parameters of the model, how the model is implemented, etc.) are kept proprietary. While these trends may mean that our recommendations are more unlikely to be adopted by proprietary PLMs, we argue that our position paper and recommendations are still important (if not even more so) for two reasons. First, open-sourced community models, such as BLOOM (Scao et al., 2022), OPT (Zhang et al., 2022), and Alpaca (Taori et al., 2023), are gaining traction, and have rapidly narrowed the gap with proprietary models. This progress reflects the community's strong desire to have open-sourced models that can rival proprietary ones in terms of model quality. The rise of these open-sourced models thus gives rise to the question: How can these community-driven models help the community pay off our scientific debt? To that end, our recommendations provide concrete and actionable steps in this direction. For instance, our recommendations call for standardizing the pre-training dataset, which has not yet been done thus far, even though there are plausible, open-sourced datasets that can be used for doing so. Furthermore, we also encourage the community to release the full evaluation results of their models, alongside the relevant hyper-parameter information, etc., such that we can _collectively_ build a more comprehensive scaling law through crowd-sourcing (§4). Second, prior work that conducts extensive ablation studies and rigorous experiments (Raffel et al., 2020; Sun and Iyyer, 2021, _inter alia_) remains the exception, rather than the rule. Our position paper includes a call for change that will make it _easier_ to pay off this scientific debt going forward, which is ever-more important in light of impressive progress from both proprietary and open-sourced PLMs.
## Ethical Considerations
Our experiments replicate prior work under comparable experimental conditions. For this reason, we do not expect our work to introduce any novel ethical issues, although our experiments may inherit a similar set of issues concerning PLM (especially large-scale ones), as outlined by various prior work (Gehman et al., 2020; Bender et al., 2021; Rae et al., 2021; Dinan et al., 2021; Bommasani et al., 2021; Kenton et al., 2021; Weidinger et al., 2021, _inter alia_). We remark, however, that conducting these principled comparisons across different models -- which requires a degree of hyper-parameter tuning for each model (both at pre-training and fine-tuning stages) in order to enable a fair comparison -- requires a large number of computational resources, which may contribute to increased carbon emissions (Strubell et al., 2019; Patterson et al., 2021).
## Acknowledgement
We thank Chris Dyer, John T. Hale, and Laura Rimell at DeepMind, Noah A. Smith at the University of Washington & Allen Institute for Artificial Intelligence, and Qi Huang at Bloomberg for their valuable insights, feedback, and suggestions.
|
2303.16003 | Sofic approximation sequences and sofic mean dimension | The main purpose of this paper is to strengthen our understanding of sofic
mean dimension of two typical classes of sofic group actions. First, we study
finite group actions. We prove that sofic mean dimension of any amenable group
action does not depend on the choice of sofic approximation sequences.
Previously, this result was known only if the acting group is an infinite
amenable group. Moreover, we investigate the full shifts, for all sofic groups
and all alphabets. We show that sofic mean dimension of any full shift depends
purely on its alphabet. Our method is a refinement of the classical technique
in relation to the estimates from above and below, respectively, for mean
dimension of some typical actions. The key point of our results is that they
apply to all compact metrizable spaces without any restriction (in particular,
the alphabet concerned in a full shift and the space involved in a finite group
action are not required to be finite-dimensional).
Furthermore, we improve the quantitative knowledge of sofic mean dimension,
restricted to finite-dimensional compact metrizable spaces, for those two
typical classes of sofic group actions. As a direct consequence of the main
ingredient of our proof, we obtain the exact value of sofic mean dimension of
all the actions of finite groups on finite-dimensional compact metrizable
spaces. Previously, only an upper bound for these actions was given. Besides,
we also get the exact value of sofic mean dimension of full shifts when the
alphabet is finite-dimensional. | Lei Jin, Yixiao Qiao | 2023-03-28T14:25:18Z | http://arxiv.org/abs/2303.16003v2 | # Sofic approximation sequences and sofic mean dimension
###### Abstract.
We prove that sofic mean dimension of any amenable group action does not depend on the choice of sofic approximation sequences. Previously, this result was known only if the acting group is an infinite amenable group; however, in the case of a finite group action, this knowledge was restricted to finite-dimensional compact metrizable spaces only. We also prove that sofic mean dimension of any full shift depends purely on its alphabet. Previously, this was shown only when the alphabet is a finite-dimensional compact metrizable space.
Our method is a refinement of the classical technique in relation to the estimates for mean dimension from above and below, respectively. The key point of our results is that both of them apply to all compact metrizable spaces without any restriction (in particular, none of the alphabets and spaces concerned in our results is required to be finite-dimensional).
Key words and phrases: Sofic mean dimension; Sofic approximation sequence; Amenable group action; Finite group action; Full shift
2010 Mathematics Subject Classification: 37B99; 54F45
Footnote 1: Actually, it has not yet been verified if there exists a countable group which is not sofic.
Throughout this paper, all the acting groups are always assumed to be _countable_ (i.e., either finite or countably infinite) and _discrete_. If we let a sofic group \(G\) act continuously on a compact metrizable space \(X\) and if we let \(\Sigma\) be a sofic approximation sequence for \(G\), then we denote its sofic mean dimension by \(\operatorname{mdim}_{\Sigma}(X,G)\). Formally, it is usually called the sofic mean dimension of \((X,G)\) with respect to the sofic approximation sequence \(\Sigma\) for the acting group \(G\).2
Footnote 2: We notice that the notation for mean dimension (i.e., the classical version of mean dimension) of amenable group actions is not involved in this paper.
Li [13, Section 3] showed that if an _infinite_ amenable group \(G\) acts continuously on a compact metrizable space \(X\) and if \(\Sigma\) is a sofic approximation sequence for \(G\), then \(\operatorname{mdim}_{\Sigma}(X,G)\) is equal to its mean dimension. This is a highly satisfactory bridge between these two levels. However, a finite group action may be3 an exception. In fact, Li [13, Section 3] provided an upper bound for \(\operatorname{mdim}_{\Sigma}(X,G)\), which is finer than the mean dimension of \((X,G)\) and which leads immediately to a bunch of examples of _double finite actions_\((X,G)\) (namely, a _finite_ group \(G\) acts continuously on a _finite-dimensional_ compact metrizable space \(X\)), such that the sofic mean dimension \(\operatorname{mdim}_{\Sigma}(X,G)\) and the mean dimension of \((X,G)\) do not coincide.
Footnote 3: Note that automatically, any finite group is amenable.
We remark here that the sofic mean dimension of any double finite action remained unclear at that time. Recently, the authors [11] finally settled the question of the _exact_ value of sofic mean dimension of all the double finite actions, and moreover, as a direct consequence, obtained an "if and only if" condition under which such a pleasant equality (connecting those two levels) for double finite actions becomes true or false. Even so, the known results in this direction (e.g. sofic mean dimension of finite group actions) still do not depart from the usual assumption that the space is additionally required to be finite-dimensional. Indeed, as we will see in a moment, general and sharp statements about sofic mean dimension of full shifts are also restricted to finite-dimensional alphabets (i.e., compact metrizable spaces) only. There is a lack of knowledge of such assertions for _infinite-dimensional_ compact metrizable spaces. The reason behind this restriction will be explained at length in subsequent subsections.
Nonetheless, let us turn to a natural issue which is unknown by reason of the same obstacle, too, but which we have been able to address. Although it seems somewhat unfortunate that we _cannot_ always expect an amenable group action to have sofic mean dimension agreeing with its mean dimension, a further question that is worth studying is to find some slightly weaker property that could encompass _all_ the amenable group actions. As we mentioned previously, actions of infinite amenable groups and double finite actions have sofic mean dimension already very clear to us. In particular, this implies that the value \(\operatorname{mdim}_{\Sigma}(X,G)\), for any action \((X,G)\) among them, is independent of the sofic approximation sequences \(\Sigma\) for \(G\). Nevertheless, this is not confirmed when the
acting group \(G\) is finite and the space \(X\) is infinite-dimensional. We solve this problem. It is quite reasonable to expect all the amenable group actions to possess such a more essential (and more abstract) property. The first main result of this paper is to establish this statement.
**Theorem 1.1** (Main theorem 1).: _Sofic mean dimension of any amenable group action does not depend on the choice of sofic approximation sequences._
More precisely, we shall prove an inner equality (in relation to sofic mean dimension) for the class of amenable group actions:
* If an amenable group \(G\) acts continuously on a compact metrizable space \(X\), and if \(\Sigma\) and \(\Sigma^{\prime}\) are two sofic approximation sequences for \(G\), then we have \(\operatorname{mdim}_{\Sigma}(X,G)=\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\).
Instead of an outer equality between the defined values at those two levels, Theorem 1.1 eventually allows us to unify the reduction of sofic mean dimension to mean dimension with a view towards a common value shared among all the approximation sequences for an acting group (i.e. uniquely determined4 by the action).
Footnote 4: We remind the reader that this makes it reasonable to drop the approximation sequence from the notation for sofic mean dimension; it is not meant to address whether this value is definable in the absence of an approximation sequence.
As we mentioned before, the main new ingredient of Theorem 1.1 is the statement for finite group actions which are considered, together with the full shifts, as the most standard method for generating a group action from a space, in the sense that they reflect the topological nature of a space with some canonical (sometimes, even trivial) dynamical behaviours. Conversely, these typical actions have some desired feature similar to the phenomena in (topological) dimension theory, for example, in the universality aspect. In particular, mean dimension is closely related to the (dynamical) _embedding problem_ (i.e., embedding dynamical systems into the shift action on Hilbert cubes) which is a wonderful application of mean dimension to dynamical systems. We shall not study this topic in this paper. Therefore we do not describe in detail how it deeply relates (abstract) dynamical systems to different areas (e.g. classic analysis). For the latest progress on this problem we refer the reader to [15, 16, 17, 18, 19, 20, 21]. From all those facts and results we became aware that any careful understanding of mean dimension of full shifts is valuable.
Let \(G\) be a group. Let \(K\) be a compact metrizable space. We denote by \((K^{G},\sigma_{G})\) the shift action of \(G\) on the product space \(K^{G}\). Usually it is simply called a full shift over the alphabet \(K\).
The alphabet \(K\) plays a crucial role in the full shift \((K^{G},\sigma_{G})\). Lindenstrauss and Weiss [15] showed that when \(G\) is an amenable group and \(K=[0,1]^{D}\) (where \(D\) is a positive integer, or possibly, \(+\infty\)), the mean dimension of \((K^{G},\sigma_{G})\) is equal to \(D\), which
is the same as the dimension (by which, here and in the sequel, we mean the topological dimension, namely, the Lebesgue covering dimension) of \(K\). Since mean dimension theory is apparently an analogue of dimension theory, this result naturally brings about a seemingly plausible impression, i.e., the mean dimension of the full shift \((K^{G},\sigma_{G})\) is equal to the dimension of the alphabet \(K\) on all occasions. However, this turns out to be incorrect in general. Masaki Tsukamoto [17] proved a satisfactory result which surprisingly denies this impression and which enables mean dimension of full shifts over _finite-dimensional_ alphabets to be understood completely. As far as we note in [11], although Tsukamoto's result [17] is stated for the case \(G=\mathbb{Z}\), generalising it to all the amenable groups \(G\) is very straightforward. For instance, this can be fulfilled without any additional effort with the help of [11].
Before we proceed any further, we would like to remark that in the context of amenable groups the key difficulty with the mean dimension of full shifts is an effective estimate from below (which was conquered by Tsukamoto [17]). This is because the mean dimension of a full shift is dominated from above by the dimension of its alphabet [18]; this bound can be improved (e.g. by employing [11]) to an optimal upper bound (provided that the alphabet is finite-dimensional) with a standard trick which, however, does not apply to the sofic framework any more (for details, please refer to [11]) and which becomes a main obstacle (to the exact value of sofic mean dimension of full shifts) to overcome. For a sofic group \(G\) and a sofic approximation sequence \(\Sigma\) for \(G\), Li [19] showed that \(\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})\) is bounded from above by the dimension of the alphabet \(K\) (which is not optimal). The authors [11] refined Li's estimate with a substantially different method, and thus, successfully extended Tsukamoto's result [17] to all the sofic groups \(G\) as long as the alphabet \(K\) is finite-dimensional.
It is important to notice that among all the above results, the additional condition that assumes the alphabet \(K\) to be finite-dimensional is essential to the proof. By reason of almost the same obstruction, it remained unclear to us for a long time whether the value \(\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})\), for an arbitrary alphabet \(K\), will change as we choose different sofic approximation sequences \(\Sigma\), or even along with different sofic groups \(G\). The previous result carried out by the authors in [11] implies in particular that when the alphabet \(K\) is finite-dimensional, the term \(\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})\) is indeed independent of the sofic approximation sequences \(\Sigma\) for the group \(G\). The purpose of our second main result is to carry out this assertion for all alphabets, proving that sofic mean dimension of any full shift depends purely on its alphabet.
We recall our general setting in front of the statement of the second main result. Let \(K\) be a compact metrizable space. Let \(G\) and \(G^{\prime}\) be sofic groups. Let \(\Sigma\) and \(\Sigma^{\prime}\) be sofic approximation sequences for \(G\) and \(G^{\prime}\), respectively. We have an equality for sofic mean dimension of full shifts all the time:
**Theorem 1.2** (Main theorem 2).: \[\mathrm{mdim}_{\Sigma}(K^{G},\sigma_{G})=\mathrm{mdim}_{\Sigma^{\prime}}(K^{G^{ \prime}},\sigma_{G^{\prime}}).\]
The point of Theorem 1.2 is that the statement applies, without any restriction, to all the full shifts of all sofic groups along with any of their sofic approximation sequences. In particular, the alphabet \(K\) in this equality is not required to be finite-dimensional any longer.
### Strategy
This subsection aims to explain the difficulty with our main results and to describe5 our main idea. The proof of Theorem 1.1 has some similarity, especially in the key estimates, to the proof of Theorem 1.2. So let us focus, for convenience, on Theorem 1.2. Since we have already gotten a clear picture in [11] of this result for all the finite-dimensional alphabets \(K\), it then suffices to concentrate on the statement for infinite-dimensional alphabets.
Footnote 5: For simplicity, we shall not get into technical details in this subsection. But we believe that an experienced reader interested in this topic may see the point of our strategy.
It was first noted by Tsukamoto [14] that there are at least two difficulties when we deal with the full shifts over infinite-dimensional alphabets, which come from two remarkable phenomena in infinite-dimensional topology. Quoting [14] (with a slight modification):
* _There exists an infinite-dimensional compact metrizable space_ \(K\) _containing no intermediate dimensional subspaces. Namely, every closed subset_ \(A\) _of_ \(K\) _has dimension either_ \(0\) _or_ \(\infty\)_._
* _There exists an infinite-dimensional compact metrizable space_ \(K\) _which cohomologically looks like a surface. Namely, every closed subset_ \(A\) _of_ \(K\) _satisfies that for any_ \(n\geq 3\) _the Čech cohomology group_ \(\check{H}^{n}(K,A)\) _vanishes._
Furthermore, Tsukamoto [14] posed the precise reason (from outer and inner aspects, respectively) why they become essential obstacles to the issue. Quoting [14] (with a slight modification):
* _These two difficulties are genuinely infinite-dimensional phenomena. The first difficulty implies that we cannot reduce the problem to a finite-dimensional case, and the second difficulty implies that the ordinary cohomology theory is insufficient to solve the problem._
We have indicated (in the previous subsection) that a main problem is to produce an effective lower bound (for sofic mean dimension of full shifts). The above citations concern such an estimate from below; they are specifically related to the approach given in [14], which may unfortunately fail, and which seems impossible to improve with the same method, if the alphabet \(K\) is infinite-dimensional.
We overcome6 the above difficulties by taking a detour. In comparison with the previous work, we no longer pay attention to the exact value of sofic mean dimension of full shifts. As an alternative, our idea turns to looking for a common expression that can be shared between the estimates from both above and below. Thus, to this attempt our first problem is as follows:
Footnote 6: Strictly speaking, we avoid them.
* How to find a good candidate which is able not only to unify the upper and lower bounds qualitatively for sofic mean dimension of full shifts, but also to remove the group information (e.g. about the sofic approximation sequences for an acting group) quantitatively from the expression of estimates?
The word "good" here is just planned to mean that making such a candidate work on all the alphabets will be within the authors' reach.
We observe that the candidate selected in [11] for this purpose is the term of dimension, which is actually one of the most fundamental and global invariants of topological spaces. This definitely meets all the expectations, but at the same time it is exactly what causes the difficulty (which lies mainly in one direction of the estimates).
We shall adopt a different method for realising it. Instead of the dimension of a compact metrizable space, we develop more delicate estimates for our aim (i.e. for sofic mean dimension of these typical actions) from both of the two directions, which can adapt directly to each other, with terms of \(\epsilon\)-width dimension. This is milder in some sense. Intuitively speaking, we take a step backwards, and in consequence, our sight becomes less clear than before; however, it then becomes possible for us to get a wider range of vision, so that we are almost able to treat finite-dimensional and infinite-dimensional alphabets with a unified process.
Although the candidate that we choose has some advantage, we have to be aware that it has correspondingly weaker properties. We do not list them here, as they will be settled with a more careful technique in the proof. The method that we provide is a refinement of the classical method for producing estimates for sofic mean dimension from above and below, respectively. In contrast to the classical technique, our approach to the estimates applies to both finite-dimensional and infinite-dimensional compact metrizable spaces, and in particular, it does _not_ rely on a practical lemma7 for \(\epsilon\)-width dimension explored by Tsukamoto [17]. This is the major difference between our method and the previous ones.
Footnote 7: This lemma was widely used for a similar purpose (i.e., if there is an obstacle heavily dependent on an effective estimate for \(\epsilon\)-width dimension from below) by the researchers interested in those topics relevant to this direction (e.g., we employed it in the paper [10]).
### Further discussion on open problems
The aim of this subsection is to pose two open problems in company with our remark in this direction, which seem to be
worth considering. These questions are natural and fundamental, and as a consequence, solutions to them will give rise to a complete picture of sofic mean dimension of amenable group actions and full shifts. We put them as follows. As usual we let \(\Sigma\) be a sofic approximation sequence for an acting group \(G\).
* Let \(K\) be an infinite-dimensional compact metrizable space. Is the statement \(\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})=+\infty\) true?
* Let a finite group \(G\) act continuously on an infinite-dimensional compact metrizable space \(X\). Is the statement \(\operatorname{mdim}_{\Sigma}(X,G)=+\infty\) true?
The first problem arose originally from [16] where it was stated for the case \(G=\mathbb{Z}\) (while its current version appeared in [11]). The second problem was raised initially by the authors in the previous paper [11].
Moreover, we have indicated in [11] that these two problems are intimately connected with each other, which are also involved in the main results of [11] as well as Theorem 1.1 and Theorem 1.2. The novel ingredient here is to point out (a little further on) that they are8 actually _equivalent_ to (i.e., they reduce to) each other:
Footnote 8: This fact follows directly from the main proposition of this paper, which is located in Section 3.
* Let \(K\) be a compact metrizable space. Let \(G\) be a sofic group and \(\Sigma\) a sofic approximation sequence for \(G\). Let \(G^{\prime}\) be a finite group and \(\Sigma^{\prime}\) a sofic approximation sequence for \(G^{\prime}\). Let \(G^{\prime}\) act continuously on \(K\). We have that \(\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})=+\infty\) if and only if \(\operatorname{mdim}_{\Sigma^{\prime}}(K,G^{\prime})=+\infty\).
### Organization of the paper
By the end of this section we overview the structure of the present paper. In Section 2, we briefly review basic definitions related to sofic mean dimension. In Section 3, we state the main proposition of this paper. As its corollaries, we prove our main theorems (i.e., Theorem 1.1 and Theorem 1.2) and also recover the previous results (assuming the main proposition). Besides, we include a toy-model of the main proposition (in company with a simple proof), which is logically independent of all the results mentioned in the paper. This part is dedicated to the technical intersection of difficulties arising from the spaces and the acting groups. Namely, it explains (for this simple case) how we remove the finite-dimensional restriction from the spaces, and moreover, it helps us to see the additional obstacles (in a more precise way as we would show) to going a step forward (from amenable groups to sofic groups). Section 4 and Section 5 are devoted to the proof of our main proposition.
**Acknowledgement.** This paper was written when the first-named author stayed in for a time during his depression. He would like to take this opportunity to thank his wife for keeping persistent company with him for the difficult period.
## 2. Review of sofic mean dimension
### Background
The background of sofic mean dimension has been contained briefly in Subsection 1.1 already, which proved to be the main motivation of this paper. So we do not repeat it here. We list fundamental material on terminologies and notations. The writing of this section is mostly borrowed, for consistency, from the papers [1, 2] by the authors, with some necessary modification adapting to the present paper. We are now starting with basic definitions. Throughout this paper the symbol \(\mathbb{N}\) is to denote the set of positive integers.
### Group actions
Let \(G\) be a group. By the terminology "\(G\)**acts continuously on** a compact metrizable space \(X\)" we understand a continuous mapping
\[\Phi:G\times X\to X,\quad(g,x)\mapsto gx\]
satisfying the following conditions:
\[\Phi(e,x)=x,\quad\Phi(gh,x)=\Phi(g,\Phi(h,x)),\quad\forall x\in X,\;\forall g,h\in G,\]
where \(e\) is the identity element of the group \(G\). We generally omit the mapping \(\Phi\) provided we have already gotten the action of \(G\) on \(X\) clear.
The full shifts are a typical class of group actions. This notion was concerned in Theorem 1.2. Let \(K\) be a compact metrizable space. The **shift action** of \(G\) on the product space \(K^{G}\) is defined as follows:
\[\sigma_{G}:G\times K^{G}\to K^{G},\quad(g,(x_{h})_{h\in G})\mapsto(x_{hg})_{h \in G}.\]
We usually denote this object by \((K^{G},\sigma_{G})\) and call it the full shift of the group \(G\) over the _alphabet_\(K\).
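For a computational view of this definition, here is a minimal Python sketch of the shift action on a finite group; the encoding of the group (a list of elements plus a multiplication function) and the example are purely illustrative assumptions of ours:

```python
def shift(g, x, elements, mul):
    """Shift action sigma_G: (sigma_g x)_h = x_{hg}, for a finite group
    whose elements and multiplication are supplied explicitly."""
    return {h: x[mul(h, g)] for h in elements}

# Example: the cyclic group Z/4 (written additively) acting on K^{Z/4}.
Z4 = [0, 1, 2, 3]
mul = lambda a, b: (a + b) % 4
x = {0: "a", 1: "b", 2: "c", 3: "d"}
print(shift(1, x, Z4, mul))  # {0: 'b', 1: 'c', 2: 'd', 3: 'a'}
```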
Let \(G\) act continuously on compact metrizable spaces \(X_{n}\), respectively, where \(n\) ranges over some subset \(R\) of \(\mathbb{N}\). We define the **product action** of \(G\) on the product space \(\prod_{n\in R}X_{n}\) as follows:
\[g(x_{n})_{n\in R}=(gx_{n})_{n\in R},\quad\forall g\in G,\;\forall(x_{n})_{n \in R}\in\prod_{n\in R}X_{n}.\]
Product actions will be considered in the sequel (e.g. the definition of sofic mean dimension and the proof of our main proposition).
### Sofic groups
We denote by \(|F|\) the cardinality of a set \(F\). For every \(d\in\mathbb{N}\) we write \([d]\) for the set \(\{k\in\mathbb{N}:1\leq k\leq d\}\) and \(\operatorname{Sym}(d)\) for the group of permutations of \([d]\). A group \(G\) is **sofic** if there is a sequence
\[\Sigma=\{\sigma_{i}:G\to\operatorname{Sym}(d_{i})\}_{i\in\mathbb{N}}\]
(together with a sequence \(\{d_{i}\}_{i\in\mathbb{N}}\subset\mathbb{N}\)) such that the following three conditions are satisfied:
\[\bullet \lim_{i\to\infty}\frac{1}{d_{i}}|\{k\in[d_{i}]:\sigma_{i}(st)(k)= \sigma_{i}(s)\sigma_{i}(t)(k)\}|=1\quad\text{for all $s,t\in G$;}\] \[\bullet \lim_{i\to\infty}\frac{1}{d_{i}}|\{k\in[d_{i}]:\sigma_{i}(s)(k) \neq\sigma_{i}(t)(k)\}|=1\quad\text{for all distinct $s,t\in G$;}\] \[\bullet \lim_{i\to\infty}d_{i}=+\infty.\]
Such a sequence \(\Sigma\) is called a **sofic approximation sequence** for \(G\).
**Remark 2.1**.: Note that the third condition will be fulfilled automatically if we additionally assume the group \(G\) to be infinite.
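To illustrate the conditions above, the following Python sketch measures how close a single map \(\sigma:G\to\operatorname{Sym}(d)\) comes to satisfying the first two; the encoding of permutations as 0-indexed lists (rather than the paper's \([d]=\{1,\dots,d\}\)) and the function name are assumptions of the sketch:

```python
from itertools import product

def sofic_defects(sigma, elements, mul, d):
    """Given sigma as a dict mapping each group element to a permutation of
    {0, ..., d-1} (a list of length d), return the smallest fractions in the
    first two sofic conditions; both should tend to 1 along a sofic
    approximation sequence."""
    mult_frac = min(
        sum(sigma[mul(s, t)][k] == sigma[s][sigma[t][k]] for k in range(d)) / d
        for s, t in product(elements, repeat=2)
    )
    sep_frac = min(
        (sum(sigma[s][k] != sigma[t][k] for k in range(d)) / d
         for s in elements for t in elements if s != t),
        default=1.0,  # vacuous for the trivial group
    )
    return mult_frac, sep_frac
```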
### \(\epsilon\)-embeddings and \(\epsilon\)-width dimension
We denote by \(\dim(K)\) the topological dimension (i.e. the Lebesgue covering dimension) of a compact metrizable space \(K\). If the space \(K\) is empty, then we set \(\dim(K)=-\infty\).
Let \(X\) and \(P\) be compact metrizable spaces. Let \(\rho\) be a compatible metric on \(X\). For \(\epsilon>0\) a continuous mapping \(f:X\to P\) is called an \(\epsilon\)**-embedding with respect to**\(\rho\) if \(f(x)=f(x^{\prime})\) implies \(\rho(x,x^{\prime})<\epsilon\), for all \(x,x^{\prime}\in X\). Let \(\operatorname{Widim}_{\epsilon}(X,\rho)\) be the minimum (topological) dimension \(\dim(P)\) of a compact metrizable space \(P\) which admits an \(\epsilon\)-embedding \(f:X\to P\) with respect to \(\rho\). This quantity is generally known as the \(\epsilon\)-width dimension.
**Remark 2.2**.: We may verify that the dimension of any compact metrizable space \(X\) can be recovered by: \(\dim(X)=\lim_{\epsilon\to 0}\operatorname{Widim}_{\epsilon}(X,\rho)\).
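Two elementary bounds, standard facts recorded here only for orientation (they are not asserted in the text at this point): for a nonempty compact metrizable space \(X\),
\[0\leq\operatorname{Widim}_{\epsilon}(X,\rho)\leq\dim(X)\quad\text{for every }\epsilon>0,\]
since the identity map of \(X\) is an \(\epsilon\)-embedding, and
\[\operatorname{Widim}_{\epsilon}(X,\rho)=0\quad\text{whenever }\epsilon>\operatorname{diam}(X,\rho),\]
since the constant map of \(X\) to a one-point space is then an \(\epsilon\)-embedding.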
### Distances \(\rho_{2}\) and \(\rho_{\infty}\)
Let \(K\) be a compact metrizable space. Let \(\rho\) be a compatible metric on \(K\). For every \(n\in\mathbb{N}\) we define two (different) compatible metrics \(\rho_{2}\) and \(\rho_{\infty}\) on the product space \(K^{n}\) as follows:
\[\rho_{2}\left((x_{i})_{i\in[n]},(y_{i})_{i\in[n]}\right)=\sqrt{\frac{1}{n} \sum_{i\in[n]}(\rho(x_{i},y_{i}))^{2}},\]
\[\rho_{\infty}\left((x_{i})_{i\in[n]},(y_{i})_{i\in[n]}\right)=\max_{i\in[n]} \rho(x_{i},y_{i}).\]
We do not include \(n\in\mathbb{N}\) in the notations \(\rho_{2}\) and \(\rho_{\infty}\) because this does not cause any ambiguity.
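A direct Python transcription of these two metrics, illustrative only (tuples stand for points of \(K^{n}\) and `rho` for a compatible metric on \(K\)):

```python
import math

def rho_2(x, y, rho):
    """The normalized l^2-average of coordinatewise distances on K^n."""
    n = len(x)
    return math.sqrt(sum(rho(a, b) ** 2 for a, b in zip(x, y)) / n)

def rho_inf(x, y, rho):
    """The maximum (l^infinity) coordinatewise distance on K^n."""
    return max(rho(a, b) for a, b in zip(x, y))
```

Note that \(\rho_{2}\leq\rho_{\infty}\) always holds, since an average of squares is bounded by the maximum square.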
### Mean dimension
A group \(G\) is said to be **amenable** if there exists a sequence \(\{F_{n}\}_{n\in\mathbb{N}}\) of nonempty finite subsets of \(G\) such that for any \(g\in G\)
\[\lim_{n\to\infty}\frac{|F_{n}\triangle gF_{n}|}{|F_{n}|}=0.\]
Such a sequence \(\{F_{n}\}_{n\in\mathbb{N}}\) is called a **Følner sequence** of the group \(G\). Obviously, all the finite groups are amenable.
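A standard example, not spelled out in the text: for \(G=\mathbb{Z}\) one may take \(F_{n}=\{1,\dots,n\}\), since for any \(g\in\mathbb{Z}\),
\[\frac{|F_{n}\triangle(g+F_{n})|}{|F_{n}|}=\frac{2\min(|g|,n)}{n}\xrightarrow[n\to\infty]{}0.\]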
Next we state the definition of mean dimension for amenable group actions. We remark again that this notion is not involved in the main results of this paper. We put it here for theoretical completeness.
Let an amenable group \(G\) act continuously on a compact metrizable space \(X\). Take a Følner sequence \(\{F_{n}\}_{n\in\mathbb{N}}\) of \(G\) and a compatible metric \(\rho\) on \(X\). For a nonempty finite subset \(F\) of \(G\) we set
\[\rho_{F}(x,x^{\prime})=\rho_{\infty}\left((gx)_{g\in F},(gx^{\prime})_{g\in F }\right),\quad\forall\,x,x^{\prime}\in X.\]
It is clear that \(\rho_{F}\) also becomes a compatible metric on \(X\). The **mean dimension** of \((X,G)\) is defined by
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X, \rho_{F_{n}})}{|F_{n}|}.\]
It is well known that both of the limits in the above definition always exist, and that this value is independent of the choices of a Følner sequence \(\{F_{n}\}_{n\in\mathbb{N}}\) of \(G\) and a compatible metric \(\rho\) on \(X\).
### Sofic mean dimension
Suppose that a sofic group \(G\) acts continuously on a compact metrizable space \(X\). Let
\[\Sigma=\{\sigma_{i}:G\to\operatorname{Sym}(d_{i})\}_{i\in\mathbb{N}}\]
be a sofic approximation sequence for \(G\). We now equip \(X\) with a compatible metric \(\rho\) temporarily. For a finite subset \(F\) of \(G\), \(\delta>0\) and a map \(\sigma:G\to\operatorname{Sym}(d)\) (where \(d\in\mathbb{N}\)) we define
\[\operatorname{Map}(\rho,F,\delta,\sigma)=\{\phi:[d]\to X:\rho_{2}(\phi\circ \sigma(s),s\phi)\leq\delta,\,\forall s\in F\}.\]
We consider the set \(\operatorname{Map}(\rho,F,\delta,\sigma)\) as a compact subspace of the product space \(X^{d}\). In our context we usually write \(\phi=(\phi_{l})_{l\in[d]}\in X^{d}\) for \(\phi:[d]\to X\) (i.e. for \(l\in[d]\) we write \(\phi_{l}\) for \(\phi(l)\)). The **sofic mean dimension** of \((X,G)\) with respect to \(\Sigma\) is defined by
\[\operatorname{mdim}_{\Sigma}(X,G)=\sup_{\epsilon>0}\inf_{F\subset G\text{ finite, }\delta>0}\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}\left( \operatorname{Map}(\rho,F,\delta,\sigma_{i}),\rho_{\infty}\right)}{d_{i}}.\]
A standard fact (similar to mean dimension) is that the definition of \(\operatorname{mdim}_{\Sigma}(X,G)\) does not depend on the choice of compatible metrics \(\rho\) on \(X\). Nevertheless, as we mentioned implicitly in Subsection 1.1, it is not clear if there is an example of a sofic approximation sequence \(\Sigma^{\prime}\) different from \(\Sigma\), which leads to a value \(\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\) different from \(\operatorname{mdim}_{\Sigma}(X,G)\).
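As a sanity check on the definition of \(\operatorname{Map}(\rho,F,\delta,\sigma)\), the following sketch tests membership of a single \(\phi\), with indices shifted to \(\{0,\dots,d-1\}\); the function name and the action callback `act` are hypothetical, and `rho_2` is the helper from the sketch above:

```python
def in_map_space(phi, sigma, F, delta, act, rho):
    """Return True iff rho_2(phi o sigma(s), s.phi) <= delta for all s in F,
    where phi is a tuple of points of X and sigma[s] is a permutation list."""
    d = len(phi)
    for s in F:
        permuted = tuple(phi[sigma[s][k]] for k in range(d))  # phi o sigma(s)
        acted = tuple(act(s, phi[k]) for k in range(d))       # s . phi
        if rho_2(permuted, acted, rho) > delta:
            return False
    return True
```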
**Lemma 2.3** ([13, Section 2]).: _Let a sofic group \(G\) act continuously on compact metrizable spaces \(X_{n}\), respectively, where \(n\) runs over some \(R\subset\mathbb{N}\). Let \(\Sigma\) be a sofic approximation sequence for \(G\). For the product action of \(G\) on the product space \(\prod_{n\in R}X_{n}\) we have an inequality for sofic mean dimension: \(\operatorname{mdim}_{\Sigma}(\prod_{n\in R}X_{n},G)\leq\sum_{n\in R} \operatorname{mdim}_{\Sigma}(X_{n},G)\)._
## 3. Main proposition
### Statement of the main proposition
In this subsection we state our main proposition (Theorem 3.1). The main theorems of this paper follow from this. The proof of Theorem 3.1 is located in the next two sections.
Let \(X\) be a compact metrizable space. We take a compatible metric \(\rho\) on \(X\) arbitrarily and temporarily. We consider an expression with terms of \(\epsilon\)-width dimension:
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n}, \rho_{\infty})}{n}.\]
Clearly, this is a topological invariant of \(X\). More precisely, both of the limits in this expression always exist (the inner limit exists because \(\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})\) is subadditive9 in \(n\) while the outer limit exists because \(\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})\) is monotone in \(\epsilon\)), and moreover, the defined value is independent of the choice of compatible metrics \(\rho\) on \(X\).
Footnote 9: Here we have used the classically-known fact that the dimension of the product of finitely many compact metrizable spaces is at most the sum of their dimensions.
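The subadditivity invoked in the footnote can be spelled out as follows (a routine verification, recorded here for completeness). If \(f:X^{m}\to P\) and \(g:X^{n}\to Q\) are \(\epsilon\)-embeddings with respect to \(\rho_{\infty}\), then \(f\times g:X^{m+n}\to P\times Q\) is again an \(\epsilon\)-embedding with respect to \(\rho_{\infty}\), and \(\dim(P\times Q)\leq\dim(P)+\dim(Q)\); hence
\[\operatorname{Widim}_{\epsilon}(X^{m+n},\rho_{\infty})\leq\operatorname{Widim}_{\epsilon}(X^{m},\rho_{\infty})+\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty}),\]
and Fekete's subadditivity lemma yields the existence of \(\lim_{n\to\infty}\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})/n\).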
**Theorem 3.1** (Main proposition).:
(i) _Let_ \(G\) _be a finite group and_ \(\Sigma\) _a sofic approximation sequence for_ \(G\)_. Let_ \(G\) _act continuously on a compact metrizable space_ \(X\)_. The following equality is true:_ \[\operatorname{mdim}_{\Sigma}(X,G)=\frac{1}{|G|}\cdot\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
(ii) _Let_ \(G\) _be a sofic group and_ \(\Sigma\) _a sofic approximation sequence for_ \(G\)_. Let_ \(X\) _be a compact metrizable space. The following equality is true:_ \[\operatorname{mdim}_{\Sigma}(X^{G},\sigma_{G})=\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
### Proof of the main theorems
We are now ready to prove our main results assuming Theorem 3.1.
**Theorem 3.2** (=Theorem 1.1).: _If an amenable group \(G\) acts continuously on a compact metrizable space \(X\), and if \(\Sigma\) and \(\Sigma^{\prime}\) are two sofic approximation sequences for \(G\), then we have \(\operatorname{mdim}_{\Sigma}(X,G)=\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\)._
Proof.: If \(G\) is an infinite amenable group, then by [13, Section 3] we have \(\operatorname{mdim}_{\Sigma}(X,G)=\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\) (both \(\operatorname{mdim}_{\Sigma}(X,G)\) and \(\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\) are equal to the mean dimension of \((X,G)\)). If \(G\) is a finite group, then Statement (i) of Theorem 3.1 shows in particular that \(\operatorname{mdim}_{\Sigma}(X,G)=\operatorname{mdim}_{\Sigma^{\prime}}(X,G)\).
**Theorem 3.3** (=Theorem 1.2).: _Let \(K\) be a compact metrizable space. Let \(G\) and \(G^{\prime}\) be sofic groups. Let \(\Sigma\) and \(\Sigma^{\prime}\) be sofic approximation sequences for \(G\) and \(G^{\prime}\), respectively. The following equality is true:_
\[\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})=\operatorname{mdim}_{\Sigma^{ \prime}}(K^{G^{\prime}},\sigma_{G^{\prime}}).\]
Proof.: This follows from Statement (ii) of Theorem 3.1.
### Corollaries
We notice that this subsection is logically independent of the main theorems of this paper. In this subsection we would like to present several corollaries of the main proposition. To this aim we shall employ a practical lemma due to Tsukamoto [17, Lemma 3.1], which applies to finite-dimensional compact metrizable spaces. We state this lemma as follows.
**Lemma 3.4** ([17, Lemma 3.1]).: _Let \(K\) be a finite-dimensional compact metrizable space. Let \(\rho\) be a compatible metric on \(K\). Then there is some \(\delta>0\) such that for all \(n\in\mathbb{N}\) and all \(0<\epsilon<\delta\) we have \(\operatorname{Widim}_{\epsilon}(K^{n},\rho_{\infty})\geq n\cdot(\dim(K)-1)\)._
As we mentioned before, the proofs of the main proposition (Theorem 3.1) and our main results (Theorem 1.1 and Theorem 1.2) do _not_ rely on Tsukamoto's lemma. So Lemma 3.4 is borrowed _only_ in this subsection (in the proof of Lemma 3.5, and in consequence, Corollary 3.6 and Corollary 3.7). Now we recover all the previous results [1] in this direction assuming Theorem 3.1 and Lemma 3.4. We begin with the following lemma which may be regarded as a limit version of Lemma 3.4.
**Lemma 3.5**.: _If \(K\) is a finite-dimensional compact metrizable space and if \(\rho\) is a compatible metric on \(K\), then_
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n },\rho_{\infty})}{n}=\inf_{n\in\mathbb{N}}\frac{\dim(K^{n})}{n}.\]
Proof.: First of all, we notice that the term \(\dim(K^{n})\) is subadditive in \(n\in\mathbb{N}\), which implies that
\[\lim_{n\to\infty}\frac{\dim(K^{n})}{n}=\inf_{n\in\mathbb{N}}\frac{\dim(K^{n}) }{n}.\]
Since \(\operatorname{Widim}_{\epsilon}(K^{n},\rho_{\infty})\leq\dim(K^{n})\), we have
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n },\rho_{\infty})}{n}\leq\inf_{n\in\mathbb{N}}\frac{\dim(K^{n})}{n}.\]
To see the converse direction, we take an \(m\in\mathbb{N}\) arbitrarily and fix it temporarily. Note that the space \(K\) is finite-dimensional, and thus, the product space \(K^{m}\) is finite-dimensional as well. We apply Lemma 3.4 to the product space \(K^{m}\):
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{ \epsilon}(K^{n},\rho_{\infty})}{n} =\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{ \epsilon}(K^{mn},\rho_{\infty})}{mn}\] \[\geq\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{n\cdot(\dim(K^{m})-1) }{mn}\] \[=\frac{\dim(K^{m})-1}{m}.\]
Since \(m\in\mathbb{N}\) is arbitrary, the statement follows.
**Corollary 3.6** ([1, Theorem 1.5]).: _If a finite group \(G\) acts continuously on a finite-dimensional compact metrizable space \(X\), and if \(\Sigma\) is a sofic approximation sequence for \(G\), then_
\[\operatorname{mdim}_{\Sigma}(X,G)=\frac{1}{|G|}\cdot\inf_{n\in\mathbb{N}}\frac{ \operatorname{dim}(X^{n})}{n}.\]
Proof.: This follows from Statement (i) of Theorem 3.1 and Lemma 3.5.
**Corollary 3.7** ([1, Theorem 1.1]).: _If \(K\) is a finite-dimensional compact metrizable space, and if \(\Sigma\) is a sofic approximation sequence for a sofic group \(G\), then_
\[\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})=\inf_{n\in\mathbb{N}}\frac{ \operatorname{dim}(K^{n})}{n}.\]
Proof.: This follows from Statement (ii) of Theorem 3.1 and Lemma 3.5.
### Looking at the simplest case: What are the key differences?
In this subsection we shall consider a toy-model of Theorem 3.1. It is logically independent of all the other parts of the paper.
We denote by \(\{e\}\) the trivial group (i.e. the group that consists of the identity element only). Let \(\Sigma=\{\sigma_{i}:\{e\}\to\operatorname{Sym}(d_{i})\}_{i\in\mathbb{N}}\) be a sofic approximation sequence for \(\{e\}\). Let \(X\) be a compact metrizable space. We consider the shift action of \(\mathbb{Z}\) on the product space \(X^{\mathbb{Z}}\) and the (uniquely possible) action of \(\{e\}\) on the space \(X\). We aim to show the following proposition:
* Both the mean dimension of \((X^{\mathbb{Z}},\sigma_{\mathbb{Z}})\) and the sofic mean dimension of \((X,\{e\})\) with respect to \(\Sigma\) are equal to \[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n},\] where \(\rho\) is an arbitrarily fixed compatible metric on \(X\).
We now prove this proposition.
Firstly, we note that
\[\lim_{i\to\infty}\frac{1}{d_{i}}|\{k\in[d_{i}]:\sigma_{i}(e)(k)=k\}|=1\]
which means that for \(\delta>0\) we have
\[\operatorname{Map}(\rho,\{e\},\delta,\sigma_{i})=\{\phi:[d_{i}]\to X:\rho_{2} (\phi\circ\sigma_{i}(e),\phi)\leq\delta\}=X^{d_{i}},\]
for all sufficiently large \(i\in\mathbb{N}\), which implies that
\[\operatorname{mdim}_{\Sigma}(X,\{e\})=\sup_{\epsilon>0}\limsup_{i\to\infty} \frac{\operatorname{Widim}_{\epsilon}(X^{d_{i}},\rho_{\infty})}{d_{i}}=\lim_{ \epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{ \infty})}{n}.\]
Notice that here we used \(d_{i}\to+\infty\) as \(i\to\infty\). This proves the latter part of the above proposition.
Next we show the former part of this proposition. Since \(X\) is compact, we may assume, without loss of generality, that the distance between any two points in \(X\) is bounded by \(1\) with respect to \(\rho\). Let \(D\) be the compatible metric on the product space \(X^{\mathbb{Z}}\), defined as follows:
\[D(x,x^{\prime})=\sum_{i\in\mathbb{Z}}\frac{\rho(x_{i},x^{\prime}_{i})}{2^{|i|} }\qquad(x=(x_{i})_{i\in\mathbb{Z}},x^{\prime}=(x^{\prime}_{i})_{i\in\mathbb{Z} }\,\in X^{\mathbb{Z}}).\]
For every \(n\in\mathbb{N}\) we write \(D_{(n)}\) for the compatible metric on \(X^{\mathbb{Z}}\) defined by
\[D_{(n)}(x,x^{\prime})=D_{\infty}(((x_{i+j})_{i\in\mathbb{Z}})_{j=0}^{n-1},((x^ {\prime}_{i+j})_{i\in\mathbb{Z}})_{j=0}^{n-1})\qquad(x=(x_{i})_{i\in\mathbb{Z} },x^{\prime}=(x^{\prime}_{i})_{i\in\mathbb{Z}}\,\in X^{\mathbb{Z}}).\]
Now we fix a point \(p\in X\). For every \(n\in\mathbb{N}\) we define a mapping \(f_{n}:X^{n}\to X^{\mathbb{Z}}\) by sending each \((x_{i})_{i=0}^{n-1}\in X^{n}\) to \(((p)_{i\leq-1},(x_{i})_{0\leq i\leq n-1},(p)_{i\geq n})\in X^{\mathbb{Z}}\). Clearly, the mapping \(f_{n}:X^{n}\to X^{\mathbb{Z}}\) is continuous and distance-increasing with respect to the metrics \(\rho_{\infty}\) on \(X^{n}\) and \(D_{(n)}\) on \(X^{\mathbb{Z}}\), namely
\[\rho_{\infty}((x_{i})_{i=0}^{n-1},(x^{\prime}_{i})_{i=0}^{n-1})\leq D_{(n)}(f_ {n}((x_{i})_{i=0}^{n-1}),f_{n}((x^{\prime}_{i})_{i=0}^{n-1})),\quad\,\forall\, (x_{i})_{i=0}^{n-1},(x^{\prime}_{i})_{i=0}^{n-1}\in X^{n}.\]
This implies that
\[\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})\leq\operatorname{Widim}_ {\epsilon}(X^{\mathbb{Z}},D_{(n)}),\quad\,\forall n\in\mathbb{N},\,\,\forall \epsilon>0.\]
Thus, the mean dimension of \((X^{\mathbb{Z}},\sigma_{\mathbb{Z}})\) is bounded from below by
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
To estimate the mean dimension of \((X^{\mathbb{Z}},\sigma_{\mathbb{Z}})\) from above, we fix \(\epsilon>0\) arbitrarily. Let \(M\in\mathbb{N}\) be a constant (depending on \(\epsilon\)) such that
\[\rho_{\infty}((x_{i})_{i=-M}^{M},(x^{\prime}_{i})_{i=-M}^{M})<\epsilon \Longrightarrow D(x,x^{\prime})<2\epsilon,\quad\,\forall\,x=(x_{i})_{i\in \mathbb{Z}},x^{\prime}=(x^{\prime}_{i})_{i\in\mathbb{Z}}\in X^{\mathbb{Z}}.\]
For any \(n\in\mathbb{N}\) we take a compact metrizable space \(P_{n}\) satisfying
\[\dim(P_{n})=\operatorname{Widim}_{\epsilon}(X^{n+2M},\rho_{\infty})\]
in company with a continuous mapping \(f_{n}:X^{n+2M}\to P_{n}\) which is an \(\epsilon\)-embedding with respect to the metric \(\rho_{\infty}\) on \(X^{n+2M}\). Let
\[\pi_{n}:X^{\mathbb{Z}}\to X^{n+2M},\quad\,(x_{i})_{i\in\mathbb{Z}}\mapsto(x_{ i})_{i=-M}^{n+M-1}\]
be a canonical projection. Obviously, the mapping \(\pi_{n}\) is continuous. It follows that the continuous mapping \(f_{n}\circ\pi_{n}:X^{\mathbb{Z}}\to P_{n}\) is a \((2\epsilon)\)-embedding with respect to the metric \(D_{(n)}\) on \(X^{\mathbb{Z}}\). Thus,
\[\operatorname{Widim}_{2\epsilon}(X^{\mathbb{Z}},D_{(n)})\leq\dim(P_{n})= \operatorname{Widim}_{\epsilon}(X^{n+2M},\rho_{\infty}),\quad\,\forall n\in \mathbb{N}.\]
This implies that
\[\lim_{n\to\infty}\frac{\operatorname{Widim}_{2\epsilon}(X^{\mathbb{Z}},D_{(n)} )}{n}\leq\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n+2M},\rho_ {\infty})}{n}=\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n}, \rho_{\infty})}{n}.\]
Since \(\epsilon>0\) is arbitrary, we obtain (by definition) that the mean dimension of \((X^{\mathbb{Z}},\sigma_{\mathbb{Z}})\) is bounded from above by
\[\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n}, \rho_{\infty})}{n}.\]
This ends the proof of the proposition.
Although the proof of the above proposition is simple (and self-contained), it unifies the estimates from above and from below, and hence explains how the mean dimension of full shifts and the sofic mean dimension of finite group actions are related through a common expression. In particular, as the proof shows, all compact metrizable spaces are now treated by a unified process (i.e., the space \(X\) in this proposition is no longer assumed to be finite-dimensional).
In spite of that, we face extra obstacles in our estimates for sofic mean dimension if we replace the acting group \(\mathbb{Z}\) in this proposition with a general sofic (but non-amenable) group, or if we investigate the sofic mean dimension of a finite (but nontrivial) group action. More precisely, we encounter difficulties arising mainly from the acting (sofic) groups, due to the bad behaviour of sofic approximation sequences, which differ substantially from the Følner sequences that characterize amenable groups and provide a much nicer approximation structure. Section 4 and Section 5 are motivated by this issue.
## 4. Sofic mean dimension of finite group actions
In this section we prove Statement (i) of Theorem 3.1. To begin with, we put some general settings.
Let \(X\) be a compact metrizable space. We take a compatible metric \(\rho\) on \(X\) arbitrarily. Since \(X\) is compact, we may assume, for simplicity, that the diameter of \(X\) with respect to \(\rho\) is equal to \(1\).
Let \(G\) be a finite group. We fix a sofic approximation sequence \(\Sigma=\{\sigma_{i}:G\to\operatorname{Sym}(d_{i})\}_{i\in\mathbb{N}}\) for \(G\).
Now we suppose that \(G\) acts continuously on \(X\). Our aim is to prove the following equality:
\[\operatorname{mdim}_{\Sigma}(X,G)=\frac{1}{|G|}\cdot\lim_{\epsilon\to 0}\lim_{n \to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
This will be fulfilled by Lemma 4.1 and Lemma 4.2.
**Lemma 4.1** (Estimate from above).: \[\operatorname{mdim}_{\Sigma}(X,G)\leq\frac{1}{|G|}\cdot\lim_{\epsilon\to 0 }\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{ n}.\]
Proof.: To show the statement, we fix \(\epsilon>0\) and \(\theta>0\) arbitrarily. Since \(X\) is a compact metrizable space and since \(G\) is a finite group, there is some \(0<\eta=\eta(\epsilon)<\epsilon\) such that if two points \(x,x^{\prime}\in X\) satisfy \(\rho(x,x^{\prime})\leq 3\eta\) then they must satisfy \(\rho(gx,gx^{\prime})<\epsilon\) for all \(g\in G\). Clearly, we have \(\eta=\eta(\epsilon)\to 0\) as \(\epsilon\to 0\), and hence, by definition it suffices to prove (for those \(\theta>0\), \(\epsilon>0\) and \(\eta>0\) fixed already) that there is some \(\delta_{0}>0\) satisfying the following inequality:
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(\operatorname{Map}( \rho,G,\delta_{0},\sigma_{i}),\rho_{\infty})}{d_{i}}\leq\frac{1}{|G|}\cdot \lim_{n\to\infty}\frac{\operatorname{Widim}_{\eta}(X^{n},\rho_{\infty})}{n}+\theta.\]
For every \(i\in\mathbb{N}\) we let \(Q_{i}\subset[d_{i}]\) be the intersection of the following two sets:
\[\left\{j\in[d_{i}]:\sigma_{i}(g)(j)\neq\sigma_{i}(h)(j),\;\forall g\neq h\in G \right\},\]
\[\left\{j\in[d_{i}]:\sigma_{i}(g)(\sigma_{i}(h)(j))=\sigma_{i}(gh)(j),\; \forall g,h\in G\right\}.\]
For any \(j\in[d_{i}]\) and \(A\subset[d_{i}]\) we set
\[\sigma_{i}(G)(j)=\left\{\sigma_{i}(g)(j):g\in G\right\}\subset[d_{i}],\quad \sigma_{i}(G)(A)=\left\{\sigma_{i}(g)(a):g\in G,a\in A\right\}\subset[d_{i}].\]
We note that by the definition of \(Q_{i}\) we have \(|\sigma_{i}(G)(j)|=|G|\) and \(\sigma_{i}(G)(\sigma_{i}(G)(j))=\sigma_{i}(G)(j)\), for all \(j\in Q_{i}\). It follows that
\[\sigma_{i}(G)(j)=\sigma_{i}(G)(l),\qquad\forall\,l\in\sigma_{i}(G)(j).\]
We put an element \(j\in Q_{i}\) into a set \(C_{i}\) and remove all the members of \(\sigma_{i}(G)(j)\) from the set \(Q_{i}\). Repeating this step on the resulting sets finitely many times, we can find10 some \(C_{i}\subset Q_{i}\) of maximum cardinality such that the mapping
Footnote 10: Here the argument is simple because the acting group \(G\) is finite. It is also possible to have such an injective mapping, carried out with a much more complicated process, if the group is amenable but not necessarily finite. We do not need this in our proof. For the general statement, please refer to [11, Section 3].
\[G\times C_{i}\to[d_{i}],\quad(g,j)\mapsto\sigma_{i}(g)(j)\]
is injective. We note that this implies automatically that \(\sigma_{i}(e)(c)=c\), for all \(c\in C_{i}\), where \(e\) is the identity element of the group \(G\).
For every \(n\in\mathbb{N}\) we take a compact metrizable space \(Y_{n}\) satisfying
\[\operatorname{Widim}_{\eta}(X^{n},\rho_{\infty})=\dim(Y_{n})<+\infty\]
in company with a continuous mapping
\[f_{n}:X^{n}\to Y_{n}\]
which is an \(\eta\)-embedding with respect to the metric \(\rho_{\infty}\) on the product space \(X^{n}\). We notice that the space \(Y_{n}\) is finite-dimensional because the space \(X^{n}\) is compact.
We take \(\tau>0\) with
\[\tau\cdot\operatorname{Widim}_{\eta}(X,\rho)<\theta/2.\]
Since \(G\) is finite, there exists some \(i_{0}\in\mathbb{N}\) such that
\[|\sigma_{i}(G)(C_{i})|>(1-\tau)\cdot d_{i},\quad\forall i\geq i_{0}.\]
We take \(\delta_{0}>0\) satisfying
\[\delta_{0}<\min\left\{\eta^{2},\,\frac{1}{|G|\cdot(1+\operatorname{Widim}_{\eta}(X,\rho))}\cdot\frac{\theta}{2}\right\}.\]
Let \(CY_{1}=([0,1]\times Y_{1})/\sim\) be the cone generated by \(Y_{1}\), where \((0,y)\sim(0,y^{\prime})\) for any \(y,y^{\prime}\in Y_{1}\). We denote by \(\lambda y\) the equivalence class of \((\lambda,y)\in[0,1]\times Y_{1}\). In particular, we set \(*=0y\) (for all \(y\in Y_{1}\)). The symbol \(*\) is to denote the vertex of the cone \(CY_{1}\). The following fact is obvious:
\[\operatorname{Widim}_{\eta}(X,\rho)=\dim(Y_{1})\leq\dim(CY_{1})\leq 1+\dim(Y_{ 1})=1+\operatorname{Widim}_{\eta}(X,\rho).\]
We fix \(i\in\mathbb{N}\) for the moment.
For each \(j\in[d_{i}]\) we define a _continuous_ mapping
\[J_{j}:\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\to[0,1]\]
by sending \(\phi=(\phi_{j})_{j\in[d_{i}]}\in\operatorname{Map}(\rho,G,\delta_{0},\sigma_ {i})\) to
\[J_{j}(\phi)=\max\left\{\max_{g\in G}\left(\rho(g\phi_{j},\phi_{\sigma_{i}(g)(j )})-\sqrt{\delta_{0}}\right),\,0\right\}.\]
Now we define a mapping \(H_{i}\) as follows:
\[H_{i}:\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\to Y_{|C_{i}|}\times( CY_{1})^{d_{i}}\times Y_{1}^{|[d_{i}]\setminus\sigma_{i}(G)(C_{i})|},\]
\[\phi=(\phi_{j})_{j\in[d_{i}]}\mapsto\left(f_{|C_{i}|}((\phi_{j})_{j\in C_{i}}),(J_{j}(\phi)f_{1}(\phi_{j}))_{j\in[d_{i}]},(f_{1}(\phi_{j}))_{j\in[d_{i}] \setminus\sigma_{i}(G)(C_{i})}\right).\]
Since all the mappings \(J_{j}\) (where \(j\in[d_{i}]\)) and \(f_{n}\) (where \(n\in\mathbb{N}\)) are continuous, the mapping \(H_{i}\) constructed above is also continuous.
We claim that \(H_{i}\) is an \(\epsilon\)-embedding with respect to \(\rho_{\infty}\). To verify this, we suppose that \(H_{i}(\xi)=H_{i}(\psi)\) for some \(\xi,\psi\in\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\). What we need to show is that \(\rho(\xi_{j},\psi_{j})<\epsilon\) for all \(j\in[d_{i}]\). If \(j\in[d_{i}]\) satisfies \(J_{j}(\xi)=J_{j}(\psi)>0\) or it satisfies \(j\in[d_{i}]\setminus\sigma_{i}(G)(C_{i})\), then it is clear that \(f_{1}(\xi_{j})=f_{1}(\psi_{j})\) which (by noting the fact that the continuous mapping \(f_{1}:X\to Y_{1}\) is an \(\eta\)-embedding with respect to the metric \(\rho\) on \(X\)) implies that \(\rho(\xi_{j},\psi_{j})<\eta<\epsilon\). So we now assume that \(j\in\sigma_{i}(G)(C_{i})\) and \(J_{j}(\xi)=J_{j}(\psi)=0\). This implies that \(j=\sigma_{i}(s)(c)\) for some \(s\in G\) and \(c\in C_{i}\subset Q_{i}\), and that
\[\rho(s^{-1}\xi_{j},\xi_{\sigma_{i}(s^{-1})(j)})\leq\sqrt{\delta_{0}}<\eta, \qquad\rho(s^{-1}\psi_{j},\psi_{\sigma_{i}(s^{-1})(j)})\leq\sqrt{\delta_{0}}<\eta.\]
Since \(f_{|C_{i}|}((\xi_{k})_{k\in C_{i}})=f_{|C_{i}|}((\psi_{k})_{k\in C_{i}})\) and since the continuous mapping \(f_{|C_{i}|}:X^{C_{i}}\to Y_{|C_{i}|}\) is an \(\eta\)-embedding with respect to the metric \(\rho_{\infty}\) on the product space \(X^{C_{i}}\), we have \(\rho_{\infty}((\xi_{k})_{k\in C_{i}},(\psi_{k})_{k\in C_{i}})<\eta\), and hence in particular, \(\rho(\xi_{c},\psi_{c})<\eta\). By noting that
\[\sigma_{i}(s^{-1})(j)=\sigma_{i}(s^{-1})(\sigma_{i}(s)(c))=\sigma_{i}(s^{-1}s) (c)=\sigma_{i}(e)(c)=c,\]
we have
\[\rho(\xi_{\sigma_{i}(s^{-1})(j)},\psi_{\sigma_{i}(s^{-1})(j)})<\eta.\]
It follows that
\[\rho(s^{-1}\xi_{j},s^{-1}\psi_{j})\leq 3\eta.\]
Thus, we obtain \(\rho(\xi_{j},\psi_{j})<\epsilon\). This proves the claim.
It follows from this claim that \(\operatorname{Widim}_{\epsilon}(\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i}),\rho_{\infty})\) is bounded from above by \(\dim(H_{i}(\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})))\). To estimate the latter term, we consider the set
\[P_{i}(\phi)=\{j\in[d_{i}]:J_{j}(\phi)>0\},\]
for any \(\phi\in\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\). When \(\phi\in\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\) we have
\[\delta_{0}\cdot|P_{i}(\phi)|\leq\sum_{g\in G,j\in[d_{i}]}\rho(g\phi_{j},\phi_{ \sigma_{i}(g)(j)})^{2}\leq|G|\cdot\delta_{0}^{2}\cdot d_{i}\]
and hence
\[|P_{i}(\phi)|\leq|G|\cdot\delta_{0}\cdot d_{i}.\]
This implies that any element in the image (regarded as a compact subset of the product space \((CY_{1})^{d_{i}}\) consisting of members all having \(d_{i}\) entries) of the mapping
\[\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})\to(CY_{1})^{d_{i}},\qquad( \phi_{j})_{j\in[d_{i}]}\mapsto(J_{j}(\phi)f_{1}(\phi_{j}))_{j\in[d_{i}]}\]
has at most \(|G|\cdot\delta_{0}\cdot d_{i}\) entries that do not take the value \(*\). Since the dimension of a finite union of compact metrizable spaces is equal to the maximum of the dimensions of the spaces in the union, we conclude that \(\dim(H_{i}(\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})))\) is bounded from above by
\[\dim(Y_{|C_{i}|})+\dim(CY_{1})\cdot|G|\cdot\delta_{0}\cdot d_{i}+\dim(Y_{1}) \cdot|[d_{i}]\setminus\sigma_{i}(G)(C_{i})|\]
which, for all \(i\geq i_{0}\), does not exceed
\[\dim(Y_{|C_{i}|})+(1+\dim(Y_{1}))\cdot|G|\cdot\delta_{0}\cdot d_ {i}+\dim(Y_{1})\cdot\tau\cdot d_{i}\] \[= \operatorname{Widim}_{\eta}(X^{|C_{i}|},\rho_{\infty})+(1+ \operatorname{Widim}_{\eta}(X,\rho))\cdot|G|\cdot\delta_{0}\cdot d_{i}+ \operatorname{Widim}_{\eta}(X,\rho)\cdot\tau\cdot d_{i}\] \[\leq \operatorname{Widim}_{\eta}(X^{|C_{i}|},\rho_{\infty})+\frac{\theta }{2}\cdot d_{i}+\frac{\theta}{2}\cdot d_{i}\] \[= \operatorname{Widim}_{\eta}(X^{|C_{i}|},\rho_{\infty})+\theta\cdot d _{i}.\]
Finally, we note that \(|C_{i}|\to+\infty\) (because \(d_{i}\to+\infty\)) as \(i\to\infty\) and that for all \(i\in\mathbb{N}\) we have \(d_{i}\geq|G|\cdot|C_{i}|\). Thus, we deduce that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}( \operatorname{Map}(\rho,G,\delta_{0},\sigma_{i}),\rho_{\infty})}{d_{i}} \leq\limsup_{i\to\infty}\frac{\dim(H_{i}(\operatorname{Map}(\rho,G,\delta_{0},\sigma_{i})))}{d_{i}}\] \[\leq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\eta}(X^{|C_{ i}|},\rho_{\infty})}{d_{i}}+\theta\] \[\leq\frac{1}{|G|}\cdot\limsup_{i\to\infty}\frac{\operatorname{Widim }_{\eta}(X^{|C_{i}|},\rho_{\infty})}{|C_{i}|}+\theta\] \[=\frac{1}{|G|}\cdot\lim_{n\to\infty}\frac{\operatorname{Widim}_{ \eta}(X^{n},\rho_{\infty})}{n}+\theta.\]
This completes the proof.
**Lemma 4.2** (Estimate from below).: \[\operatorname{mdim}_{\Sigma}(X,G)\geq\frac{1}{|G|}\cdot\lim_{\epsilon\to 0} \lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
Proof.: First of all, we note that Lemma 2.3 allows us to reduce this issue to the following statement:
\[\operatorname{mdim}_{\Sigma}(X^{|G|},G)\geq\lim_{\epsilon\to 0}\lim_{n\to\infty} \frac{\operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
To show the reduced inequality, we fix the compatible metric \(D=\rho_{\infty}\) on the product space \(X^{|G|}\). We remind the reader that only the product action \((X^{|G|},G)\) is concerned in the remaining part of this proof.
We take \(\epsilon>0\) and \(\delta>0\) arbitrarily and fix them in the proof. By definition it suffices to prove that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(\operatorname{Map} (D,G,\delta,\sigma_{i}),D_{\infty})}{d_{i}}\geq\lim_{n\to\infty}\frac{ \operatorname{Widim}_{\epsilon}(X^{n},\rho_{\infty})}{n}.\]
For every \(i\in\mathbb{N}\) we let \(Q_{i}\subset[d_{i}]\) be the intersection of the following two sets:
\[\left\{j\in[d_{i}]:\sigma_{i}(g)(j)\neq\sigma_{i}(g^{\prime})(j),\;\forall g \neq g^{\prime}\in G\right\},\]
\[\left\{j\in[d_{i}]:\sigma_{i}(g)(\sigma_{i}(g^{\prime})(j))=\sigma_{i}(gg^{ \prime})(j),\;\forall g,g^{\prime}\in G\right\}.\]
For any \(j\in[d_{i}]\) and \(A\subset[d_{i}]\) we set
\[\sigma_{i}(G)(j)=\left\{\sigma_{i}(g)(j):g\in G\right\}\subset[d_{i}],\quad \sigma_{i}(G)(A)=\left\{\sigma_{i}(g)(a):g\in G,a\in A\right\}\subset[d_{i}].\]
We note that by the definition of \(Q_{i}\) we have
\[|\sigma_{i}(G)(j)|=|G|,\qquad\sigma_{i}(G)(\sigma_{i}(G)(j))=\sigma_{i}(G)(j),\qquad\forall j\in Q_{i}.\]
It follows that \(\sigma_{i}(G)(j)=\sigma_{i}(G)(l)\), for all \(l\in\sigma_{i}(G)(j)\). As we explained in the proof of Lemma 4.1, we can find a subset \(C_{i}\) of \(Q_{i}\), such that the mapping
\[G\times C_{i}\to[d_{i}],\quad(g,j)\mapsto\sigma_{i}(g)(j)\]
is injective. Since \(G\) is a finite group, there is some \(i_{0}\in\mathbb{N}\) (sufficiently large) satisfying that
\[|\sigma_{i}(G)(C_{i})|>(1-\delta^{2})\cdot d_{i},\qquad\forall\;i\geq i_{0}.\]
For every \(i\in\mathbb{N}\) we define a mapping \(P_{i}\) as follows:
\[P_{i}:X^{d_{i}}\to(X^{|G|})^{d_{i}},\quad p=(p_{j})_{j\in[d_{i}]}\mapsto\phi= (\phi_{j})_{j\in[d_{i}]}=((\phi_{j}(g))_{g\in G})_{j\in[d_{i}]},\]
where \(\phi_{j}(g)\) is defined by
\[\phi_{j}(g)=\begin{cases}hg^{-1}p_{\sigma_{i}(g)(k)},&\quad\text{if}\;\;j= \sigma_{i}(h)(k),\;h\in G,\,k\in C_{i}\\ p_{j},&\quad\text{if}\;\;j\in[d_{i}]\setminus\sigma_{i}(G)(C_{i})\end{cases}.\]
Notice that here we have identified \(X^{|G|}\) with \(X^{G}\) for the indices' convenience.
The mapping \(P_{i}:X^{d_{i}}\to(X^{|G|})^{d_{i}}\) is well-defined, because the mapping
\[G\times C_{i}\to[d_{i}],\quad(g,j)\mapsto\sigma_{i}(g)(j)\]
is injective. Clearly, it is continuous. Since for any \(j\in[d_{i}]\) there is some \(g\in G\) such that \(\phi_{j}(g)=p_{j}\), the mapping \(P_{i}:X^{d_{i}}\to(X^{|G|})^{d_{i}}\) is distance-increasing with respect to the metric \(\rho_{\infty}\) on \(X^{d_{i}}\) and the metric \(D_{\infty}\) on \((X^{|G|})^{d_{i}}\), i.e.
\[\rho_{\infty}(p,p^{\prime})\leq D_{\infty}(P_{i}(p),P_{i}(p^{\prime})),\qquad \forall p,p^{\prime}\in X^{d_{i}}.\]
We now claim that for all sufficiently large \(i\in\mathbb{N}\) it will be true that \(P_{i}(X^{d_{i}})\) is contained in \(\operatorname{Map}(D,G,\delta,\sigma_{i})\). This claim will be verified in a moment. Notice the fact that \(d_{i}\to+\infty\) as \(i\to\infty\). By the claim we will obtain that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}( \operatorname{Map}(D,G,\delta,\sigma_{i}),D_{\infty})}{d_{i}} \geq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(P_{ i}(X^{d_{i}}),D_{\infty})}{d_{i}}\] \[\geq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^ {d_{i}},\rho_{\infty})}{d_{i}}\] \[=\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(X^{n}, \rho_{\infty})}{n}.\]
This will end the proof.
In what follows we prove that \(P_{i}(X^{d_{i}})\) is contained in \(\operatorname{Map}(D,G,\delta,\sigma_{i})\), for all integers \(i\geq i_{0}\). We fix an integer \(i\geq i_{0}\). We take an arbitrary \(p=(p_{j})_{j\in[d_{i}]}\in X^{d_{i}}\). Let \(\phi=P_{i}(p)\) and write
\[\phi=(\phi_{j})_{j\in[d_{i}]}=((\phi_{j}(g))_{g\in G})_{j\in[d_{i}]}\;\in\;(X^{|G|})^{d_{i}}=(X^{G})^{d_{i}}.\]
We note that \(C_{i}\subset Q_{i}\). By the construction of the mapping \(P_{i}:X^{d_{i}}\to(X^{|G|})^{d_{i}}\) we have obviously that if \(j\in[d_{i}]\) satisfies \(j=\sigma_{i}(h)(c)\) for some \(h\in G\) and some \(c\in C_{i}\) then for any \(s\in G\) and any \(g\in G\)
\[s\phi_{j}(g)=shg^{-1}p_{\sigma_{i}(g)(c)}=\phi_{\sigma_{i}(sh)(c)}(g)=\phi_{ \sigma_{i}(s)(\sigma_{i}(h)(c))}(g)=\phi_{\sigma_{i}(s)(j)}(g).\]
Thus, for any \(j\in\sigma_{i}(G)(C_{i})\)
\[s\phi_{j}=\phi_{\sigma_{i}(s)(j)},\qquad\forall\,s\in G.\]
It follows that for all \(s\in G\)
\[D_{2}(s\phi,\phi\circ\sigma_{i}(s)) =\sqrt{\frac{1}{d_{i}}\cdot\sum_{j\in[d_{i}]}\big{(}D(\phi_{\sigma _{i}(s)(j)},s\phi_{j})\big{)}^{2}}\] \[=\sqrt{\frac{1}{d_{i}}\cdot\sum_{j\in[d_{i}]\setminus\sigma_{i}( G)(C_{i})}\big{(}D(\phi_{\sigma_{i}(s)(j)},s\phi_{j})\big{)}^{2}}\] \[\leq\sqrt{\frac{d_{i}-|\sigma_{i}(G)(C_{i})|}{d_{i}}}\] \[\leq\delta.\]
This implies that \(\phi\in\operatorname{Map}(D,G,\delta,\sigma_{i})\). Thus, we conclude.
## 5. Sofic mean dimension of full shifts
In this section we prove Statement (ii) of Theorem 3.1. Let us start with some general settings.
Let \(K\) be a compact metrizable space. We take a compatible metric \(D\) on the alphabet \(K\) arbitrarily. Let \(G\) be a sofic group. Let \(\Sigma=\{\sigma_{i}:G\to\operatorname{Sym}(d_{i})\}_{i\in\mathbb{N}}\) be a sofic approximation sequence for \(G\). We are dealing with the shift action \(\sigma_{G}\) of \(G\) on the product space \(K^{G}\).
We denote by \(e\) the identity element of the group \(G\). We fix a (countable) family \(\{\alpha_{g}\}_{g\in G}\) of positive real numbers such that
\[\alpha_{e}=1,\qquad\sum_{g\in G}\alpha_{g}<2.\]
We fix a metric \(\rho\) on \(K^{G}\) compatible with the product topology as follows:
\[\rho(x,y)=\sum_{g\in G}\alpha_{g}D(x_{g},y_{g}),\quad(x=(x_{g})_{g\in G},y=(y_ {g})_{g\in G}\in K^{G}).\]
Since \(K\) is a compact metrizable space, so is \(K^{G}\). Thus, we may assume, without loss of generality, that the diameter of the product space \(K^{G}\) with respect to the metric \(\rho\) is equal to \(1\).
Our goal is to prove the following equality:
\[\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})=\lim_{\epsilon\to 0}\lim_{n \to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
This will follow directly from Lemma 5.1 and Lemma 5.2.
**Lemma 5.1** (Estimate from below).: \[\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})\geq\lim_{\epsilon\to 0}\lim_{n \to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
Proof.: We take \(\epsilon>0\) arbitrarily. We take \(\delta>0\) and a (nonempty) finite subset \(F\) of \(G\). We fix them in the proof. By definition it suffices to show that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(\operatorname{Map} (\rho,F,\delta,\sigma_{i}),\rho_{\infty})}{d_{i}}\geq\lim_{n\to\infty}\frac{ \operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
For every \(i\in\mathbb{N}\) we let
\[P_{i}:K^{d_{i}}\to(K^{G})^{d_{i}},\qquad p=(p_{j})_{j\in[d_{i}]}\mapsto\phi=( \phi_{j})_{j\in[d_{i}]}\]
(where \(\phi_{j}\) has the form \(\phi_{j}=(\phi_{j}(g))_{g\in G}\in K^{G}\), for each \(j\in[d_{i}]\)) be the mapping defined by
\[\phi_{j}(g)=\begin{cases}p_{\sigma_{i}(g)(j)},&\text{ if }\,g\in G\setminus\{e \}\\ p_{j},&\text{ if }\,g=e\end{cases}.\]
Clearly, for any \(i\in\mathbb{N}\) the mapping \(P_{i}:K^{d_{i}}\to(K^{G})^{d_{i}}\) is continuous. Moreover, it is distance-increasing with respect to the metric \(D_{\infty}\) on \(K^{d_{i}}\) and the metric \(\rho_{\infty}\) on \((K^{G})^{d_{i}}\), i.e.,
\[D_{\infty}(p,p^{\prime})\leq\rho_{\infty}(P_{i}(p),P_{i}(p^{\prime})),\quad \forall p,p^{\prime}\in K^{d_{i}}.\]
We observe that for all sufficiently large \(i\in\mathbb{N}\) it is true that \(P_{i}(K^{d_{i}})\) is a subset of \(\operatorname{Map}(\rho,F,\delta,\sigma_{i})\). We will verify this observation in a moment. Assuming the observation and noting that \(d_{i}\to+\infty\) as \(i\to\infty\) we will deduce that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}( \operatorname{Map}(\rho,F,\delta,\sigma_{i}),\rho_{\infty})}{d_{i}} \geq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}(P_{ i}(K^{d_{i}}),\rho_{\infty})}{d_{i}}\] \[\geq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon}( K^{d_{i}},D_{\infty})}{d_{i}}\] \[=\lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D _{\infty})}{n}.\]
This will finally end the proof.
We now verify that for all sufficiently large \(i\in\mathbb{N}\) we have that \(P_{i}(K^{d_{i}})\) is contained in \(\operatorname{Map}(\rho,F,\delta,\sigma_{i})\). We choose a (nonempty) finite subset \(E\) of \(G\), containing the identity element \(e\in G\), such that if two points \(\xi=(\xi_{g})_{g\in G}\) and \(\xi^{\prime}=(\xi^{\prime}_{g})_{g\in G}\) in \(K^{G}\) satisfy \(\xi_{g}=\xi^{\prime}_{g}\) for all \(g\in E\), then they satisfy \(\rho(\xi,\xi^{\prime})<\delta/2\). For every \(i\in\mathbb{N}\) we put
\[Q_{i}=\left\{j\in[d_{i}]:\sigma_{i}(s)\circ\sigma_{i}(t)(j)=\sigma_{i}(st)(j), \ \forall s\in E,\,\forall t\in F\cup\left\{e\right\}\right\}.\]
Since both \(E\) and \(F\) are finite subsets of \(G\), there is some \(k\in\mathbb{N}\) sufficiently large such that for any integer \(i>k\)
\[\frac{|Q_{i}|}{d_{i}}>1-\frac{3\delta^{2}}{8}.\]
Now we prove
\[P_{i}(K^{d_{i}})\subset\operatorname{Map}(\rho,F,\delta,\sigma_{i}),\quad\ \forall i>k.\]
We fix an integer \(i>k\). We take \(p\in K^{d_{i}}\) arbitrarily. Let \(\phi=P_{i}(p)\). We write
\[p=(p_{j})_{j\in[d_{i}]}\in K^{d_{i}},\quad\quad\phi=((\phi_{j}(g))_{g\in G})_{ j\in[d_{i}]}\in(K^{G})^{d_{i}}.\]
It is clear that for any \(j\in Q_{i}\) we have \(\sigma_{i}(e)(j)=j\). Therefore,
\[\phi_{j}(g)=p_{\sigma_{i}(g)(j)},\quad\quad\forall g\in G,\quad\forall j\in Q _{i}.\]
It follows that for any \(j\in Q_{i}\cap(\sigma_{i}(t))^{-1}(Q_{i})\) and any \(t\in F\cup\left\{e\right\}\)
\[t\phi_{j}(s)=\phi_{j}(st)=p_{\sigma_{i}(st)(j)}=p_{\sigma_{i}(s)\circ\sigma_ {i}(t)(j)}=\phi_{\sigma_{i}(t)(j)}(s),\quad\forall s\in E.\]
By the choice of \(E\) we have
\[\rho(t\phi_{j},\phi_{\sigma_{i}(t)(j)})<\delta/2,\quad\quad\forall t\in F, \quad\forall j\in Q_{i}\cap(\sigma_{i}(t))^{-1}(Q_{i}).\]
Thus, we conclude that for all \(t\in F\)
\[\rho_{2}(\phi\circ\sigma_{i}(t),t\phi) =\sqrt{\frac{1}{d_{i}}\cdot\sum_{j\in[d_{i}]}\left(\rho(\phi_{\sigma _{i}(t)(j)},t\phi_{j})\right)^{2}}\] \[\leq\sqrt{\frac{\delta^{2}}{4}+\frac{|[d_{i}]\setminus(Q_{i}\cap( \sigma_{i}(t))^{-1}(Q_{i}))|}{d_{i}}}\] \[\leq\sqrt{\frac{\delta^{2}}{4}+2\cdot\left(1-\frac{|Q_{i}|}{d_{i}} \right)}\] \[\leq\delta.\]
This implies that \(\phi\in\operatorname{Map}(\rho,F,\delta,\sigma_{i})\). The statement follows.
**Lemma 5.2** (Estimate from above).: \[\operatorname{mdim}_{\Sigma}(K^{G},\sigma_{G})\leq\lim_{\epsilon\to 0}\lim_{n \to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
Proof.: We take \(\epsilon>0\) and \(\eta>0\) arbitrarily and fix them in the proof. By definition it suffices to prove that there exist a finite (and nonempty) subset \(F_{0}\) of \(G\) and some \(\delta_{0}>0\) such that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{3\epsilon}\left(\operatorname {Map}(\rho,F_{0},\delta_{0},\sigma_{i}),\rho_{\infty}\right)}{d_{i}}\leq\eta+ \lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
To this aim, for any \(n\in\mathbb{N}\) we take a compact metrizable space \(A_{n}\) satisfying
\[\dim(A_{n})=\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})\]
in company with a continuous mapping \(f_{n}:K^{n}\to A_{n}\) which is an \(\epsilon\)-embedding with respect to the metric \(D_{\infty}\) on the product space \(K^{n}\). Since \(K\) is compact (and nonempty), the term \(\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})\) is always finite.
We choose a finite subset \(F_{0}\) of \(G\), containing the identity element \(e\), such that if two points \(\xi=(\xi(g))_{g\in G}\) and \(\xi^{\prime}=(\xi^{\prime}(g))_{g\in G}\) coming from \(K^{G}\) satisfy \(D(\xi(s),\xi^{\prime}(s))\leq 2\epsilon\) for all \(s\in F_{0}\), then they satisfy \(\rho(\xi,\xi^{\prime})<3\epsilon\).
We pick \(\delta_{0}>0\) sufficiently small such that
\[\delta_{0}<\min\left\{\frac{\eta}{(1+\operatorname{Widim}_{\epsilon}(K,D)) \cdot|F_{0}|^{2}},\,\frac{\epsilon^{2}}{4}\right\}.\]
So our main task now is to estimate \(\operatorname{Widim}_{3\epsilon}(\operatorname{Map}(\rho,F_{0},\delta_{0}, \sigma_{i}),\rho_{\infty})\) from above, for all sufficiently large \(i\in\mathbb{N}\).
We fix an arbitrary \(i\in\mathbb{N}\) for the moment.
For every \(j\in[d_{i}]\) we define a mapping
\[J_{j}:\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\to[0,1]\]
by sending
\[\phi=(\phi_{j})_{j\in[d_{i}]}\in\operatorname{Map}(\rho,F_{0},\delta_{0}, \sigma_{i})\subset(K^{G})^{d_{i}}\]
to
\[J_{j}(\phi)=\max\left\{\max_{s\in F_{0}}\left(\rho(s\phi_{j},\phi_{\sigma_{i}(s)(j) })-\sqrt{\delta_{0}}\right),0\right\},\]
where each \(\phi_{j}\) (\(j\in[d_{i}]\)) has the form \(\phi_{j}=(\phi_{j}(g))_{g\in G}\in K^{G}\). Clearly, all the mappings \(J_{j}:\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\to[0,1]\) (\(j\in[d_{i}]\)) are continuous.
Let \(CA_{1}=([0,1]\times A_{1})/\sim\) be the cone generated by \(A_{1}\), where \((0,a)\sim(0,a^{\prime})\) for all \(a,a^{\prime}\in A_{1}\). We denote by \(\lambda a\) the equivalence class of \((\lambda,a)\in[0,1]\times A_{1}\). We denote the vertex of the cone \(CA_{1}\) by the symbol \(*\) (namely, we set \(*=0a\), for any \(a\in A_{1}\)). We notice that the following inequality is clear:
\[\operatorname{Widim}_{\epsilon}(K,D)=\dim(A_{1})\leq\dim(CA_{1})\leq 1+\dim(A_{ 1})=1+\operatorname{Widim}_{\epsilon}(K,D).\]
We construct11 a mapping \(H_{i}\) as follows:
Footnote 11: Here the point is to generate a cone from the image of a space \(K\) under some \(\epsilon\)-embedding mapping (not from the space \(K\) itself). As we explained intuitively in [10], for any fixed \(j\in[d_{i}]\) we cannot expect \(\phi_{j}\) to move in a continuous way between good (i.e. \(J_{j}(\phi)=0\), in this case the family \(\{\phi_{l}(e):l\in[d_{i}]\}\) carries nearly all the information sufficiently close to \(\{\phi_{j}(s):s\in F_{0}\}\)) and bad (i.e. \(J_{j}(\phi)>0\), in this case the information carried by the family \(\{\phi_{l}(e):l\in[d_{i}]\}\) is then not able to recover \(\{\phi_{j}(s):s\in F_{0}\}\), and as a consequence, far from admitting some \(3\epsilon\)-embedding unless we record almost the whole \(\{\phi_{j}(s):s\in F_{0}\}\)) regions as \(\phi\) moves continuously within \(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\) (while \(\phi_{j}\) may lie in either of these two regions as \(\phi\in\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\) moves). However, the dimension of the space \((CA_{1})^{|F_{0}|\cdot d_{i}}\) is considerably large (when simply picking out the rectangle \(\{\phi_{k}(s):s\in F_{0},k\in[d_{i}]\}\)). The reason why we need to build a cone in the construction of \(H_{i}\) is that this can record every \(\phi_{j}\) (for \(j\in[d_{i}]\)) in a more flexible and productive way.
\[H_{i}:\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\to A_{d_{i}}\times(CA_{1})^{|F_{0}|\cdot d_{i}},\]
\[\phi=(\phi_{j})_{j\in[d_{i}]}\mapsto\left(f_{d_{i}}((\phi_{j}(e))_{j\in[d_{i}]}),\;((J_{j}(\phi)f_{1}(\phi_{j}(s)))_{s\in F_{0}})_{j\in[d_{i}]}\right).\]
We now claim that the mapping \(H_{i}:\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\to A_{d_{i}} \times(CA_{1})^{|F_{0}|\cdot d_{i}}\) is a \((3\epsilon)\)-embedding with respect to \(\rho_{\infty}\). To verify this claim, we take two points \(\varphi=(\varphi_{j})_{j\in[d_{i}]}\) and \(\psi=(\psi_{j})_{j\in[d_{i}]}\) arbitrarily within \(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\). We assume that \(H_{i}(\varphi)=H_{i}(\psi)\). What we then need to show is that \(\rho_{\infty}(\varphi,\psi)<3\epsilon\). In fact, it follows from the assumption \(H_{i}(\varphi)=H_{i}(\psi)\) that
\[f_{d_{i}}((\varphi_{j}(e))_{j\in[d_{i}]})=f_{d_{i}}((\psi_{j}(e))_{j\in[d_{i}]});\]
\[(J_{j}(\varphi)f_{1}(\varphi_{j}(s)))_{s\in F_{0}}=(J_{j}(\psi)f_{1}(\psi_{j} (s)))_{s\in F_{0}},\qquad\forall j\in[d_{i}].\]
Since the continuous mapping \(f_{d_{i}}:K^{d_{i}}\to A_{d_{i}}\) is an \(\epsilon\)-embedding with respect to the distance \(D_{\infty}\), the former implies that \(D_{\infty}((\varphi_{j}(e))_{j\in[d_{i}]},(\psi_{j}(e))_{j\in[d_{i}]})<\epsilon\), i.e.
\[D(\varphi_{j}(e),\psi_{j}(e))<\epsilon,\qquad\forall j\in[d_{i}].\]
The latter sorts all those \(j\in[d_{i}]\) into two cases:
* either \((J_{j}(\varphi)f_{1}(\varphi_{j}(s)))_{s\in F_{0}}=(J_{j}(\psi)f_{1}(\psi_{j} (s)))_{s\in F_{0}}=(*,*,\ldots,*)\);
* or \((J_{j}(\varphi)f_{1}(\varphi_{j}(s)))_{s\in F_{0}}=(J_{j}(\psi)f_{1}(\psi_{j}(s) ))_{s\in F_{0}}\in(CA_{1}\setminus\{*\})^{|F_{0}|}\).
If some \(j\in[d_{i}]\) encounters the first case, then we have \(J_{j}(\varphi)=J_{j}(\psi)=0\) which means that
\[\rho(s\varphi_{j},\varphi_{\sigma_{i}(s)(j)})\leq\sqrt{\delta_{0}},\quad\rho(s \psi_{j},\psi_{\sigma_{i}(s)(j)})\leq\sqrt{\delta_{0}},\qquad\forall s\in F_{0}.\]
This implies that
\[D(s\varphi_{j}(e),\varphi_{\sigma_{i}(s)(j)}(e))\leq\sqrt{\delta_{0}},\quad D (s\psi_{j}(e),\psi_{\sigma_{i}(s)(j)}(e))\leq\sqrt{\delta_{0}},\qquad\forall s \in F_{0}.\]
Since \(s\varphi_{j}(e)=\varphi_{j}(s)\) and \(s\psi_{j}(e)=\psi_{j}(s)\), we have
\[D(\varphi_{j}(s),\varphi_{\sigma_{i}(s)(j)}(e))\leq\sqrt{\delta_{0}},\qquad D (\psi_{j}(s),\psi_{\sigma_{i}(s)(j)}(e))\leq\sqrt{\delta_{0}},\qquad\forall s \in F_{0}.\]
Hence, by noting that \(D(\varphi_{l}(e),\psi_{l}(e))<\epsilon\) for all \(l\in[d_{i}]\), we deduce that
\[D(\varphi_{j}(s),\psi_{j}(s))\leq 2\sqrt{\delta_{0}}+\epsilon<2\epsilon,\qquad \forall s\in F_{0}.\]
Thus, by the choice of \(F_{0}\) we obtain that
\[\rho(\varphi_{j},\psi_{j})<3\epsilon.\]
If any \(j\in[d_{i}]\) encounters the second case, then it follows directly from \(J_{j}(\varphi)=J_{j}(\psi)>0\) that \((f_{1}(\varphi_{j}(s)))_{s\in F_{0}}=(f_{1}(\psi_{j}(s)))_{s\in F_{0}}\). This implies that \(D(\varphi_{j}(s),\psi_{j}(s))<\epsilon\), for all \(s\in F_{0}\). By the choice of \(F_{0}\) we also have in this case that \(\rho(\varphi_{j},\psi_{j})<3\epsilon\). We therefore conclude that \(\rho_{\infty}(\varphi,\psi)<3\epsilon\). This proves the claim.
By this claim we get an upper bound for \(\operatorname{Widim}_{3\epsilon}(\operatorname{Map}(\rho,F_{0},\delta_{0}, \sigma_{i}),\rho_{\infty})\):
\[\operatorname{Widim}_{3\epsilon}(\operatorname{Map}(\rho,F_{0},\delta_{0}, \sigma_{i}),\rho_{\infty})\leq\dim(H_{i}(\operatorname{Map}(\rho,F_{0},\delta _{0},\sigma_{i}))).\]
To estimate the term \(\dim(H_{i}(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})))\) from above, we need to deal with a subset of \([d_{i}]\), for any \(\phi=(\phi_{j})_{j\in[d_{i}]}\) coming from \(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\):
\[\Omega_{\phi}(\rho,F_{0},\delta_{0},\sigma_{i})=[d_{i}]\setminus\{j\in[d_{i}] :\rho(s\phi_{j},\phi_{\sigma_{i}(s)(j)})\leq\sqrt{\delta_{0}},\ \forall s\in F_{0}\}.\]
For any \(\phi=(\phi_{j})_{j\in[d_{i}]}\in\operatorname{Map}(\rho,F_{0},\delta_{0}, \sigma_{i})\) we have
\[\sum_{j\in[d_{i}]}\rho(s\phi_{j},\phi_{\sigma_{i}(s)(j)})^{2}\leq\delta_{0}^{2 }\cdot d_{i},\quad\forall\,s\in F_{0}.\]
It follows that
\[\delta_{0}\cdot|\Omega_{\phi}(\rho,F_{0},\delta_{0},\sigma_{i})|\leq\sum_{j\in [d_{i}]}\sum_{s\in F_{0}}\rho(s\phi_{j},\phi_{\sigma_{i}(s)(j)})^{2}\leq|F_{0}| \cdot\delta_{0}^{2}\cdot d_{i}\]
and hence
\[|\Omega_{\phi}(\rho,F_{0},\delta_{0},\sigma_{i})|\leq|F_{0}|\cdot\delta_{0} \cdot d_{i}.\]
This implies that for any given \(\phi\in\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})\) the image \(H_{i}(\phi)\in A_{d_{i}}\times(CA_{1})^{|F_{0}|\cdot d_{i}}\) has at least \(|F_{0}|\cdot(d_{i}-|F_{0}|\cdot\delta_{0}\cdot d_{i})\) entries that take the value \(*\). More precisely, \(H_{i}(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i}))\) is contained in
\[\bigcup_{l\in\mathbb{Z},\ 0\leq l\leq|F_{0}|\cdot\delta_{0}\cdot d_{i}}\ \bigcup_{k_{1},\ldots,k_{d_{i}}\in\{0,1\},\ k_{1}+\cdots+k_{d_{i}}=l}\ A_{d_{i}}\times\prod_{j=1}^{d_{i}}(\{*\}^{|F_{0}|})^{1-k_{j}}\times((CA_{1})^{|F_{0}|})^{k_{j}},\]
where we set
\[\{*\}^{|F_{0}|}\times((CA_{1})^{|F_{0}|})^{0}=\{*\}^{|F_{0}|},\qquad(\{*\}^{|F _{0}|})^{0}\times(CA_{1})^{|F_{0}|}=(CA_{1})^{|F_{0}|}.\]
Thus, we deduce that
\[\dim(H_{i}(\operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i})))\leq\dim(A_ {d_{i}})+|F_{0}|^{2}\cdot\delta_{0}\cdot d_{i}\cdot\dim(CA_{1}).\]
It follows that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{3\epsilon}\left( \operatorname{Map}(\rho,F_{0},\delta_{0},\sigma_{i}),\rho_{\infty}\right)}{d _{i}}\] \[\qquad\leq\limsup_{i\to\infty}\frac{\dim(H_{i}(\operatorname{Map} (\rho,F_{0},\delta_{0},\sigma_{i})))}{d_{i}}\] \[\qquad\leq\limsup_{i\to\infty}\frac{\dim(A_{d_{i}})+|F_{0}|^{2} \cdot\delta_{0}\cdot d_{i}\cdot\dim(CA_{1})}{d_{i}}\] \[\qquad\leq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon }(K^{d_{i}},D_{\infty})}{d_{i}}+|F_{0}|^{2}\cdot\delta_{0}\cdot(1+\operatorname {Widim}_{\epsilon}(K,D))\] \[\qquad\leq\limsup_{i\to\infty}\frac{\operatorname{Widim}_{\epsilon }(K^{d_{i}},D_{\infty})}{d_{i}}+\eta.\]
Since \(d_{i}\to\infty\) as \(i\to\infty\), we conclude that
\[\limsup_{i\to\infty}\frac{\operatorname{Widim}_{3\epsilon}\left(\operatorname{ Map}(\rho,F_{0},\delta_{0},\sigma_{i}),\rho_{\infty}\right)}{d_{i}}\leq\eta+ \lim_{n\to\infty}\frac{\operatorname{Widim}_{\epsilon}(K^{n},D_{\infty})}{n}.\]
This is as desired.
|
2305.05497 | Magnon dispersion in ferromagnetic SrRuO$_3$ | The magnetic excitations in ferromagnetic SrRuO$_3$ were studied by inelastic
neutron scattering combining experiments on triple-axis and time-of-flight
spectrometers with and without polarization analysis. A quadratic spin-wave
dispersion with an anisotropy gap describes the low-energy low-temperature
response. The magnon dispersion extends to at least 35 meV and there is no
direct evidence for a continuum of Stoner excitations below this energy.
However, the magnon response is weakened at higher energy. In addition to the
anomalous softening of the spin-wave stiffness and of the gap, which is induced
by the topology of the Bloch states, the magnon excitations are broadened in
energy and this effect increases upon heating. | K. Jenni, S. Kunkemöller, A. Tewari, R. A. Ewings, Y. Sidis, A. Schneidewind, P. Steffens, A. A. Nugroho, M. Braden | 2023-05-09T14:48:31Z | http://arxiv.org/abs/2305.05497v1 | # Magnon dispersion in ferromagnetic SrRuO\({}_{3}\)
###### Abstract
The magnetic excitations in ferromagnetic SrRuO\({}_{3}\) were studied by inelastic neutron scattering combining experiments on triple-axis and time-of-flight spectrometers with and without polarization analysis. A quadratic spin-wave dispersion with an anisotropy gap describes the low-energy low-temperature response. The magnon dispersion extends to at least 35 meV and there is no direct evidence for a continuum of Stoner excitations below this energy. However, the magnon response is weakened at higher energy. In addition to the anomalous softening of the spin-wave stiffness and of the gap, which is induced by the topology of the Bloch states, the magnon excitations are broadened in energy and this effect increases upon heating.
## I Introduction
Among the Ruddlesden-Popper ruthenates Sr\({}_{n+1}\)Ru\({}_{n}\)O\({}_{3n+1}\), SrRuO\({}_{3}\) is the only simple material to exhibit ferromagnetic order at zero magnetic field [1; 2; 3]. This ferromagnetism inspired the proposition of \(p\)-wave superconductivity in Sr\({}_{2}\)RuO\({}_{4}\)[4; 5] with a pairing mechanism involving ferromagnetic fluctuations [6]. But the magnetism in SrRuO\({}_{3}\) itself is intriguing because of the connection to anomalies in various properties. At the ferromagnetic transition temperature of T\({}_{\rm C}=165\) K there is a kink in the direct-current transport measurement [7]. In addition the cell volume does not shrink in the ordered phase, which is known as the invar effect [8]. The spin degree of freedom thus seems to be coupled to charge and lattice degrees of freedom [7; 8; 9]. SrRuO\({}_{3}\) can be categorized as a 'bad metal', because the high-temperature resistivity passes through the Ioffe-Regel limit around 500 K without indication of saturation [9]. In metallic magnets the question about the local or itinerant character is always challenging [10]. Based on a pressure study of T\({}_{\rm C}\), the SrRuO\({}_{3}\) material has been classified as a moderately weak itinerant ferromagnet [11], while a recent ARPES study proposes a dual nature for majority and minority states in SrRuO\({}_{3}\)[12]. According to itinerant Stoner theory, the low-\(q\) spin-wave dispersion corresponds to a bound state and passes into a continuum of electron-hole pair excitations [10; 13] that so far has not been reported for SrRuO\({}_{3}\).
Magnetization measurements reveal a large anisotropy with the magnetic easy axis pointing along the elongation of the RuO\({}_{6}\) octahedron (orthorhombic \(c\) axis in space group _Pnma_) [14]. The anisotropy field of \(\sim\)10 T documents strong spin-orbit coupling in this material [14; 15; 16; 17]. This strong spin-orbit coupling also implies anomalous magnetotransport properties that can be attributed to Weyl points in the electronic structure [18; 19; 20; 21; 22]. For SrRuO\({}_{3}\) the relation between the intrinsic anomalous Hall effect and the topology of the electronic structure was demonstrated for the first time [18]. The combination of orbital band degeneracy, magnetic exchange splitting and spin-orbit coupling induces Weyl points and a Berry phase [20], which are accepted to explain the peculiar temperature dependence of the anomalous Hall effect [18; 20; 23; 24; 25]. Other anomalous magneto-transport properties corroborated the strong impact of Weyl points in SrRuO\({}_{3}\)[21; 22; 26].
Previous inelastic neutron scattering (INS) studies on the magnetic excitations in SrRuO\({}_{3}\) focussed on the temperature dependencies of the magnon gap, \(\Delta\), and of the spin-wave stiffness constant \(D\)[20; 25]. Single-crystal studies find anomalous temperature dependencies for both parameters that were attributed to the impact of the Bloch states topology [25]. The Weyl points lead to an interconnected renormalization of the two spin-wave dispersion parameters \(\Delta\) and \(D\)[20; 25]. Here we use the combination of polarized and unpolarized INS experiments with and without polarization analysis to characterize the spectrum of magnetic excitations in a broader energy range. At low energies we find a nearly parabolic spin-wave dispersion, and magnon scattering extends to at least 35 meV, but there is no signature of a Stoner continuum. However, the magnetic excitations in SrRuO\({}_{3}\) are extremely broad.
## II Experimental
SrRuO\({}_{3}\) crystallizes in an orthorhombic lattice (space group \(Pnma\)) at room temperature after undergoing two structural transitions: from cubic to tetragonal at 975 K and from tetragonal to orthorhombic at 800 K [1; 2; 3]. This symmetry reduction results in six possible twin-domain orientations which imitate the cubic symmetry. Therefore, the pseudo-cubic lattice (space group \(Pm\bar{3}m\)) with lattice parameter \(a_{c}=3.93\) Å is used here and all scattering vectors \(\mathbf{Q}\) are given according to this lattice. The relations between orthorhombic lattice parameters and the cubic directions are as follows: \(\mathbf{a}\parallel[1,0,1]_{c}\), \(\mathbf{b}\parallel[0,1,0]_{c}\), and \(\mathbf{c}\parallel[\bar{1},0,1]_{c}\) with \(a\approx c\approx\sqrt{2}a_{c}\) and \(b\approx 2a_{c}\) [14]. The sample can be detwinned by applying a magnetic field of more than 1 T above T\({}_{\rm C}\) along \([\bar{1},0,1]_{c}\) and then cooling down into the ferromagnetic phase [14]. It develops a single domain state where the easy axis (orthorhombic \(c\)) points along the applied field. This mono-domain state persists at low temperatures even when the field is turned off [14]. Magnetic detwinning was used in experiments at the triple-axis spectrometers PANDA and IN20.
The inelastic neutron scattering (INS) data were collected using single crystals grown by the floating-zone method [17]. The grown crystals exhibit ferromagnetic order below T\({}_{\rm C}=165\) K with a saturation magnetization M\({}_{\rm sat}=1.6\,\mu_{B}\)/f.u. [17]. The coaligned multi-crystal assembly which was used for most of the neutron scattering experiments is depicted in Fig. 1; it contains six crystals with a total mass of about 8 g. The compact crystal assembly yields a high material density inside a sample volume of roughly \(2\,\mathrm{cm}\times 2\,\mathrm{cm}\times 2\,\mathrm{cm}\). On PANDA a mounting with only one crystal was used for experiments under magnetic field.
The neutron scattering experiments were conducted at the triple-axis spectrometers 4F and 2T at the Laboratoire Léon Brillouin (LLB) in Saclay, France, at IN20 at the Institut Laue-Langevin (ILL) in Grenoble, France, and at PANDA at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM-II) in Garching, Germany. The time-of-flight data were collected on MERLIN [27] at the ISIS Neutron and Muon Source in Didcot, United Kingdom. The sample was oriented in the \([1,0,0]_{c}\)/\([0,1,1]_{c}\) scattering plane for all scattering experiments. The triple-axis spectrometers were used with focusing pyrolytic graphite crystals as monochromator and analyzer, and scans were performed with a fixed final energy (values of the final neutron wave vector \(k_{f}\) between 1.5 and 1.57 Å\({}^{-1}\) on the cold and 2.662 Å\({}^{-1}\) on the thermal instruments, respectively). Only for the polarized experiment on IN20 did we use polarizing Heusler monochromator and analyzer crystals; in this experiment the neutron polarization was guided at the sample by a large horizontal field of up to 3.8 T. A filter in front of the analyzer (Be filter on PANDA and 4F, pyrolytic graphite filter on 2T) or a velocity selector (on IN20) was used to suppress higher-order scattering. For the 4F and 2T experiments the sample was cooled with a closed-cycle refrigerator, while on PANDA and IN20 cryomagnets were used. On MERLIN the following configurations of incident energy and chopper frequency were used at 10 K: chopper frequency 450 Hz with incoming energy \(E_{i}=180\), 68, 34, and 21 meV, yielding a resolution at the elastic line of 11, 2.5, 1.2, and 0.6 meV, respectively; and chopper frequency 350 Hz with \(E_{i}=120\), 43, and 22 meV, yielding a resolution of 7.5, 1.7, and 0.7 meV, respectively. The sample was rotated by 90\({}^{\circ}\) in 0.5\({}^{\circ}\) steps. Since the energy resolution improves at higher energy transfer, the resolution is below the binning applied in most cases to calculate cuts in the four-dimensional data. At 160 K only the data with the lower chopper frequency were recorded. We used the HORACE program suite to calculate the intensity distribution from the data obtained on MERLIN [28].
Data obtained at IN20 and at MERLIN are available at references 29 and 30, respectively.
## III Results and analysis
### Unpolarized experiments on triple-axis spectrometers
INS determines the magnon signal in four-dimensional \(\mathbf{Q}\)-\(E\) space, and the triple-axis spectrometer allows arbitrarily defined scans. We combine data taken on instruments installed at cold and thermal neutron moderators to cover a broader energy range. Typical neutron scattering data from three different spectrometers measuring the magnon signal are displayed in Fig. 2. Constant energy scans along high-symmetry directions around the ferromagnetic zone center \(\mathbf{Q}=(1,0,0)\) reveal the magnon dispersion as the peak position changes with increasing energy transfer (panels (a) to (c) in Fig. 2). Part of these data were presented in reference 25 focussing on the anomalous temperature dependence of the magnon stiffness and anisotropy gap. Note that the scans cover both sides of the magnetic zone center. Hence, the two peaks appearing in each scan visualize the symmetry of the magnon dispersion. The cold triple-axis spectrometers 4F1 and PANDA with their high energy resolution enable a direct measurement of the magnon gap via a constant \(\mathbf{Q}\) scan at the zone center (Fig. 2(d)). For the data description the MATLAB-based software tool Reslib [31] is used, where a given model cross section \(\mathcal{S}(\mathbf{q},E)\) is convoluted with the instrumental resolution function of the specific instrument and fitted to the data. This procedure enables one to separate the pure excitation-related physics from the effects of the instrumental resolution on the experimental data. The intensity stemming from the magnon is modeled by a Lorentzian \(\mathcal{L}(E)\) with the FWHM \(\gamma\) and the amplitude \(A\), see equation (4). The \(E(\mathbf{q})\) dependence is modeled by the specified dispersion relation. To describe the low-energy data, where the tail of the magnetic and nuclear Bragg peak at the zone center yields inelastic scattering, a Gaussian \(\mathcal{G}(\mathbf{q},E)\) centered at \(\mathbf{q}_{0}=(0,0,0)\) and \(E_{0}=0\) is included in the model cross section.
Figure 1: Coaligned multi-crystal assembly for neutron scattering experiments. The single crystals of nearly cylindrical shape with a diameter of around 4 mm and a length of up to 1.5 cm are individually fixed in aluminium clamps, which are attached to two aluminium rods. This setup enables each crystal to be rotated individually around two axes for easy coalignment.
Figure 3: Low-energy magnon dispersion derived from triple-axis spectrometers. For low energies the magnon energy shows a quadratic dependency on the propagation vector that is given here in absolute units. The \(k\) value is calculated from the individually fitted values of \(2J_{1}S\) (given in Fig. 2(e)). The red line represents the parabolic magnon dispersion determined by equation 6 with the averaged value of \(2J_{1}S\). A similar fit was already presented in reference 25.
Figure 2: **(a)-(c)** Constant energy scans across the magnon dispersion in SrRuO\({}_{3}\) obtained at T=10 K on cold triple-axis spectrometers 4F (LLB) and PANDA (MLZ), and on the thermal spectrometer 2T (LLB). Note that the PANDA data were measured after detwinning the sample with magnetic field. The magnon scattering was modeled following the ferromagnetic dispersion relation including energy broadening (light blue lines) and then folded with the \(\mathbf{Q}\) and \(E\) dependent resolution function (colored lines). Data are vertically offset for clarity. **(d)** Constant \(\mathbf{Q}\) scan at \(\mathbf{Q}=(1,0,0)\) described with the same dispersion relation showing the anisotropy gap at T = 10 K (data taken on 4F). **(e)** The fitting yields a value of \(2J_{1}S\) for each scan along different high symmetry directions. The data are distinguishable by colored background in respect to the instrument and by symbol shape in respect to the cubic direction (circle: \(\Delta\) = [\(\xi\),0,0]; square: \(\Sigma\) = [0,\(\xi\),\(\xi\)]; triangle: \(\Lambda\) = [\(\xi\),\(\xi\),\(\xi\)]). The weighted average of \(2J_{1}S\) = 6.1(2) meV is represented by the red line while the light red area denotes its error margin. Data in gray are not used for the averaging. Data in panels (a) and (b) were already presented in reference 25.
The dispersion relation \(E(\mathbf{q})\) can be derived from the
general Heisenberg Hamiltonian (equations 1 and 2), although the itinerant character of the magnetic order in SrRuO\({}_{3}\) strongly limits the applicability of such a model, as will be discussed below.
\[\mathcal{H} =\mathcal{H}_{\mathrm{SSI}}+\mathcal{H}_{\mathrm{ZFA}}+\mathcal{H}_{\mathrm{Zee}} \tag{1}\] \[=-\sum_{\langle ij\rangle}2J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\sum_{i}K(S_{i}^{z})^{2}-\mu_{B}gB\sum_{i}S_{i}^{z} \tag{2}\]
For the description of spin waves in SrRuO\({}_{3}\) three contributions are considered: (i) the spin-spin interaction [32] with the interaction parameters \(J_{ij}\), (ii) the zero-field single-ion anisotropy parameter \(K\)[33], and (iii) the electron Zeeman term with the Lande factor \(g\) and the external field \(B\) that is set parallel to the magnetization (orthorhombic \(c\) or cubic [0,1,\(\bar{1}\)]\({}_{c}\)). Note that the indices \(i\) and \(j\) represent different spins. For zero external field \(\mathbf{B}=0\) and only nearest neighbor interaction \(J_{1}\), the dispersion relation for a ferromagnet with a cubic lattice [34] is given in equation (3). The anisotropy parameter \(K\) results in a finite magnon gap \(\Delta\) at \(\mathbf{q}=\mathbf{0}\).
The magnon dispersion, the Lorentzian distribution and the scattering function are given by:
\[E_{\mathbf{q}} =\Delta+2J_{1}S\big{[}6-2\sum_{i=x,y,z}\cos(2\pi q_{i})\big{]} \tag{3}\] \[\mathcal{L}(E) =\frac{A}{2\pi}\frac{\gamma}{(E-E_{\mathbf{q}})^{2}+\gamma^{2}}\] (4) \[\mathcal{S}(\mathbf{q},E) =\mathcal{G}(\mathbf{q},E)+\mathcal{L}(\mathbf{q},E)\cdot(n_{E}+1), \tag{5}\]
where \(n_{E}=(\exp(E/k_{B}T)-1)^{-1}\) denotes the Bose population factor.
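For illustration, the model cross section of equations (3)-(5) is simple to evaluate numerically. The following minimal Python sketch (not part of the published analysis) omits the elastic Gaussian \(\mathcal{G}\) and the resolution convolution performed by Reslib, and uses the roughly 40% relative energy width quoted in the discussion below as a stand-in for the fitted Lorentzian widths:

```python
import numpy as np

# Fit parameters reported in the text (energies in meV, q in r.l.u.):
DELTA = 0.94      # anisotropy gap
TWO_J1S = 6.1     # nearest-neighbor exchange parameter 2*J1*S
KB = 0.08617      # Boltzmann constant in meV/K
T = 10.0          # temperature in K

def dispersion(q):
    """Ferromagnetic magnon dispersion, equation (3)."""
    q = np.asarray(q, dtype=float)
    return DELTA + TWO_J1S * (6.0 - 2.0 * np.cos(2.0 * np.pi * q).sum())

def lorentzian(E, E_q, gamma, A=1.0):
    """Damped magnon line shape, equation (4)."""
    return (A / (2.0 * np.pi)) * gamma / ((E - E_q) ** 2 + gamma ** 2)

def cross_section(q, E, rel_width=0.4, A=1.0):
    """Model S(q,E) of equation (5), without the elastic Gaussian.

    rel_width = 0.4 mimics the ~40% intrinsic energy broadening quoted in
    the text; the Bose factor (n_E + 1) weights the magnon signal.
    """
    E_q = dispersion(q)
    n_E = 1.0 / (np.exp(E / (KB * T)) - 1.0)
    return lorentzian(E, E_q, rel_width * E_q, A) * (n_E + 1.0)

print(dispersion([0.0, 0.0, 0.0]))          # 0.94 meV: the magnon gap
print(cross_section([0.1, 0.0, 0.0], 3.0))  # intensity off the zone center
```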
The constant-\(\mathbf{Q}\) scan at the zone center (Fig. 2(d)) can be well described with the dispersion model and yields a value of \(\Delta\)=0.94(3) meV at 10 K for the magnon gap in SrRuO\({}_{3}\). This gap is a manifestation of the single-ion anisotropy of Ru where spin-orbit coupling leads to a preferred alignment of spins along a certain crystallographic direction (easy axis parallel to orthorhombic \(c\)). Its size is in agreement with the anisotropy field of \(\approx\)10 T determined by magnetization measurements [14] and with the energy of the ferromagnetic resonance of \(\approx\) 250 GHz [35]. However, reproducing the measured scans requires a sizable intrinsic broadening of the magnon signal, amounting to roughly 40% of the magnon
energy. The twinning in the SrRuO\({}_{3}\) crystals can imply some broadening, since this superposes different directions of the orthorhombic lattice. The pseudo-cubic direction \([1,0,0]_{c}\) is parallel to the long orthorhombic axis \(b\) (in \(Pnma\)) of one twin orientation and parallel to the in-plane diagonals \([1,0,\pm 1]_{o}\) for the other orientations [14]. At low temperatures the splitting of the orthorhombic lattice constants renormalized to the pseudocubic ones amounts to less than 0.6% [8]. Therefore, the twinning related superposition of scattering vectors does not account for the large effects observed unless the magnon dispersion becomes very anisotropic, which appears unlikely. The sizable intrinsic broadening of the magnon results most likely from the coupling to electron-hole excitations, called Landau damping [37; 38], in agreement with the kink of the electric resistivity at the ferromagnetic transition [7; 9]. In addition, non-linear scattering with spin excitations can limit the lifetimes of the magnons [39; 40]. The temperature dependent INS data in reference [25] and the discussion below indicate further enhanced broadening at higher temperatures.
As mentioned before, the peak positions in the constant \(E\) scans are determined by the \(2J_{1}S\) parameter which is fitted in the analysis of each scan. Fig. 2(e) shows the resulting \(2J_{1}S\) values for each scan separated for the three instruments (background color) and the different high symmetry directions (symbol). Unfortunately the fitting of all scans in a multi-fit routine using one set of generalized fitting parameters is not feasible with the software tool used, due to the individual backgrounds of the different instruments. Therefore the results of fitting all scans are averaged and yield a general \(2J_{1}S\) of 6.1(2) meV. This translates to a spin stiffness in SrRuO\({}_{3}\) of \(D=94.2\pm 3.0\) meVÅ\({}^{2}\). This averaged spin stiffness describes the low-energy data of all instruments reasonably well, as one can see in the \(E\)-\(k\) dependency for the magnon signal in Fig. 3. Here the \(k\) values are calculated from the fitted \(2J_{1}S\) of each individual scan [given in Fig. 2(e)] following the quadratic relation (6) and plotted against the energy. This differs from the determination of the spin stiffness \(D\) in reference [25], where \(D\) results from the approximated quadratic dispersion model for small \(q\) fitted to the \(E\)-\(k\) data extracted from the constant energy scans. The coupling constant \(J_{1}\) can be estimated to be 3.8(1) meV by determining the spin \(S\)=0.8 from the saturation magnetization of 1.6 \(\mu_{B}\)/Ru [14] with a \(g\) factor of 2 [41]. This value is close to the expected spin for a low-spin state stabilized through the strong splitting of \(t_{2g}\) and \(e_{g}\) orbitals [42].
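The arithmetic behind these numbers can be retraced in a few lines. Expanding equation (3) for small \(q\) gives \(E\approx\Delta+2J_{1}S\,(2\pi q)^{2}\) in reduced lattice units, i.e. \(D=2J_{1}S\,a^{2}\) in absolute units. The short sketch below assumes the pseudocubic lattice constant \(a\approx 3.93\) Å, a value not quoted in this section:

```python
# Small-q limit of equation (3): E ~ Delta + 2*J1*S*(2*pi*q)^2 (q in r.l.u.),
# i.e. D = 2*J1*S * a^2 in absolute units.
two_J1S = 6.1    # meV, weighted average from Fig. 2(e)
a = 3.93         # Angstrom; ASSUMED pseudocubic lattice constant of SrRuO3

D = two_J1S * a ** 2
print(f"D  = {D:.1f} meV Angstrom^2")   # ~94.2, as quoted in the text

# Exchange constant from S = 0.8 (1.6 mu_B/Ru with g = 2):
S = 0.8
J1 = two_J1S / (2.0 * S)
print(f"J1 = {J1:.2f} meV")             # ~3.8 meV
```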
A possibility to directly measure the \(g\) factor is the magnetic field dependency of the magnon gap. Applying an external magnetic field adds a Zeeman term to the Hamiltonian (equation (2)) and produces a constant shift of the magnon dispersion. We therefore extend the dispersion relation (3):
\[E_{\mathbf{q}}=\Delta+2J_{1}S\Big{[}6-2\sum_{i=x,y,z}\cos(2\pi q_{i})\Big{]}+g \mu_{B}B. \tag{7}\]
The magnon energy at the zone center \(\mathbf{q}=0\) increases linearly with the magnetic field \(B\) applied along orthorhombic \(c\) or cubic [0,1,\(\bar{1}\)]\({}_{c}\). The magnetic field dependency of the magnon signal was studied on PANDA with a single sample crystal. Figure 4 displays the constant \(\mathbf{Q}\) scans measured at the ferromagnetic zone center \(\mathbf{Q}=(0,1,1)\), indicating that the magnon gap increases with increasing field. The gap value determined by the Lorentzian peak position of the fit indeed exhibits a linear dependence on the external field (see inset of Fig. 4). The slope \(m\) of the linear fit is equal to \(g\mu_{B}\), yielding a value of \(g\)=1.78(5) [41].
Usually the \(g\) factor in \(4d\) transition-metal oxides with a high crystal-field splitting like in SrRuO\({}_{3}\) is assumed to consist mainly of the spin contribution \(g_{S}=2\) because the orbital moment is quenched. However, sizable spin-orbit coupling can partly recover the orbital moment yielding a \(g\) factor deviating from 2 [42]. The orbital moment of SrRuO\({}_{3}\) is found by x-ray magnetic circular dichroism to be very small [43; 44]. Okamoto _et al._ reported an orbital moment of 0.04(4) \(\mu_{B}\)[43], and Agrestini _et al._ determined \(L_{z}/2S_{z}\) ratios of 0.01 with \(L_{z}=0.01(1)\,\mu_{B}\)[44]. These experimental reports are supported by DFT calculations which obtain an orbital moment three orders of magnitude smaller than the spin moment [45]. Our INS determination of the \(g\) factor is consistent with a small but finite orbital moment.
The strength of the magnetic interaction parameters is of the same order as in insulating Ca\({}_{2}\)RuO\({}_{4}\)[46]. Anisotropic magnetic interaction parameters in SrRuO\({}_{3}\) were determined by density functional theory but the agreement with the experimental stiffness and with the magnon gap is poor [47]. The latter calculations also determine the Dzyaloshinskii-Moriya interaction, which, however, has very little impact on the magnon dispersion. A precise measurement of the canting angle of the ferromagnetic moments in SrRuO\({}_{3}\) would be better suited to determine this interaction experimentally.
Time-resolved magneto-optical Kerr effect measurements on SrRuO\({}_{3}\) thin films also quantify the linear field dependence of the ferromagnetic resonance [35]. They report a slope of \(\approx 17\,\frac{\mathrm{GHz}}{\mathrm{T}}\), which corresponds to 0.07 \(\frac{\mathrm{meV}}{\mathrm{T}}\) and thus an even smaller \(g\) factor of 1.21. The magnetization of SrRuO\({}_{3}\) amounts to \(\mu_{0}M\)=0.31 T and thus demagnetization effects cannot explain such a large deviation at high magnetic fields. In our experiment on PANDA the zero-field result was measured after cooling the sample in a strong field, yielding a similar macroscopic magnetization as at high field, so that the demagnetization corrections are roughly the same at all fields. The much smaller slope of the optical experiment [35] must stem from the fact that the external field is not applied parallel to the easy axis of the ferromagnetic phase. Therefore, the external field at least partially acts against the local
anisotropy, and the slope of the resonance does not correspond to \(g\mu_{B}\).
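The unit bookkeeping behind this comparison is summarized in the short sketch below; the constants are rounded standard values (\(\mu_{B}\approx 0.0579\) meV/T, 1 GHz \(\approx 4.136\times 10^{-3}\) meV):

```python
MU_B = 0.05788          # Bohr magneton in meV/T
GHZ_TO_MEV = 4.1357e-3  # 1 GHz in meV (Planck constant)

# INS result: gap-vs-field slope m = g * mu_B with g = 1.78(5):
g_ins = 1.78
print(f"INS slope:  {g_ins * MU_B:.3f} meV/T")          # ~0.103 meV/T

# Kerr-effect result of reference [35], ~17 GHz/T:
slope_kerr = 17.0 * GHZ_TO_MEV
print(f"Kerr slope: {slope_kerr:.3f} meV/T, g = {slope_kerr / MU_B:.2f}")  # g ~ 1.21
```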
### Unpolarized experiments on the time-of-flight spectrometer Merlin
The investigation of the magnon dispersion using triple-axis spectrometers becomes increasingly difficult for high energies, where phonon contributions and spurious signals appear. The time-of-flight technique can deliver a complete picture of the Brillouin zone with its different excitations. It uses higher initial energies than the triple-axis spectrometers, which enhances the access in \(\mathbf{Q}\)-\(E\) space to lower \(Q\) values and thus favors the observation of magnetic signals due to the form factor. The magnon dispersion of SrRuO\({}_{3}\) was studied using the time-of-flight spectrometer Merlin at the ISIS Neutron and Muon Source. The time-of-flight technique enables one to collect data simultaneously for several incident energies \(E_{i}\). This creates comparable data sets with different energy resolution and range. The presented data are taken from the data sets with an incident energy of 22, 43, and 68 meV since they yield the clearest picture of the magnon signal.
The magnon dispersion can be visualized by two-dimensional cuts through the four-dimensional \(\mathbf{Q}\)-\(E\) space. Two different representations of the magnon dispersion are used: (i) constant \(E\) cuts of the scattering plane \([\xi,0,0]/[0,\xi,\xi]\), which represent horizontal cuts through the \(q\)-\(E\) dispersion parabola (see Fig. 5), and (ii) constant \(\mathbf{Q}\) cuts, which display the \(q\)-\(E\) dependency of the magnon signal along one of the cubic high-symmetry directions (see Fig. 6 and 7). The data are integrated in the vertical direction \([0,\xi,-\xi]\) by \(\pm 0.1\) r.l.u and in energy by \(\pm 2\) meV. To optimize the presentation of the magnon dispersion the different energies are taken from different incident energies \(E_{i}\). Fig. 5(a) and (b) result from the data with \(E_{i}=22\) meV, (c) and (d) are taken from the data with \(E_{i}=43\) meV and panels (e)-(g) display the data with \(E_{i}=68\) meV. For these maps around the magnetic Bragg peaks symmetrization yields slightly better statistics, while the variation of the background strongly affects the symmetrization of \(q\)-\(E\) maps, so that we refrained from it. The branches of the magnon dispersion are not clearly visible in the data with the highest incident energy \(E_{i}=180\) meV, and the \(E_{i}=120\) meV data are only useful at higher energy transfer. In the two-dimensional constant \(E\) cuts the magnon exhibits a ring shape whose radius increases with increasing energy, indicating the dispersion of the magnon. Fig. 6 displays the two-dimensional \(Q\)-\(E\) cuts along the three cubic high-symmetry directions where the dispersion parabola of the magnon becomes visible. The data taken with \(E_{i}=120\) meV extend to higher energy transfer and a zoom on this high-energy range is shown in Fig. 7. Note that the data are always integrated by \(\pm 0.1\) r.l.u in the two corresponding perpendicular directions. The low-energy part is dominated by the tail of the elastic scattering, which is visible by the bright area for all \(\xi\). The expansion of the elastic scattering into the inelastic regime depends on the energy resolution and increases therefore
Figure 5: Magnon dispersion measured by time-of-flight technique. Constant \(E\) cuts at 10 K of the [\(\xi\),0,0]/[0,\(\xi\),\(\xi\)] plane for different energies display the ring shape of the magnon signal. The diameter of the ring increases with energy indicating the magnon dispersion. The axis length ratio of 1:\(\sqrt{2}\) reflects the geometrical factor. The low-energy data at 5 and 10 meV (panels (a) and (b)) are taken with the incident energy of 22 meV while the data at 15 and 20 meV ((c),(d)) are taken with the incident energy of 43 meV and the data at 25, 30, and 35 meV ((e)-(g)) are taken with the incident energy of 68 meV. The used integration limits for these maps are \(-0.1\leq\eta\leq 0.1\) in \([1,-\eta,\eta]\) and \(E\pm 2.5\) meV. The time-of-flight data are overlaid with the ferromagnetic dispersion model with only nearest-neighbor interaction \(J_{1}\) taken from the analysis of the triple-axis spectrometer data (dashed line) and with the combination of nearest-neighbor and \(J_{4}\) interaction (black lines).
with the incident energy. The magnon signal is clearly visible as a parabola in its \(q\)-\(E\) dependency. Phonon contributions and how they disperse can be seen for example in Fig. 6(f),(i) at \(\mathbf{Q}=(1.5,0.5,0.5)\) around \(E=15\) meV. The data with higher incoming energy show that there are two phonon modes in this energy range, at 11.5 and 19 meV. All \(\mathbf{Q}\) versus \(E\) intensity maps show a flat intensity at 20 meV that, however, can be safely attributed to a phonon as it is observed with enhanced intensity at \(\mathbf{Q}\)=(2,1,0) and (3,0,0) and as it is found in the non-spin-flip channel in the polarized experiment performed on the IN20 spectrometer, see below. In some of the plots one also sees a weak signal at \(\mathbf{Q}\)=(1,0,0) at 12 meV, which however seems to stem from a phonon branch with essentially flat dispersion along (1,\(\xi\),\(\xi\)) which can leak to (1,0,0) due to resolution and integration effects.
The data in Fig. 6 suffer from heavy phonon contaminations around \(\mathbf{Q}=(1.5,0,0)\) which can easily be mistaken for the magnon signal. This phonon contamination is also clearly visible in the in-plane scattering where it appears as intense scattering at the zone corners \(\mathbf{Q}=(1.5,0.5,0.5)\) and \(\mathbf{Q}=(1.5,-0.5,-0.5)\) for \(E=15\) meV (Fig. 5(c)). It disperses inwards and is visible as a strong broad signal at \(\mathbf{Q}=(1.5,0,0)\) and \(E=25\) meV (Fig. 5(e)). The phonon dispersion study for SrRuO\({}_{3}\) indeed reveals a phonon at the equivalent position \(\mathbf{Q}=(2.5,0,0)\) and the energy \(E=25\) meV [48].
To compare the results of the time-of-flight measurement with the triple-axis spectrometer results the theoretical dispersion according to the Heisenberg model of a ferromagnet is overlaid on the experimental data. Firstly the model in equation 3 with only nearest-neighbor coupling \(2J_{1}S\) and the anisotropy gap \(\Delta_{mag}\) determined by the triple-axis experiments is compared with the time-of-flight data. It is obvious that this model (black dashed line in Figures 5 and 6) only describes the low energy part of the dispersion. Note that the triple-axis spectrometer data only cover energies below 16 meV. In general the simple nearest-neighbor model underestimates the magnon stiffness at high energy as the experimental parabolas become tighter and the rings smaller than what is expected with this most simple model.
The underestimation of the higher magnon energies is best seen in the [\(\xi\),0,0] direction, see Fig. 5, 6, and 7. In
Figure 7: High-energy data taken on the time-of-flight spectrometer Merlin with an incoming energy of 120 meV. The upper panels present the intensity maps of energy versus \(\mathbf{Q}\) vector obtained at 10 K and the lower panels the data taken at 160 K.
Figure 6: Magnon dispersion measured by time-of-flight technique. Constant \(\mathbf{Q}\) cuts at 10 K along the cubic high-symmetry directions around \(\mathbf{Q}=\) (1,0,0) show the magnon dispersion. The panels are sorted in rows where each row represents the data of a certain incident energy. From the top to the bottom the data are taken with 22 meV (panels (a)-(c)), 43 meV (panels (d)-(f)), and 68 meV (panels (g)-(i)), respectively. The intensity range (colorbar) is set identical in the panels of each row for better comparability. The integration limits are \(-0.1\leq\eta\leq 0.1\) in \([1,-\eta,\eta]\) and \(-0.1\leq\zeta\leq 0.1\) in the direction perpendicular to the respective high-symmetry direction. The time-of-flight data are overlaid with the ferromagnetic dispersion model with nearest-neighbor interaction \(J_{1}\) (dashed line) and with the combination of nearest-neighbor and \(J_{4}\) interaction (black lines).
order to obtain a better description we add interaction parameters to further distant neighbors in the primitive cubic lattice. Since the magnon stiffness of the quadratic dispersion is very well determined by the triple-axis experiments we kept this value fixed. Adding the next-nearest-neighbor interaction \(J_{2}\) between two Ru ions at \(\sqrt{2}a\) however does not modify the [\(\xi\),0,0] dispersion under the constraint of constant stiffness. The same holds for the next-next-nearest shell, \(J_{3}\), at a distance \(\sqrt{3}a\). Only with a negative (antiferromagnetic) value of the fourth-neighbor interaction \(J_{4}\) between Ru ions at a distance of \(2a\) we can model the steepening of the [\(\xi\),0,0] dispersion at higher energy. The dispersion of this \(J_{1}\)-\(J_{4}\) model is given in equation 8.
\[\begin{split} E_{\mathbf{q}}=\Delta_{mag}&+2J_{1}S \Big{[}6-2\sum_{i}\cos(2\pi q_{i})\Big{]}\\ &+2J_{4}S\Big{[}6-2\sum_{i}\cos(4\pi q_{i})\Big{]}\end{split} \tag{8}\]
The coupling term \(2J_{4}S\) is determined by fitting the (\(\xi\),0,0) values extracted from the constant energy cuts with the constraint of fixed magnon stiffness. The best agreement is achieved for the values given in Tab. 1. The modified model also better describes the parabolas in Fig. 6 and 7 although the difference between the models is small for the displayed energy region in [0,\(\xi\),\(\xi\)] and [\(\xi\),\(\xi\),\(\xi\)] direction.
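As an illustration of the constraint, the small-\(q\) expansion of equation (8) gives a stiffness term proportional to \(2J_{1}S+4\cdot 2J_{4}S\), so the \(J_{1}\)-\(J_{4}\) model reproduces the measured stiffness as long as this combination stays close to the fitted \(2J_{1}S=6.1\) meV. A minimal sketch using the values of Table 1 (with \(S=0.8\)):

```python
import numpy as np

S = 0.8
two_J1S = 2.0 * 5.9 * S     # J1' =  5.9 meV from Table 1 ->  9.44 meV
two_J4S = 2.0 * (-0.5) * S  # J4' = -0.5 meV from Table 1 -> -0.80 meV

def E_q(q, delta=0.94):
    """J1-J4 magnon dispersion of equation (8), q = (qx,qy,qz) in r.l.u."""
    q = np.asarray(q, dtype=float)
    return (delta
            + two_J1S * (6.0 - 2.0 * np.cos(2.0 * np.pi * q).sum())
            + two_J4S * (6.0 - 2.0 * np.cos(4.0 * np.pi * q).sum()))

# Small-q stiffness of the J1-J4 model: proportional to two_J1S + 4*two_J4S.
print(two_J1S + 4.0 * two_J4S)   # ~6.2 meV, close to the fixed 2*J1*S = 6.1 meV

# The negative (antiferromagnetic) J4 steepens the dispersion at larger q:
for xi in (0.1, 0.2, 0.3, 0.4):
    print(xi, round(E_q([xi, 0.0, 0.0]), 2))
```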
While the additional parameter yields a qualitative description of the stiffening of the dispersion at higher energies, it appears more likely that the physical mechanism for this effect is different. The itinerant character of the magnon dispersion limits the applicability of the model of local-moment interactions. Unfortunately the data quality is too limited at higher energies for a deeper analysis.
In a metallic ferromagnet the spin-wave dispersion following the Heisenberg model of localized moments is cut off at a finite energy above which magnetic excitations become electron-hole pair excitations between bands of opposite spin, the so-called Stoner continuum [13]. The occurrence of these Stoner excitations in \(\mathbf{Q}\)-\(E\) space can be complex since the band structure in SrRuO\({}_{3}\) has multiple bands with changing band splittings throughout the Brillouin zone. The signature of Stoner excitations in neutron scattering is a broadening of the spin-wave excitations
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\Delta_{mag}\) & \(D\) & \(2J_{1}S\) & \(J_{1}\) & \(J_{1}^{\prime}\) & \(J_{4}^{\prime}\) \\ [meV] & [meVÅ\({}^{2}\)] & [meV] & [meV] & [meV] & [meV] \\ \hline
0.94(3) & 94(2) & 6.1(2) & 3.8(1) & 5.9 & -0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Magnetic model parameters of SrRuO\({}_{3}\) determined by inelastic neutron scattering. The isolated interaction parameters are calculated from the values \(2J_{i}S\) determined by the magnon dispersion by assuming \(S\)=0.8. \(J_{i}^{\prime}\) denote the values of the model with nearest and fourth-nearest neighbor interaction that keeps the magnon stiffness unchanged.
Figure 8: Constant \(E\) cuts of the [\(\xi\),0,0]/[0,\(\xi\),\(\xi\)] plane display the ring shape of the magnon signal. The magnon signal in the ferromagnetic phase at 10 K ((a)-(e)) is compared with the data at 160 K ((f)-(g)). The intensity range (colorbar) is the same for each energy, and the axis length ratio of 1:\(\sqrt{2}\) reflects the geometrical factor. The low-energy data at 5 and 10 meV are taken with the incident energy of 22 meV while the higher-energy data are taken with \(E_{i}\)=43 meV. Data are overlaid with the magnon dispersion calculated with only \(J_{1}\) (dashed lines) and with the combination of \(J_{1}\) and \(J_{4}\) (black lines).
while their intensity decreases rapidly for increasing energy as they enter the continuum [49]. Indeed, especially in the data taken with \(E_{i}=68\) meV, the intense magnon scattering seems to be reduced above 25 meV [see Fig. 6(h),(i)]. Above this energy some magnon scattering persists but its intensity is significantly lower. The same behavior is seen in Fig. 5(f) and (g), where the ring shaped magnetic scattering is significantly weaker at 30 meV and also seems to be broadened. Nevertheless the scattering is still structured, as the ring shape is clearly visible. For the analysis of the high-energy range, the data taken with the incoming energy of 120 meV are informative, see Fig. 7. The extension of the magnon dispersion up to at least \(\sim\)35 meV is unambiguous although the signal remains weak. These high-energy data also confirm the steepening of the dispersion compared to a simple next-neighbor Heisenberg model. The time-of-flight data of a three-dimensional material like SrRuO\({}_{3}\) suffer from the fact that it is not possible to fully integrate over one dimension as is done for example in two-dimensional layered materials. Nevertheless it is possible to identify spin-wave excitations up to an energy of \(\sim\)35 meV.
The high-energy suppression of the magnon signal strength and the pronounced broadening strongly disagree with the simple local-moment picture and underline the itinerant character of the ferromagnetic order in SrRuO\({}_{3}\). In the elemental and other simple ferromagnets similar suppression of intensity and reduced magnon lifetimes were observed in experiment [50; 51; 52; 10] and in density-functional theory calculations [53; 37; 38; 10]. The interaction with the Stoner continuum, which can be rather complex in a multiorbital system like SrRuO\({}_{3}\), causes strong Landau damping and impacts the intensity. Similar effects are also discussed in iron- or copper-based superconductors [54; 55], but spin fluctuations possess an antiferromagnetic character in these materials.
Following density-functional theory [53; 10] one expects the continuum of magnetic excitations to soften considerably with heating and with the associated reduction of the magnetization. At low energies, the magnon dispersion in SrRuO\({}_{3}\) however changes only slightly up to 160 K and even up to 280 K when inspecting the single 8 meV scans shown in reference [25]. This strongly supports the persistence of local magnetization and exchange splitting well above the ferromagnetic phase transition.
### Polarized experiments with a horizontal magnetic field
Polarized INS experiments on a ferromagnetic material suffer from the depolarization of the neutron beam that is induced by domains and stray fields. Maintaining a good neutron polarization is experimentally challenging and requires a large guide field to align domains and to overrule any stray fields of the sample magnetization. In a previous polarized INS experiment on SrRuO\({}_{3}\) on a cold triple-axis spectrometer the feasibility of polarized experiments was demonstrated but these experiments focused on the chirality of the zone-center magnons [56]. In a usual ferromagnet the chirality of this excitation is determined by the right-handedness of the commutation rules for the components of a spin operator, but it was proposed that the strong spin-orbit coupling in SrRuO\({}_{3}\) may result in left-handed excitations [19]. The experiment, however, finds perfect right-handedness [56] well in the ferromagnetic phase.
With the polarized INS experiment on the thermal spectrometer IN20 we wanted to search for longitudinal modes, i.e. modes with an oscillating moment parallel to the static magnetization, in contrast to the transversal character of the magnon modes corresponding to a precession of the moments around the static magnetization. Longitudinal spin excitations were theoretically deduced from random-phase-approximation calculations [53; 10; 39] but experimental studies are limited to a few systems and to temperatures close to the magnetic transition, where the longitudinal response corresponds to critical scattering [57; 58]. In our experiment on SrRuO\({}_{3}\), the polarization analysis also yields a better separation of magnetic and phonon contributions at higher temperature, where the magnetic response becomes very broad. The large sample was mounted in a horizontal magnet cryostat which allows a field of up to 3.8 T to be applied. In order to avoid quenching the magnet we stayed slightly below this value and applied a magnetic field of 3.5 T along the [1,1,0] direction, which together with [0,0,1] spans the scattering plane. Most of the experiment was performed by using only the flipper between sample and analyzer, whose currents had to be adapted to the stray fields of the horizontal magnet at the flipper position, which depend on the angle between the field and the outgoing beam. The flipper between the monochromator and the sample was only used to verify the polarization at a few points in **Q**-E space. The flipping ratios measured at the two Bragg reflections (1,1,0) and (0,0,2) amount to 21.4 and 21.6, respectively, in the paramagnetic state at 230 K. However, the quality of the neutron polarization diminishes considerably upon cooling into the ferromagnetic state. At the temperatures of 170, 120 and 10 K we find the values 19.8[18.8], 11.6[6.8] and 8.3[4.9] at the reflection (1,1,0)[(0,0,2)]. The reduction of the polarization quality is more severe at the (0,0,2) reflection, for which the magnetic field is perpendicular to the scattering vector, so that stray fields of the sample magnetization are more harmful. For a magnetic field of only 1 T the flipping ratio is even more rapidly suppressed, to 5.8 measured for (0,0,2) at 160 K. At 10 K and 3.5 T the flipping ratio was also studied on a phonon at (2,2,0.2), yielding a flipping ratio of 8.0 in good agreement with a measurement of the (1,1,0) Bragg reflection. Clearly, polarization can be maintained in the ferromagnetic state of SrRuO\({}_{3}\), but a careful correction of the reduced flipping ratios is required and was applied to all data shown in Figures 10 and 11.
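The text does not spell out the flipping-ratio correction; a common form, based on a simple two-channel leakage model in which a fraction \(1/(R+1)\) of each channel leaks into the other, is sketched below. The actual correction applied to the data may differ in detail:

```python
def correct_flipping_ratio(nsf_meas, sf_meas, R):
    """Correct NSF/SF intensities for a finite flipping ratio R.

    Leakage model: measured_NSF = (R*NSF + SF)/(R+1) and vice versa,
    which reproduces R = NSF/SF on a purely nuclear Bragg reflection.
    Inverting this model gives the standard correction formulas below.
    """
    nsf = (R * nsf_meas - sf_meas) / (R - 1.0)
    sf = (R * sf_meas - nsf_meas) / (R - 1.0)
    return nsf, sf

# Example with the flipping ratio 4.9 found at (0,0,2) at 10 K and 3.5 T:
print(correct_flipping_ratio(nsf_meas=100.0, sf_meas=30.0, R=4.9))
```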
The horizontal magnet imposes severe restrictions on the accessible angles and it is fixed to the [1,1,0] direction and thereby imposes the direction of the neutron polarization at the sample. Therefore, it is not possible to measure the scattering in the usual \(x\),\(y\),\(z\) directions of polarized neutron experiments [59], but for **Q**=(0,0,1) and (0,0,2) we may only study the spin-flip (SF) \(y\) and
Figure 10: Polarized neutron scattering results for energy scans at the (0,0,2) Bragg point of SrRuO\({}_{3}\) performed on the IN20 triple-axis spectrometer. Panels (a) and (c) show the NSF and SF intensities corrected for the finite flipping ratios determined at the respective temperature and magnetic field. In panel (b) the NSF data are corrected for the Bose factor and the inset shows a zoom on low energies. A small constant background is subtracted from the SF data and a correction for the Bose factor is applied in panel (d).
non-spin-flip (NSF) \(y\) channels (i.e. polarization direction perpendicular to the scattering vector but lying within the scattering plane) and for **Q**=(1,1,0) only SF \(x\) and NSF \(x\) (i.e. polarization parallel to the scattering vector). By cooling in a finite field we obtain a nearly monodomain magnetic state with the orthorhombic \(c\) direction, the easy axis of magnetic order in SrRuO\({}_{3}\), aligned parallel to the magnetic field. Since INS only senses magnetic components perpendicular to the scattering vector and since a neutron spin-flip requires a magnetic component perpendicular to the polarization direction, we obtain the following selection rules: For **Q**=(0,0,1) and (0,0,2), the NSF signal contains the nuclear scattering and the magnetic excitations polarized parallel to the static magnetization, i.e. longitudinal excitations, and the SF scattering senses magnetic excitations polarized perpendicular to the magnetization, i.e. transverse magnetic excitations. For **Q**=(1,1,0) the NSF signal contains only the nuclear scattering while the SF signal contains twice the transverse magnetic excitations. Here we assume that the magnetic excitations in the two transverse channels are identical. Due to the angle constraints it was not possible to study the low-energy response at (0,0,1) but we had to go to (0,0,2) where the magnetic formfactor already reduces the signal strength.
Fig. 10 presents the energy scans obtained at the (0,0,2) Bragg peak. The NSF signal at low energy is fully dominated by the phonon scattering arising around the strong Bragg reflection. But the correction for the Bose factor presented in Fig. 10 (b) reveals an extra weak magnetic scattering near 2 to 3 meV appearing at the temperatures where the spontaneous part of the magnetization most strongly increases. Note that the finite fields of 1 and 3.5 T suppress the sharp transition as the symmetry is already broken by the magnetization induced through the magnetic field, see reference [56]. This extra scattering visible in Fig. 10 at temperatures near the transition can be attributed to critical longitudinal scattering near the emergence of the ordered magnetic moment. This longitudinal signal fully agrees with polarized neutron scattering measurements on ferromagnetic Ni and EuS [57; 58]. However, deep in the ferromagnetic state there is no evidence for a longitudinal excitation up to 12 meV and also the uptake of intensity in the NSF channel at finite temperature most likely stems from phonon and multiphonon processes. The SF channel at (0,0,2) detects transverse excitations (polarized along orthorhombic \(a\)) but again there is no evidence for such scattering at low temperature. Since the scan path is exactly at the zone center, this is consistent with the usual picture that the parabolically dispersing magnon is the only signal below the continuum. However, such transverse magnon scattering appears at higher temperatures and is consistent with the observations at the other two studied Bragg peaks.
The polarized energy scans at the other two Bragg peaks are shown in Fig. 11. At **Q**=(1,1,0) the NSF scattering is entirely due to nuclear scattering and measures the phonon and multiphonon processes. There is a very strong phonon at 20 meV that was also seen in the time-of-flight data taken on Merlin discussed above. The SF scattering at (1,1,0) is flat at low temperature, indicating the absence of magnetic scattering in agreement with the magnon dispersion. However, this SF scattering considerably increases with increasing temperature, which reflects the above discussed enhancement of the widths of the magnon signal. The \(Q\) space in the center of the ferromagnetic magnon dispersion gets progressively filled with increasing temperature, so that the character of the scattering changes from magnon to paramagnon like. A consistent observation is also made in the SF scattering at **Q**=(0,0,1), see Fig. 11 (e,f). The fact that at (1,1,0) we see two transversal magnetic channels is compensated by the square of the Ru formfactor that is about twice as large at (0,0,1). At low temperature there is a finite signal in the NSF channel at (0,0,1) that can be safely ascribed to the 20 meV phonon which has been also observed at (0,0,3) where the \(Q^{2}\) factor strongly enhances the signal. So there is no evidence for a longitudinal mode up to \(\sim\)20 meV. This agrees with the magnon dispersion extending to at least 35 meV as one would expect longitudinal excitations to be strongly suppressed deep in the ferromagnetic phase. Furthermore, recent ARPES studies indicate that the exchange-induced band energy splitting is rather large in SrRuO\({}_{3}\), of the order of 120 meV [12], which also implies a larger energy scale for the Stoner continuum and longitudinal modes.
The SF scattering at (1,1,0) has been also measured for negative energy transfer, see Fig. 11 (b), where close
Figure 11: Polarized neutron scattering results for energy scans at the (0,0,1) and (1,1,0) Bragg points of SrRuO\({}_{3}\) performed on the IN20 triple-axis spectrometer. Panels a(d) and b(e) show the NSF and SF intensities corrected for the finite flipping ratios for the two **Q** values. A small constant background is subtracted from the SF data and a correction for the Bose factor is applied in panels c and f.
to the onset of spontaneous magnetization a strong signal appears at a few meV that has no counterpart at positive energy transfer. This is due to the chirality of the zone-center magnon, as discussed in detail in reference [56]. The Heusler polarizing monochromator and analyzer crystals transmit the neutron polarization antiparallel to the guide field. Therefore, the spin-flip process with the flipper between sample and analyzer turned on and the first flipper turned off corresponds to a scattering from antiparallel to parallel neutron polarization. A right-handed mode, however, requires the opposite for a positive energy transfer but becomes visible at negative energy transfer, as seen in Fig. 11 (b). This experiment confirms the perfect right-handedness of the magnon in SrRuO\({}_{3}\)[56].
## IV Discussion and Conclusions
The combined INS study of the magnetic excitations in the ferromagnetic state of SrRuO\({}_{3}\) does not reveal the Stoner continuum expected for an itinerant system. This can be attributed to the still limited energy range for which reliable INS data could be obtained. The magnon modes can be followed up to \(\sim\)35 meV but already above 25 meV the signal becomes strongly reduced. In view of the recent ARPES study determining the band-energy splitting to be 120 meV one may expect the Stoner continuum at comparable energy scales and thus the strongest effects even above the accessible energy range of our experiment. Due to the multi-orbital nature of the electronic band structure the crossover from magnon to Stoner excitations can be more complex in SrRuO\({}_{3}\). The limited ordered moment in SrRuO\({}_{3}\) combined with a rapidly decreasing magnetic form factor hampers neutron scattering studies, so that considerable efforts are needed to cover higher energies. So far, the polarized INS experiments cannot detect any evidence for longitudinal modes for energies below \(\sim\)20 meV in the ferromagnetic state.
The most remarkable feature of the magnetic excitations in SrRuO\({}_{3}\) concerns the anomalous temperature dependence of the magnon stiffness and of the gap, which both harden upon heating [20; 25]. These anomalous temperature dependencies follow that of the anomalous Hall effect and can be explained by the impact of the Weyl points on the spin dynamics. Evidence for Weyl points situated close to the Fermi level has been deduced from DFT calculations [60] as well as from magnetotransport studies [18; 21]. However, the magnon modes remain extremely broad in SrRuO\({}_{3}\) even at low temperature. In order to reproduce the measured data profiles we have to fold the experimental resolution function with a magnon response that shows an energy broadening of 40% of its energy. This severe broadening further increases upon heating. While at low temperature the magnetic response remains essentially magnon like, although exhibiting an enormous life-time reduction, the shape of the magnetic signal changes upon heating. Close to the magnetic transition the \(\mathbf{Q}\) space in the center of the magnon dispersion surface gets more and more filled, which resembles the intensity distribution of nearly ferromagnetic systems with a paramagnon signal [10]. However the dispersion of the peak energies is little affected upon heating close to the transition at 160 K and even well above, indicating that the local exchange splitting remains considerably larger than the energy range of our experiments. There are several explanations for the reduced lifetimes of magnons in SrRuO\({}_{3}\). The pronounced kink of the electric resistance at the ferromagnetic transition underlines a strong electron-magnon interaction. In addition the Weyl points and the Berry curvature imply further scattering paths [25] that, in view of the strong impact of the topology on the magnon dispersion, may also be important for the magnon damping.
The magnetic excitations have been studied in several metallic ruthenates of the Ruddlesden-Popper series. In Sr\({}_{2}\)RuO\({}_{4}\) there are dominating incommensurate excitations that arise from pronounced Fermi surface nesting of quasi-one-dimensional sheets [61; 62] and that seem to condense into static incommensurate magnetic order upon minor substitution [63]. This nesting is rather robust and can also be observed in Ca\({}_{2-x}\)Sr\({}_{x}\)RuO\({}_{4}\) compounds with \(x\sim 0.5\) that are closer to a ferromagnetic instability and that exhibit dominant nearly ferromagnetic magnetic fluctuations[64], see below. Two recent ARPES studies yield evidence for flat Fermi-surface sheets [12; 65] in SrRuO\({}_{3}\) that resemble the strong nesting in Sr\({}_{2}\)RuO\({}_{4}\). The distance of these flat sheets in the [\(\xi\) 0 0] direction can roughly be determined to \(\xi_{nes}\)=0.29 [12] and 0.34 [65] reduced lattice units, respectively, but only reference 12 differentiates majority and minority sheets. The constant energy maps presented in Figures 5 and 8 yield no indication for such scattering at either (\(\xi_{nes}\) 0 0), (\(\xi_{nes}\) \(\xi_{nes}\) 0) or (\(\xi_{nes}\) \(\xi_{nes}\) \(\xi_{nes}\)). The strongest nesting peak in Sr\({}_{2}\)RuO\({}_{4}\)[61; 62] arises along the diagonal profiting from the nesting in two directions, while such an effect cannot be deduced from the Fermi-surface sheets reported for SrRuO\({}_{3}\)[12; 65]. In addition to the nesting induced magnetic excitations, Sr\({}_{2}\)RuO\({}_{4}\) also exhibits a broad quasi-ferromagnetic signal [6]. But this response of Sr\({}_{2}\)RuO\({}_{4}\) is still quite different from the magnon signal that SrRuO\({}_{3}\) shows at low temperature. The response in Sr\({}_{2}\)RuO\({}_{4}\) is little structured in \(\mathbf{Q}\) space and thus approaches a scenario with local interaction that is deduced from DMFT calculations [66].
The magnon-like response in SrRuO\({}_{3}\) also differs from the quasi-ferromagnetic scattering, which was observed in layered ruthenates that are close to ferromagnetic order. In Ca\({}_{2-x}\)Sr\({}_{x}\)RuO\({}_{4}\) a ferromagnetic cluster glass ordering is reached for \(x\)\(\sim\)0.5 and a metamagnetic transition is formed for further reduced Sr content [67; 68]. Also Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) exhibits a metamagnetic transition and is thus very close to ferromagnetic order [69]. The INS studies in Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\)[70] and in Ca\({}_{2-x}\)Sr\({}_{x}\)RuO\({}_{4}\)[64; 71; 72] reveal a remarkably similar picture in these layered systems
that however differs from the magnon-like response in SrRuO\({}_{3}\). The layered materials at zero field still exhibit incommensurate magnetic fluctuations, though appearing at different positions in \(\mathbf{Q}\) space compared to the nesting induced signals in Sr\({}_{2}\)RuO\({}_{4}\)[62; 73]. The peaks in the magnetic susceptibility of Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) and Ca\({}_{2-x}\)Sr\({}_{x}\)RuO\({}_{4}\) appear along the bond direction and at much smaller absolute values of the propagation vector in agreement with a more ferromagnetic nature. Only for metamagnetic Ca\({}_{1.8}\)Sr\({}_{0.2}\)RuO\({}_{4}\) at finite magnetic field a parabolic and thus magnon-like dispersion was observed [72] that finally resembles the magnon dispersion in SrRuO\({}_{3}\). Overall the magnetic response in the layered ruthenates including Sr\({}_{2}\)RuO\({}_{4}\) seems mostly determined by Fermi-surface effects with small but finite propagation vectors, while SrRuO\({}_{3}\) and only the high-field phase of Ca\({}_{1.8}\)Sr\({}_{0.2}\)RuO\({}_{4}\) exhibit a parabolic and thus an intrinsic ferromagnetic response. The response induced by Fermi-surface effects in the layered materials emerges in the form of stacks of scattering in \(\mathbf{Q}-E\) space. Upon heating the magnetic excitations in SrRuO\({}_{3}\), however, approach such a shape.
In conclusion the combined INS study of magnetic excitations in SrRuO\({}_{3}\) can characterize a low-temperature magnon dispersion up to rather high energies that are consistent with a large band energy splitting. Besides the anomalous temperature dependence of the magnon stiffness and of the gap, the severe broadening of magnons even at low temperature is most remarkable. Upon heating towards the magnetic transition in SrRuO\({}_{3}\) this broadening is further enhanced and finite magnetic response is found at the center of the dispersion. Although the magnon dispersion remains visible up to near T\({}_{c}\), this change indicates an enhanced local character of the interaction, and it approaches the findings in other ruthenates where the response yields stacks of scattering in \(Q\),\(E\) space mostly associated with Fermi-surface effects.
###### Acknowledgements.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project number 277146847 - CRC 1238, projects A02 and B04. Experiments at the ISIS Neutron and Muon Source were supported by a beamtime allocation RB1510482 from the Science and Technology Facilities Council.
|
2310.01447 | Space-Time from quantum Physics | A construction of the real 4D Minkowski space-time starting from quantum
harmonic oscillators is proposed. First, a 2D spinor space and its dual are
derived from the standard commutation relations obeyed by the ladder operators
of two independent 1D harmonic oscillators. The complex 4D Minkowski vector
space V is then constructed from these spinor spaces. The flat, real 4D
Minkowski manifold is finally built as an approximate description of a manifold
of unitary operators constructed from V. Lorentz invariance is recovered and
several possible extensions are discussed, with connections to quantum optics
and condensed matter physics. | Fabrice Debbasch | 2023-10-01T14:50:46Z | http://arxiv.org/abs/2310.01447v1 | # Space-Time from quantum Physics
###### Abstract
A construction of the real 4D Minkowski space-time starting from quantum harmonic oscillators is proposed. First, a 2D spinor space and its dual are derived from the standard commutation relations obeyed by the ladder operators of two independent 1D harmonic oscillators. The complex 4D Minkowski vector space \(V\) is then constructed from these spinor spaces. The flat, real 4D Minkowski manifold is finally built as an approximate description of a manifold of unitary operators constructed from \(V\). Lorentz invariance is recovered and several possible extensions are discussed, with connections to quantum optics and condensed matter physics.
The first quarter of the twentieth century witnessed several breakthroughs in physics. Special relativity was proposed by Einstein in 1905 [1] and later extended in 1915 into the relativistic theory of gravitation known as general relativity [2; 3]. Simultaneously, quantum physics, which also originated with Einstein in 1905 [4], was first developed as a non-relativistic theory of particles, culminating in the equation proposed by Schrodinger in 1926 [5] which now bears his name. This equation was rapidly extended to the special relativistic realm. For example, the Dirac equation was introduced in 1928 [6] as a relativistic equation obeyed by the wavefunctions of spin 1/2 particles. Quantum physics then morphed into a quantum theory of fields [7], making it possible to describe systems where particle numbers are themselves dynamical variables. And it is quantum field theory which, despite renormalisation issues, is today the natural framework used to describe the electro-weak and the strong interactions [8].
Despite all these achievements, modern physics has not yet been able to incorporate gravity into the quantum framework and develop a consistent quantum theory of gravitation. Several, apparently very different paths towards such a theory have been proposed [9] and are still today active areas of research, but none has delivered a quantum gravity yet.
It seems today that building a quantum theory of gravity will require solving a lot of apparently different, but all very serious conceptual and technical/mathematical problems. One of them, perhaps the most conceptual one, is to reconcile the idea of classical space-time with what we know of quantum theory. Indeed, mathematically speaking, space-time is a differential Lorentzian manifold [10] and general relativity thus relies heavily on geometry and analysis, while quantum physics, at its core, seems definitely more algebraic. One option is to take this as a fact, and try and build quantum gravity as a standard algebraic quantum theory taking place in a given geometric object called space-time. But this point of view seems somehow unnatural and many physicists since Pauli [11] have believed that space-time is not a fundamental concept and that it should be derived from quantum theory.
The aim of this Letter is to prove that the flat 4D Minkowski space-time of special relativity can indeed be seen as a local approximation of a real manifold which arises naturally from the algebra obeyed by the ladder operators [12] of two independent 'abstract' quantum harmonic oscillators. This Letter does not claim that the proposed construction is the only possible one, nor that it is the physically correct one, though it may be. The sole aim of this work is to show that Minkowski space-time can be derived from a purely quantum framework. In other words, not only is the playground of special relativistic physics not alien to quantum physics, but it can be derived from it. Whether other, possibly more physically relevant constructions exist is another question, not dealt with in this work.
We first show how to construct an abstract spinor space [13; 14] and its dual from the algebra obeyed by the ladder operators of two independent 'abstract' quantum harmonic oscillators. We then review how this abstract spinor space can be used with its dual to build a 4D Minkowski vector space. We then elaborate on Lorentz invariance and show in particular that the space of the linear transformations which leave invariant the algebra obeyed by the ladder operators and which do not mix spinors and dual spinors is simply the Lorentz group. We then discuss the differences between the Minkowski vector space introduced earlier and the usual 4D space-time manifold of special relativistic physics and finally offer a local construction of the Minkowski space-time manifold. The Letter concludes with a summary and a discussion of all results, with special emphasis on possible extensions.
Consider a Hilbert space \(\mathcal{H}\) and two linearly independent operators \(a\) and \(b\) defined on \(\mathcal{H}\) which obey the algebra:
\[\left[a,a^{\dagger}\right] = \left[b,b^{\dagger}\right]=1\] \[\left[a,b\right] = \left[a,b^{\dagger}\right]=0. \tag{1}\]
This algebra can be realized, for example, by combining two independent harmonic oscillators, choosing as \({\cal H}\) the tensor product of their Hilbert spaces and by retaining as operators \(a\), \(a^{\dagger}\), \(b\), \(b^{\dagger}\) the standard ladder operators of the two oscillators.
The operators \(a\) and \(b\) can be used to build the two operators
\[\beta_{0} = \frac{1}{\sqrt{2}}\left(a+ib^{\dagger}\right)\] \[\beta_{1} = \frac{1}{\sqrt{2}}\left(a^{\dagger}+ib\right). \tag{2}\]
The algebra (1) obeyed by \(a\), \(a^{\dagger}\), \(b\) and \(b^{\dagger}\) is equivalent to the algebra:
\[\left[\beta_{0},\beta_{1}\right] = 1\] \[\left[\beta_{0},\beta_{0}^{\dagger}\right] = \left[\beta_{1},\beta_{1}^{\dagger}\right] = \left[\beta_{0},\beta_{1}^{\dagger}\right]=0. \tag{3}\]
The commutator defines a bilinear antisymmetric form on the space spanned by \((\beta_{0},\beta_{1},\beta_{0}^{\dagger},\beta_{1}^{\dagger})\). The above commutation relations show that this form is degenerate. There are however two sub-spaces on which the form is not degenerate, and these are the sub-space \(S\) spanned by \((\beta_{0},\beta_{1})\) and the sub-space \(\bar{S}\) spanned by \((\beta_{0}^{\dagger},\beta_{1}^{\dagger})\). Each of these sub-spaces is a so-called abstract spinor space [13; 14].
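Both algebras are straightforward to check numerically in the harmonic-oscillator realization mentioned above. The sketch below (illustrative only, not part of the Letter) uses truncated Fock spaces; since truncation spoils the canonical commutators near the cutoff, the comparison is restricted to low-lying matrix elements:

```python
import numpy as np

N = 12                                         # Fock truncation per oscillator
ad1 = np.diag(np.sqrt(np.arange(1, N)), -1)    # single-mode creation operator
I = np.eye(N)
a = np.kron(ad1.T, I)                          # a acts on the first factor
b = np.kron(I, ad1.T)                          # b acts on the second factor

def comm(X, Y):
    return X @ Y - Y @ X

# Truncation spoils [a, a^dag] = 1 on the top Fock states, so we compare
# matrix elements on the low-lying subspace only:
idx = np.arange(N * N)
keep = (idx // N < N - 2) & (idx % N < N - 2)

def check(C, target):
    return np.allclose(C[np.ix_(keep, keep)], target[np.ix_(keep, keep)])

Id = np.eye(N * N)
print(check(comm(a, a.T), Id), check(comm(a, b), 0 * Id))   # algebra (1)

beta0 = (a + 1j * b.T) / np.sqrt(2)            # equation (2)
beta1 = (a.T + 1j * b) / np.sqrt(2)
print(check(comm(beta0, beta1), Id))           # [beta0, beta1] = 1
print(check(comm(beta0, beta0.conj().T), 0 * Id),   # remaining relations of (3)
      check(comm(beta0, beta1.conj().T), 0 * Id))
```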
By definition, a spinor \(\sigma\) in \(S\) can be decomposed on the basis \(\beta\), and we write \(\sigma=\sum_{A=0}^{1}\sigma^{A}\beta_{A}\) or, introducing Einstein's summation convention, \(\sigma=\sigma^{A}\beta_{A}\). As mentioned above, the commutator is a non-degenerate, bilinear, anti-symmetric form on this space. To make writing with components easier, we denote the commutator by \(\epsilon\). Its components \((\epsilon_{AB})\) in the basis \(\beta\) are \(\epsilon_{01}=-\epsilon_{10}=1\) and \(\epsilon_{00}=\epsilon_{11}=0\).
The two operators \((\beta_{1}^{\dagger},\beta_{0}^{\dagger})\) span another spinor space \(\bar{S}\), called the dual spinor space. To be consistent with the literature on abstract spinors, we introduce the notation \(\bar{\beta}=(\bar{\beta}_{\bar{0}}=\beta_{1}^{\dagger},\bar{\beta}_{\bar{1}}=\beta_{0}^{\dagger})\). Components of a spinor \(\bar{\sigma}\) in the dual spinor space will be denoted by \(\bar{\sigma}_{\bar{A}}\) and the commutator, as a bilinear anti-symmetric form on \(\bar{S}\), is denoted by \(\bar{\epsilon}\), with components \(\bar{\epsilon}_{\bar{A}\bar{B}}\).
Let us now introduce the operators
\[\gamma_{\bar{0}0} = \bar{\beta}_{\bar{0}}\beta_{0}\] \[\gamma_{\bar{1}1} = \bar{\beta}_{\bar{1}}\beta_{1}\] \[\gamma_{\bar{0}1} = \bar{\beta}_{\bar{0}}\beta_{1}\] \[\gamma_{\bar{1}0} = \bar{\beta}_{\bar{1}}\beta_{0}. \tag{4}\]
The family \((\gamma)\) spans a \(4D\) complex vector space \(V\) and one can write any vector \(v\in V\) as \(v=v^{\bar{A}A}\gamma_{\bar{A}A}\). Note that, at this stage, the 4 indices are not \(0,1,2,3\), but \(\bar{0}0,\bar{1}1,\bar{0}1,\bar{1}0\).
The commutator induces in \(V\) a metric \(\eta\) defined by \(\eta_{(\bar{A}A)(\bar{B}B)}=\bar{\epsilon}_{\bar{A}\bar{B}}\epsilon_{AB}\) and we denote the corresponding scalar product by a dot. A direct computation shows that the only non vanishing scalar products between the \(\gamma\)'s are \(\gamma_{\bar{0}0}\cdot\gamma_{\bar{1}1}=-\gamma_{\bar{0}1}\cdot\gamma_{\bar{1}0}=1\). In particular, each \(\gamma\) has a vanishing scalar product with itself and is thus a null vector. Also, replacing in the definitions of the \(\gamma\)'s the \(\bar{\beta}\)'s by their expressions in terms of the \(\beta^{\dagger}\)'s, one finds \(\gamma_{\bar{0}0}=\beta_{1}^{\dagger}\beta_{0}\), \(\gamma_{\bar{1}1}=\beta_{0}^{\dagger}\beta_{1}\), \(\gamma_{\bar{0}1}=\beta_{1}^{\dagger}\beta_{1}\), \(\gamma_{\bar{1}0}=\beta_{0}^{\dagger}\beta_{0}\). This shows that \(\gamma_{\bar{0}1}\) and \(\gamma_{\bar{1}0}\) are self-adjoint while \(\gamma_{\bar{0}0}\) and \(\gamma_{\bar{1}1}\) are dual to each other. Thus, the set of \(\gamma\)'s is the equivalent of what is called a null 4-bein [13; 15; 3] in Lorentzian geometry. Another 4-bein is
\[e_{0} = \frac{1}{\sqrt{2}}\ (\gamma_{\bar{0}1}+\gamma_{\bar{1}0})\] \[e_{1} = \frac{1}{\sqrt{2}}\ (\gamma_{\bar{0}1}-\gamma_{\bar{1}0})\] \[e_{2} = \frac{1}{\sqrt{2}}\ (\gamma_{\bar{0}0}+\gamma_{\bar{1}1})\] \[e_{3} = \frac{1}{i\sqrt{2}}\ (\gamma_{\bar{0}0}-\gamma_{\bar{1}1})\,. \tag{5}\]
The components of the metric \(\eta\) in the \(e\)-basis read \((\eta_{\mu\nu})=\mbox{diag}(-1,1,1,1)\), which is the standard form of Minkowski metric. Also, all four \(e\) vectors are Hermitian operators. But this does not make the space \(V\) identical to the physical 4D Minkowski space-time. Before constructing from Minkowski vector space the Lorentzian space-time manifold, let us discuss first how Lorentz invariance emerges in the present, operator oriented context.
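As a quick consistency check of the construction so far, the metric components can be computed directly from the definition \(\eta_{(\bar{A}A)(\bar{B}B)}=\bar{\epsilon}_{\bar{A}\bar{B}}\epsilon_{AB}\), representing each vector by its \(2\times 2\) component array \(v^{\bar{A}A}\) (an illustrative sketch, not part of the Letter):

```python
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])   # epsilon_{AB}
epsbar = eps.copy()                          # epsilonbar on the dual space

def dot(u, v):
    """eta(u,v) = epsbar_{AbarBbar} eps_{AB} u^{AbarA} v^{BbarB}."""
    return np.einsum('PQ,pq,Pp,Qq->', epsbar, eps, u, v)

# e-basis in gamma components (rows: Abar; columns: A), equation (5):
s = 1.0 / np.sqrt(2.0)
e = [np.array([[0, s], [s, 0]]),                 # e0
     np.array([[0, s], [-s, 0]]),                # e1
     np.array([[s, 0], [0, s]]),                 # e2
     np.array([[-1j * s, 0], [0, 1j * s]])]      # e3

eta = np.array([[dot(e[m], e[n]) for n in range(4)] for m in range(4)])
print(np.round(eta.real, 12))   # diag(-1, 1, 1, 1): the Minkowski metric
```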
It is natural to wonder if there are linear transformations in \(S\cup\bar{S}\) which leave invariant the original commutation relations (1) obeyed by \((a,a^{\dagger},b,b^{\dagger})\) or, equivalently, the commutation relations obeyed by \((\beta_{0},\beta_{1},\beta_{0}^{\dagger},\beta_{1}^{\dagger})\). Any linear transformation in \(S\) induces a linear, dual transformation in \(\bar{S}\) and, thus, a linear transformation in \(S\cup\bar{S}\). So, are there for example linear transformations in \(S\) which, together with their dual, leave the commutation relations invariant?
Without loss of generality, an arbitrary linear transformation in \(S\) can be written as
\[\beta_{0}^{\prime} = p\beta_{0}+iq\beta_{1}\] \[\beta_{1}^{\prime} = ir\beta_{0}+s\beta_{1}. \tag{6}\]
where \(p\), \(q\), \(r\), \(s\) are four arbitrary complex numbers. It induces in \(\bar{S}\) the transformation:
\[(\beta_{0}^{\dagger})^{\prime} = \bar{p}\beta_{0}^{\dagger}-i\bar{q}\beta_{1}^{\dagger}\] \[(\beta_{1}^{\dagger})^{\prime} = -i\bar{r}\beta_{0}^{\dagger}+\bar{s}\beta_{1}^{\dagger} \tag{7}\]
where bars over complex numbers denote complex conjugation. The last three commutation relations obeyed by \((\beta_{0},\beta_{1},\beta_{0}^{\dagger},\beta_{1}^{\dagger})\) in equation (3) are trivially invariant under the above transformations and the invariance of the first commutation relation is equivalent to \(ps+qr=1\). The four complex numbers \(p\), \(q\), \(r\), \(s\) are thus restricted
by a single (complex) relation, and we are thus dealing with a family of transformations which depend a priori on 3 complex, or equivalently 6 real parameters. We will now show that these transformations coincide with the Lorentz transformations.
As a preliminary, we first show that any complex number, say \(u\), can be written as the squared cosine of another complex number, say \(\theta\). The equation \(u=\cos^{2}\theta\) transcribes into \(v=2u-1=\cos(2\theta)\). Introducing \(x=\exp(2i\theta)\) leads to \(x^{2}-2vx+1=0\), which admits two (possibly identical) complex solutions. One can thus always find an \(x\) which solves the problem. Writing then \(x=\mid x\mid\exp(i\phi_{x})\) with \(\phi_{x}\in(0,2\pi)\) and \(\theta=\theta_{r}+i\theta_{i}\), with \((\theta_{r},\theta_{i})\in\mathbf{R}^{2}\), the equation \(x=\exp(2i\theta)\) for \(\theta\) can be solved by choosing for example \(\theta_{r}=\phi_{x}/2\) and \(\theta_{i}=-(\ln\mid x\mid)/2\) (note that \(x\) does not vanish for any value of \(u\)).
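This construction translates directly into a short numerical routine (illustrative only; the function name is ours):

```python
import cmath

def theta_from_u(u):
    """Return a complex theta with cos(theta)**2 == u."""
    v = 2 * u - 1                    # v = cos(2*theta)
    x = v + cmath.sqrt(v * v - 1)    # root of x**2 - 2*v*x + 1 = 0 (never zero)
    return cmath.log(x) / 2j         # from x = exp(2j*theta)

for u in (0.3 + 1.7j, -2.5 + 0j, 4.0 - 0.2j):
    theta = theta_from_u(u)
    assert abs(cmath.cos(theta) ** 2 - u) < 1e-9
    print(u, '->', theta)
```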
Coming back to the relation \(ps+qr=1\), we introduce a complex \(\theta\) such that \(ps=\cos^{2}\theta\). This implies that \(qr=\sin^{2}\theta\). One can then introduce two complex numbers \(\xi\) and \(\zeta\) such that \(p=\exp(i\xi)\cos\theta\) and \(ir=\exp(-i\zeta)\sin\theta\). Since \(ps=\cos^{2}\theta\) and \(qr=\sin^{2}\theta\), one gets immediately \(s=\exp(-i\xi)\cos\theta\) and \(iq=-\exp(i\zeta)\sin\theta\).
A straightforward computation shows that, conversely, any linear transformation of \(S\) represented in the \((\beta_{0},\beta_{1})\) basis by a matrix \(L\) of the form
\[L=\begin{pmatrix}\exp(i\xi)\cos\theta&-\exp(i\zeta)\sin\theta\\ \exp(-i\zeta)\sin\theta&\exp(-i\xi)\cos\theta\end{pmatrix} \tag{8}\]
with arbitrary complex \(\theta\), \(\xi\) and \(\zeta\) preserves the commutation relations. The matrix \(L\) can be rewritten as the exponential of a complex linear combination of the three Pauli matrices and is thus identical to the action of an arbitrary Lorentz transformation on 2-spinors [7; 16].
The Lorentz transformations are thus the only linear transformations of \(S\cup\bar{S}\) which not only preserve the commutation relations, but also leave each of the subspaces \(S\) and \(\bar{S}\) invariant (as sets, not point-wise).
One final remark is in order. Writing the Lorentz transformations in terms of the original ladder operators, one finds:
\[a^{\prime}+i(b^{\dagger})^{\prime} = p\left[a+i(b^{\dagger})\right]+iq\left[a^{\dagger}+ib\right]\] \[(a^{\dagger})^{\prime}+ib^{\prime} = ir\left[a+i(b^{\dagger})\right]+s\left[a^{\dagger}+ib\right]. \tag{9}\]
Taking the dual of these equations, one obtains:
\[(a^{\dagger})^{\prime}-ib^{\prime} = \bar{p}\left[a^{\dagger}-ib\right]-i\bar{q}\left[a-ib^{\dagger}\right]\] \[a^{\prime}-i(b^{\dagger})^{\prime} = -i\bar{r}\left[a^{\dagger}-ib\right]+\bar{s}\left[a-ib^{\dagger} \right]. \tag{10}\]
These four equations can be combined to deliver:
\[a^{\prime}=\frac{1}{2}\left[(p+\bar{s})a-(q+\bar{r})b+i(q-\bar{r })a^{\dagger}+i(p-\bar{s})b^{\dagger}\right]\] \[b^{\prime}=\frac{1}{2}\left[(r+\bar{q})a+(s+\bar{p})b-i(s-\bar{ p})a^{\dagger}+i(r-\bar{q})b^{\dagger}\right]. \tag{11}\]
If one interprets \(a\) and \(b\) to be ladder operators for two independent harmonic oscillators, a Lorentz transformation actually mixes these two oscillators, and also mixes their creation and destruction operators, to generate two new oscillators still independent of each other.
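Since the transformation (11) is linear in \((a,b,a^{\dagger},b^{\dagger})\), preservation of all the commutation relations is equivalent to its coefficient matrix being symplectic with respect to the Gram matrix of commutators. The following sketch checks this numerically; the matrix \(T\) below is simply transcribed from Eq. (11) and its Hermitian conjugate, with \(s\) chosen so that \(ps+qr=1\):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q, r = (complex(rng.normal(), rng.normal()) for _ in range(3))
s = (1 - q*r) / p                          # enforce ps + qr = 1

c = np.conj
# Coefficients of (a', b', a'^dag, b'^dag) in the basis (a, b, a^dag, b^dag),
# read off from Eq. (11) and its Hermitian conjugate.
T = 0.5 * np.array([
    [p + c(s),       -(q + c(r)),     1j*(q - c(r)),  1j*(p - c(s))],
    [r + c(q),        s + c(p),      -1j*(s - c(p)),  1j*(r - c(q))],
    [-1j*(c(q) - r), -1j*(c(p) - s),  c(p) + s,       -(c(q) + r)],
    [1j*(c(s) - p),  -1j*(c(r) - q),  c(r) + q,        c(s) + p],
])

# Gram matrix of commutators: Omega[i, j] = [v_i, v_j] for v = (a, b, a^dag, b^dag).
Omega = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
                  [-1, 0, 0, 0], [0, -1, 0, 0]], dtype=complex)

# Preservation of the commutation relations is the symplectic condition below.
assert np.allclose(T @ Omega @ T.T, Omega)
```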
The 4D space \(V\) exhibits several key differences from the 4D Minkowski space-time. First, \(V\) is a vector space, and not a manifold. Second, \(V\), as built above, is a complex vector space. One can naturally argue that the physical Minkowski vector space, which is tangent to the space-time manifold, is a subspace of \(V\), but then, why does physics only deal with that subspace? Or is the physical Minkowski space-time actually complex? The third difference is more subtle. By construction, all elements of \(V\) are operators in a space on which a bilinear antisymmetric form, which we have called the commutator, is defined. One can therefore compute the commutator of different elements in \(V\) and, in particular, the commutator of the \(e_{a}\)'s with each other. In practice, these commutators can be found from the commutation relations between the \(\gamma\)'s, which can be derived from those of the \(\beta\)'s and \(\beta^{\dagger}\)'s. One finds that the \(\gamma\)'s do not commute with each other, and neither do the \(e\)'s. Also, all commutators between the \(e\)'s are quadratic in the ladder operators \(a\), \(a^{\dagger}\), \(b\), \(b^{\dagger}\). For example, \([e_{0},e_{1}]=-i\left(a^{\dagger}b^{\dagger}+ab\right)\).
This might have been expected and shows that neither the \(\gamma\)'s nor the \(e\)'s are identical to a 4-bein in the usual Minkowski vector space. One might be tempted to interpret the non-commutation of the 4-bein in terms of curvature, but there is no manifold at this stage of the computation. Moreover, it is easy to check that the commutator \([e_{0},e_{1}]\) actually lies outside of \(V\). In particular, the \(e\)'s are Hermitian, and their commutators are therefore anti-Hermitian and thus outside of \(V\). Observe also that the \(e\)'s are quadratic in the ladder operators \(a\), \(b\), \(a^{\dagger}\), \(b^{\dagger}\), and so are their commutators.
To summarize the above discussion, what we have at this stage is a non-commutative 4D complex Minkowski vector space embedded in a larger operator space. The question is: can we build from that the usual arena of physics i.e. a real 4D Lorentzian manifold?
The 4D Minkowski manifold is flat and can thus be viewed as an affine space, and there is a standard way to construct affine submanifolds of a vector space. Suppose you want to build an affine space from \(V\) defined above. Consider the space \(W\) of all operators acting on the Hilbert space \(\mathcal{H}\) and pick an operator \(u\) in \(W\) which is not in \(V\). The space \(u+V=\{u+v,v\in V\}\) is then a flat submanifold of \(W\) with tangent \(V\) at each point. Note that this construction cannot work if one starts from a vector space which is not embedded in a larger vector space.
So, one first tentative way to obtain physical Minkowski \(\mathcal{M}\) space-time would be to pick an arbitrary \(u\) outside of \(V\) and to identify \(\mathcal{M}\) with the set of all \(u+\alpha^{a}e_{a}\), \(a\in\{0,1,2,3\}\) and \(\alpha\in\mathbb{R}^{4}\). This however presents at
least two shortcomings. First, the 4-bein vectors in \(V\) still do not commute, so the Minkowski manifold \(u+V\) is not the standard one. Second, the restriction to real \(\alpha\)'s comes out of nowhere in this construction.
These two shortcomings have a common solution. Instead of considering operators of the form \(u+\alpha^{a}e_{a}\) with \(\alpha\in\mathbb{R}^{4}\), consider the operators \(M(u,\alpha)=u\exp\left(i\alpha^{a}e_{a}\right)\) with \(u\) unitary and not in \(V\). Possible choices for \(u\) are \(u=\exp(i\delta)\mathcal{I}\) where \(\delta\) is an arbitrary real number and \(\mathcal{I}\) is the identity operator, which is not in \(V\). The operators \(M(u,\alpha)=u\exp\left(i\alpha^{a}e_{a}\right)\) form a submanifold \(\bar{\mathcal{M}}\) of \(W\). Since the \(e\)'s are Hermitian, an operator in \(\bar{\mathcal{M}}\) is unitary iff its 4 \(\alpha\) coordinates are real. We denote the set of all unitary operators in \(\bar{\mathcal{M}}\) by \(\bar{\mathcal{M}}_{U}\).
Suppose now one looks only at points which are 'close' to \(u\), say points for which the four \(\alpha\)'s are at most \(O(\epsilon)\) with \(0<\epsilon\ll 1\). It is natural, for these points, to introduce the rescaled coordinates \((\bar{\alpha})\) defined by \(\alpha^{a}=\epsilon\,\bar{\alpha}^{a}\) for \(a=0,1,2,3\). With this definition, all \(\bar{\alpha}\)'s can reach \(O(1)\). One can then expand \(M(u,\epsilon\bar{\alpha})\) at first order in \(\epsilon\) around \(\epsilon=0\) and obtain \(M(u,\epsilon\bar{\alpha})=u\left(1+i\epsilon\bar{\alpha}^{a}e_{a}+O(\epsilon^{2})\right)\). We now introduce the rescaled 4-bein \(\bar{e}_{a}=\epsilon\,e_{a}\), and write
\[M(u,\bar{\alpha})=u\left(1+i\bar{\alpha}^{a}\bar{e}_{a}+O(\epsilon^{2})\right). \tag{12}\]
Since the \(e\)'s are quadratic in the ladder operators, rescaling the \(e\)'s by \(\epsilon\) is tantamount to rescaling the ladder operators by \(\sqrt{\epsilon}\). The rescaled commutation relations between the 4-bein vectors thus take the form \([\bar{e}_{a},\bar{e}_{b}]=O(\epsilon)C_{ab}\) where the \(C_{ab}\)'s are quadratic in the rescaled ladder operators. This means that the commutators of the rescaled 4-bein vectors tend to zero with \(\epsilon\). Thus, close to \(u\), \(\bar{\mathcal{M}}_{U}\) looks like the standard, real Minkowski manifold. This proves that the usual flat space-time of special relativity can be recovered from standard quantum theory. Note that introducing the new 4-bein \(E_{a}=i\bar{e}_{a}\) makes it possible to write (12) as \(M(u,\bar{\alpha})=u\left(1+\bar{\alpha}^{a}E_{a}+O(\epsilon^{2})\right)\), where the \(i\) factor does not explicitly appear.
In the above construction, a portion of the flat Minkowski space-time appears as the portion of the affine space \(-i+V\) for which all \(\alpha\)'s are at most \(O(1)\). Suppose now a physicist is using Lorentzian 'physical' coordinates \((x^{a})\), \(a=0,\ldots,3\), and works in (or has access to) a region of Minkowski space-time of size \(L\) in these coordinates. This region can be described in the above framework by assuming that \(x=\bar{L}\bar{\alpha}\) with \(\bar{L}\geq L\).
We have shown that the real 4D Minkowski space-time can be constructed locally from the ladder operators of two independent 1D quantum oscillators. The ladder operators generate an abstract spinor space and its dual which can in turn be used to build the complex 4D Minkowski vector space. Part of this vector space generates a space of unitary operators which, locally, looks like the standard real 4D Minkowski space-time.
Let us now discuss this result, focusing on possible extensions. From the quantum point of view, the procedure described in this Letter is but a special case of a very general problem. Consider a collection of \(N\) independent 'abstract' 1D quantum harmonic oscillators i.e. \(N\) independent quantum systems characterized by ladder operators obeying the standard commutation relations. Here, 'abstract' means that there is no space-time available at this stage and that the ladder operators therefore have no relation to physical position and momentum operators. Then ask yourself what are the unitary operators or, equivalently, the Hermitian operators which can be built from these independent oscillators i.e. from their ladder operators. This question is completely natural from the point of view of quantum physics; it is interesting per se and connects with several domains, including quantum optics and condensed matter physics, but it does not seem to have much to say about space-time. This Letter shows that, contrary to what one might think prima facie, the case \(N=2\) delivers Hermitian operators quadratic in the ladder operators which, in turn, deliver a manifold of unitary operators which, locally, looks like the usual, real 4D Minkowski space-time.
But what about other values of \(N\)? The case \(N=1\) is nearly trivial and does not seem to connect to space-time physics (computations not shown). Other values of \(N\) should be investigated to determine whether, and which, manifolds they generate, taking into account polynomials of arbitrary degree in the ladder operators. For example, do higher values of \(N\) and/or polynomials of higher degree deliver Lorentzian space-times of higher dimensions? Or space-times with more than a single time-coordinate?
As constructed in this Letter, flat space-time is but a local approximation of a manifold of non-commuting operators. The above problem involving \(N\) independent quantum harmonic oscillators thus has connections with non-commutative geometry [17], and this should be investigated.
From the classical point of view, the most natural question is about general relativity. Can the above procedure be extended to deliver curved Lorentzian manifolds? Could this be done by allowing, for example, the oscillators to interact with each other? And could this pave the way to possible laboratory quantum simulations of general relativistic space-times, i.e., of relativistic gravitation, for example in the contexts of quantum optics or condensed matter physics?
Finally, one can only wonder if and how matter fits into the picture developed here. The link between space-time and quantum harmonic oscillators presented in this article seems to suggest that matter and space-time may be two sides of the same coin. If so, what exactly is that coin, and how does what we call dynamics emerge from a unified quantum picture? |
2309.02039 | Energy and morphology of martensite-twinned martensite interface in
CuAlNi shape memory alloy: a phase-field study | Needle-like twins are observed experimentally within the transition layer at
the martensite-twinned martensite interface. We utilize a phase-field approach
to investigate this microstructure. Our goal is to simulate the morphology of
the transition layer and to perform a detailed analysis to characterize its
interfacial and elastic micro-strain energy. To illustrate the micromechanical
framework developed for that purpose, sample computations are carried out for a
CuAlNi shape memory alloy undergoing the cubic-to-orthorhombic martensitic
transformation. A particular focus of the study is on size-dependent morphology
through examining the impact of twin spacing. Additionally, our results reveal
that certain twin volume fractions lead to the emergence of twin branching, as
a way to minimize the total free energy stored in the microstructure. | Seyedshoja Amini, Mohsen Rezaee-Hajidehi, Stanislaw Stupkiewicz | 2023-09-05T08:31:07Z | http://arxiv.org/abs/2309.02039v1 | Energy and morphology of martensite-twinned martensite interface in CuAlNi shape memory alloy: a phase-field study+
###### Abstract
Needle-like twins are observed experimentally within the transition layer at the martensite-twinned martensite interface. We utilize a phase-field approach to investigate this microstructure. Our goal is to simulate the morphology of the transition layer and to perform a detailed analysis to characterize its interfacial and elastic micro-strain energy. To illustrate the micromechanical framework developed for that purpose, sample computations are carried out for a CuAlNi shape memory alloy undergoing the cubic-to-orthorhombic martensitic transformation. A particular focus of the study is on size-dependent morphology through examining the impact of twin spacing. Additionally, our results reveal that certain twin volume fractions lead to the emergence of twin branching, as a way to minimize the total free energy stored in the microstructure.
keywords: Microstructure; Martensitic transformation; Transition layer; Phase-field method; Size effects
## 1 Introduction
Pseudoelasticity and shape memory effect are the two most prominent features of shape memory alloys (SMAs). These features are inherent to the martensitic phase transformation and to the accompanying microstructures which encompass a rich array of interfaces across various spatial scales. Notably, the martensite-martensite (twin) interfaces, which are intrinsically coherent and free of (micro) stresses, stand out as the most ubiquitous type that form the primary constituent of the intricate microstructures at higher scales [1]. Experimental investigations have shown that the martensitic transformation often proceeds by the evolution of nested laminated microstructures consisting of (quasi) periodic, planar and macroscopically sharp interfaces [2; 3]. These characteristics, indeed, serve as the backbone of the crystallographic theory of martensite which postulates that the interfaces are macroscopically compatible and stress-free [1; 4].
Nevertheless, local incompatibilities do exist and are primarily concentrated within thin _transition layers_ along the macroscopic interfaces. The local incompatibilities must be accommodated by elastic strains (referred to as 'elastic micro strains') accompanied by micro stresses, as a result of which the transition layers develop a microstructured morphology [3; 5; 6; 7; 8]. A well-known example of a macroscopic interface is the habit plane that mediates the austenite and the domain of twinned martensite (note that the austenite and a single variant of martensite rarely form a compatible interface). The morphology of the corresponding transition layer and its energetic characteristics have been extensively investigated in the literature using various approaches, including analytical estimates based on simplified kinematics [9], shape optimization method [10; 11], and phase-field modeling [12; 13; 14]. It is generally acknowledged that the morphology of a transition layer is driven by the material's propensity to minimize the total free energy and that its complex pattern is governed by the interplay between the elastic micro-strain energy and the interfacial (surface) energy of the phase boundaries. Branching, i.e., refinement of twin spacing, in the vicinity of the macroscopic interface represents a well documented manifestation of morphological changes within the transition layers [8; 15; 16; 17; 18].
At the same time, morphologies featuring needle-like twins emerge at the macroscopic interface between a single variant of martensite and twinned martensite or between two distinctly oriented twinned martensite domains [6; 7; 17; 19]. The so-called \(\lambda\)-microstructure is an example of a microstructure involving such macroscopic interfaces. As shown by Seiner et al. [17; 19], a macroscopically non-uniform martensitic transformation is induced and controlled by a temperature gradient in a CuAlNi single crystal bar, and this leads to the formation of the \(\lambda\)-microstructure. This microstructure comprises four interfaces, all intersecting at one point, namely two austenite-twinned martensite and two martensite-twinned martensite interfaces (the latter referred to as "twinned-to-detwinned interface" by the authors). An optical micrograph of the resulting \(\lambda\)-microstructure is depicted in Fig. 1. A closer look at the corresponding martensite-twinned martensite interface, see Fig. 1(c), reveals the needle-like appearance of the twins. The authors examined the structure of the needles via white-light interferometry and found that the needles bend and taper as they approach the domain of the pure martensite variant and that, at some locations, branching of the twins takes place. These distinctive characteristics have also been observed for a different SMA material, namely In-Tl [20], and also at the macroscopic interface between two twinned martensite domains, e.g., [6; 7]. It is worth noting that needles, in general, have been reported in a variety of microstructures, e.g., [21; 22; 23; 24], and that particular attention in the literature has been devoted to the theoretical and numerical analysis of needle-like morphologies, e.g., [25; 26; 27; 28; 29; 30]. Within this context, a number of modeling approaches have demonstrated their potential in simulating complex spatially-resolved microstructures at the meso-scale, including the phase-field method [27] and the
sharp-interface discrete-particle method [28].
To understand the formation mechanism of intriguing macroscopic interfaces, it is necessary to analyze the morphology and to determine the energy-based characteristics of the transition layers. With these two objectives in mind, a detailed modeling-based investigation of the martensite-twinned martensite interface in a CuAlNi single crystal is pursued in this study, with a special emphasis on the related size effects. To accomplish this, we leverage a conventional (two-phase) phase-field model which has been simply derived from our earlier multiphase-field model [31], and thereby retains its essential features, especially the finite-strain kinematics, consideration of the full elastic anisotropy of martensite variants and formulation in the variational framework. It should be stressed that the viscous evolution amounts to the minimization of the total free energy, comprising the elastic strain energy and the interfacial energy, thus making the phase-field method a suitable framework to address the problem at hand. It is noteworthy that the primary focus of our analysis is on the transition layers with the needle of one variant _not_ terminating at the same variant, as shown in Fig. 1(c). To the best of our knowledge, the only closely-related study in this context is that by Seiner et al. [19], who examined the structure of such needles via a finite-element-based sharp-interface model.
A well-known drawback of the phase-field method is its requirement for a sufficiently dense finite-element mesh in order to accurately resolve the diffuse interfaces, and thereby, to properly describe the associated interfacial energy [31; 32]. At the same time, it is necessary to adopt a physically relevant value for the interfacial energy density parameter, which sets the length scale of our diffuse interfaces to the order of a few nanometers. These two factors together limit the size of the computational domain that can be simulated (in our case, the computational domain is assumed periodic and encloses one twin pair). This size (on the scale of \(<\) 200 nm) is visibly smaller than what has been observed in the experiment, which is of the order of 10 \(\mu\)m, see Fig. 1(c). Therefore, while some qualitative comparisons have been drawn throughout the analysis, we do not aim for any direct quantitative comparison with the available experimental data.

Figure 1: \(\lambda\)-microstructure in CuAlNi: (a) a sketch of the macroscopic morphology, (b) a close-up view of the \(\lambda\)-microstructure at the intersection point, and (c) a close-up view of the martensite–twinned martensite interface involving needles. The optical micrographs in panels (b,c) are provided courtesy of H. Seiner, see [19] (reproduced with permission from Elsevier).
The paper is organized as follows. In Section 2, we recall the basic equations of the crystallographic theory of martensite, in order to lay the theoretical foundation for the problem at hand. The phase-field model is briefly described in Section 3. Subsequently, Section 4 presents the simulation setup, the obtained results and the ensuing discussions.
## 2 Basic equations of the crystallographic theory of martensite
According to the crystallographic theory of martensite, the requirement of kinematic compatibility is imposed at zero stress and implies that the deformation gradients on the opposite faces of a planar interface are rank-one connected [1; 4]. In line with this geometrical definition, the kinematic compatibility condition between two stress-free variants of martensite, here variant A and variant B, is mathematically expressed as
\[\mathbf{RU}_{\mathrm{B}}-\mathbf{U}_{\mathrm{A}}=\mathbf{a}\otimes\mathbf{l}, \tag{1}\]
which is called the _twinning equation_. In Eq. (1), \(\mathbf{U}_{\mathrm{A}}\) and \(\mathbf{U}_{\mathrm{B}}\) represent the (symmetric) transformation stretch tensors of the two variants involved (known from crystallography) and the unknowns are the twinning shear vector \(\mathbf{a}\), the normal to the interface \(\mathbf{l}\), and the rotation tensor \(\mathbf{R}\). In an analogous manner, in the case of an interface mediating a single variant of martensite and a twinned martensite, which is referred to as M-MM interface, the compatibility equation takes the form
\[\mathbf{\hat{R}}\left(\lambda^{0}\mathbf{RU}_{\mathrm{B}}+(1-\lambda^{0}) \mathbf{U}_{\mathrm{A}}\right)-\mathbf{U}_{\mathrm{A}}=\mathbf{b}\otimes \mathbf{m}, \tag{2}\]
where \(\lambda^{0}\) represents the twin volume fraction and is here chosen arbitrarily in the range \(0<\lambda^{0}<1\), while the unknowns are the shear vector \(\mathbf{b}\), the normal to the interface \(\mathbf{m}\), and the rotation tensor \(\mathbf{\hat{R}}\). Note that the interface normals \(\mathbf{l}\) and \(\mathbf{m}\) refer to the undeformed configuration of austenite. The solution procedure for the twinning equation (1) and the M-MM interface equation (2) can be found in the references cited above. To provide further clarity and to serve as an example, we present below the solution for a selected volume fraction of \(\lambda^{0}=0.3\).
In the present paper, the focus of our main analysis is on the type-I twin in CuAlNi shape memory alloy (the case of the type-II twin is commented on in Appendix A). The martensitic transformation in this alloy proceeds via a cubic-to-orthorhombic structural change and involves six martensite variants. Among the martensite variant pairs with a type-I twin relation, we have selected a representative
pair \((\mathrm{A},\mathrm{B})=(1,3)\), which is characterized by the following transformation stretch tensors (here and below, all tensor and vector components are given in the austenite cubic basis),
\[\mathbf{U}_{\mathrm{A}}=\mathbf{U}_{1}=\begin{pmatrix}(\alpha+\gamma)/2&0&( \alpha-\gamma)/2\\ 0&\beta&0\\ (\alpha-\gamma)/2&0&(\alpha+\gamma)/2\end{pmatrix},\quad\mathbf{U}_{\mathrm{ B}}=\mathbf{U}_{3}=\begin{pmatrix}(\alpha+\gamma)/2&(\alpha-\gamma)/2&0\\ (\alpha-\gamma)/2&(\alpha+\gamma)/2&0\\ 0&0&\beta\end{pmatrix}, \tag{3}\]
with the stretch parameters \(\alpha=1.0619\), \(\beta=0.9178\) and \(\gamma=1.023\)[1]. The solutions of the twinning equation (1) and the M-MM interface equation (2) for \(\lambda^{0}=0.3\) are then obtained as follows
\[\begin{split}\mathbf{a}&=(-0.0515,-0.1637,-0.1869),\quad \mathbf{b}=(0.0003,0.0563,-0.0535),\\ \mathbf{l}&=(0,-0.7071,0.7071),\quad\quad\quad\quad\mathbf{m}=(0.2272,0.6226,0.7486),\end{split} \tag{4}\]
and the corresponding rotation tensors are given by
\[\mathbf{R}=\begin{pmatrix}0.9997&0.0163&-0.0185\\ -0.0185&0.9918&-0.1262\\ 0.0163&0.1265&0.9918\end{pmatrix},\quad\hat{\mathbf{R}}=\begin{pmatrix}0.9999& -0.0117&0.0108\\ 0.0108&0.9970&0.0765\\ -0.0116&-0.0763&0.9970\end{pmatrix}. \tag{5}\]
It should be noted that \(\mathbf{b}\) and \(\mathbf{m}\) presented in Eq. (4) correspond to the non-trivial solution of the M-MM interface. A trivial solution is simply obtained as \(\mathbf{m}=\mathbf{l}\), \(\mathbf{b}=\lambda^{0}\mathbf{a}\) and \(\hat{\mathbf{R}}=\mathbf{I}\), where \(\mathbf{I}\) is the identity tensor, see a more detailed discussion in [33].
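The reported solutions are easily verified against Eqs. (1) and (2); the following NumPy sketch does so with the values of Eqs. (3)-(5). Since these values are rounded to four digits, the residuals are only expected to vanish to a comparable accuracy:

```python
import numpy as np

# Transformation stretch tensors of variants A = 1 and B = 3, Eq. (3).
alpha, beta, gamma = 1.0619, 0.9178, 1.023
UA = np.array([[(alpha + gamma)/2, 0, (alpha - gamma)/2],
               [0, beta, 0],
               [(alpha - gamma)/2, 0, (alpha + gamma)/2]])
UB = np.array([[(alpha + gamma)/2, (alpha - gamma)/2, 0],
               [(alpha - gamma)/2, (alpha + gamma)/2, 0],
               [0, 0, beta]])

# Type-I solutions, Eqs. (4)-(5), for lambda^0 = 0.3.
a = np.array([-0.0515, -0.1637, -0.1869])
l = np.array([0.0, -0.7071, 0.7071])
b = np.array([0.0003, 0.0563, -0.0535])
m = np.array([0.2272, 0.6226, 0.7486])
R = np.array([[0.9997, 0.0163, -0.0185],
              [-0.0185, 0.9918, -0.1262],
              [0.0163, 0.1265, 0.9918]])
Rhat = np.array([[0.9999, -0.0117, 0.0108],
                 [0.0108, 0.9970, 0.0765],
                 [-0.0116, -0.0763, 0.9970]])
lam0 = 0.3

# Residual of the twinning equation (1): R UB - UA = a (x) l.
res1 = R @ UB - UA - np.outer(a, l)
# Residual of the M-MM interface equation (2).
res2 = Rhat @ (lam0 * (R @ UB) + (1 - lam0) * UA) - UA - np.outer(b, m)

print(np.abs(res1).max(), np.abs(res2).max())  # both ~1e-4, within the rounding
```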
## 3 Phase-field model for twinning
A conventional phase-field model of twinning is utilized in this study. In this section, we provide a concise description of the model and briefly discuss its finite-element implementation. For more details, interested readers are referred to the multi-phase versions of the model that have been developed in our previous studies [31; 34], see also [13] for an earlier version featuring hierarchical order parameters.
The deformation gradient \(\mathbf{F}\) and the non-conserved order parameter \(\phi\) constitute the primary variables in the model. The finite-strain kinematic description relies on the multiplicative decomposition of the deformation gradient \(\mathbf{F}\) into the elastic part \(\mathbf{F}^{\mathrm{e}}\) and the part \(\mathbf{F}^{\mathrm{t}}\) associated with the twinning transformation, viz.,
\[\mathbf{F}=\mathbf{F}^{\mathrm{e}}\mathbf{F}^{\mathrm{t}},\quad\mathbf{F}= \nabla\boldsymbol{\varphi}, \tag{6}\]
where \(\boldsymbol{\varphi}\) denotes the deformation mapping from the reference configuration to the current configuration, \(\mathbf{x}=\boldsymbol{\varphi}(\mathbf{X})\). Within the context of twinning, where only two martensite variants are involved, a single order parameter \(\phi\) is adequate to characterize the material state. In the present model, the order parameter \(\phi\) is interpreted as the relative twin volume fraction and is bounded within the range \(0\leq\phi\leq 1\), where \(\phi=0\) and \(\phi=1\) correspond to pure martensite variants (here, variant
A and variant B, respectively), while the intermediate values \(0<\phi<1\) represent the diffuse twin interfaces.
Among the available formulations for the transformation deformation gradient \(\mathbf{F}^{\mathrm{t}}\), the rank-one mixing rule is adopted [35; 36]
\[\mathbf{F}^{\mathrm{t}}=\mathbf{U}_{\mathrm{A}}+\phi\,\mathbf{a}\otimes \mathbf{l}, \tag{7}\]
which is defined explicitly in terms of one of the solutions \((\mathbf{a},\mathbf{l})\) of the twinning equation (1) (recall that the twinning equation has two solutions). By construction, the transformation deformation gradient \(\mathbf{F}^{\mathrm{t}}\) in Eq. (7) is rank-one connected to \(\mathbf{U}_{\mathrm{A}}\) for any value of \(0\leq\phi\leq 1\), so that a planar diffuse interface with the normal \(\mathbf{l}\) is fully compatible. Note, however, that compatibility is not ensured _within_ a diffuse interface (i.e., for \(0<\phi<1\)) that has the orientation of the other solution of the twinning equation, and elastic strains are then needed to accommodate the incompatibility within such an interface, see the related discussion in Remark 2.5 in [37].
At this point, a variational formulation of the model is derived following the approach of Hildebrand and Miehe [38], see also [13]. This implies that the model unknowns \((\boldsymbol{\varphi},\phi)\), and therefore the microstructure evolution, are governed by the minimization problem formulated for the total incremental potential of the system,
\[\Pi=\Delta\mathcal{F}+\mathcal{D}_{\tau}\ \to\ \min_{\boldsymbol{\varphi},\phi} \tag{8}\]
which is subject to the inequality constraint for the order parameter \(0\leq\phi\leq 1\). Here, \(\Delta\mathcal{F}\) denotes the increment of the total Helmholtz free energy, and \(\mathcal{D}_{\tau}\) denotes the incremental dissipation potential. We consider the model to be constrained to isothermal processes. Consequently, the total Helmholtz free energy \(\mathcal{F}\) and the dissipation potential \(\mathcal{D}_{\tau}\) are defined (for the entire body \(B\)) as
\[\mathcal{F}=\int_{B}\left(F_{\mathrm{el}}+F_{\mathrm{int}}\right)\mathrm{d}V, \quad\mathcal{D}_{\tau}=\int_{B}D_{\tau}\,\mathrm{d}V. \tag{9}\]
It thus remains to define the Helmholtz free energy contributions, namely the elastic strain energy \(F_{\mathrm{el}}\) and the interfacial energy \(F_{\mathrm{int}}\), and also the (time-discrete) dissipation function \(D_{\tau}\). Note that the absence of a chemical energy contribution in the Helmholtz free energy is justified by the assumption that, under stress-free conditions, the martensite variants are energetically equivalent.
Following our previous works [31; 34], a Hencky-type anisotropic elastic strain energy is considered, which takes the form
\[F_{\mathrm{el}}=\frac{1}{2}(\det\mathbf{F}^{\mathrm{t}})\mathbf{H}^{\mathrm{e }}\cdot\mathbb{L}\mathbf{H}^{\mathrm{e}},\quad\mathbf{H}^{\mathrm{e}}=\frac{1 }{2}\log\mathbf{C}^{\mathrm{e}},\quad\mathbf{C}^{\mathrm{e}}=(\mathbf{F}^{ \mathrm{e}})^{\mathrm{T}}\mathbf{F}^{\mathrm{e}}, \tag{10}\]
where \(\mathbf{H}^{\mathrm{e}}\) is the logarithmic elastic strain, \(\mathbf{C}^{\mathrm{e}}\) is the elastic right Cauchy-Green tensor and \(\mathbb{L}=(1-\phi)\mathbb{L}_{\mathrm{A}}+\phi\mathbb{L}_{\mathrm{B}}\) is the (effective) fourth-order elastic stiffness tensor, obtained by Voigt-type averaging of the elastic stiffness tensors of martensite variants A and B, see [39] for the general form of an elastic stiffness tensor with orthorhombic symmetry.
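For illustration, the pointwise evaluation of Eq. (10) is straightforward. A minimal sketch, assuming the effective stiffness is supplied as a full fourth-order array (in the actual model it is assembled from the orthorhombic constants listed in Section 4.1, expressed in the bases of the two variants):

```python
import numpy as np

def hencky_energy(F, Ft, L4):
    """Elastic strain energy density F_el of Eq. (10).

    F, Ft: 3x3 total and transformation deformation gradients;
    L4: effective fourth-order stiffness tensor, shape (3, 3, 3, 3).
    """
    Fe = F @ np.linalg.inv(Ft)                 # multiplicative split, Eq. (6)
    Ce = Fe.T @ Fe                             # elastic right Cauchy-Green tensor
    w, V = np.linalg.eigh(Ce)                  # Ce is symmetric positive definite
    He = V @ np.diag(0.5 * np.log(w)) @ V.T    # logarithmic elastic strain
    return 0.5 * np.linalg.det(Ft) * np.einsum('ij,ijkl,kl->', He, L4, He)
```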
On the other hand, a double-obstacle potential with an isotropic gradient energy term is adopted for the interfacial energy [40],
\[F_{\rm int}=\frac{4\gamma_{\rm tw}}{\pi\ell}\left(\phi(1-\phi)+\ell^{2}\nabla\phi \cdot\nabla\phi\right), \tag{11}\]
where \(\gamma_{\rm tw}\) is the interfacial energy density (per unit area) associated with the martensite-martensite (twin) interface and \(\ell\) is the corresponding interface thickness parameter. The interfacial energy of the form (11) leads to a theoretical (i.e., under stress-free conditions) interface thickness of \(\pi\ell\).
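For this potential, the stress-free equilibrium profile across a planar interface is the sinusoid \(\phi=(1+\sin(x/\ell))/2\) on \(|x|\leq\pi\ell/2\), for which the two contributions in Eq. (11) are equal pointwise. A minimal sketch verifying that Eq. (11) integrated across one such interface recovers \(\gamma_{\mathrm{tw}}\):

```python
import numpy as np

gamma_tw = 0.02    # J/m^2, twin interfacial energy density
ell = 1.0e-9       # m, interface thickness parameter

# Equilibrium (stress-free) profile: phi = (1 + sin(x/ell))/2 on |x| <= pi*ell/2.
x = np.linspace(-np.pi*ell/2, np.pi*ell/2, 100001)
phi = (1 + np.sin(x/ell)) / 2
dphi = np.gradient(phi, x)

# Interfacial energy density of Eq. (11) along the profile.
F_int = 4*gamma_tw/(np.pi*ell) * (phi*(1 - phi) + ell**2 * dphi**2)

# Integrating across the theoretical interface thickness pi*ell recovers gamma_tw.
print(F_int.sum() * (x[1] - x[0]))   # ~0.02 J/m^2
```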
The final component of the model to be specified is the time-discrete dissipation function \(D_{\tau}\). In line with the conventional phase-field modeling, a viscous dissipation is employed here, which is expressed as
\[D_{\tau}=\tau D=\frac{\tau}{2m}\left(\frac{\phi-\phi_{n}}{\tau}\right)^{2}, \tag{12}\]
where \(m\) is the interface mobility parameter, \(\tau\) is the time increment and \(\phi_{n}\) is the order parameter at the previous time step. It is noteworthy that the form (12) is obtained by applying the backward-Euler method to integrate the rate-potential \(D=(1/2m)\dot{\phi}^{2}\) which is expressed in terms of \(\dot{\phi}\), the rate of the order parameter.
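The incremental structure can be illustrated at a single material point: minimizing \(\Delta\mathcal{F}+\mathcal{D}_{\tau}\) with respect to \(\phi\) reproduces a backward-Euler step of the gradient flow \(\dot{\phi}=-m\,\partial F/\partial\phi\). A minimal sketch with a toy local energy (gradient and elastic contributions omitted, all parameter values hypothetical), using a penalty treatment of the bounds as in the implementation described next:

```python
from scipy.optimize import minimize_scalar

m, tau = 1.0, 0.01                       # mobility and time increment (toy values)
F = lambda phi: phi * (1.0 - phi)        # toy local free energy density
penalty = lambda phi: 1e4 * (min(phi, 0.0)**2 + max(phi - 1.0, 0.0)**2)

phi_n = 0.45                             # order parameter at the previous step

def Pi(phi):
    # Incremental potential, Eqs. (8) and (12), at a single material point;
    # the penalty term enforces 0 <= phi <= 1.
    return (F(phi) - F(phi_n)) + (phi - phi_n)**2 / (2*m*tau) + penalty(phi)

phi_new = minimize_scalar(Pi, bounds=(-0.1, 1.1), method='bounded').x

# Stationarity of Pi reproduces backward Euler for the gradient flow
# (phi - phi_n)/tau = -m F'(phi), here with F'(phi) = 1 - 2 phi.
print(phi_new, phi_n - tau*m*(1 - 2*phi_new))   # both ~0.44898
```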
We now briefly outline the most important aspects of the finite-element implementation of the phase-field model described above. The actual unknowns of the model in the implementation are the displacement field \(\mathbf{u}=\boldsymbol{\varphi}-\mathbf{X}\) and the order parameter \(\phi\). As will be discussed in Section 4, our analysis is restricted to generalized plane strain condition. Thus, spatial discretization is done by using isoparametric 8-noded serendipity elements (with reduced \(2\times 2\) Gauss integration rule) for the displacement field \(\mathbf{u}\) and 4-noded bilinear elements for the order parameter \(\phi\). The resulting discretized nonlinear equations are solved in a monolithic fashion by using the Newton method. The penalty regularization method is employed to enforce the inequality constraint for the order parameter, \(0\leq\phi\leq 1\), as done in our prior studies involving multiple order parameters [31; 34].
For an efficient and reliable computer implementation, the AceGen system is used [41; 42], which features automatic differentiation and code simplification capabilities, and thereby, guarantees an exact computation of the tangent matrix. The simulations are carried out by using AceFEM, a finite-element environment closely connected with AceGen.
## 4 Phase-field simulation results
In this section, we present and discuss the results obtained from our phase-field simulations. The setup of the problem and the material parameters are outlined in Section 4.1, while the quantitative measures which are used for analyzing the simulation results are described in Section 4.2. The discussion of the simulation results commences with the analysis of a representative case in
Section 4.3. Subsequently, the effect of twin spacing and the related size effects are studied in Section 4.4. Finally, in Section 4.5, the effect of twin volume fraction is investigated.
### Problem setup and material parameters
The purpose of our computational study is to conduct a detailed analysis of the macroscopic M-MM interface in a CuAlNi single crystal (see Fig. 1) by using the phase-field model presented in Section 3. Within such a macroscopic interface, a microstructured transition layer (of some finite width) is formed in which the local incompatibility between the (macroscopically) homogeneous phases of single martensite variant and twinned martensite is accommodated by elastic strains. It is assumed in the present study that this transition layer is morphologically periodic along the M-MM interface with the period being the twin spacing \(h\) containing a twin pair. Meanwhile, outside of the transition layer and far away from the M-MM interface, the elastic micro-strain energy and the related stresses are expected to tend to zero. Accordingly, in the finite-element simulations, the domain under study is chosen to be sufficiently long in the direction parallel to the twin interfaces, i.e., a geometrical aspect ratio of at least \(2L/h=40\) is selected, where \(2L\) denotes the height of the domain, and with the periodic boundary conditions enforced at the corresponding edges, see Fig. 2(a). It is noteworthy that our computational problem is closely related to that of Tuma and Stupkiewicz [14] on the austenite-twinned martensite interfaces, see also [10] for the related sharp-interface modeling study.
Nevertheless, our simulations revealed that the far-field elastic micro-strain energy is non-zero, indicating that the elastic strains (and hence energy) are not confined to a transition layer along the M-MM interface. Therefore, as will be elaborated in Sections 4.2 and 4.3, corrections ought to be made in order to subtract the far-field energy contributions from the energy of the macroscopic interface, as accomplished in [43]. It should be pointed out that the far-field energy contributions diminish with increasing geometrical aspect ratio \(2L/h\). However, attaining zero far-field energy contributions, as confirmed by our auxiliary simulations, would require a very long computational domain, which, due to the computational restrictions, is not feasible.
As discussed in Section 2, the twinning plane normal \(\mathbf{l}\) and the M-MM interface normal \(\mathbf{m}\) can be obtained from the crystallographic theory. Accordingly, the domain under study is oriented such that the problem refers to a plane that contains both \(\mathbf{l}\) and \(\mathbf{m}\), as shown in Fig. 2(a). This means that the global \(x_{1}\) axis aligns with \(\mathbf{l}\) and the global \(x_{2}\) axis deviates from \(\mathbf{m}\) by a characteristic angle \(\theta\). As a result, the domain takes the shape of a parallelogram, and the angle \(\theta\) measures the deviation of the parallelogram from a rectangular shape. A generalized plane strain condition is considered in the simulations, which implies that, while the problem is independent of the out-of-plane spatial dimension, it accounts for a non-zero out-of-plane displacement, e.g., [10; 14].
Figure 2: (a) The setup of the problem and the initial conditions, and (b) a schematic illustration of the nominal and effective (energy-minimizing) macroscopic M–MM interfaces. In panel (a), the sketch in the middle depicts the full periodic domain and the sketch on the right depicts the actual computational domain used in the simulations. To enhance the clarity of the sketches, the aspect ratio of the objects has been considerably decreased beyond their actual proportions.
A deformation-controlled loading is applied through prescribing a constant average (overall) deformation gradient \(\bar{\bf F}\). Subsequently, the microstructure is allowed to attain a steady (equilibrium) state (i.e., the minimum of the total free energy is obtained through a viscous evolution of the microstructure). This state is then taken as the subject of the analysis. The average deformation gradient \(\bar{\bf F}\) is defined in the following form
\[\bar{\bf F}=\kappa^{0}{\bf F}_{1}+(1-\kappa^{0})(\lambda^{0}{\bf F}_{2}+(1- \lambda^{0}){\bf F}_{3}), \tag{13}\]
with the individual deformation gradients \({\bf F}_{1}\), \({\bf F}_{2}\) and \({\bf F}_{3}\) being equal to
\[{\bf F}_{1}={\bf U}_{\rm A},\quad{\bf F}_{2}=\hat{\bf R}{\bf R}{\bf U}_{\rm B },\quad{\bf F}_{3}=\hat{\bf R}{\bf U}_{\rm A}. \tag{14}\]
Here, \({\bf U}_{\rm A}={\bf U}_{1}\) and \({\bf U}_{\rm B}={\bf U}_{3}\), see Eq. (3), represent the transformation stretch tensors of the two martensite variants involved, and the rotations \(\hat{\bf R}\) and \({\bf R}\) come from the crystallographic theory, see Eq. (5). Accordingly, the average deformation gradient \(\bar{\bf F}\) corresponds to (theoretically) stress-free conditions. Eq. (13) involves two volume fractions, namely \(\lambda^{0}\) and \(\kappa^{0}\) (referred to as 'nominal' volume fractions in the sequel). The former controls the relative twin volume fraction and is an input in the crystallographic theory equations, cf. Eq. (2), while the latter controls the overall volume fraction of the single martensite and twinned martensite regions within the computational domain and is adjusted such that the domain of twinned martensite is sufficiently large to accommodate a rather long needle-shaped microstructure. Note that the initial state of the system is set by prescribing the order parameter \(\phi\) in accordance with the nominal volume fractions \(\lambda^{0}\) and \(\kappa^{0}\), see Fig. 2(a).
Our simulations revealed negligible discrepancies between the results of type-I and type-II twins. As a consequence, our primary focus in this study is on the analysis of type-I twins, while the striking resemblance between the simulation results of type-I and type-II twins is illustrated in Appendix A. Accordingly, the mixing rule (7) is formulated in terms of the type-I solution of the twinning equation, as given in Eq. (4).
The computational domain is discretized by using a uniform finite-element mesh (unless stated otherwise). The size of the elements \(d\) is set such that the mesh is fine enough to properly resolve the interfaces and to capture the subtle features of the resulting microstructure. More specifically, a ratio of approximately 5 is considered between \(\pi\ell\) and \(d\), where the former is the theoretical interface thickness. Periodic boundary conditions are enforced on both the displacement field \(\mathbf{u}\) and the order parameter \(\phi\). To reduce the computational cost, the two-fold rotational symmetry of the microstructure about the central point (see point 0 in Fig. 2(a)) is exploited, so that only half of the domain (of the size \(h\times L\)) is computed. Accordingly, the anti-periodicity of the displacement field \(\mathbf{u}\) and the symmetry of the order parameter \(\phi\) with respect to the point 0 are enforced at the bottom edge, and similarly at the top edge.
The following material parameters are used in all the simulations. The anisotropic elastic constants of orthorhombic martensite, namely \(c_{11}=189\), \(c_{22}=141\), \(c_{33}=205\), \(c_{44}=54.9\), \(c_{55}=19.7\), \(c_{66}=62.6\), \(c_{12}=124\), \(c_{13}=45.5\), \(c_{23}=115\) (all in GPa), are adopted from the available literature data [44; 45]. The interfacial energy density is selected as \(\gamma_{\mathrm{tw}}=0.02\,\mathrm{J/m^{2}}\), see, e.g., [14]. Finally, the mobility parameter \(m\) takes the value of \(m=1\) (MPa s)\({}^{-1}\). Note that our analysis is limited to the steady-state microstructure, and not its evolution process. Therefore, the mobility parameter \(m\) acts merely as a regularization parameter and its value does not affect the final results.
### Quantitative description of the microstructure
Throughout the analysis, in addition to the examination of morphological features of the predicted microstructures, we employ a set of quantitative measures to characterize the microstructures and compare them across different cases. The selected measures are established based on the following averaging operations,
\[\left\langle\cdot\right\rangle=\left\langle\cdot\right\rangle\big{|}_{\eta}= \frac{1}{w}\int_{-w/2}^{w/2}\left(\cdot\right)\,\mathrm{d}\xi,\quad\left\{ \cdot\right\}=\frac{1}{L}\int_{0}^{L}\left\langle\cdot\right\rangle\big{|}_{ \eta}\,\mathrm{d}\eta=\frac{1}{wL}\int_{0}^{L}\left(\int_{-w/2}^{w/2}\left( \cdot\right)\,\mathrm{d}\xi\right)\mathrm{d}\eta, \tag{15}\]
see Fig. 2(a) for the definition of \(\xi\) and \(\eta\) coordinates. Accordingly, through the width-averaging operation \(\left\langle\cdot\right\rangle\), the average order parameter \(\left\langle\phi\right\rangle\), the integrated elastic strain energy \(h\langle F_{\mathrm{el}}\rangle\), cf. Eq. (10), and the integrated interfacial energy \(h\langle F_{\mathrm{int}}\rangle\), cf. Eq. (11), are obtained. Note that these averages can be evaluated at arbitrary height, thus \(\left\langle\cdot\right\rangle=\left\langle\cdot\right\rangle\big{|}_{\eta}\). At the same time, the respective overall quantities are determined via the volume-averaging operation \(\left\{\cdot\right\}\), namely the overall order parameter \(\left\{\phi\right\}\), the total elastic strain energy \(\mathcal{F}_{\mathrm{el}}=hL\{F_{\mathrm{el}}\}\) and the total interfacial energy \(\mathcal{F}_{\mathrm{int}}=hL\{F_{\mathrm{int}}\}\).
In order to effectively characterize the overall elastic micro-strain energy of the macroscopic M-MM interface, we define the energy-based measures \(\gamma_{\mathrm{el}}^{\mathrm{tot}}\) and \(\Gamma_{\mathrm{el}}^{\mathrm{tot}}\) as
\[\gamma_{\mathrm{el}}^{\mathrm{tot}}=\frac{\mathcal{F}_{\mathrm{el}}}{w},\quad \Gamma_{\mathrm{el}}^{\mathrm{tot}}=\frac{\gamma_{\mathrm{el}}^{\mathrm{tot}} }{h}, \tag{16}\]
where \(\gamma_{\mathrm{el}}^{\mathrm{tot}}\) represents the elastic micro-strain energy per unit area of the M-MM interface, while the energy factor \(\Gamma_{\mathrm{el}}^{\mathrm{tot}}\) measures the dependence of the energy on the microstructure. Both \(\gamma_{\mathrm{el}}^{\mathrm{tot}}\) and \(\Gamma_{\mathrm{el}}^{\mathrm{tot}}\) provide a quantification of the elastic strain energy of the M-MM interface; however, the energy factor \(\Gamma_{\mathrm{el}}^{\mathrm{tot}}\) is of particular importance, as it filters out the first-order dependence of \(\gamma_{\mathrm{el}}^{\mathrm{tot}}\) on the twin spacing \(h\), and thus in this sense it can be considered size-independent [10; 14].
The results of our simulations reveal the presence of non-zero values of the elastic strain energy \(h\langle F_{\mathrm{el}}\rangle\) far from the M-MM interface, i.e., at the upper and lower boundaries of the computational domain at \(\eta=0\) and \(\eta=L\), see, for instance, Fig. 4 in Section 4.3 and the associated discussion. It is therefore desirable to mitigate these far-field energy contributions by correcting the elastic micro-strain energy measures. To this end, we first define the effective volume fractions \(\lambda^{*}\) and
\(\kappa^{*}\) as follows
\[\lambda^{*}=\left\langle\phi\right\rangle|_{\eta=0},\quad\kappa^{*}=1-\frac{\{ \phi\}}{\lambda^{*}}. \tag{17}\]
Note that while \(\kappa^{0}\), cf. Eq. (13), specifies the nominal position of the M-MM interface, \(\kappa^{*}\) specifies the corresponding effective (actual) position, as delineated in Fig. 2(b). Next, the 'corrected' total elastic strain energy \(\mathcal{F}_{\mathrm{el}}^{\mathrm{corr}}\) is calculated upon subtracting the contributions of the far-field energies \(\mathcal{F}_{\mathrm{el}}^{\infty,0}\) and \(\mathcal{F}_{\mathrm{el}}^{\infty,L}\) from the total elastic strain energy \(\mathcal{F}_{\mathrm{el}}\), see [43], viz.,
\[\mathcal{F}_{\mathrm{el}}^{\mathrm{corr}}=\mathcal{F}_{\mathrm{el}}-\mathcal{ F}_{\mathrm{el}}^{\infty,0}-\mathcal{F}_{\mathrm{el}}^{\infty,L}, \tag{18}\]
where
\[\mathcal{F}_{\mathrm{el}}^{\infty,0}=(1-\kappa^{*})hL\langle F_{\mathrm{el}} \rangle|_{\eta=0},\quad\mathcal{F}_{\mathrm{el}}^{\infty,L}=\kappa^{*}hL \langle F_{\mathrm{el}}\rangle|_{\eta=L}. \tag{19}\]
Note that the far-field energies \(\mathcal{F}_{\mathrm{el}}^{\infty,0}\) and \(\mathcal{F}_{\mathrm{el}}^{\infty,L}\) can also be computed by using the nominal volume fraction \(\kappa^{0}\) (by simply substituting \(\kappa^{0}\) for \(\kappa^{*}\)). This aspect is discussed in the subsequent sections. Finally, the new energy-based measures \(\gamma_{\mathrm{el}}\) and \(\Gamma_{\mathrm{el}}\) are defined as
\[\gamma_{\mathrm{el}}=\frac{\mathcal{F}_{\mathrm{el}}^{\mathrm{corr}}}{w}, \quad\Gamma_{\mathrm{el}}=\frac{\gamma_{\mathrm{el}}}{h}. \tag{20}\]
An additional energy-based measure that is consistently used to assess the macroscopic M-MM interface is the excess interfacial energy density \(\gamma_{\mathrm{int}}^{\mathrm{exs}}\) defined as
\[\gamma_{\mathrm{int}}^{\mathrm{exs}}=\frac{\mathcal{F}_{\mathrm{int}}^{ \mathrm{exs}}}{w},\quad\mathcal{F}_{\mathrm{int}}^{\mathrm{exs}}=\mathcal{F} _{\mathrm{int}}-\mathcal{F}_{\mathrm{int}}^{\mathrm{ref}}. \tag{21}\]
Here, \(\mathcal{F}_{\mathrm{int}}^{\mathrm{ref}}\) represents the total interfacial energy related to the nominal (needle-less) M-MM interface (i.e., when only the two parallel planar twin interfaces are accounted for) and is calculated either based on the effective volume fraction \(\kappa^{*}\), i.e., \(\mathcal{F}_{\mathrm{int}}^{\mathrm{ref}}=2(1-\kappa^{*})L\gamma_{\mathrm{tw}}\), or based on the nominal volume fraction \(\kappa^{0}\), i.e., \(\mathcal{F}_{\mathrm{int}}^{\mathrm{ref}}=2(1-\kappa^{0})L\gamma_{\mathrm{tw}}\). Recall that \(\gamma_{\mathrm{tw}}\) is the interfacial energy density associated with the local martensite-martensite interface, see Eq. (11). In fact, \(\gamma_{\mathrm{int}}^{\mathrm{exs}}\) in Eq. (21) represents the extra interfacial energy density resulting from the difference between the predicted microstructure and the needle-less microstructure, see the schematic representation of the respective M-MM interfaces in Fig. 2(b).
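In post-processing, the measures of Eqs. (15)-(21) reduce to simple averages of the discrete fields. A minimal sketch, assuming the fields are sampled on a uniform grid aligned with \((\xi,\eta)\) (all array and argument names are hypothetical):

```python
import numpy as np

def interface_measures(phi, F_el, F_int, h, L, w, gamma_tw):
    """Energy-based measures of the M-MM interface, Eqs. (15)-(21).

    phi, F_el, F_int: 2D arrays of shape (n_eta, n_xi) sampled on a uniform
    grid, with xi along the width w and eta from 0 (row 0) to L (row -1).
    """
    width_avg = lambda f: f.mean(axis=1)          # <.>|_eta,  Eq. (15)
    vol_avg = lambda f: f.mean()                  # {.},       Eq. (15)

    lam_eff = width_avg(phi)[0]                   # lambda*,   Eq. (17)
    kap_eff = 1.0 - vol_avg(phi) / lam_eff        # kappa*,    Eq. (17)

    # Far-field contributions and corrected elastic energy, Eqs. (18)-(19).
    F_el_tot = h * L * vol_avg(F_el)
    F_el_far = (1 - kap_eff) * h * L * width_avg(F_el)[0] \
             + kap_eff * h * L * width_avg(F_el)[-1]
    gamma_el = (F_el_tot - F_el_far) / w          # Eq. (20)
    Gamma_el = gamma_el / h                       # Eq. (20)

    # Excess interfacial energy density, Eq. (21), based on kappa*.
    F_int_ref = 2 * (1 - kap_eff) * L * gamma_tw
    gamma_int_exs = (h * L * vol_avg(F_int) - F_int_ref) / w

    return lam_eff, kap_eff, gamma_el, Gamma_el, gamma_int_exs
```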
### Modeling M-MM interface: a representative study
In this section, we present the analysis of a representative study, with the aim to elucidate the individual characteristics of the simulated microstructures, as a preliminary step prior to examining their collective macroscopic responses. The computational domain considered in this study has dimensions of \(h\times L=70\times 1400\) nm\({}^{2}\), the selected nominal volume fractions are \(\kappa^{0}=0.4\) and \(\lambda^{0}=0.3\), and the interface thickness parameter is adopted as \(\ell=1\) nm. The computational
domain is discretized into approximately \(250\,000\) elements of the size \(d=0.625\) nm, thus resulting in approximately \(2.5\) million degrees of freedom.
Fig. 3 depicts the simulation results in terms of the steady-state microstructure. The microstructure is visualized in both the undeformed and deformed configurations, and is represented by the spatial distribution of the order parameter \(\phi\) and of the von Mises stress. As shown in Fig. 3, the microstructure in its steady state has developed a distinctive needle-shaped domain of martensite variant B. While the needle appears to be straight in the undeformed configuration, it exhibits a visible bending in the deformed configuration with a bending angle of approximately \(5^{\circ}\) (measured at the needle apex) with respect to the longitudinal axis (global \(x_{2}\) axis). The observed bending of the needle conforms with the experimental observations, e.g., [19, 46], as well as with the previous modeling analyses, e.g., [19, 25, 27, 29]. Another noteworthy feature of the microstructure pertains to the excessive diffuseness of the needle apex. Our auxiliary simulations, aimed at investigating the effect of the interface thickness parameter \(\ell\) on the microstructure, confirmed that such excessive diffuseness does not represent a physical characteristic of the microstructure, but rather a numerical artifact arising from the phase-field modeling framework. It was observed that reducing \(\ell\) results in a significantly less diffuse needle apex. Nevertheless, it is important to note that a smaller \(\ell\) requires a relatively finer finite-element mesh, which renders the computations excessively costly, and thus has not been considered in our main simulations.
From the distribution of the von Mises stress, we observe that the stress is predominantly concentrated in the areas surrounding the curved interfaces (within variant A) close to the needle apex. Conversely, within the needle itself (within variant B), the stress is considerably lower. Interestingly, our von Mises stress distribution shows qualitative (and to some extent quantitative) similarities with the stress distribution obtained by Seiner et al. [19], in particular, the stress distribution related to the 'optimal' case that results from the minimization of elastic energy, see Fig. 8 therein.
We continue the discussion by examining the longitudinal profiles of the integrated elastic strain energy \(h\langle F_{\text{el}}\rangle\), the integrated interfacial energy \(h\langle F_{\text{int}}\rangle\) and the average order parameter \(\langle\phi\rangle\), see Fig. 4. It is immediately apparent that the elastic strain energy reaches its peak within the region occupied by the needle apex and then decays rapidly away from this region. Contrary to the notion that the elastic micro-strain energy vanishes far away from the M-MM interface, we observe that the far-field elastic strain energy is non-zero, being equal to \(h\langle F_{\text{el}}\rangle\big{|}_{\eta=0}=2.1\times 10^{-3}\) J/m\({}^{2}\) and \(h\langle F_{\text{el}}\rangle\big{|}_{\eta=L}=1.2\times 10^{-4}\) J/m\({}^{2}\). Notably, the presence of a non-zero energy contribution at \(\eta=0\) is directly linked to the noticeably higher value of the effective volume fraction \(\lambda^{*}=0.35\) compared to the nominal volume fraction \(\lambda^{0}=0.3\). This, in turn, leads to a deviation between the effective volume fraction \(\kappa^{*}=0.49\) and the nominal volume fraction \(\kappa^{0}=0.4\), as highlighted by
the dashed and solid white lines overlaid on the undeformed microstructure in Fig. 3. The underlying cause of this omnipresent discrepancy can be sought in the dominant role of the interfacial energy, stemming from the limited size of the computational domain, and will be further discussed in Section 4.4.
As discussed previously in Sections 4.1 and 4.2, we opt to mitigate the far-field energy contributions by introducing the corrected elastic micro-strain energy measures \(\gamma_{\mathrm{el}}\) and \(\Gamma_{\mathrm{el}}\), see Eq. (20). As an illustration of this correction, the corrected profile of the integrated elastic strain energy \(h\langle F_{\mathrm{el}}\rangle\) is depicted in Fig. 4(a). This correction is accomplished by dividing the curve into two segments using the effective volume fraction \(\kappa^{*}\), where the first segment spans from \(\eta=0\) to \(\eta=(1-\kappa^{*})L\), and the second segment spans from \(\eta=(1-\kappa^{*})L\) to \(\eta=L\). Subsequently, we subtract the far-field elastic strain energy \(h\langle F_{\mathrm{el}}\rangle\big{|}_{\eta=0}=2.1\times 10^{-3}\) J/m\({}^{2}\) from the first segment and \(h\langle F_{\mathrm{el}}\rangle\big{|}_{\eta=L}=1.2\times 10^{-4}\) J/m\({}^{2}\) from the second segment. Note that the area beneath this corrected curve is equal to \(\mathcal{F}_{\mathrm{el}}^{\mathrm{corr}}\), cf. Eq. (18). Alternatively, this adjustment can be also made by employing the nominal volume fraction \(\kappa^{0}\). The division of the curve into two segments would then take place at a different position, as shown by the arrow in Fig. 4(a).
Our discussion in this section concludes by drawing attention to the trend of the interfacial energy \(h\langle F_{\mathrm{int}}\rangle\). Specifically, \(h\langle F_{\mathrm{int}}\rangle=0.0406\) J/m\({}^{2}\) remains almost constant throughout the entire length of the twinned martensite domain. Considering that two martensite-martensite interfaces are present, the resulting interfacial energy density amounts to \(h\langle F_{\text{int}}\rangle/2=0.0203\) J/m\({}^{2}\), which is only marginally higher than the interfacial energy density \(\gamma_{\text{tw}}=0.02\) J/m\({}^{2}\) used in the simulations. This discrepancy is likely due to the finite resolution of the interfaces in our simulation, leading to inexact integration of the interfacial energy.

Figure 3: Simulation results corresponding to the representative study: the steady-state microstructure containing a needle-shaped domain of variant B in both the undeformed and deformed configurations. The color contours represent the fields of the order parameter \(\phi\) and von Mises stress. The parallel dashed lines overlaid on the undeformed microstructure delineate the trajectory of the planar interfaces extended up to the effective position of the M–MM interface, determined based on \(\kappa^{*}\), while the inclined solid line indicates the nominal position of the M–MM interface, determined based on \(\kappa^{0}\). The place at which the dashed lines and the actual interfaces begin to diverge indicates the onset of the wedge-shaped region of the microstructure.
### Size effects
This section aims to investigate the impact of twin spacing \(h\) on the microstructure and the energy-based characteristics of the M-MM interface, and thereby to elucidate the related size effects. A series of simulations is carried out for twin spacing \(h\) ranging from \(h=20\) nm to \(h=160\) nm. Note that the geometrical aspect ratio is the same in all cases, \(L/h=20\). As discussed in Section 4.1, in order to maintain a reasonable resolution of the predicted microstructures, the ratio of \(\pi\ell/d=5\) (recall that \(\pi\ell\) refers to the theoretical interface thickness and \(d\) to the element size) is kept constant throughout all simulations. As such, it is not computationally feasible to perform all the simulations using a fixed interface thickness parameter \(\ell\). Instead, a common strategy, as adopted in previous studies [14; 34], is utilized in which \(\ell\) and \(d\) are proportionally increased, and this facilitates the extension of our analysis over a broader range of twin spacing. As will be shown, the choice of \(\ell\) has a small influence on the simulation results, confirming the validity of our analysis outcomes.
Prior to a quantitative examination of the results, it should be pointed out that within the realm of phase-field modeling, the size-dependence of the microstructure stems from the inherent length-scale of the interfacial energy and manifests itself as a result of the competition between the elastic strain energy and interfacial energy. Indeed, this fundamental premise underpins all the size-dependent characteristics observed in this study. Specifically, as the twin spacing \(h\) is increased, the balance of energy shifts from the interfacial energy to the elastic strain energy, and hence the minimization of the total energy gives rise to needle-shaped microstructures with relatively longer wedges. On the contrary, for small \(h\), since the interfacial energy is dominant, it is energetically favorable for the microstructure to develop a smaller area of interfaces, i.e., a relatively short domain of needle-shaped martensite. This is, however, achieved at the cost of an increase in the elastic strain energy. In particular, since the total volume fraction of variant B (quantified by \(\{\phi\}\)) is indirectly constrained by the displacement boundary conditions to be close to the nominal one (i.e., \(\{\phi\}=(1-\kappa^{*})\lambda^{*}\approx(1-\kappa^{0})\lambda^{0}\)), the shortening of the needle results in an increase in the twin volume fraction, hence \(\lambda^{*}>\lambda^{0}\). This is then accommodated by the elastic strain energy that does not vanish far from the M-MM interface.

Figure 4: Simulation results corresponding to the representative study: the longitudinal profiles of (a) the integrated elastic strain energy \(h\langle F_{\text{el}}\rangle\), (b) the integrated interfacial energy \(h\langle F_{\text{int}}\rangle\), and (c) the average order parameter \(\langle\phi\rangle\). The yellowish curve in panel (a) refers to the case where the profile of the elastic strain energy is corrected by means of subtracting the far-field energy contributions. The arrow in panel (a) indicates the position where the division of the curve into two segments (see the text) would take place if the nominal volume fraction \(\kappa^{0}\) were used for the correction.
Fig. 5 depicts the steady-state microstructures for different twin spacing \(h\), allowing for a clear observation of the size-dependent microstructural changes described above, in particular, concerning the discrepancy between the effective and nominal volume fractions. A quantitative examination of the microstructures (see Fig. 6(a)) reveals that this discrepancy is, as expected, more pronounced for smaller \(h\). As \(h\) is increased, both \(\lambda^{*}\) and \(\kappa^{*}\) gradually tend towards their corresponding nominal values. A direct outcome of the microstructural changes depicted and quantified in Figs. 5 and 6(a) is reflected in the plot of excess interfacial energy density \(\gamma_{\rm int}^{\rm exs}\) (Fig. 6(b)), which is calculated once based on the nominal volume fraction \(\kappa^{0}\) and once based on the effective volume fraction \(\kappa^{*}\), see
Figure 5: The steady-state microstructure (represented in the undeformed configuration) for different twin spacing \(h\). The parallel dashed lines indicate the trajectory of the planar interfaces and are extended up to the effective position of the M–MM interface (specified by \(\kappa^{*}\)), while the inclined solid lines indicate the position of the nominal M–MM interface (specified by \(\kappa^{0}\)).
Eq. (21) and the related discussion. It follows that both graphs in Fig. 6 exhibit a similar monotonically increasing trend and that they are visibly distant, notably for smaller \(h\). The monotonically increasing trend of \(\gamma_{\rm int}^{\rm exs}\) reflects the elongation of the wedge-shaped region of the microstructure as \(h\) increases, and thus the larger interfacial energy associated with it.
Fig. 7 provides a demonstration of the size effects in terms of the elastic micro-strain energy measures \(\gamma_{\rm el}\) and \(\Gamma_{\rm el}\). It can be observed that while \(\gamma_{\rm el}\) exhibits a roughly linearly increasing trend, the energy factor \(\Gamma_{\rm el}\) exhibits a nonlinearly decreasing trend, seemingly an opposing behaviour to that of the excess interfacial energy density \(\gamma_{\rm int}^{\rm exs}\) shown in Fig. 6. The diminishing trend of \(\Gamma_{\rm el}\) is a clear indication of the shift in the balance of energy as \(h\) varies and highlights the interplay between the interfacial energy and elastic strain energy contributions. Although \(\Gamma_{\rm el}\) appears to converge towards a limit value, similar to other curves in Fig. 6, this limit value remains unattainable within the range of \(h\) explored in this study.
It is worth highlighting that the range of the values of the energy factor \(\Gamma_{\rm el}\) observed in Fig. 7, from 2 MJ/m\({}^{3}\) to nearly 4 MJ/m\({}^{3}\), is consistent with the findings of Seiner et al. [19]. Specifically, their 'optimal' microstructure exhibits a value of 2.9 MJ/m\({}^{3}\), and their 'experimental' microstructure exhibits a value of 4 MJ/m\({}^{3}\) (we have determined these values based on the overall elastic strain energy and domain geometry reported therein). Indeed, upon extrapolating the simulation data to encompass larger twin spacings, it becomes evident that our energy factor \(\Gamma_{\rm el}\) has a limit value only marginally lower than 2 MJ/m\({}^{3}\). This substantiates the relevance of the quantitative comparison made with the data of [19] in which a twin spacing of 10 \(\mu\)m was used.
In Figs. 6 and 7, we have presented the collective responses showing the size effects. In order
Figure 6: The impact of twin spacing \(h\) on (a) the effective volume fractions \(\lambda^{*}\) and \(\kappa^{*}\), and on (b) the excess interfacial energy density \(\gamma_{\rm int}^{\rm exs}\). The solid and dashed curves in panel (b) correspond to cases where \(\gamma_{\rm int}^{\rm exs}\) is determined based on the effective volume fraction \(\kappa^{*}\) and nominal volume fraction \(\kappa^{0}\), respectively, cf. Eq. (21).
to gain a deeper understanding of the effect of twin spacing \(h\), representative individual profiles of average elastic strain energy \(\langle F_{\rm el}\rangle\) and order parameter \(\langle\phi\rangle\) are reported in Fig. 8.
This section concludes with a discussion of the impact of the interface thickness parameter \(\ell\) on the simulation results, as it is essential to ensure that the choice of \(\ell\) does not compromise the validity of the analysis outcomes. While this can be partly discerned from Figs. 6 and 7, further investigation is deemed necessary. Fig. 9 presents magnified views of the needle-shaped martensite domains obtained for a fixed twin spacing \(h=70\) nm but for various interface thickness parameters \(\ell\). As expected, the apex of the needle becomes more diffuse as \(\ell\) increases (Fig. 9(a)). However, no significant morphological changes are evident. This is also confirmed by the magnified views of the corresponding trimmed microstructure visualizations (Fig. 9(b)), in which the diffuse interfaces are excluded by representing variant B via trimmed order parameter \(\phi\geq 0.5\) and displaying it using a single color.
Furthermore, Fig. 10 depicts the individual profiles of the elastic strain energy \(h\langle F_{\rm el}\rangle\) and interfacial energy \(h\langle F_{\rm int}\rangle\) for various \(\ell\). Except for some visible effects in the vicinity of the energy peaks, which is expected due to the change in the diffuseness of the needle apex, the profiles are practically insensitive to the choice of \(\ell\). It should be mentioned that the same conclusion holds for the profile of the order parameter \(\langle\phi\rangle\), which is not included in Fig. 10.
Figure 7: The graphs of the elastic micro-strain energy \(\gamma_{\rm el}\) (a) and the corresponding energy factor \(\Gamma_{\rm el}\) (b) as a function of twin spacing \(h\). The solid and dashed curves refer to cases where the calculation of the elastic strain energy measures \(\gamma_{\rm el}\) and \(\Gamma_{\rm el}\) is done based on the effective volume fraction \(\kappa^{*}\) and nominal volume fraction \(\kappa^{0}\), respectively, cf. Eqs. (18) and (19). Notice that, apart from the initial segment where \(h\) is relatively small, the two curves exhibit a reasonably good agreement.
Figure 8: The impact of twin spacing \(h\) on the profile of the average elastic strain energy \(\langle F_{\rm el}\rangle\) (first row) and average order parameter \(\langle\phi\rangle\) (second row) for three different interface thickness parameters \(\ell\), namely (a) \(\ell=1\) nm, (b) \(\ell=1.5\) nm, and (c) \(\ell=2.5\) nm. The vertical dashed lines in the first row indicate the position of the actual M–MM interface, which is specified in terms of the effective volume fraction \(\kappa^{*}\).
### Effect of twin volume fraction
The twin volume fraction \(\lambda^{0}\) is regarded as a crucial input parameter in the analysis of the M-MM interface, as its selection has profound implications on the microstructure and the related quantitative characteristics. With that in mind, in this section, we seek to gain insight into the effect of twin volume fraction \(\lambda^{0}\) on the simulation outcomes. To this end, simulations are conducted by varying \(\lambda^{0}\) within the range of \(0.1\) to \(0.9\) with an increment of \(0.1\). A computational domain with dimensions of \(h\times L=50\times 1250\) nm\({}^{2}\) is selected, and a nominal volume fraction of \(\kappa^{0}=0.2\) is adopted, along with an interface thickness parameter of \(\ell=0.5\) nm. The latter yields microstructures with less diffuse interfaces compared to those presented in preceding sections. To maintain the same microstructure resolution as before, i.e., to keep the ratio of \(\pi\ell/d=5\), without a significant increase of the computational cost, a non-uniform finite-element mesh (non-uniform only in the longitudinal direction) is employed. More specifically, the mesh is finer in the vicinity of the needle apex (where the microstructure is more susceptible to morphological changes) and comprises nearly equiaxed elements of the size \(d=0.3125\) nm, while it is coarser sufficiently far from the needle apex and comprises elongated elements. It should be remarked that our primary observation was that the morphology of the microstructure changes considerably within the \(\lambda^{0}\) range of \(0.5\) to \(0.7\). Consequently, to augment the analysis, two additional simulations are performed for \(\lambda^{0}=0.55\) and \(\lambda^{0}=0.65\).
The steady-state microstructures for different twin volume fractions \(\lambda^{0}\) are compared in Fig. 11. Notably, two distinct families of microstructures emerge within the range of \(\lambda^{0}\) investigated. For \(\lambda^{0}\leq 0.5\), the microstructure exhibits a single needle of variant B, which elongates with
Figure 9: Magnified views of the needle for varying interface thickness parameter \(\ell\), with a fixed computational domain of \(h\times L=70\times 1400\) nm\({}^{2}\). In panel (b), the microstructures are represented by the trimmed order parameter \(\phi\geq 0.5\) displayed by a single color, thus excluding the diffuse interfaces.
Figure 10: The effect of interface thickness parameter \(\ell\) on the profile of the elastic strain energy \(h\langle F_{\mathrm{el}}\rangle\) (first row) and on the profile of the interfacial energy \(h\langle F_{\mathrm{int}}\rangle\) (second row) for different twin spacings, namely (a) \(h=70\) nm, (b) \(h=100\) nm, and (c) \(h=130\) nm.
Figure 11: The steady-state microstructure for various nominal twin volume fractions \(\lambda^{0}\): (a) the microstructures in the deformed configuration for \(\lambda^{0}\) values ranging from \(0.2\) to \(0.9\), and (b) magnified views of the periodically repeated microstructures (in the undeformed configuration) for \(\lambda^{0}\) values of \(0.6\), \(0.7\) and \(0.8\). The microstructures for \(\lambda^{0}=0.6\) and \(\lambda^{0}=0.65\) exhibit some excessive diffuseness close to the needle apex, which can be circumvented by using a smaller interface thickness parameter \(\ell\). Notice that, for space reasons, a portion of the microstructures in panel (a) is clipped from the left. As such, the microstructure for \(\lambda^{0}=0.1\), which possesses a relatively shorter needle but otherwise is similar to that for \(\lambda^{0}=0.2\), has not been included.
increasing \(\lambda^{0}\). On the other hand, for \(\lambda^{0}\geq 0.6\), variant A adopts the needle-shaped morphology and this is accompanied by the formation of an apparent interface between the domains of twinned martensite and single martensite. Unlike the microstructures for \(\lambda^{0}\leq 0.5\), the height of the needle is only minimally influenced by \(\lambda^{0}\). An intriguing observation is that the two families of microstructures are mediated by a transitional microstructure at \(\lambda^{0}=0.55\), which is clearly distinct from the microstructure of either family. Specifically, in this transitional microstructure, the needle-shaped domain of variant B exhibits a branching morphology, which arises as a means to reduce the elastic strain energy of the system. Similar observations of branching, as a spontaneous energy-minimizing mechanism, have been made in other phase-field modeling investigations [14, 27]. A closer examination of the microstructures for \(\lambda^{0}=0.6\) and \(\lambda^{0}=0.65\) reveals the occurrence of branching also in these cases, as can be seen clearly in the periodically repeated microstructure for \(\lambda^{0}=0.6\) in Fig. 11(b). Another noteworthy observation from the microstructure visualizations in Fig. 11(b) is that the apparent interface between the twinned martensite and single martensite domains is not perfectly straight and takes on a stepped appearance.
Fig. 12 summarizes the effect of the twin volume fraction \(\lambda^{0}\) on the macroscopic characteristics of the M-MM interface. Here, the primary observation pertains to the fact that the nominal and effective volume fractions tend towards each other as \(\lambda^{0}\) increases, see Fig. 12(a). In particular, for the second family of microstructures, i.e., for \(\lambda^{0}\geq 0.6\), the graphs of the effective and nominal volume fractions almost overlap. For the first family of microstructures, i.e., for \(\lambda^{0}\leq 0.5\), however, there exists a visible discrepancy (as already discussed in Section 4.4 for \(\lambda^{0}=0.3\)) which diminishes as \(\lambda^{0}\) increases. Consequently, the graphs of the excess interfacial energy density \(\gamma_{\rm int}^{\rm exs}\) and elastic micro-strain energy \(\gamma_{\rm el}\) (and thus also \(\Gamma_{\rm el}\)) (Fig. 12(b,c)) exhibit distinctive behaviours for the two families of microstructures, and thereby, have been depicted by separate curves. Both \(\gamma_{\rm int}^{\rm exs}\) and \(\gamma_{\rm el}\) energy measures have a monotonically increasing trend from both the right and left directions, i.e., as \(\lambda^{0}\) increases for the first family and as \(\lambda^{0}\) decreases for the second family, and peak at specific \(\lambda^{0}\). More precisely, the peak for excess interfacial energy density \(\gamma_{\rm int}^{\rm exs}\) occurs at \(\lambda^{0}=0.55\), associated with the special branching morphology observed, and the peak for elastic micro-strain energy \(\gamma_{\rm el}\) occurs at \(\lambda^{0}=0.6\).
It is worth mentioning that the graph of the elastic micro-strain energy \(\gamma_{\rm el}\) resembles qualitatively the bell-shaped curve proposed by Petryk et al. [43], which describes the elastic micro-strain energy of a generic transition layer linking laminated and homogeneous half-spaces. Their bell-shaped curve exhibits a symmetry with respect to the volume fraction of \(0.5\). Here, such a symmetry is not observed as a result of the substantial differences in the microstructures of the two families.
## 5 Conclusion
We have employed a conventional phase-field approach to model the microstructural features of the transition layer between a single martensite variant and a twinned martensite domain in a CuAlNi single crystal. The most salient feature of the microstructure is the presence of needle-like twins terminating at the interface, which has been successfully reproduced in our simulations; in particular, the bending and tapering of the needles are in qualitative agreement with the experimental findings of Seiner et al. [19]. Our primary objective has been to quantify the energy-based characteristics of the transition layer. In view of this, we have investigated the influence of the twin spacing (size effects) and the twin volume fraction on the interfacial and elastic strain energy measures. The obtained values, particularly for the elastic strain energy factor \(\Gamma_{\mathrm{el}}\) and the stresses, are in a quantitative agreement with those obtained in [19] using a sharp-interface approach. A notable outcome of our analysis is the emergence of branching microstructure for certain twin volume fractions. Also, the microstructures and the energy measures predicted for type-I and type-II twins are found to be surprisingly similar. Our study showed that the phase-field length-scale parameter has only a small impact on the simulation results, confirming the validity of our analysis outcomes.
AcknowledgementThis research was funded in part by the National Science Centre (NCN) in Poland through the Grant No. 2021/43/D/ST8/02555. For the purpose of Open Access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version
Figure 12: The effect of the twin volume fraction \(\lambda^{0}\) on (a) the effective volume fractions \(\lambda^{*}\) and \(\kappa^{*}\), (b) the excess interfacial energy density \(\gamma_{\mathrm{int}}^{\mathrm{ess}}\), and (c) the elastic micro-strain energy measures \(\gamma_{\mathrm{el}}\) and \(\Gamma_{\mathrm{el}}\). In panel (a), the dashed curves indicate the nominal volume fractions \(\lambda^{0}\) and \(\kappa^{0}\). Notice that, in panels (b) and (c), the energy measures are plotted as a function of the effective volume fraction \(\lambda^{*}\). Moreover, the solid and dashed curves refer to cases where the calculations are done based on the effective volume fraction \(\kappa^{*}\) and nominal volume fraction \(\kappa^{0}\), respectively.
arising from this submission.
## Appendix A Effect of twin type
In this appendix, we present the simulation results obtained for type-II twin and compare them with those of type-I twin. Two cases are selected for this analysis, namely the case with the dimensions of \(h\times L=70\times 1400\) nm\({}^{2}\) and the nominal volume fractions of \(\kappa^{0}=0.4\) and \(\lambda^{0}=0.3\) (the representative case discussed in Section 4.3) and the case with the dimensions of \(h\times L=50\times 1250\) nm\({}^{2}\) and the nominal volume fractions of \(\kappa^{0}=0.2\) and \(\lambda^{0}=0.7\) (see the study of the effect of twin volume fraction in Section 4.5). The longitudinal profiles of the integrated elastic strain energy \(h\langle F_{\rm el}\rangle\), the integrated interfacial energy \(h\langle F_{\rm int}\rangle\) and the average order parameter \(\langle\phi\rangle\) for the two twin types are compared in Figs. 13 and 14. The results consistently reveal that the effect of twin type is negligible, as only some truly minor discrepancies can be detected, see the insets in Figs. 13(a) and 14(a).
It is to be remarked that for these additional simulations, the rank-one mixing (7) is redefined in terms of the type-II solution of the twinning equation, i.e., \({\bf F}^{\rm t}={\bf U}_{\rm A}+\phi{\bf a}^{*}\otimes{\bf l}^{*}\), where \({\bf a}^{*}=(-0.0036,0.1691,-0.1921)\) and \({\bf l}^{*}=(0.2282,0.6885,0.6885)\).
## Data availability
Data will be made available on request.
Figure 13: The effect of the twin type on the simulation results for the representative study (\(\lambda^{0}=0.3\), see Section 4.3): (a) the profile of the elastic strain energy \(h\langle F_{\rm el}\rangle\), (b) the profile of the interfacial energy \(h\langle F_{\rm int}\rangle\), and (c) the profile of the average order parameter \(\langle\phi\rangle\).
2301.01312 | Binary black hole spins: model selection with GWTC-3 | The origin of the spins of stellar-mass black holes is still controversial,
and angular momentum transport inside massive stars is one of the main sources
of uncertainty. Here, we apply hierarchical Bayesian inference to derive
constraints on spin models from the 59 most confident binary black hole merger
events in the third gravitational-wave transient catalogue (GWTC-3). We
consider up to five parameters: chirp mass, mass ratio, redshift, effective
spin, and precessing spin. For model selection, we use a set of binary
population synthesis simulations spanning drastically different assumptions for
black hole spins and natal kicks. In particular, our spin models range from
maximal to minimal efficiency of angular momentum transport in stars. We find
that, if we include the precessing spin parameter into our analysis, models
predicting only vanishingly small spins are in tension with GWTC-3 data. On the
other hand, models in which most spins are vanishingly small, but that also
include a sub-population of tidally spun-up black holes are a good match to the
data. Our results show that the precessing spin parameter has a crucial impact
on model selection. | Carole Périgois, Michela Mapelli, Filippo Santoliquido, Yann Bouffanais, Roberta Rufolo | 2023-01-03T19:04:10Z | http://arxiv.org/abs/2301.01312v3 | # Binary black hole spins: model selection with GWTC-3
###### Abstract
The origin of the spins of stellar-mass black holes is still controversial, and angular momentum transport inside massive stars is one of the main sources of uncertainty. Here, we apply hierarchical Bayesian inference to derive constraints on spin models from the 59 most confident binary black hole merger events in the third gravitational-wave transient catalogue (GWTC-3). We consider up to five parameters: chirp mass, mass ratio, redshift, effective spin, and precessing spin. For model selection, we use a set of binary population synthesis simulations spanning drastically different assumptions for black hole spins and natal kicks. In particular, our spin models range from maximal to minimal efficiency of angular momentum transport in stars. We find that, if we include the precessing spin parameter into our analysis, models predicting only vanishingly small spins are in tension with GWTC-3 data. On the other hand, models in which most spins are vanishingly small, but that also include a sub-population of tidally spun-up black holes are a good match to the data. Our results show that the precessing spin parameter has a crucial impact on model selection.
keywords: black hole physics - gravitational waves - binaries: general - stars: black holes
## 1 Introduction
The third observing run (O3) of the Advanced LIGO (Aasi et al., 2015) and Virgo (Acernese et al., 2015) detectors has brought the number of compact binary merger observations up to 90 events with a probability of astrophysical origin \(>0.5\)(Abbott et al., 2019, 2021, 2020). In particular, the 63 confident detections of binary black hole (BBH) mergers (with a false alarm rate FAR\(<0.25\) yr\({}^{-1}\)) lead to more accurate constraints on the mass and spin distribution of these systems (Abbott et al., 2021).
The intrinsic distribution of primary black hole (BH) masses inferred by the LIGO-Virgo-KAGRA collaboration (hereafter, LVK) shows several sub-structures, including a main peak at \(\approx 10\) M\({}_{\odot}\), a secondary peak at \(\approx 30-40\) M\({}_{\odot}\), and a long tail extending up to \(\sim 80\) M\({}_{\odot}\)(e.g., Abbott et al., 2021). The inferred distribution of mass ratios has a strong preference for equal-mass systems, but several BBHs are confidently unequal-mass (e.g., GW190517, Abbott et al., 2020). Focusing on BH spins, we can safely exclude that all BHs are maximally spinning (Farr et al., 2017, 2018; Abbott et al., 2019). Typical spin magnitudes in BBHs are small, with \(\sim 50\%\) of BHs having \(\chi\lesssim 0.3\)(e.g., Wysocki et al., 2019; Abbott et al., 2021), although not all BHs in the LVK sample have zero spin (Roulet and Zaldarriaga, 2019; Miller et al., 2020). For example, GW151226 (Abbott et al., 2016) and GW190517 (Abbott et al., 2021) confidently possess spin. LVK data also support some mild evidence for spin-orbit misalignment (e.g., Tiwari et al., 2018; Abbott et al., 2021, 2022; Callister et al., 2021, 2022).
These results provide crucial insights to understand BBH formation and evolution (e.g., Gerosa et al., 2013; Stevenson et al., 2015; Rodriguez et al., 2016; Stevenson et al., 2017; Talbot and Thrane, 2017; Fishbach and Holz, 2017; Vitale et al., 2017; Zevin et al., 2017; Farr et al., 2018; Barrett et al., 2018; Taylor and Gerosa, 2018; Arca Sedda and Benacquista, 2019; Roulet and Zaldarriaga, 2019; Wysocki et al., 2019; Bouffanais et al., 2021, 2021; Kimball et al., 2020; Baibhav et al., 2020; Arca Sedda et al., 2020; Zevin et al., 2021; Mapelli et al., 2021, 2022). Moreover, the mass and spin of BHs carry the memory of their progenitor stars and therefore are a key to unravel the details of massive star evolution and collapse (e.g., Fryer and Kalogera, 2001; Heger et al., 2003; Belczynski et al., 2010; Mapelli et al., 2013; Fragos and McClintock, 2015; Marchant et al., 2016; Eldridge and Stanway, 2016; de Mink and Mandel, 2016; Spera and Mapelli, 2017; Bavera et al., 2020; Belczynski et al., 2020; Mandel et al., 2021; Fryer et al., 2022; Olejak et al., 2022; Chattopadhyay et al., 2022; van Son et al., 2022; Briel et al., 2022; Stevenson and Clarke, 2022; Broekgaarden et al., 2022, 2022). In particular, the spin magnitude of a stellar-origin BH should retain the imprint of the spin of the core of its progenitor star (e.g., Qin et al., 2018, 2019; Fuller and Ma, 2019; Bavera et al., 2020; Belczynski et al., 2020; Olejak and Belczynski, 2021; Stevenson, 2022).
Several models have been proposed to infer the spin magnitude of the BH from that of the progenitor star. The main open question concerns the efficiency of angular momentum transport within a star (e.g., Maeder and Meynet, 2000; Cantiello et al., 2014; Fuller et al., 2019). If angular momentum is efficiently transferred from the core to the outer layers, mass loss by stellar winds can dissipate most of
it, leading to a low-spinning stellar core and then to a low-spinning BH. If instead the core retains most of its initial angular momentum until the final collapse, the BH will be fast spinning.
In the shellular model (Zahn, 1992; Ekstrom et al., 2012; Limongi and Chieffi, 2018; Costa et al., 2019), angular momentum is mainly transported by meridional currents and shear instabilities, leading to relatively inefficient spin dissipation. In contrast, according to the Tayler-Spruit dynamo mechanism (Spruit, 2002), differential rotation induces the formation of an unstable magnetic field configuration, leading to an efficient transport of angular momentum via magnetic torques. Building upon the Tayler-Spruit mechanism, Fuller and Ma (2019) derived a new model with an even more efficient angular momentum dissipation, predicting that the core of a single massive star might end its life with almost no rotation.
Electromagnetic observations yield controversial results. Asteroseismology favours slowly rotating cores in the late evolutionary stages, but the vast majority of stars with an asteroseismic estimate of the spin are low-mass stars (Mosser et al., 2012; Gehan et al., 2018; Aerts et al., 2019). Continuum-fitting derived spins of BHs in high-mass X-ray binaries are extremely high (e.g., Reynolds, 2021; Miller-Jones et al., 2021; Fishbach and Kalogera, 2022), but such measurements might be affected by substantial observational biases (e.g., Reynolds, 2021). Finally, BH spins inferred from quasi periodic oscillations yield notably smaller values than continuum fitting. For example, the estimate of the dimensionless spin of the BH in GRO J1655-40 is \(\chi=0.7\pm 0.1\) and \(0.290\pm 0.003\) from continuum fitting (Shafee et al., 2006) and quasi-periodic oscillations (Motta et al., 2014), respectively.
In a binary system, the evolution of the spin is further affected by tidal forces and accretion, which tend to spin up a massive star, whereas non-conservative mass transfer and common-envelope ejection enhance mass loss, leading to more efficient spin dissipation (Kushnir et al., 2016; Hotokezaka and Piran, 2017; Zaldarriaga et al., 2018; Qin et al., 2018). For example, the model by Bavera et al. (2020) shows that the second-born BH can be highly spinning if its progenitor was tidally spun up when it was a Wolf-Rayet star orbiting the first-born BH.
Furthermore, the orientation of the BH spin with respect to the orbital angular momentum of the binary system encodes information about binary evolution processes. In a tight binary system, tides and mass transfer tend to align the stellar spins with the orbital angular momentum (Gerosa et al., 2018, but see Stegmann and Antonini, 2021 for a possible spin flip process induced by mass transfer). If the binary system is in the field, the supernova kick is the main mechanism that can misalign the spin of a compact object with respect to the orbital angular momentum, by tilting the orbital plane (e.g., Kalogera, 2000). Finally, the spins of BHs in dynamically formed binary systems are expected to be isotropically distributed, because close encounters in a dense stellar cluster reset any previous signature of alignment (e.g., Rodriguez et al., 2016; Mapelli et al., 2021).
Here, we perform a model-selection hierarchical Bayesian analysis on confident LVK BBHs (\(p_{\rm astro}>0.9\) and \(\mathrm{FAR}<0.25\,\mathrm{yr}^{-1}\)). We consider models of field BBHs for three of the most widely used angular-momentum transport models: (i) the shellular model as implemented in the Geneva stellar evolution code (Ekstrom et al., 2012), (ii) the Tayler-Spruit dynamo model as implemented in the mesa code (Cantiello et al., 2014), and (iii) the model by Fuller and Ma (2019). Hereafter, we will refer to these three models simply as GENEVA (G), MESA (M) and FULLER (F) models, following the description in Belczynski et al. (2020).
For each of these models, we consider an additional variation accounting for the Wolf-Rayet (WR) star tidal spin-up mechanism described by Bavera et al. (2020). Also, we account for spin tilts induced by core-collapse supernova explosions.
This paper is organized as follows. Section 2 presents our population-synthesis models. Section 3 describes the hierarchical Bayesian framework we used and discusses the LVK events used in our study. We lay down the results in Section 4, and summarize our conclusions in Section 5.
## 2 Astrophysical models
### mobse and natal kicks
We simulated our binary systems with the code mobse (Mapelli et al., 2017; Giacobbo et al., 2018). mobse is a custom and upgraded version of bse (Hurley et al., 2000, 2002), in which we introduced metallicity-dependent stellar winds for OB (Vink et al., 2001), WR (Belczynski et al., 2010), and luminous blue variable stars (Giacobbo and Mapelli, 2018). mobse includes a formalism for electron-capture (Giacobbo and Mapelli, 2019), core-collapse (Fryer et al., 2012), and (pulsational) pair-instability supernovae (Mapelli et al., 2020). Here, we adopt the rapid core-collapse supernova prescription, which enforces a gap between the maximum mass of neutron stars and the minimum mass of BHs (2-5 M\({}_{\odot}\), Ozel et al., 2010; Farr et al., 2011).
We model natal kicks of neutron stars and BHs according to three different models, as shown in Fig. 1:
* A unified kick model, in which both neutron stars and BHs receive a kick \(v_{\rm kick}\propto m_{\rm ej}/m_{\rm rem}\), where \(m_{\rm ej}\) is the mass of the ejecta and \(m_{\rm rem}\) the mass of the compact remnant (Giacobbo and Mapelli, 2020, hereafter GM20). This model naturally produces low kicks for electron-capture, stripped and ultra-stripped supernovae (Tauris et al., 2015, 2017). Hereafter, we call this model GM20.
* A model in which compact-object kicks are drawn from a Maxwellian curve with one-dimensional root-mean-square \(\sigma=265\) km s\({}^{-1}\), consistent with observations of Galactic pulsars (Hobbs et al., 2005). This likely represents an upper limit for BH natal kicks. Hereafter, we name this model \(\sigma 265\).
* A model in which compact-object kicks are drawn from a Maxwellian curve with \(\sigma=150\) km s\({}^{-1}\). This value of \(\sigma\) is more similar to what is suggested by indirect measurements of Galactic BH kicks (e.g., Repetto et al., 2017; Atri et al., 2019). Hereafter, we refer to this model as \(\sigma 150\).
For more details about mobse, see Giacobbo and Mapelli (2018). mobse is an open-source code and can be downloaded from this link.
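As an illustration of the three prescriptions, the following Python sketch (our own construction, not code from mobse; the example ejecta and remnant masses are hypothetical) builds the Maxwellian as the norm of three independent Gaussian components and, for GM20, modulates it by \(m_{\rm ej}/m_{\rm rem}\), up to the normalization adopted in GM20:

```python
import numpy as np

rng = np.random.default_rng(42)

def maxwellian_kick(sigma, size):
    """Kick magnitudes from a Maxwellian with 1D root-mean-square sigma [km/s],
    built as the norm of three independent Gaussian components."""
    return np.linalg.norm(rng.normal(0.0, sigma, size=(size, 3)), axis=1)

def gm20_kick(m_ej, m_rem, size, sigma=265.0):
    """GM20-like kick: Maxwellian draw modulated by m_ej / m_rem
    (schematic; up to the normalization adopted in GM20)."""
    return maxwellian_kick(sigma, size) * m_ej / m_rem

# Hypothetical masses: a stripped progenitor with 0.5 Msun of ejecta
# collapsing to a 10 Msun BH, hence a strongly suppressed GM20 kick.
v_gm20 = gm20_kick(0.5, 10.0, 100_000)
v265 = maxwellian_kick(265.0, 100_000)
v150 = maxwellian_kick(150.0, 100_000)
print(f"median kicks [km/s]: GM20 = {np.median(v_gm20):.1f}, "
      f"sigma265 = {np.median(v265):.1f}, sigma150 = {np.median(v150):.1f}")
```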
### Spin magnitude
We have implemented four models for the spin magnitude in mobse, the first three from Belczynski et al. (2020), and the fourth from Bouffanais et al. (2019). Given the large uncertainties on angular momentum transport, we do not claim that these four models are a complete description of the underlying physics: our models must be regarded as toy models, which bracket the current uncertainties on BH spins.
#### 2.2.1 Geneva (G) model
In the Geneva (hereafter, G) model, the dimensionless natal spin magnitude of a BH (\(\chi\)) can be approximated as:
\[\chi=\begin{cases}0.85&M_{\rm CO}\leq m_{1}\\ a\,M_{\rm CO}+b&m_{1}<M_{\rm CO}<m_{2}\\ a_{\rm low}&M_{\rm CO}\geq m_{2}\end{cases} \tag{1}\]
where \(a=-0.088\) for all models, \(M_{\rm CO}\) is the final carbon-oxygen core mass of the progenitor star, while the values of \(b\), \(m_{1}\), \(m_{2}\), and \(a_{\rm low}\) depend on metallicity, as indicated in Table 1. This model springs from a fit by Belczynski et al. (2020) to some evolutionary tracks by the Geneva group (Ekstrom et al., 2012), in which angular momentum transport is relatively inefficient.
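For concreteness, Eq. 1 with the parameters of Table 1 can be transcribed as follows (a minimal sketch; function and variable names are ours):

```python
def chi_geneva(m_co, Z):
    """Eq. 1: dimensionless natal BH spin from the final carbon-oxygen core
    mass m_co [Msun], with the metallicity-dependent rows of Table 1 and
    a = -0.088 throughout."""
    if Z >= 0.010:
        b, m1, m2, a_low = 2.258, 16.0, 24.2, 0.13
    elif Z >= 0.004:
        b, m1, m2, a_low = 3.578, 31.0, 37.8, 0.25
    elif Z >= 0.0012:
        b, m1, m2, a_low = 2.434, 18.0, 27.7, 0.0
    else:
        b, m1, m2, a_low = 3.666, 32.0, 38.8, 0.25
    if m_co <= m1:
        return 0.85
    if m_co < m2:
        return -0.088 * m_co + b
    return a_low

print(chi_geneva(20.0, 0.02))  # 0.498, on the linear branch
```

Note that each row of Table 1 makes the fit continuous at \(M_{\rm CO}=m_{1}\), where the linear branch equals 0.85.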
#### 2.2.2 MESA (M) model
In the M model, we use the fits done by Belczynski et al. (2020) to a set of stellar tracks run with the mesa code. mesa models the transport of angular momentum according to the Tayler-Spruit magnetic dynamo (Spruit, 2002, see also Cantiello et al., 2014). This yields a dimensionless natal BH spin
\[\chi=\begin{cases}a_{1}\,M_{\rm CO}+b_{1}&\text{if $M_{\rm CO}\leq m_{1}$}\\ a_{2}\,M_{\rm CO}+b_{2}&\text{if $M_{\rm CO}>m_{1}$},\end{cases} \tag{2}\]
where \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\), and \(m_{1}\) are given in Table 2.
#### 2.2.3 Fuller (F) model
Fuller & Ma (2019) predict that angular momentum transport can be even more efficient than the one predicted by the Tayler-Spruit dynamo. Belczynski et al. (2020) summarize the results of the model by Fuller & Ma (2019) simply as \(\chi=0.01\) for all single stars and metallicities.
#### 2.2.4 Maxwellian model (Max)
Finally, we also introduce a toy model in which we represent the spin of a BH as a random number drawn from a Maxwellian curve with one-dimensional root-mean-square \(\sigma_{\chi}=0.1\) and truncated at \(\chi_{\rm max}=1.0\). This model was first introduced by Bouffanais et al. (2019), because it is a good match to the distribution arising from LVK data (e.g., Abbott et al., 2019, 2021, 2021). Hereafter, we will indicate this Maxwellian toy model as Max, for brevity.
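A minimal sampler for this toy model could read as follows, assuming that the truncation is enforced by redrawing the values that exceed \(\chi_{\rm max}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def chi_maxwellian(size, sigma=0.1, chi_max=1.0):
    """Spin magnitudes from a Maxwellian with 1D root-mean-square sigma,
    truncated at chi_max by redrawing the rejected values."""
    chi = np.linalg.norm(rng.normal(0.0, sigma, size=(size, 3)), axis=1)
    while np.any(chi > chi_max):
        bad = chi > chi_max
        chi[bad] = np.linalg.norm(
            rng.normal(0.0, sigma, size=(int(bad.sum()), 3)), axis=1)
    return chi

print(chi_maxwellian(5))
```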
### Tidal spin up
The progenitor star of the second-born BH can be substantially spun-up by tidal interactions. In the scenario explored by Bavera et al. (2020), a common-envelope or an efficient stable mass transfer episode can lead to the formation of a BH-WR binary system, in which the WR star is the result of mass stripping. The orbital period of this BH-WR binary system can be sufficiently short to lead to efficient tidal synchronisation and spin-orbit coupling. The WR star is then efficiently spun-up. If the WR star then collapses to a BH directly, the final spin of the BH will retain the imprint of the final WR spin.
Based on the simulations by Bavera et al. (2020), Bavera et al. (2021) derive a fitting formula to describe the spin-up of the WR star and the final spin of the second-born BH:
\[\chi=\begin{cases}\alpha_{\rm WR}\,\log_{10}^{2}\left(P/{\rm day}\right)+\beta_{\rm WR}\,\log_{10}\left(P/{\rm day}\right)&\text{if }P\leq 1\,{\rm day}\\ 0&\text{otherwise},\end{cases} \tag{3}\]
where \(P\) is the orbital period of the BH-WR system, \(\alpha_{\rm WR}=f\left(M_{\rm WR},c_{1}^{\alpha},c_{2}^{\alpha},c_{3}^{\alpha}\right)\) and \(\beta_{\rm WR}=f\left(M_{\rm WR},c_{1}^{\beta},c_{2}^{\beta},c_{3}^{\beta}\right)\). In this definition,
\[f\left(M_{\rm WR},c_{1},c_{2},c_{3}\right)=\frac{-c_{1}}{c_{2}+\exp\left(-c_{3 }M_{\rm WR}/[{\rm M}_{\odot}]\right)}, \tag{4}\]
where \(M_{\rm WR}\) is the mass of the WR star, while the coefficients \(c_{1}\), \(c_{2}\) and \(c_{3}\) have been determined through non-linear least-square minimization and can be found in Bavera et al. (2021).
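A sketch of Eqs. 3-4 is given below; the coefficient triplets in the usage example are placeholders for illustration only, the published values being tabulated in Bavera et al. (2021):

```python
import numpy as np

def f_bavera(m_wr, c1, c2, c3):
    """Eq. 4, as a function of the WR mass [Msun]."""
    return -c1 / (c2 + np.exp(-c3 * m_wr))

def chi_spinup(period_days, m_wr, c_alpha, c_beta):
    """Eq. 3: natal spin of the second-born, tidally spun-up BH.
    c_alpha and c_beta are the (c1, c2, c3) triplets entering
    alpha_WR and beta_WR in Bavera et al. (2021)."""
    if period_days > 1.0:
        return 0.0
    alpha_wr = f_bavera(m_wr, *c_alpha)
    beta_wr = f_bavera(m_wr, *c_beta)
    logp = np.log10(period_days)
    return alpha_wr * logp**2 + beta_wr * logp

# Placeholder coefficients, for illustration only (not the published values):
print(chi_spinup(0.3, 12.0, c_alpha=(0.05, 0.02, 0.1), c_beta=(0.1, 0.03, 0.1)))
```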
In mobse, we can use these fits for the spin of the second-born BH, while still adopting one of the models presented in the previous subsections (G, M, F, and Max) for the first-born BH.
\begin{table}
\begin{tabular}{c c c c c} \hline \(b\) & \(m_{1}\) (\(\rm M_{\odot}\)) & \(m_{2}\) (\(\rm M_{\odot}\)) & \(a_{\rm low}\) & \(Z\) \\ \hline
2.258 & 16.0 & 24.2 & 0.13 & \(\geq 0.010\) \\
3.578 & 31.0 & 37.8 & 0.25 & [0.004, 0.010) \\
2.434 & 18.0 & 27.7 & 0.0 & [0.0012, 0.004) \\
3.666 & 32.0 & 38.8 & 0.25 & \(<0.0012\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters adopted in model G. See Eq. 1 for details.
Figure 1: Probability distribution function (PDF) of the binary kick velocities in the centre of mass (\(V_{\rm CM}\)), for our sample of simulated BBH mergers. The centre-of-mass kick velocity takes into account both the first and the second supernova event in each binary system (Perna et al., 2022). Dashed dark-cyan line: model GM20; solid black line: \(\sigma\)150; dotted red line: \(\sigma\)265. This figure only shows the kick velocity of the stellar progenitors of BBHs that merge within the lifetime of the Universe.
### Spin orientation
We assume that natal kicks are the only source of misalignment between the orbital angular momentum vector of the binary system and the direction of BH spins (Rodriguez et al., 2016; Gerosa et al., 2018). Furthermore, we conservatively assume that accretion onto the first-born BH cannot change the direction of its spin (Maccarone et al., 2007). For simplicity, we also neglect the spin-flip process recently described by Stegmann and Antonini (2021). Under such assumptions, we can derive the angle between the direction of the spins of the two compact objects and that of the orbital angular momentum of the binary system as (Gerosa et al., 2013; Rodriguez et al., 2016)
\[\cos\delta=\cos\left(v_{1}\right)\,\cos\left(v_{2}\right)+\sin\left(v_{1} \right)\,\sin\left(v_{2}\right)\,\cos\left(\phi\right), \tag{5}\]
where \(v_{i}\) is the angle between the new (\(\vec{L}_{\rm new}\)) and the old (\(\vec{L}_{\rm old}\)) orbital angular momentum after a supernova (\(i=1,\,2\) corresponding to the first and second supernova), so that \(\cos\left(v\right)=\vec{L}_{\rm new}\cdot\vec{L}_{\rm old}/(L_{\rm new}\,L_{ \rm old})\), while \(\phi\) is the phase of the projection of the orbital angular momentum into the orbital plane.
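Eq. 5 translates directly into code; the two tilt angles and the phase in the example below are arbitrary illustrative values:

```python
import numpy as np

def cos_tilt(nu1, nu2, phi):
    """Eq. 5: cosine of the spin-orbit misalignment angle delta, from the
    orbital-plane tilts nu1, nu2 of the two supernovae and the phase phi."""
    return np.cos(nu1) * np.cos(nu2) + np.sin(nu1) * np.sin(nu2) * np.cos(phi)

# Example: two 20-degree tilts with phase pi/3 give delta of about 19.7 deg.
delta = np.arccos(cos_tilt(np.radians(20.0), np.radians(20.0), np.pi / 3.0))
print(np.degrees(delta))
```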
### Setup of mobse runs
Hereafter, we consider eight possible models for the spins (see also Table 3):
* the first four models (hereafter, G, M, F, and Max) adopt the Geneva, Mesa, Fuller and Maxwellian models for both the first- and second-born BHs,
* the other four models (hereafter, G_B21, M_B21, F_B21, and Max_B21) adopt the fits by Bavera et al. (2021) for the second-born BH and the Geneva, Mesa, Fuller and Maxwellian models for the first-born BH.
For each of these eight spin models we consider three different kick models: the GM20, \(\sigma 265\), and \(\sigma 150\) models discussed in Section 2.1.
Finally, for each of these 24 models, we considered 12 metallicities (\(Z=0.0002\), \(0.0004\), \(0.0008\), \(0.0012\), \(0.0016\), \(0.002\), \(0.004\), \(0.006\), \(0.008\), \(0.012\), \(0.016\), and \(0.02\)). For each metallicity, we ran \(10^{7}\) (\(2\times 10^{7}\)) binary systems if \(Z\leq 0.002\) (\(Z\geq 0.004\)). Hence, for each model we ran \(1.8\times 10^{8}\) binary systems, for a total of \(4.32\times 10^{9}\) binary systems across the 24 model combinations.
We sampled the initial conditions for each binary system as follows. We have randomly drawn the zero-age main sequence mass of the primary stars from a Kroupa (Kroupa, 2001) initial mass function in the range \(5-150\) M\({}_{\odot}\). The initial orbital parameters (semi-major axis, orbital eccentricity and mass ratio) of binary stars have been randomly drawn as already described in Santoliquido et al. (2021). In particular, we derive the mass ratios \(q\equiv m_{2}/m_{1}\) (with \(m_{2}\leq m_{1}\)) as \(\mathcal{F}(q)\propto q^{-0.1}\) with \(q\in[0.1,\,1]\), the orbital period \(P\) from \(\mathcal{F}(\Pi)\propto\Pi^{-0.55}\) with \(\Pi=\log_{10}\left(P/{\rm d}\right)\in[0.15,\,5.5]\) and the eccentricity \(e\) from \(\mathcal{F}(e)\propto e^{-0.42}\) with \(0\leq e\leq 0.9\) (Sana et al., 2012).
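All four distributions are bounded power laws, so they can be drawn by inverse-transform sampling; the sketch below assumes that, over \(5-150\) M\({}_{\odot}\), the Kroupa initial mass function reduces to a single power law with slope \(-2.3\):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_powerlaw(a, lo, hi, size):
    """Inverse-transform sampling of p(x) proportional to x**a on [lo, hi]
    (valid for a != -1)."""
    k = a + 1.0
    u = rng.uniform(size=size)
    return (lo**k + u * (hi**k - lo**k)) ** (1.0 / k)

n = 100_000
m1 = sample_powerlaw(-2.3, 5.0, 150.0, n)   # Kroupa slope above 0.5 Msun
q = sample_powerlaw(-0.1, 0.1, 1.0, n)      # mass ratio (Sana et al. 2012)
Pi = sample_powerlaw(-0.55, 0.15, 5.5, n)   # Pi = log10(P / day)
ecc = sample_powerlaw(-0.42, 0.0, 0.9, n)   # eccentricity

m2 = q * m1                                  # secondary mass [Msun]
P = 10.0**Pi                                 # orbital period [days]
```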
As to the main binary evolution parameters, here we use \(\alpha=1\) for common envelope, while the parameter \(\lambda\) depends on the stellar structure as described in Claeys et al. (2014). The other binary evolution parameters are set-up as described in Santoliquido et al. (2021).
### Merger rate density
We estimate the evolution of BBH mergers with redshift by using our semi-analytic code Cosmo\(\mathcal{R}\)ate (Santoliquido et al., 2020, 2021). With Cosmo\(\mathcal{R}\)ate, we convolve our mobse catalogues (Section 2.5) with an observation-based metallicity-dependent star formation rate (SFR) density evolution of the Universe, SFRD\((z,Z)\), in order to estimate the merger rate density of BBHs as
\[\mathcal{R}_{\rm BBH}(z)=\int_{z_{\rm max}}^{z}\left[\int_{Z_{\rm min}}^{Z_{ \rm max}}{\rm SFRD}(z^{\prime},Z)\,\mathcal{F}(z^{\prime},z,Z)\,\mathrm{d}Z \right]\,\frac{\mathrm{d}t(z^{\prime})}{\mathrm{d}z^{\prime}}\,\mathrm{d}z^{ \prime}, \tag{6}\]
where
\[\frac{\mathrm{d}t(z^{\prime})}{\mathrm{d}z^{\prime}}=[H_{0}\left(1+z^{\prime} \right)]^{-1}\,[(1+z^{\prime})^{3}\Omega_{M}+\Omega_{\Lambda}]^{-1/2}. \tag{7}\]
In the above equation, \(H_{0}\) is the Hubble constant, \(\Omega_{M}\) and \(\Omega_{\Lambda}\) are the matter and energy density, respectively. We adopt the values in Aghanim et al. (2020). The term \(\mathcal{F}(z^{\prime},z,Z)\) is given by:
\[\mathcal{F}(z^{\prime},z,Z)=\frac{1}{\mathcal{M}_{\rm TOT}(Z)}\frac{\mathrm{ d}\mathcal{N}(z^{\prime},z,Z)}{\mathrm{d}t(z)}, \tag{8}\]
where \(\mathcal{M}_{\rm TOT}(Z)\) is the total simulated initial stellar mass, and \(\mathrm{d}\mathcal{N}(z^{\prime},z,Z)/\mathrm{d}t(z)\) is the rate of BBHs forming from stars with initial metallicity \(Z\) at redshift \(z^{\prime}\) and merging at \(z\), extracted from our mose catalogues. In Cosmo\(\mathcal{R}\)ate, SFRD\((z,Z)\) is given by
\[{\rm SFRD}(z^{\prime},Z)=\psi(z^{\prime})\,p(z^{\prime},Z), \tag{9}\]
where \(\psi(z^{\prime})\) is the cosmic SFR density at formation redshift \(z^{\prime}\), and \(p(z^{\prime},Z)\) is the log-normal distribution of metallicities \(Z\) at fixed formation redshift \(z^{\prime}\), with average \(\mu(z^{\prime})\) and spread \(\sigma_{Z}\). Here, we take both \(\psi(z)\) and \(\mu(z)\) from Madau and Fragos (2017). Finally, we assume a metallicity spread \(\sigma_{Z}=0.3\).
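Schematically, Eq. 6 reduces to a double sum over formation redshift and metallicity; the sketch below discretizes it on uniform grids, with toy stand-ins for SFRD\((z^{\prime},Z)\) and \(\mathcal{F}(z^{\prime},z,Z)\) in place of Eq. 9 and of the catalogue-based rates computed by Cosmo\(\mathcal{R}\)ate:

```python
import numpy as np

H0 = 67.7 / 3.086e19           # Hubble constant [1/s] (~67.7 km/s/Mpc)
Om, OL = 0.31, 0.69            # matter and dark-energy densities

def dt_dz(z):
    """Eq. 7: cosmic time per unit redshift [s]."""
    return 1.0 / (H0 * (1.0 + z) * np.sqrt((1.0 + z) ** 3 * Om + OL))

def rate_bbh(z, z_form_grid, Z_grid, sfrd, calF):
    """Discretized version of Eq. 6 (rectangle rule on uniform grids)."""
    dz = np.diff(z_form_grid).mean()
    dZ = np.diff(Z_grid).mean()
    total = 0.0
    for zf in z_form_grid[z_form_grid > z]:
        inner = sum(sfrd(zf, Z) * calF(zf, z, Z) for Z in Z_grid) * dZ
        total += inner * dt_dz(zf) * dz
    return total

# Toy stand-ins (illustration only, arbitrary units):
sfrd_toy = lambda zf, Z: 0.01 * np.exp(-zf) * np.exp(-np.log10(Z / 0.02) ** 2)
calF_toy = lambda zf, z, Z: 1e-5

print(rate_bbh(0.2, np.linspace(0.0, 8.0, 81), np.linspace(2e-4, 2e-2, 12),
               sfrd_toy, calF_toy))
```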
### Hyper-parametric model description
For each of our models (Table 3), described by their hyper-parameters \(\lambda\), we predict the distributions of BBH mergers
\[\frac{\mathrm{d}N}{\mathrm{d}\theta}(\lambda)=N_{\lambda}\,p(\theta|\lambda), \tag{10}\]
where \(\theta\) are the merger parameters, and \(N_{\lambda}\) is the total number of mergers predicted by the model. Assuming an instrumental horizon redshift \(z_{\rm max}=1.5\), \(N_{\lambda}\) can be calculated as
\[N_{\lambda}=\int_{0}^{z_{\rm max}}\mathcal{R}(z)\,\frac{\mathrm{d}V_{\rm c}}{ \mathrm{d}z}\,\frac{T_{\rm obs}}{(1+z)}\,\mathrm{d}z, \tag{11}\]
where \(\frac{\mathrm{d}V_{\rm c}}{\mathrm{d}z}\) is the comoving volume and \(T_{\rm obs}\) the observation duration.
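As an illustration, Eq. 11 can be evaluated numerically as follows (a sketch assuming a Planck 2018 cosmology via astropy and a toy, constant merger rate density):

```python
import numpy as np
from astropy.cosmology import Planck18
from scipy.integrate import quad

def n_expected(rate_of_z, T_obs_yr, z_max=1.5):
    """Eq. 11: expected number of mergers within redshift z_max.
    rate_of_z(z) must return the merger rate density in Gpc^-3 yr^-1."""
    def integrand(z):
        # Factor 4*pi converts the per-steradian differential comoving volume.
        dVc_dz = 4.0 * np.pi * Planck18.differential_comoving_volume(z).to_value("Gpc3 / sr")
        return rate_of_z(z) * dVc_dz * T_obs_yr / (1.0 + z)
    return quad(integrand, 0.0, z_max)[0]

# Toy usage: a constant rate density of 20 Gpc^-3 yr^-1 over one year.
print(n_expected(lambda z: 20.0, T_obs_yr=1.0))
```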
To model the population of merging BBHs, we have chosen five observable parameters \(\theta=\{\mathcal{M}_{\rm c},\,q,\,z,\,x_{\rm eff},\,x_{\rm p}\}\), where \(\mathcal{M}_{\rm c}=(m_{1}\,m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\) is the chirp mass in the source frame with \(m_{1}\) (\(m_{2}\)) the masses of the primary (secondary) BH of the binary,
\begin{table}
\begin{tabular}{l c c c} \hline Model Name & Spin Magnitude\({}^{a}\) & B21\({}^{b}\) & Kick Model\({}^{c}\) \\ \hline G & Geneva (G) & no & GM20, \(\sigma 265\), \(\sigma 150\) \\ G\_B21 & Geneva (G) & yes & GM20, \(\sigma 265\), \(\sigma 150\) \\ M & MESA (M) & no & GM20, \(\sigma 265\), \(\sigma 150\) \\ M\_B21 & MESA (M) & yes & GM20, \(\sigma 265\), \(\sigma 150\) \\ F & Fuller (F) & no & GM20, \(\sigma 265\), \(\sigma 150\) \\ F\_B21 & Fuller (F) & yes & GM20, \(\sigma 265\), \(\sigma 150\) \\ Max & Maxwellian (Max) & no & GM20, \(\sigma 265\), \(\sigma 150\) \\ Max\_B21 & Maxwellian (Max) & yes & GM20, \(\sigma 265\), \(\sigma 150\) \\ \hline \end{tabular}
\end{table}
Table 3: Description of the runs performed for this work. \({}^{a}\)Model for the spin magnitude (Section 2.2). \({}^{b}\)Correction of the spin magnitude accounting for tidal spin up, as described in B21 (Section 2.3). \({}^{c}\)Model for the natal kick (Section 2.1).
\(q=m_{2}/m_{1}\) is the mass ratio, and \(z\) is the redshift of the merger. In addition, we used two spin parameters: the effective spin (\(\chi_{\rm eff}\)) and the precessing spin (\(\chi_{\rm p}\)). The effective spin \(\chi_{\rm eff}\) is the mass-weighted projection of the two individual BH spins on the binary orbital angular momentum \(\vec{L}\)
\[\chi_{\rm eff}=\frac{(\vec{\chi}_{1}+q\,\vec{\chi}_{2})}{1+q}\cdot\frac{\vec{L} }{L}, \tag{12}\]
where \(\vec{\chi}_{1,2}=\vec{S}_{1,2}\,c/(G\,m_{1,2}^{2})\) is the dimensionless spin parameter of the two BHs, with \(\vec{S}_{1,2}\) their spin angular momenta. The precessing spin \(\chi_{\rm p}\) is defined as
\[\chi_{\rm p}=\max\,\left(\chi_{1,\perp},\,A\,\chi_{2,\perp}\right), \tag{13}\]
where \(\chi_{1,\perp}\) (\(\chi_{2,\perp}\)) is the spin component of the primary (secondary) BH perpendicular to the orbital angular momentum vector \(\vec{L}\), and \(A=\left(4\,q+3\right)\,q/(4+3\,q)\).
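Both spin parameters follow directly from the component spin vectors; a minimal transcription of Eqs. 12-13, with the orbital angular momentum taken along the \(z\)-axis, reads:

```python
import numpy as np

def chi_eff_chi_p(chi1, chi2, q, L_hat=np.array([0.0, 0.0, 1.0])):
    """Eqs. 12-13 from the dimensionless spin vectors of the primary and
    secondary BH, with q = m2/m1 <= 1 and L_hat the unit orbital angular
    momentum (along z by default)."""
    chi1 = np.asarray(chi1, dtype=float)
    chi2 = np.asarray(chi2, dtype=float)
    chi_eff = np.dot(chi1 + q * chi2, L_hat) / (1.0 + q)
    chi1_perp = np.linalg.norm(chi1 - np.dot(chi1, L_hat) * L_hat)
    chi2_perp = np.linalg.norm(chi2 - np.dot(chi2, L_hat) * L_hat)
    A = (4.0 * q + 3.0) * q / (4.0 + 3.0 * q)
    return chi_eff, max(chi1_perp, A * chi2_perp)

print(chi_eff_chi_p([0.1, 0.0, 0.3], [0.0, 0.2, 0.1], q=0.8))
```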
To compute the distributions \(p(\theta|\lambda)\), we constructed a catalogue of \(10^{6}\) sources for all possible combinations of hyper-parameters \(\lambda\), using the merger rate density and the metallicity given by Cosmo\(\mathcal{R}\)ate. From these catalogues we derived continuous estimates of \(p(\theta|\lambda)\) by making use of a Gaussian kernel density estimation assuming a bandwidth of 0.15.
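As an illustration of this step (interpreting the quoted bandwidth as the scaling factor of scipy's Gaussian kernel density estimation, which is our assumption, and using a random stand-in for a model catalogue):

```python
import numpy as np
from scipy.stats import gaussian_kde

# theta: (5, N) array of catalogue samples (Mc, q, z, chi_eff, chi_p);
# here a Gaussian stand-in replaces one of the 10^6-source catalogues.
rng = np.random.default_rng(3)
theta = rng.normal(size=(5, 10_000))

kde = gaussian_kde(theta, bw_method=0.15)  # fixed bandwidth factor
print(kde(np.zeros((5, 1))))               # p(theta | lambda) at one point
```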
## 3 Hierarchical Bayesian Inference
Given a set \(\mathcal{H}=\{h^{k}\}_{k=1}^{N_{\rm obs}}\) of \(N_{\rm obs}\) GW observations, the posterior distribution of a set of hyper-parameters \(\lambda\) associated to an astrophysical model can be described as an in-homogeneous Poisson distribution (e.g., Loredo, 2004; Mandel et al., 2019; Thrane and Talbot, 2019; Bouffanais et al., 2019, 2021, 2021):
\[p(\lambda,N_{\lambda}|\mathcal{H})=e^{-\mu_{\lambda}}\,\pi(\lambda,N_{\lambda })\prod_{k=1}^{N_{\rm obs}}N_{\lambda}\int\mathcal{L}^{k}(h^{k}|\theta)\,p( \theta|\lambda)\,\mathrm{d}\theta, \tag{14}\]
where \(N_{\rm obs}\) is the number of events observed by the LVK, with an ensemble of parameters \(\theta\), \(N_{\lambda}\) is the number of mergers predicted by the model (as calculated in eq. 11), \(\mu_{\lambda}\) the number of predicted observations given a model and a detector, \(\pi(\lambda,\,N_{\lambda})\) are the prior distributions on \(\lambda\) and \(N_{\lambda}\), and \(\mathcal{L}^{k}(h^{k}|\theta)\) is the likelihood of the \(k^{\rm th}\) observation.
The predicted number of events \(\mu_{\lambda}\) can be written in terms of detection efficiency \(\beta(\lambda)\) for a given model:
\[\mu_{\lambda}=N_{\lambda}\,\beta(\lambda),\quad\text{with}\quad\beta(\lambda )=\int_{\theta}p(\theta|\lambda)\,p_{\rm det}(\theta)\,\mathrm{d}\theta, \tag{15}\]
where \(p_{\rm det}(\theta)\) is the detection probability for a set of parameters \(\theta\). This probability can be inferred by computing the optimal signal to noise ratio (SNR) of the sources and comparing it to a detection threshold. In our case we chose as reference a threshold \(\rho_{\rm thr}=8\) in the LIGO Livingston detector, for which we approximated the sensitivity using the measurements for the three runs separately (Abadie et al., 2010; Abbott et al., 2016; Wysocki et al., 2018). The values for the event's log-likelihood were derived from the posterior and prior samples released by the LVK. Hence, the integral in eq. 14 is approximated with a Monte Carlo approach as
\[\mathcal{I}^{k}=\int\mathcal{L}^{k}(h^{k}|\theta)\,p(\theta|\lambda)\,\mathrm{d}\theta\approx\frac{1}{N_{s}^{k}}\sum_{i=1}^{N_{s}^{k}}\frac{p(\theta_{i}^{k}|\lambda)}{\pi^{k}(\theta_{i}^{k})}, \tag{16}\]
where \(\theta_{i}^{k}\) is the \(i^{\rm th}\) posterior sample of the \(k^{\rm th}\) detection and \(N_{s}^{k}\) is the total number of posterior samples for the \(k^{\rm th}\) detection. To compute the prior term in the denominator, we also used Gaussian kernel density estimation.
Finally, we can also choose to neglect the information coming from the number of sources predicted by the model when estimating the posterior distribution. By doing so, we can have some insights on the impact of the rate on the analysis. In practice, this can be done by marginalising eq. 14 over \(N_{\lambda}\) using a prior \(\pi(N_{\lambda})\sim 1/N_{\lambda}\)(Fishbach et al., 2018), which yields the following expression for the model log-likelihood
\[\mathcal{L}=p(\lambda|\{h\}^{k})\sim\pi(\lambda)\prod_{k=1}^{N_{\rm obs}} \left[\frac{\mathcal{I}^{k}}{\beta(\lambda)}\right]. \tag{17}\]
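The rate-marginalized likelihood of Eq. 17, together with the Monte Carlo integral of Eq. 16, can be sketched as follows; toy Gaussian stand-ins replace the model density and the LVK posterior and prior samples:

```python
import numpy as np

def log_likelihood(event_samples, event_priors, model_pdf, beta):
    """Rate-marginalized log-likelihood of Eq. 17. event_samples is a list of
    (5, N_s^k) posterior-sample arrays, event_priors a matching list of prior
    densities evaluated at those samples, model_pdf the KDE estimate of
    p(theta | lambda), and beta the detection efficiency of Eq. 15."""
    logL = 0.0
    for samples, prior in zip(event_samples, event_priors):
        I_k = np.mean(model_pdf(samples) / prior)  # Monte Carlo sum, Eq. 16
        logL += np.log(I_k / beta)
    return logL

# Toy usage: a standard 5D Gaussian model density and flat event priors.
rng = np.random.default_rng(4)
model_pdf = lambda s: np.exp(-0.5 * np.sum(s**2, axis=0)) / (2.0 * np.pi) ** 2.5
events = [rng.normal(size=(5, 500)) for _ in range(3)]
priors = [np.full(500, 0.01) for _ in events]
print(log_likelihood(events, priors, model_pdf, beta=0.5))
```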
We adopted the formalism described in eqs. 14-17 to perform a hierarchical Bayesian inference to compare the astrophysical models presented in Sec. 2 with the third gravitational-wave transient catalogue (GWTC-3), the most updated catalogue of gravitational-wave events from the LVK (Abbott et al., 2021, 2021). GWTC-3 contains 90 event candidates with probability of astrophysical origin \(p_{\rm astro}>0.5\). From GWTC-3, we extract 59 confident detections of BBHs with a false alarm rate \(\mathrm{FAR}<0.25\) yr\({}^{-1}\). In this sub-sample, we do not include binary neutron stars and neutron star - BH systems, and we also exclude the other BBH candidates with a higher FAR. Our chosen FAR threshold ensures a sufficiently pure sample for our analysis (Abbott et al., 2021). A list of the events used in this study is available in Appendix A. For the observable parameters \(\theta\), we use the choice described in Section 2.7, namely \(\theta=\{\mathcal{M}_{\rm c},\,q,\,z,\,\chi_{\rm eff},\,\chi_{\rm p}\}\).
## 4 Results
### Chirp mass
The chirp mass distribution (Fig. 2) does not depend on the spin model, by construction. Therefore, we only show different natal kicks. Models \(\sigma 150\) and \(\sigma 265\) show a similar distribution of chirp masses with two peaks of similar importance, one at \(\mathcal{M}_{\rm c}\approx 8\) M\({}_{\odot}\) and the
Figure 2: Predicted detectable distribution of chirp mass, for each kick model: GM20 (solid dark-cyan line), \(\sigma 150\) (dotted black line) and \(\sigma 265\) (dashed red line). By detectable distribution we mean the distribution of simulated BBHs with a sufficiently high signal-to-noise ratio (Section 3). The shaded gray area is the distribution we obtain by stacking the posterior samples of the 59 confident detections from GWTC-3 (Appendix A).
other (broader) peak at \(\mathcal{M}_{\rm c}\approx 15\) M\({}_{\odot}\). In contrast, model GM20 has a much stronger preference for low-mass BHs, with a dominant peak at \(\mathcal{M}_{\rm c}\approx 8\) M\({}_{\odot}\). The reason for this difference is that all BHs in tight binary systems receive slow natal kicks in model GM20 (Fig. 1). This happens because stars in tight binary systems lose their envelope during mass transfer episodes; hence, the mass of supernova ejecta (\(m_{\rm ej}\)) is small, triggering low kicks in model GM20.
Figure 2 also compares the detectable distribution of our models with the stacked posterior samples from the confident BBH detections in GWTC-3. This figure highlights two main differences between the population synthesis models and the posterior samples: the peak at \(\mathcal{M}_{\rm c}\approx 15\) M\({}_{\odot}\) is stronger in the models than it is in the data, while the data present a more significant excess at \(\mathcal{M}_{\rm c}\approx 25-30\) M\({}_{\odot}\) than the models. Finally, the peak at \(\mathcal{M}_{\rm c}\approx 9\) M\({}_{\odot}\) in the data approximately matches the peak at \(\mathcal{M}_{\rm c}\approx 8\) M\({}_{\odot}\) in the models. The main features of our population synthesis models (in particular, the peaks at \(\mathcal{M}_{\rm c}\approx 8-10\) M\({}_{\odot}\) and \(\mathcal{M}_{\rm c}\approx 15-20\) M\({}_{\odot}\)) are also common to other population-synthesis models (e.g., Belczynski et al., 2020; van Son et al., 2022) and mostly spring from the core-collapse SN prescriptions by Fryer et al. (2012). Alternative core-collapse SN models (e.g., Mapelli et al., 2020; Mandel et al., 2021; Patton et al.,
Figure 3: Predicted detectable distribution of \(\chi_{\rm P}\) (left) and \(\chi_{\rm eff}\) (right) for all of our models. Different colours refer to the spin model: G, M, F and Max. Solid (dashed) lines include (do not include) the tidal spin-up model by B21. From top to bottom: GM20, \(\sigma 150\), and \(\sigma 265\). The shaded gray areas are the distributions we obtain by stacking the posterior samples of the 59 confident detections from GWTC-3 (Appendix A).
2022; Olejak et al. 2022) produce different features and deserve further investigation (Iorio et al., in prep.).
### Spin parameters
Figure 3 shows the detectable distribution of spin parameters \(\chi_{\rm p}\) and \(\chi_{\rm eff}\) for all of our models. By construction, large spins are much more common in models G and G_B21, while models F and F_B21 have a strong predominance of vanishingly small spins. Models M, M_B21, Max and Max_B21 are intermediate between the other two extreme models.
Including or not the correction by B21 has negligible impact on the distribution of \(\chi_{\rm p}\) and \(\chi_{\rm eff}\) for models G, because of the predominance of large spin magnitudes. In contrast, introducing the spin-up correction by B21 has a key impact on models F, because it is the only way to account for mild to large spins in these models. The correction by B21 is important also for models M and Max, being responsible for the large-spin wings.
Finally, our model with slow kicks (GM20) results in a distribution of \(\chi_{\rm p}\) that is more peaked at zero (for models G, M and Max) with respect to the other two kick models (\(\sigma\)150 and \(\sigma\)265). In fact, the supernova kicks in model GM20 are not large enough to appreciably misalign BH spins (see Fig. 1).
A similar effect is visible in the distribution of \(\chi_{\rm eff}\): model \(\sigma\)265 produces a distribution of \(\chi_{\rm eff}\) that is less asymmetric about zero with respect to models \(\sigma\)150 and especially GM20.
### Model Selection
Figure 4 and Table 4 report the values of the log-likelihood log \(\mathcal{L}\) defined in Eq. 17. We can quantify the difference between two models A and B by computing the average absolute difference in percentage
\[\Delta\log\mathcal{L}(A,\,B)=\left\langle\left|\frac{2\left(\log\mathcal{L}_{i}^{A}-\log\mathcal{L}_{i}^{B}\right)}{\log\mathcal{L}_{i}^{A}+\log\mathcal{L}_{i}^{B}}\right|\right\rangle_{i}\]
where the average runs over the non-A,B variation \(var\) (\(var\) spans the kick models if A and B are spin models, and the spin models if A and B are kick models). For example, to compare the two models G and G_B21, we set A = G_B21 and B = G, with \(var=\{\)GM20, \(\sigma\)150, \(\sigma\)265\(\}\).
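In code, and under our reconstruction of the formula above, this metric could be computed as (the log-likelihood values in the example are toy numbers):

```python
import numpy as np

def delta_logL(logL_A, logL_B):
    """Average absolute percent difference between paired model
    log-likelihoods, following the reconstruction above."""
    logL_A = np.asarray(logL_A, dtype=float)
    logL_B = np.asarray(logL_B, dtype=float)
    return 100.0 * np.mean(np.abs(2.0 * (logL_A - logL_B)
                                  / (logL_A + logL_B)))

# Toy log-likelihoods of two spin models under the three kick variations:
print(delta_logL([210.0, 205.0, -30.0], [120.0, 118.0, -75.0]))
```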
The tidal spin-up mechanism (B21) affects the spin of a small part of the population of each model (Fig. 3). However, it improves the likelihood of the F and M models significantly (e.g., \(\Delta\rm{log}\mathcal{L}(M\_B21,\ M)=89\%\), Table 4). This improvement of the log-likelihood can be explained by the presence of higher values of \(\chi_{\rm p}\) and \(\chi_{\rm eff}\) in the distribution of populations M_B21 and F_B21 compared to M and F (Fig. 3).
The F model yields \(\log\mathcal{L}({\rm F})=-\infty\) if we do not include the tidal spin-up correction, regardless of the kick model. This indicates that the LVK data do not support vanishingly small BH spins for the entire BBH population. However, it is sufficient to inject a tiny subpopulation of spinning BHs, by switching on the B21 correction, and the F model becomes one of the best among the considered models. In fact, the F_B21 model includes only 0.4% of BHs with \(\chi>0.01\) and achieves \(\log\mathcal{L}>200\) (for kick models \(\sigma\)150 and \(\sigma\)265).
The G and G_B21 spin models exhibit lower log-likelihood values than the others for all kick models: log\(\mathcal{L}<150\) for \(\sigma\)150 and \(\sigma\)265, and log\(\mathcal{L}<0\) for GM20. This happens because the distribution of \(\chi_{\rm eff}\) has non-negligible support for extreme values \(\chi_{\rm eff}<-0.5\) and \(\chi_{\rm eff}>0.5\) (Fig. 3).
The kick models \(\sigma\)150 and \(\sigma\)265 show similar results (\(\Delta\log\mathcal{L}(\sigma 150,\sigma 265)<3\%\)) for every spin assumption. Also, for all spin assumptions, the GM20 kick model scores a significantly lower likelihood than models \(\sigma\)150 and \(\sigma\)265, with \(\Delta\log\mathcal{L}(\sigma 150,\,\mathrm{GM20})\sim\Delta\log\mathcal{L}(\sigma 265,\,\mathrm{GM20})\sim 150\%\). This result can be explained by the high peak of model GM20 at low chirp masses (\(\mathcal{M}_{\rm c}\sim 8\) M\({}_{\odot}\), see Sec. 4.1 and Fig. 2) and by the low values of \(\chi_{\rm p}\) compared to the other kick models (Fig. 3).
Models Max and Max_B21 are possibly the best match to the data, but this is not surprising, because they were built as a toy model to visually match the data. Among the astrophysically-motivated models (i.e., after excluding the Max model), M, M_B21 and F_B21 (with kick models \(\sigma\)150 and \(\sigma\)265) are the most favoured by the data. This might be interpreted as a support for the Tayler-Spruit instability mechanism (adopted in models M) and for the tidal spin-up model by B21.
### Importance of \(\chi_{\rm p}\)
The \(\chi_{\rm p}\) parameter encodes information on the spin component in the orbital plane. Its impact on gravitational-wave signals is much lower than that of \(\chi_{\rm eff}\), and therefore its measurement is less precise. To understand the impact of \(\chi_{\rm p}\) on our results, we re-ran the analysis without this parameter. The results are shown in Table 5 and in Fig. 4 with empty markers. Fig. 4 shows that, if we do not include \(\chi_{\rm p}\), the models M and M_B21 have almost the same log-likelihood, and even the F model yields a positive log-likelihood. Furthermore, the analysis without \(\chi_{\rm p}\) results in significantly larger values of \(\mathcal{L}\) for the kick model GM20. Our results demonstrate that the measured \(\chi_{\rm p}\) of GWTC-3 BBHs carries substantial information, despite the large uncertainties.
main uncertainties, we have implemented them into our population-synthesis code mobse, and compared them against GWTC-3 data within a hierarchical Bayesian framework.
The data do not support models in which the entire BH population has vanishingly small spins (model F). This result is mainly driven by the \(\chi_{\rm p}\) parameter. This is in agreement with, e.g., the complementary analysis presented in Callister et al. (2022). They employed a variety of complementary methods to measure the distribution of spin magnitudes and orientations of BBH mergers, and concluded that the existence of a sub-population of BHs with vanishing spins is not required by current data. Callister et al. (2022) find that the fraction of non-spinning BHs can comprise up to \(\sim 60-70\%\) of the total population. In our F_B21 models, we have \(\sim 99.6\%\) of BHs with \(\chi<0.01\).
Recently, Roulet et al. (2021) and Galaudage et al. (2021) claimed the existence of a sub-population of zero-spin BHs. From our analysis, we cannot exclude the existence of such a sub-population, as the F model with the B21 correction (F_B21) still represents a good match to the data. Similarly to Belczynski et al. (2020) and Gerosa et al. (2018), we find that models with large spins (G, G_B21) are less favoured by the data, but they are still acceptable if we allow for large kicks.
Overall, we find a preference for large natal kicks. This result goes in the same direction as the work by Callister et al. (2021). Actually, this preference for large natal kicks is degenerate with the adopted formation channel. Had we included the dynamical formation channel in dense star clusters, we would have added a sub-population of isotropically oriented spins (see, e.g., Figure 8 of Mapelli et al., 2022). In a forthcoming study, we will extend our analysis to multiple formation channels. While it is unlikely that BBH mergers only originate from one single channel, adding more formation channels to a hierarchical Bayesian analysis dramatically increases the number of parameters, making it more difficult to reject some portions of the parameter space.
## 6 Summary
The origin of BH spins is still controversial, and angular momentum transport inside massive stars is one of the main sources of uncertainty. Here, we apply hierarchical Bayesian inference to derive constraints on spin models from the 59 most confident BBH merger events in GWTC-3. We consider five parameters: chirp mass, mass ratio, redshift, effective spin, and precessing spin.
For model selection, we use a set of binary population synthesis simulations spanning different assumptions for black hole spins and natal kicks. In particular, our spin models account for relatively inefficient (G), efficient (Max and M), and very efficient angular-momentum transport (F). A higher efficiency of angular momentum transport is associated with lower BH spins. In particular, model F predicts vanishingly small spins for the entire BH population. For each of our models, we also include the possibility that some BHs are tidally spun-up (B21). We considered three different natal kick models: according to models \(\sigma 265\) and \(\sigma 150\), we randomly draw the kicks from a Maxwellian curve with \(\sigma=265\) and \(150\) km s\({}^{-1}\), respectively; in the third model (GM20), we also derive the kicks from a Maxwellian curve with \(\sigma=265\) km s\({}^{-1}\), but the kick magnitude is then modulated by the ratio between the mass of the ejecta and the mass of the BH.
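A minimal sketch of how such kicks can be drawn: a Maxwellian magnitude is the norm of a 3D Gaussian with per-component dispersion \(\sigma\), and the GM20-style modulation multiplies by \(m_{\rm ej}/m_{\rm BH}\). The function name, random seed, and placeholder masses below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def natal_kick(sigma, m_ej=None, m_bh=None, n=1):
    """Draw natal kick magnitudes (km/s) from a Maxwellian with 1D rms sigma;
    if m_ej and m_bh are given (GM20-like), modulate by m_ej/m_bh."""
    v = np.linalg.norm(rng.normal(0.0, sigma, size=(n, 3)), axis=1)  # Maxwellian
    if m_ej is not None and m_bh is not None:
        v *= m_ej / m_bh
    return v

print(natal_kick(265.0, n=3))                       # sigma265 model
print(natal_kick(265.0, m_ej=2.0, m_bh=10.0, n=3))  # GM20-like modulation
```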
We summarize our main results as follows.
* The data from GWTC-3 do not support models in which the entire BH population has vanishingly small spins (model F).
* In contrast, models in which most spins are vanishingly small, but that also include a sub-population of tidally spun-up BHs (model F_B21) are a good match to the data.
* The models in which angular momentum transport is relatively inefficient (G and G_B21) yield log-likelihood values that are much lower than models with efficient angular momentum transport (M, M_B21, Max, and Max_B21).
* Models with large BH kicks (\(\sigma 150\) and \(\sigma 265\)) are favoured by our analysis with respect to low-kick models (GM20).
* Our results show that the precessing spin parameter \(\chi_{\rm p}\) plays a crucial role in constraining the spin distribution of BBH mergers.
## Acknowledgements
MM, CP, FS and YB acknowledge financial support from the European Research Council for the ERC Consolidator grant DE-MOBLACK, under contract no. 770017. This research made use of
Figure 4: Values of the log-likelihood \(\mathcal{L}\) defined in Eq. 17 for the four different models Geneva (G), MESA (M), Fuller (F), and Maxwellian (Max), with/without the tidal spin-up mechanism (B21). Blue crosses: GM20; dark pluses: \(\sigma 150\); red circles: \(\sigma 265\).
NumPy (Harris et al., 2020) and SciPy (Virtanen et al., 2020). For the plots we used Matplotlib (Hunter, 2007).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author. The latest public version of mobse can be downloaded from this repository. CosmoRate can be downloaded from this link.
|
2308.02310 | MASC: A Tool for Mutation-Based Evaluation of Static Crypto-API Misuse
Detectors | While software engineers are optimistically adopting crypto-API misuse
detectors (or crypto-detectors) in their software development cycles, this
momentum must be accompanied by a rigorous understanding of crypto-detectors'
effectiveness at finding crypto-API misuses in practice. This demo paper
presents the technical details and usage scenarios of our tool, namely Mutation
Analysis for evaluating Static Crypto-API misuse detectors (MASC). We developed
$12$ generalizable, usage based mutation operators and three mutation scopes,
namely Main Scope, Similarity Scope, and Exhaustive Scope, which can be used to
expressively instantiate compilable variants of the crypto-API misuse cases.
Using MASC, we evaluated nine major crypto-detectors, and discovered $19$
unique, undocumented flaws. We designed MASC to be configurable and
user-friendly; a user can configure the parameters to change the nature of
generated mutations. Furthermore, MASC comes with both Command Line Interface
and Web-based front-end, making it practical for users of different levels of
expertise. | Amit Seal Ami, Syed Yusuf Ahmed, Radowan Mahmud Redoy, Nathan Cooper, Kaushal Kafle, Kevin Moran, Denys Poshyvanyk, Adwait Nadkarni | 2023-08-04T13:22:22Z | http://arxiv.org/abs/2308.02310v2 | # MASC: A Tool for Mutation-Based Evaluation of Static Crypto-API Misuse Detectors
###### Abstract.
While software engineers are optimistically adopting crypto-API misuse detectors (or crypto-detectors) in their software development cycles, this momentum must be accompanied by a rigorous understanding of crypto-detectors' _effectiveness at finding crypto-API misuses in practice_. This demo paper presents the technical details and usage scenarios of our tool, namely Mutation Analysis for evaluating Static Crypto-API misuse detectors (MASC). We developed 12 generalizable, usage based mutation operators and three mutation scopes, namely _Main Scope_, _Similarity Scope_, and _Exhaustive Scope_, which can be used to expressively instantiate compilable variants of the crypto-API misuse cases. Using MASC, we evaluated nine major crypto-detectors, and discovered 19 unique, undocumented flaws. We designed MASC to be _configurable_ and _user-friendly_; a user can configure the parameters to change the nature of generated mutations. Furthermore, MASC comes with both Command Line Interface and Web-based front-end, making it practical for users of different levels of expertise.
Crypto-API, static analysis, crypto-API misuse detector, mutation testing, mutation-based evaluation, security, software-engineering

Footnote †: These authors contributed equally to this paper
sites), namely _Similarity Scope_ (extended from MDroid+ (Mack et al., 2017; Chen et al., 2018)), _Exhaustive Scope_ (extended from \(\mu\)SE (Beng et al., 2019; Chen et al., 2019; Chen et al., 2019)), and its independently developed _Main Scope_, thus creating mutated applications that contain crypto-API misuse. We demonstrated the practicality of the prototype implementation of MASC by evaluating nine crypto-detectors from industry and academia, and discovered 19 previously undocumented, unknown flaws that compromise the within-scope soundness of crypto-detectors. The full details of MASC's methodology, design considerations, evaluation of crypto-detectors leading to the discovery of novel flaws, practical impact of the found flaws in open source applications (and therefore the applicability of the mutation operators), and discussion of the findings are available in the original research paper (Beng et al., 2019).
In this paper, we present a mature implementation of MASC framework with focus on extensibility, ease of use, and maintainability to the stakeholders of crypto-detectors, such as security researchers, developers, and users. To elaborate, because of the newly developed plug-in architecture, MASC users can now create their own mutation operators that can be easily plugged into MASC, without diving deep into the existing code base (\(11K\)+ source lines of code). Moreover, whereas the original prototype implementation of MASC involved semi-automated evaluation of crypto-detectors, we made MASC's workflow automated by leveraging the _de-facto_ SARIF (Kumar et al., 2019) formatted output of crypto-detectors. Furthermore, we have created a web-based front-end of MASC's implementation for the users to reduce the barrier to entry. Finally, we restructured and refactored the open-source codebase of MASC to increase maintainability and extensibility of MASC, which will make future contributions and enhancements easier for both developers and open-source enthusiasts of MASC. With these additions and enhancements, we hope that the current, open-source implementation of MASC will be used in finding flaws in, and thus helping to improve, existing crypto-detectors.
**Contribution:** We present MASC, a user-friendly framework that leverages mutation-testing techniques for evaluating crypto-detectors, with details of underlying techniques, design considerations, and improvements. The new, key features of MASC are as follows:

_Automated Evaluation of Crypto-detectors:_ MASC can be used to evaluate crypto-detectors in an end-to-end automated workflow within the Main Scope.
_Customizable Evaluation of Crypto-detectors:_ A user can customize the evaluation of crypto-detectors by specifying the mutation operators for creating crypto-API misuse instances.
_Plug-in Architecture for Custom Operators:_ MASC helps security researchers, developers and users, jump right into evaluating crypto-detectors by creating their own, custom mutation operators that can be directly plugged-in the _Main_ Scope, without requiring them to learn and understand about the internal details of MASC.
_User-friendly Front-end for End-users:_ In addition to enhancing the command line interface of the original prototype implementation, we create and introduce an open-sourced, web-based front-end for end-users that can be run locally. The front-end contains an additional _play-test-learn_ interface, MASC Lab, where stakeholders can interact with mutation operators and can learn about mutating crypto-API misuse.
**Tool and Data Availability:** The prototype implementation of the MASC framework, scripts, and results of evaluating crypto-detectors, as described in the original paper (Beng et al., 2019), are available in the MASC Artifact (Beng et al., 2019). Furthermore, the codebase of the actively maintained, mature implementation of MASC is available separately, with extensive documentation and examples (Beng et al., 2019).
## 2. Overview of MASC
Overall, MASC works by (1) mutating a base crypto-API misuse case to create mutated crypto-API instantiations, or mutated misuse cases, (2) seeding or injecting the mutated misuse case in source code, (3) analyzing both the unmutated and the mutated source code using a target crypto-detector, and (4) comparing the outputs of the crypto-detector applied to both the base misuse case and the mutated misuse case to identify undetected (not killed) mutated misuse cases. The overview of this process is shown in Figure 1.
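As a rough sketch of this loop (steps 1-4), the following Python pseudocode outlines the control flow; all identifiers here are illustrative placeholders and do not correspond to MASC's actual API.

```python
# Illustrative control flow of MASC's evaluation loop (steps 1-4);
# all names are placeholders, not MASC's real API.
def evaluate_detector(detector, base_misuse, operators, template_app):
    undetected = []
    # Sanity check: the detector should flag the unmutated base misuse case.
    assert detector.analyze(template_app.seed(base_misuse))
    for op in operators:
        mutant = op.mutate(base_misuse)              # (1) mutate
        mutated_app = template_app.seed(mutant)      # (2) seed/inject
        findings = detector.analyze(mutated_app)     # (3) analyze
        if mutant not in findings:                   # (4) compare outputs
            undetected.append((op, mutant))          # mutant "survived"
    return undetected
```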
Conceptually, MASC contextualizes the traditional mutation testing techniques of the SE domain for the evaluation of crypto-detectors, while introducing _crypto-API misuse mutation operators_ that instantiate variants or expressions of crypto-API misuse. To elaborate, while mutation operators from traditional SE mutation testing are used to describe operations that either add, modify, or remove existing source code statement(s), in the context of MASC, _crypto-API mutation operators_ create expressive instances of crypto-API misuse independent of any source code or application. As shown in Listing 1, the statement marked \(//1\) is the base misuse case, whereas statements \(//2\) \(-//5\) are the mutated crypto-misuse cases instantiated by several mutation operators of MASC. We provide the design considerations and implementation details of MASC's mutation operators in Sec. 4.1. These mutated misuse instances are then "injected" or "seeded" in source code, where the injection site depends on the _mutation scopes_ of MASC, which we detail in Sec. 4.2.
## 3. Design Goals
We considered several goals while designing MASC, leaning on the experience gained from the original version.
_Diversity of Crypto-APIs_ (**DG1):** Effectively evaluating crypto-detectors requires considering misuse cases of existing crypto-APIs, which is challenging as crypto-APIs are as vast as the primitives
Figure 1. A conceptual overview of the MASC framework.
they enable. To address this, the crypto-API mutation operators need to be decoupled from the crypto-APIs. Such an implementation would mean that even when new crypto-APIs are introduced, MASC can still create mutated misuse cases, as long as the new crypto-APIs follow existing design principles.
_Open to Extension_ (**DG2**): While both the original and current implementations of MASC come with 12 generalizable mutation operators, these represent a subset of different expressions of misuse cases. Hence, MASC should be open to extension by stakeholders so that they can create their own mutation operators that can be easily plugged into MASC, without needing to modify MASC itself.
_Ease of Evaluating Crypto-detectors_ (**DG3**): While the original, semi-automated implementation of MASC required manually evaluating the target crypto-detector, such heavy manual effort cannot simply be expected from end-users. Part of this manual effort was _unavoidable_ due to the unique, varied outputs produced by crypto-detectors. However, with the recent focus on using crypto-detectors in CI/CD pipelines and the introduction of the _de-facto_ SARIF (Garf et al., 2018) formatted outputs, it becomes possible to not only automate the entire evaluation process, but also make it customizable.
_Adapting to Users_ (**DG4**): Finally, MASC should be created in such a way that it is usable by users of varying skills and in different environments. For instance, it should be usable as a stand-alone binary component in a windowless server environment, and as front-end based software that can leverage its own binary.
## 4. Implementation of MASC
To satisfy the design goals (**DG1**-**DG4**), we implemented MASC (11\(K\)+ effective Java source lines of code) following the single-responsibility principle across modules, classes, and functions. Note that while the current implementation of MASC inherits the _mutation scopes_ of the original implementation with internal structural changes, the bulk of the changes and new features in the current implementation of MASC are based on the _Main Scope_. Therefore, we describe the implementation details of MASC with a focus on the _Main Scope_ in the context of the design goals, and provide an overview of the architecture in Figure 2.
**Configuration Manager:** To make MASC as flexible as possible, we decoupled the crypto-API specific parameters from the internal structure of MASC. As a result, a user can specify any crypto-API along with its necessary parameters through an external configuration file defining the base crypto-API misuse case. The configuration file follows a key-value format, as shown in Listing 2. Additionally, a user can specify the mutation operators and scope to be used, along with other configuration values, thus satisfying **DG1**.
```
scope=main
type=StringOperator
output=app/outputs
apiName=javax.crypto.Cipher
# Method call from crypto-API
invocation=getInstance
# Secure parameter to use with crypto-API
secureParam=AES/GCM/NoPadding
# Insecure parameter to use with crypto-API
insecureParam=AES
# Insecure value used with mutation
noise=
# Variable/class name used to create necessary structures
variableName=cryptoVariable
className=CryptoTest
# Name of the app for similarity-scope specific mutation
appName=<Name of the app>
```
**Listing 2**: Example configuration file for MASC
**Mutation Operator Module:** MASC analyzes the specified crypto-API and uses the values specified by the user (_e.g._, secure and insecure parameters to be used with the API) for creating mutated crypto-API misuse instances. Internally, the decoupling of crypto-APIs from MASC is made possible through the use of _Java Reflection_ based API analysis and Java source generation using the _JavaPoet_ library (**DG1**). While both the original and current implementations of MASC come with several generalizable mutation operators, the current implementation of MASC includes an additional plug-in structure that facilitates creating custom mutation operators and custom key-value pairs for the configuration file. Both of these can be done _externally_, _i.e._, no modification to the source code of MASC is necessary (**DG2**). We provide additional details about MASC's mutation operators in Section 4.1.
**Automated Evaluation Module:** The current implementation of MASC leverages the SARIF formatted output to automate the evaluation of crypto-detectors. To make the end-to-end analysis automated, MASC can be configured with crypto-detector specific commands and settings, _e.g._, the command for compiling mutated source code for analysis, evaluation stop conditions, the command for running the crypto-detector, the output directory, and more (**DG3**-**DG4**).
Furthermore, MASC is implemented to produce verbose logs. In combination with the flexible configuration, it is therefore possible to use the stand-alone binary MASC jar file as a module of another software. As a proof of concept, we implemented MASC Web, a _Python-Django_ based front-end that offers all the functionalities of MASC (usage details in Section 5) and uses the binary jar of MASC as a module (**DG4**).
### Mutation Operators
We designed generalizable mutation operators by examining the Java Cryptographic Architecture (JCA) documentation. We identified two common patterns of crypto-API invocation as follows: (_i_) _restrictive_, where a developer is expected to only instantiate certain crypto-API objects by providing values from a pre-defined set, _e.g._, Cipher, and (_ii_) _flexible_, where the developers implement the behavior, _e.g._, HostnameVerifier. While defining mutation operators for these two distinct patterns, we assumed a threat model consisting of the following types of adversaries:
**Benign developer, accidental misuse (T1):** A benign developer who accidentally misuses crypto-API, but attempts to address such vulnerabilities using a crypto-detector.
Figure 2. Architecture Overview of the Main Scope of MASC
```
interface INV extends HostnameVerifier {}
new INV() {
    public boolean verify(String h, SSLSession s) { return true; }
};
```
Listing 3: Flexible crypto-API based misuse mutation by MASC
**Benign developer, harmful fix (T2):** A benign developer who is trying to address a vulnerability identified by a crypto-detector in good faith, but ends up introducing a new vulnerability instead.
**Evasive developer, harmful fix (T3):** A developer who aims to finish a task as quickly or with low effort (_e.g._, a contractor), and is hence attempting to purposefully evade a crypto-detector.
The restrictive operators mutate the restrictive values in ways that abstract away the crypto-API misuse. For example, the abstraction can be based on method chaining, changing letter case (JCA is case-insensitive), and introducing alias variables, as shown in Listing 1; a small sketch of these restrictive mutations is given below. We implemented 6 mutation operators for restrictive crypto-APIs. Similarly, for the flexible APIs, we implemented mutation operators based on object-oriented programming concepts:
* **Method overriding** is used to create mutations that contain _ineffective_ security exception statements, irrelevant loops, and/or ineffective security-sensitive return values,
* **Class extension** is used for implementing or inheriting parent crypto-API interfaces or abstract classes, respectively, and
* **Object Instantiation** is for creating anonymous inner class objects from the implemented or inherited classes of crypto-APIs.
We created 6 more conceptual mutation operators based on flexible crypto-APIs. An example of a flexible mutant is shown in Listing 3, followed by a sketch of the restrictive operators.
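Since Listing 1 is not reproduced here, the following sketch generates a few Java fragments in the spirit of the restrictive operators (case change, method chaining, alias variable); the exact strings are hypothetical illustrations, not MASC's verbatim output.

```python
# Hypothetical restrictive mutations of Cipher.getInstance("AES");
# each generated string is a Java fragment illustrating one operator idea.
insecure = "AES"
mutants = [
    f'Cipher.getInstance("{insecure.lower()}");',                # case change (JCA is case-insensitive)
    f'Cipher.getInstance("{insecure}".concat(""));',             # method chaining on the parameter
    f'String alias = "{insecure}"; Cipher.getInstance(alias);',  # alias variable
]
for m in mutants:
    print(m)
```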
### Mutation Scopes
To emulate vulnerable crypto-API misuse placement by benign and evasive developers, we designed three mutation scopes to be used with MASC:
* _Main Scope_ represents the simplest scope, where it seeds mutants at the beginning of the main method of a simple Java or Android template app, ensuring reachability.
* _Similarity Scope_, which is extended from MDroid+ (Dwork et al., 2018; Dwork et al., 2018), seeds mutants in the source code of an input application where a similar crypto-API is found. Note that it does not modify the existing crypto-API, and only appends the said mutant misuse case.
* _Exhaustive Scope_, which is extended from \(\mu\)SE (Brock et al., 2018; Dwork et al., 2018; Dwork et al., 2018), seeds mutants at _all syntactically possible_ locations in the target app, such as class definitions, conditional segments, method bodies, and anonymous inner class object declarations. This helps evaluate the reachability of the target crypto-detector.
## 5. Using MASC
As described previously, MASC has both a command line interface and a web-based front-end (MASC Web, shown in Figure 3). The MASC CLI can be executed by providing a configuration file, _e.g._, Cipher.properties, using the command shown in Listing 4. Similarly, using MASC Web, users can do the following, labeled as per Figure 3:
* Experiment and learn about crypto-API misuse using MASC Lab,
* Mutate open source applications by uploading the zipped source code in MASC Engine,
* Use custom implemented mutation operators as plugins,
* Create and upload configuration files, and
* Profile crypto-detectors by analyzing caught and uncaught mutants.
The detailed description of each of these, with example configuration files and detailed developer documentation, is shared in the open-source repository of MASC (Brock et al., 2018).
## 6. Future Work and Conclusion
We discussed the overview, design goals, implementation details, and usage of MASC, a user-friendly tool for mutation-based evaluation of static crypto-API misuse detectors. While we do not report any additional crypto-detector evaluation in this demonstration paper, evaluation results of the original implementation of MASC are available in the original paper (Brock et al., 2018). We plan to evaluate additional crypto-detectors with the current implementation of MASC, and aim to extend the customization support to the additional scopes, _i.e._, the exhaustive scope and the similarity scope. We hope that the current implementation of MASC will help crypto-detector stakeholders, _i.e._, security researchers, developers, and users, to systematically evaluate crypto-detectors. Furthermore, we envision that open-source enthusiasts will augment the mutation operators of MASC further, empowered by its easy-to-extend architecture, thus helping improve crypto-detectors by finding novel flaws.
###### Acknowledgements.
This work is supported in part by NSF-1815336, NSF-1815186, NSF-1955853 grants and Coastal Virginia Center for Cyber Innovation and the Commonwealth Cyber Initiative, an investment in the advancement of cyber R&D, innovation, and workforce development. For more information about COVA CCI and CCI, visit www.covacci.org and www.cyberinitiative.org.
Figure 3. Web-based Front-end of MASC
|
2310.18102 | Impact of Hydrogenation on the Stability and Mechanical Properties of
Amorphous Boron Nitride | Interconnect materials with ultralow dielectric constant, and good thermal
and mechanical properties are crucial for the further miniaturization of
electronic devices. Recently, it has been demonstrated that ultrathin amorphous
boron nitride (aBN) films have a very low dielectric constant, high density
(above 2.1 g/cm3), high thermal stability, and mechanical properties. The
excellent properties of aBN derive from the nature and degree of disorder,
which can be controlled at fabrication, allowing tuning of the physical
properties for desired applications. Here, we report an improvement in the
stability and mechanical properties of amorphous boron nitride upon hydrogen
doping. With the introduction of a Gaussian approximation potential (GAP) for
atomistic simulations, we investigate the changing morphology of amorphous
boron nitride with varying H doping concentrations. We found that for 8 at% of
H doping, the concentration of $sp^3$-hybridized atoms reaches a maximum which
leads to an improvement of thermal stability and mechanical properties by 20%.
These results will be a guideline for experimentalists and process engineers to
tune the growth conditions of amorphous boron nitride films for numerous
applications. | Onurcan Kaya, Luigi Colombo, Aleandro Antidormi, Marco A. Villena, Mario Lanza, Ivan Cole, Stephan Roche | 2023-10-27T12:39:40Z | http://arxiv.org/abs/2310.18102v2 | # Impact of Hydrogenation on the Stability and Mechanical Properties of Amorphous Boron Nitride
###### Abstract
Interconnect materials with ultralow dielectric constant, and good thermal and mechanical properties are crucial for the further miniaturization of electronic devices. Recently, it has been demonstrated that ultrathin amorphous boron nitride (aBN) films have a very low dielectric constant, high density (above 2.1 g/cm3), high thermal stability, and good mechanical properties. The excellent properties of aBN derive from the nature and degree of disorder, which can be controlled at fabrication, allowing tuning of the physical properties for desired applications. Here, we report an improvement in the stability and mechanical properties of amorphous boron nitride upon hydrogen doping. With the introduction of a Gaussian approximation potential (GAP) for atomistic simulations, we investigate the changing morphology of amorphous boron nitride with varying H doping concentrations. We found that for 8 at% of H doping, the concentration of \(sp^{3}\)-hybridized atoms reaches a maximum, which leads to an improvement of thermal stability and mechanical properties by 20%. These results will be a guideline for experimentalists and process engineers to tune the growth conditions of amorphous boron nitride films for numerous applications.
## 1 Introduction
As the semiconductor industry downscales integrated circuits and power consumption increases, materials and reliability are pushed to their limits. It is thus increasingly important to either improve the performances of existing materials and/or develop new materials to meet these stringent demands of power reduction of electronic devices and circuits. While transistors have gone through several generations of design and the introduction of new materials, the back-end-of-line interconnects have seen fewer changes. Nevertheless, significant efforts have been dedicated to address the reduction of the resistance-capacitance (RC) delay [1, 2, 3]. The resistance-capacitance (RC)
reduction could be achieved in several ways: 1) decrease the capacitance density by decreasing the dielectric constant of the interlayer dielectric (ILD) and the intermetal dielectric (IMD), 2) decrease the resistivity of the metal interconnect wiring, and 3) increase the cross-section of the interconnects. However, there are several problems associated with decreasing the RC time delay as per the above approaches: 1) new materials with lower dielectric constants exist but may not be stable within the required process flow, that is, they may not be mechanically and thermally stable as well as being good diffusion barriers, and 2) it is difficult to find metallic systems with electrical resistivity lower than that of Cu that can also meet the stability requirements of devices [4, 5, 6].
A potential new barrier dielectric has recently emerged, amorphous boron nitride (\(\alpha\)-BN). Experimental reports on \(\alpha\)-BN indicate that it has a low dielectric constant, with k-values lower than 2, and exhibits higher stability and better mechanical properties compared to other low-dielectric materials such as organic polymeric materials [7, 8, 9]. In addition, theoretical predictions suggest that a certain density of carbon content improves the structural and thermal properties of \(\alpha\)-BN:C [10]. More recently, there is also a study on the use of \(\alpha\)-BN for interconnect capping layers [11]. This last work reports on the plasma-enhanced chemical vapor deposition (PECVD) of 3 and 7 nm \(\alpha\)-BN as a capping layer to replace PECVD-grown Si\({}_{3}\)N\({}_{4}\). The study finds that \(\alpha\)-BN is an excellent insulator and an efficient barrier against Cu diffusion, has good adhesion to copper and SiO\({}_{2}\), is thermally stable, and has a much lower dielectric constant (k=3) than Si\({}_{3}\)N\({}_{4}\) (k\(\sim\)7), enabling an RC-delay reduction of 10-17%. However, neither these results nor the previous studies on ALD-grown \(\alpha\)-BN [12, 13] report on the effects of C and H content. In Ref. [10], using machine learning techniques and classical molecular dynamics, we explored the effects of C content on the physical properties of \(\alpha\)-BN in an attempt to create an even more stable dielectric. Also, it is well known that PECVD-grown Si\({}_{3}\)N\({}_{4}\) can contain large amounts of hydrogen [14], and it is likely that PECVD-grown \(\alpha\)-BN may also contain hydrogen. Therefore, it is important to understand the stability of hydrogen in \(\alpha\)-BN, as it is for Si\({}_{3}\)N\({}_{4}\), since hydrogen can impact the underlying performance of the Si transistors and affect the dielectric and physical properties of interconnects. In this paper, we clarify the effects of hydrogenation of \(\alpha\)-BN:H on its thermal stability and mechanical properties.
From a computational perspective, atomistic calculations represent a suitable tool to describe complex structures, giving access to details at the atomic and molecular level to enable a basic understanding of new materials without performing more costly and time-consuming experiments. The extremely disordered nature of amorphous materials requires a computational approach able to capture the interatomic potentials in arbitrary complex local environments, a challenge that can only be tackled with machine learning-based methods [15, 16, 17]. More specifically, classical molecular dynamics with the employment of force fields derived using machine learning and ab-initio techniques constitute a powerful methodology to describe disordered material while keeping first-principles accuracy [18, 19, 20, 21]. In our theoretical study, we found
that hydrogen influences the hybridization of the core structure (\(sp^{2}/sp^{3}\)) and increases structural disorder, as observed through the radial distribution function (RDF). Further, in comparison to carbon-doped \(\alpha\)-BN, whose thermal stability and Young's modulus increase monotonically with C doping, hydrogen doping leads to an \(\alpha\)-BN with a non-monotonic change in these properties; in fact, they peak at around 8 at% hydrogen. These results are critically important in providing directions to the experimentalist in tuning the deposition processes to meet the electronic device requirements.
## 2 Methods
### Gaussian Approximation Potentials (GAP)
Structures for the training and validation sets are generated via DFT calculations with the Quantum Espresso package [22, 23, 24], using the Perdew-Burke-Ernzerhof (PBE) [25] exchange-correlation functional and projector-augmented wave (PAW) pseudopotentials. The energy cutoffs for the wavefunction and electronic density are 75 Ry and 600 Ry, respectively. Both training and validation sets contain sufficiently large data sets of forces, energies, and stresses. These data sets include both crystalline and amorphous BN structures, and \(\alpha\)-BN:H samples with different levels of H concentration. The density of the structures in the data sets ranges between 1.0 \(g/cm^{3}\) and 3.0 \(g/cm^{3}\). Moreover, they also contain several distinct molecular configurations (\(H_{2}\), \(N_{2}\), ammonia, ammonium, and borazine) and isolated B, N, and H atoms. The final training and validation sets contain 2500 and 1800 samples, respectively. Such a wide variety of atomistic configurations enables us to model \(\alpha\)-BN:H samples with better accuracy.
The parameters shown in Table 1 have been employed to train the GAP model for \(\alpha\)-BN:H using the training database. The Smooth Overlap of Atomic Positions (SOAP) descriptor [26] is introduced to model the many-body interactions between atoms, while 2b and 3b descriptors are adopted for two-body and three-body interactions. Uniform sparsification has been used for the 2b and 3b terms, while the CUR method [27] has been chosen for the SOAP kernel. After the training, we compare the energies and forces of structures in both the training and validation sets obtained from molecular dynamics simulations with the GAP model (GAP-MD) against the results from DFT calculations, in order to evaluate the accuracy of the generated GAP model, as shown in Fig. 1. A significantly low root mean squared error (RMSE) is obtained for both training and validation sets.
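The accuracy check amounts to an RMSE between GAP-MD and DFT values; a minimal sketch is shown below, with placeholder arrays rather than the actual data sets.

```python
import numpy as np

def rmse(pred, ref):
    """Root mean squared error between GAP-predicted and DFT reference values
    (e.g. energies per atom in eV, or force components in eV/Angstrom)."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return np.sqrt(np.mean((pred - ref) ** 2))

print(rmse([1.02, -0.48, 0.11], [1.00, -0.50, 0.10]))  # placeholder values
```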
### Melt-Quench Protocol for Sample Generation
One of the most common strategies used in molecular dynamics (MD) simulations to generate amorphous samples is the melt-quenching protocol. In this method, the sample is first melted by heating above the melting temperature and then rapidly quenched [28]. The \(\alpha\)-BN:H samples containing varying amounts of hydrogen are generated following this protocol using GAP-MD simulations with Large-scale Atomic/Molecular Massively
Parallel Simulator (LAMMPS) code [29]. Each sample has 10000 atoms and an equal number of boron and nitrogen atoms. First, all boron, nitrogen, and hydrogen atoms are placed randomly in the simulation cell; then, the melted samples are equilibrated at 5000 K for 50 ps (timestep of 0.1 fs) using a Nosé-Hoover thermostat. Later, the temperature of the samples is reduced to 2000 K in 100 ps (with a cooling rate of 40 K/ps) and equilibrated at this temperature. Samples are then cooled down to 300 K in 150 ps. After a short relaxation (10 ps) run, we also applied an annealing step where the temperature was increased to 700 K with a heating rate of 20 K/ps and decreased back to 300 K with a cooling rate of 5 K/ps. Finally, annealed samples are relaxed and equilibrated at 300 K for 50 ps in the NPT ensemble.
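For clarity, the stages of this protocol can be tabulated as a simple schedule; the sketch below merely restates the temperatures and durations given above (the annealing durations are derived from the stated heating/cooling rates, an assumption) and is not an actual LAMMPS input.

```python
# Melt-quench/anneal schedule for a-BN:H generation (K, ps), as in the text;
# the two annealing durations are derived from the stated rates.
schedule = [
    ("melt (NVT)",   5000, 5000,  50),
    ("quench",       5000, 2000, 100),
    ("cool",         2000,  300, 150),
    ("relax",         300,  300,  10),
    ("anneal: heat",  300,  700,  20),   # 20 K/ps
    ("anneal: cool",  700,  300,  80),   # 5 K/ps
    ("final (NPT)",   300,  300,  50),
]
for stage, t_start, t_end, dt in schedule:
    rate = (t_end - t_start) / dt
    print(f"{stage:<13} {t_start:>5} K -> {t_end:>4} K over {dt:>3} ps ({rate:+.1f} K/ps)")
```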
## 3 Results
### Morphology Analysis of \(\alpha\)-BN:H
We first present the analysis of the morphology of \(\alpha\)-BN:H with different H concentrations, employing the melt-quench protocol. A subset of the samples generated in this work is shown in Fig. 2. The RDF in Fig. 3 shows the density of surrounding atoms as a function of distance and gives insight into the crystallinity of the material. Even though the first two peaks (\(<\)4 \(\AA\)) are clearly identified, no peak is recognizable at longer distances, indicating the lack of long-range order. Hence, the amorphous character of \(\alpha\)-BN does not change with the H doping.
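For reference, an RDF like the one in Fig. 3 can be estimated with a standard minimum-image pair histogram; the sketch below is a generic textbook construction, not the authors' analysis code.

```python
import numpy as np

def rdf(positions, box, r_max, n_bins=200):
    """g(r) for N atoms in a cubic box of side `box` (periodic boundaries,
    minimum-image convention). positions: (N, 3) array."""
    n = len(positions)
    rho = n / box**3                           # number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]   # vectors to all later atoms
        d -= box * np.round(d / box)           # minimum image
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shells = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = 0.5 * n * rho * shells             # pairs expected for an ideal gas
    return 0.5 * (edges[:-1] + edges[1:]), hist / ideal
```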
The short-range order in \(\alpha\)-BN:H is dominated by the first-neighbor distance, which contributes to the first peak in the RDF, located at an average distance of 1.42 \(\AA\), as shown in Fig. 3. A closer look at the first peak reveals an increased broadening and a shift to the left side with increased doping concentration, suggesting that it is induced by the presence of hydrogen atoms. Such a change occurs due to the formation of chemical
\begin{table}
\begin{tabular}{c c c c} \hline & 2-body & 3-body & SOAP \\ \hline \(\delta\) (eV) & 2.0 & 0.1 & 0.1 \\ \(r_{cut}\) (\(\AA\)) & 3.7 & 3.0 & 3.7 \\ \(r_{\Delta}\) (\(\AA\)) & & & 0.5 \\ \hline \(\sigma_{at}\) (\(\AA\)) & & & 0.5 \\ \(n_{max}\), \(l_{max}\) & & & 8 \\ \(\zeta\) & & & 4 \\ \hline Sparsification & Uniform & Uniform & CUR \\ \hline \(N_{t}\) (\(\alpha\)-BN bulk) & & 150 & 2000 \\ \(N_{t}\) (Crystalline samples) & & 50 & 500 \\ \(N_{t}\) (Total) & 15 & 200 & 2500 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used to train the GAP potential for H-doped \(\alpha\)-BN.
bonds different from B-N (B-H, N-H, B-B, and N-N), whose average bond lengths differ from that of B-N. The average bond lengths of these bonds are approximately 1.21 \(\AA\), 1.05 \(\AA\), 1.81 \(\AA\), and 1.44 \(\AA\), respectively.
As \(\alpha\)-BN:H is formed by the melt-quenching method from high temperatures, the resulting microstructure frozen in by the quenching process has a large influence on the hybridization of the atoms in the formed film. Understanding how the hybridization changes with the fabrication conditions can help us tailor the properties of the material. The coordination number of \(\alpha\)-BN:H is calculated to determine the type of hybridization. Fig. 4 shows the ratio of \(sp^{2}\) (having coordination number 3) and \(sp^{3}\) (having coordination number 4) hybridized atoms. \(sp^{1}\)-hybridized atoms (having a coordination number of 2) are also presented. With the introduction of H atoms to \(\alpha\)-BN, the ratio of \(sp^{2}/sp^{3}\) drops rapidly, with a minimum observed at 8 at% H concentration. A deeper understanding of the chemical composition of the samples can be obtained by investigating the number and nature of the chemical species involved in the bonds of the samples as a function of H concentration. As shown in Fig. 4, while the numbers of B-H and N-H bonds increase, the others decrease monotonically with larger H concentrations. No H-H bonds are observed up to 20 at% H concentration.
While having more H atoms reduces the total mass of the sample, eliminating the \(sp^{1}\) hybridizations and having shorter bonds causes a dramatic shrinkage in the volume of the cell down to an H concentration of 8 at%. Due to this interplay, the density of
Figure 2: A subset of generated \(\alpha\)-BN:H samples with varying H concentration using VESTA[30] software where B, N, and H atoms are represented as black, grey, and black, respectively.
\(\alpha\)-BN:H at low H concentration levels is increased from 2.17 to 2.181 \(g/cm^{3}\); however, at larger H concentrations, the density of the \(\alpha\)-BN:H samples drops rapidly, to as low as 2.01 \(g/cm^{3}\).
### Thermal Stability of \(\alpha\)-BN:H
Upon investigating the morphology of the generated samples, we also calculate the diffusivity of samples as a function of sample temperature. The diffusivity (\(D=\lim_{t\rightarrow\infty}MSD(t)/6t\)) of samples at any given temperature can be extracted from the
Figure 4: The ratio of \(sp^{2}\) and \(sp^{3}\)-hybridized atoms of \(\alpha\)-BN:H as a function of H concentration (left). Number of the chemical bonds with respect to the H concentration (right).
Figure 3: Average RDF of melt-quenched \(\alpha\)-BN:H samples. All lines are averaged over five samples.
mean square displacement (MSD) of atoms, where the MSD shows the average mobility of particles. The diffusivity of a sample is zero when the MSD approaches a non-zero constant and has a zero slope. However, when the sample under investigation experiences a structural rearrangement and loses its stability, the MSD has a non-zero slope. This allows us to evaluate the thermal stability of the samples and understand when they become unstable. Here, thermal stability refers to the material's ability to retain its structural integrity, without significant atomic diffusion or rearrangement and loss of short-range order, when subjected to high temperatures. In order to assess the thermal stability, we track the samples between 300 K and 3000 K. We first calculate the MSD and diffusivity at a given temperature for 70 ps, then increase the temperature of the samples by 50 K in 30 ps, and calculate the MSD at the new temperature. The time intervals are determined to be large enough to obtain statistically meaningful data. The NPT (constant number of atoms, pressure, and temperature) ensemble with a Nosé-Hoover thermostat has been applied with a timestep of 0.25 fs.
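A minimal sketch of this extraction step: fit the late-time MSD slope and use \(D=\mathrm{slope}/6\). The fit-window fraction and the test arrays below are hypothetical choices, not the authors' exact settings.

```python
import numpy as np

def diffusivity(t, msd, fit_start=0.5):
    """D = lim MSD(t)/(6t), estimated from the late-time MSD slope.
    t in ps, msd in Angstrom^2 -> D in Angstrom^2/ps; `fit_start` is the
    fraction of the window discarded as ballistic/transient."""
    i0 = int(len(t) * fit_start)
    slope, _ = np.polyfit(t[i0:], msd[i0:], 1)   # MSD ~ 6 D t + const
    return slope / 6.0

t = np.linspace(0.0, 70.0, 701)
print(diffusivity(t, np.full_like(t, 2.0)))  # flat MSD (vibration only): ~0
print(diffusivity(t, 0.6 * t))               # linearly growing MSD: ~0.1
```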
The diffusivity of H atoms in \(\alpha\)-BN:H is presented in Fig. 5. Non-zero values at low temperatures are obtained due to the vibration of atoms. Larger MSD values, which indicate structural rearrangement of atoms and an unstable structure, are observed starting from 1600 K for 3 at% and 5 at% H-doping. For the case of 8 at% doping, the diffusivity is near zero up to 1800 K. For larger H doping, this temperature value drops significantly. For highly H-doped \(\alpha\)-BN, the diffusivity becomes non-zero between 1000-1200 K. However, B and N atoms become diffusive at higher temperatures compared to H atoms. While H atoms begin to diffuse around 1800 K, B and N atoms still have near-zero diffusivity until 2000 K. Similarly, at higher H concentration levels, there is a significant difference between the temperatures at which H atoms and B and N atoms start to diffuse. At low-level H doping, this temperature difference is quite low.
Regardless of the identity of the atoms, a non-monotonic trend between thermal stability and H concentration is observed. \(\alpha\)-BN:H samples become more stable until the H concentration reaches 8 at%, due to the increase in \(sp^{3}\)-hybridized atoms and the reduction in \(sp^{1}\)-hybridized atoms. Thereafter, thermal stability decreases rapidly, since larger H doping reduces the density and causes some pores within the sample. At H concentrations larger than 20 at%, samples become unstable. At low hydrogen levels, H doping can lead to a more thermally stable structure, since it reduces the amount of \(sp^{1}\)-hybridized B and N atoms, reduces the number of dangling bonds, and increases the density. However, at larger concentrations, H atoms disrupt the stability of the \(\alpha\)-BN samples. Kaya et al. [10] showed that the thermal stability of C-doped \(\alpha\)-BN samples can be improved with larger amounts of \(sp^{3}\)-hybridized atoms. Similarly, Cloutier et al. [31] and Liu et al. [32] showed that increasing the density of \(\alpha\)-C films and the number of \(sp^{3}\)-hybridized atoms can improve the stability of the films.
### Mechanical Properties of \(\alpha\)-BN:H
To calculate the mechanical properties of the \(\alpha\)-BN:H samples as a function of H concentration, we compute the elastic constants of the samples by using the stress fluctuations and the Born matrix (i.e., the second derivatives of the energy with respect to the strain) in an NVT ensemble at 300 K [33]. For each sample generated in this study, we calculate the full elastic stiffness tensor \(C_{ij}\) using LAMMPS. Later, we calculate Young's modulus, the shear modulus, the bulk modulus, and Poisson's ratio from the stiffness matrix.
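The post-processing step from \(C_{ij}\) to the reported moduli can be sketched with the standard Voigt averages and isotropic relations; the exact averaging scheme used by the authors may differ, so this is an illustrative assumption.

```python
import numpy as np

def voigt_moduli(C):
    """Isotropic moduli from a 6x6 stiffness matrix C (GPa, Voigt notation):
    Voigt-average bulk K and shear G, then E = 9KG/(3K+G) and
    nu = (3K-2G)/(2(3K+G))."""
    diag = C[0, 0] + C[1, 1] + C[2, 2]
    off = C[0, 1] + C[0, 2] + C[1, 2]
    shear = C[3, 3] + C[4, 4] + C[5, 5]
    K = (diag + 2.0 * off) / 9.0
    G = (diag - off + 3.0 * shear) / 15.0
    E = 9.0 * K * G / (3.0 * K + G)
    nu = (3.0 * K - 2.0 * G) / (2.0 * (3.0 * K + G))
    return K, G, E, nu
```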
Fig. 6 shows Young's modulus, the shear modulus, and the bulk modulus of the \(\alpha\)-BN:H samples. The results show a non-monotonic trend similar to that of the thermal stability and the \(sp^{2}/sp^{3}\) ratios. This clearly indicates that the mechanical properties are largely dependent on the microstructure of \(\alpha\)-BN:H. Young's modulus of the pure \(\alpha\)-BN samples was calculated as 270.11 GPa, increasing to 332.21 GPa at 8 at% H concentration, which coincidentally corresponds to the largest number of \(sp^{3}\)-hybridized bonds. The shear modulus and bulk modulus of the \(\alpha\)-BN samples are also increased with a higher density and larger number
Figure 5: Diffusivity of H atoms as a function of temperature for samples of \(\alpha\)-BN:H with varying H concentration. Inset: Diffusivity of H atoms at 2200 K as a function of level of H doping.
of \(sp^{3}\)-hybridized atoms. However, larger H doping worsens the mechanical properties due to fewer \(sp^{3}\)-hybridized atoms, lower density, and a more porous structure. A reduction in mechanical properties with lower density and fewer \(sp^{3}\)-hybridized atoms has already been shown for \(\alpha\)-BN and other amorphous structures [10, 9, 34, 35, 36]. Even though the Young's modulus values reported in this study are lower than those of hexagonal and cubic BN, \(\alpha\)-BN:H still has superior mechanical properties compared to other ultralow-dielectric materials. Another important mechanical property is Poisson's ratio, which gives us insight into how materials act under stress. The Poisson's ratio of the \(\alpha\)-BN:H samples ranges between 0.24 and 0.281. Even though there is no clear trend between the Poisson's ratio and H doping or the number of \(sp^{3}\)-hybridized atoms, all \(\alpha\)-BN:H samples have a Poisson's ratio lower than 2/7, and the Poisson's ratio drops significantly for structures with H doping higher than 15 at%. Since materials with a Poisson's ratio lower than 2/7 are assumed to be brittle [37], all \(\alpha\)-BN:H samples are assumed to be brittle.
To get a deeper insight into how the microstructure of \(\alpha\)-BN films changes with the doping, in Fig. 7 we compare the temperature at which the samples lose the thermal stability of their B and N atoms, and the Young's modulus, for \(\alpha\)-BN:H and for the \(\alpha\)-BN:C reported earlier [10]. The temperature values presented in Fig. 7 show the approximate temperature where the diffusivity of B and N atoms becomes non-zero. Since C atoms can bond with four atoms instead of one and are heavier than H atoms, they lead to more \(sp^{3}\) bonds and denser samples than \(\alpha\)-BN:H. This leads to a higher Young's modulus and more stable structures, even at higher doping values (20 at%). The variation in stability and mechanical properties due to doping shows that the fundamental properties of \(\alpha\)-BN films can be altered under different fabrication conditions.
Figure 6: Mechanical properties of \(\alpha\)-BN:H with respect to the H concentration.
## 4 Conclusion
The excellent mechanical properties and ultralow dielectric constant of \(\alpha\)-BN open a new pathway for microelectronics and neuromorphic computing technologies. This study reveals how H doping tunes the morphology, stability, and mechanical properties of \(\alpha\)-BN. We first developed a machine-learning interatomic potential for the H dopant in \(\alpha\)-BN and performed GAP-driven MD simulations to generate realistic structures. Thanks to accurate machine learning approaches, we show that the thermal stability and mechanical properties of \(\alpha\)-BN are improved at small H doping levels. Despite \(\alpha\)-BN:H's extraordinary properties, it is crucial to perform a thorough benchmark analysis of growth conditions and the corresponding properties. The results obtained in our study will provide a guide for process engineers to optimize the growth conditions to achieve the optimum materials performance in the context of microelectronics.
## Acknowledgement
This project has been supported by Samsung Advanced Institute of Technology and is conducted under the REDI Program, a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 101034328. This paper reflects only the author's view and the Research Executive Agency is not responsible for any use that may be made of the information it contains. ICN2 acknowledges the Grant PCI2021-122092-2A funded by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR". Simulations were performed at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, supported by the U.S. DOE, Office of Basic Energy Sciences, under Contract No. DE-AC02-06CH11357. Additional computational support was received from the King Abdullah University of Science and Technology-KAUST (Supercomputer Shaheen II Cray XC40) and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.

Figure 7: Comparison of the temperature at which the \(\alpha\)-BN:H and \(\alpha\)-BN:C samples lose their stability, and of their Young's modulus, with respect to the doping concentration.
|
2305.15848 | Skew bracoids | Skew braces are intensively studied owing to their wide ranging connections
and applications. We generalize the definition of a skew brace to give a new
algebraic object, which we term a skew bracoid. Our construction involves two
groups interacting in a manner analogous to the compatibility condition found
in the definition of a skew brace. We formulate tools for characterizing and
classifying skew bracoids, and study substructures, quotients, homomorphisms,
and isomorphisms. As a first application, we prove that finite skew bracoids
correspond with Hopf-Galois structures on finite separable extensions of
fields, generalizing the existing connection between finite skew braces and
Hopf-Galois structures on finite Galois extensions. | Isabel Martin-Lyons, Paul J. Truman | 2023-05-25T08:41:32Z | http://arxiv.org/abs/2305.15848v1 | # Skew Bracoids
###### Abstract.
Skew braces are intensively studied owing to their wide ranging connections and applications. We generalize the definition of a skew brace to give a new algebraic object, which we term a _skew bracoid_. Our construction involves two groups interacting in a manner analogous to the compatibility condition found in the definition of a skew brace. We formulate tools for characterizing and classifying skew bracoids, and study substructures, quotients, homomorphisms, and isomorphisms. As a first application, we prove that finite skew bracoids correspond with Hopf-Galois structures on finite separable extensions of fields, generalizing the existing connection between finite skew braces and Hopf-Galois structures on finite Galois extensions.
Key words and phrases:Skew left braces, Hopf-Galois structure, Hopf-Galois theory 2020 Mathematics Subject Classification: Primary 20N99; Secondary 16T05, 12F10
## 1. Introduction
Skew braces are a generalization, due to Guarnieri and Vendramin [15], of braces, which were introduced by Rump in [20]. A (left) _skew brace_ is a triple \((B,\star,\cdot)\) such that \((B,\star)\) and \((B,\cdot)\) are groups (sometimes called the _additive_ and _multiplicative_ groups of the skew brace, respectively) and the following compatibility condition (the (left) _skew brace relation_) is satisfied
\[a\cdot(b\star c)=(a\cdot b)\star a^{-1}\star(a\cdot c)\text{ for all }a,b,c\in B. \tag{1}\]
Here \(a^{-1}\) denotes the inverse of \(a\) with respect to \(\star\). A _brace_ is a skew brace in which the group \((B,\star)\) is abelian. These objects were introduced, and continue to be intensively studied, because they yield nondegenerate set theoretic solutions of the _Yang-Baxter equation_[15, Section 3], [22, Section 3]. They have also been found to have connections with a wide range of other algebraic objects, including near-rings, racks, quandles, pre-lie rings, and Hopf-Galois structures. We give a little more detail on this last construction since we return to it later. A _Hopf-Galois structure_ on a finite extension of fields \(L/K\) consists of a \(K\)-Hopf algebra \(H\) and a \(K\)-linear action of \(H\) on \(L\) satisfying a certain nondegeneracy condition ([8, (2.7) Definition]). A theorem of Greither and Pareigis [14] implies that if \(L/K\) is a Galois extension with Galois group \(G\) then the Hopf-Galois structures on \(L/K\) correspond bijectively with certain subgroups \(N\) of \(\operatorname{Perm}(G)\) of the same order as \(G\); the isomorphism class of \(N\) is then called the _type_ of the corresponding Hopf-Galois structure. As observed by Bachiller [1, Remark 2.6] (for \(N\) abelian) and by Smoktunowicz, Vendramin, and Byott [22, Appendix A] more generally, the extension \(L/K\) admits a Hopf-Galois structure of type \(N\) if and only if there is a skew brace whose multiplicative group is isomorphic to \(G\) and whose additive group is isomorphic to \(N\).
**Conventions** All groups, skew braces, skew bracoids, and field extensions are assumed to be finite. The identity element of a group \(A\) is denoted \(e_{A}\).
**Acknowledgements** The second author gratefully acknowledges the support of the Engineering and Physical Sciences Research Council, project reference EP/W012154/1.
## 2. Definitions and characterizations
In this section we present the definition of a skew bracoid, give some families of examples, and establish some elementary properties, including results for constructing and characterizing skew bracoids.
**Definition 2.1**.: A (left) _skew bracoid_ is a \(5\)-tuple \((G,\cdot,N,\star,\odot)\) such that \((G,\cdot)\) and \((N,\star)\) are groups and \(\odot\) is a transitive action of \((G,\cdot)\) on \(N\) such that
\[g\odot(\eta\star\mu)=(g\odot\eta)\star(g\odot e_{N})^{-1}\star(g\odot\mu) \tag{2}\]
for all \(g\in G\) and \(\eta,\mu\in N\).
We shall call Equation (2) the (left) _skew bracoid relation_.
The intuition behind Definition 2.1 is that the group \((N,\star)\) is analogous to the additive group of a skew brace, and the group \((G,\cdot)\), together with its transitive action \(\odot\) on \(N\), is analogous to the multiplicative group. The skew bracoid relation (2) stipulates that the transitive action of \((G,\cdot)\) on \(N\) should interact with the binary operation \(\star\) on \(N\) in a manner analogous to the skew brace relation (1).
To ease notation, we frequently suppress the notation \(\cdot\), and occasionally the notation \(\star\), where there is no risk of confusion. We retain the symbol \(\odot\) for the action of \(G\) on \(N\) in our arguments, but often suppress it when specifying skew bracoids, _viz_\((G,N)\).
Unlike the theory of skew braces (in which we have two binary operations on the same set), we use the notation \(\,{}^{-1}\) to denote all inverses, since the group in which the inverse is being taken can be deduced from the context. Thus \((g\odot e_{N})^{-1}\) denotes the inverse of the element \(g\odot e_{N}\in N\) with respect to \(\star\), whereas \((g^{-1}\odot e_{N})\) denotes the action of \(g^{-1}\in G\) on the identity element of \(N\).
We shall say that a skew bracoid is _finite_ to mean that both \(G\) and \(N\) are finite. In this case, by the orbit-stabilizer theorem, the order of \(N\) divides the order of \(G\). All the skew bracoids we consider will be finite.
**Example 2.2**.: A skew brace \((B,\star,\cdot)\) can be viewed as a skew bracoid with \(G=(B,\cdot)\), \(N=(B,\star)\), and \(\odot=\cdot\). Where necessary, we express this as \((B,\star,B,\cdot,\odot)\) or \((B,B)\).
Conversely, if \((G,N)\) is a skew bracoid in which \(|G|=|N|\) (or, equivalently, \(\operatorname{Stab}_{G}(e_{N})=\{e_{G}\}\)) then the action of \(G\) on \(N\) is regular (i.e. transitive and fixed-point free), and so we may transport the structure of \(G\) to \(N\) via the rule
\[(g\odot e_{N})\circ(h\odot e_{N})=(gh)\odot e_{N}.\]
Having done this, the triple \((N,\star,\circ)\) is a skew brace. Because of this, we shall say that a skew bracoid in which \(|G|=|N|\) is _essentially a skew brace_.
**Example 2.3**.: Let \(n\in\mathbb{N}\), let \(d\) be a positive divisor of \(n\), let
\[G=\langle r,s\mid r^{n}=s^{2}=e_{G},\;srs^{-1}=r^{-1}\rangle\cong D_{n},\]
and let \(N=\langle\eta\rangle\cong C_{d}\). Then the rule
\[r^{i}s^{j}\odot\eta^{k}=\eta^{i+(-1)^{j}k} \tag{3}\]
defines a transitive action of \(G\) on \(N\), and we have
\[(r^{i}s^{j}\odot\eta^{k})\star(r^{i}s^{j}\odot e_{N})^{-1}\star(r ^{i}s^{j}\odot\eta^{\ell})\] \[= \eta^{i+(-1)^{j}k}\star\eta^{-i}\star\eta^{i+(-1)^{j}\ell}\] \[= \eta^{i+(-1)^{j}(k+\ell)}\] \[= r^{i}s^{j}\odot\eta^{k+\ell}.\]
Therefore the skew bracoid relation (2) is satisfied, and so \((G,N,\odot)\) is a skew bracoid.
Certain skew bracoids arise from a natural quotienting procedure on skew braces. To describe this procedure, we recall first that the \(\gamma\)_-function_ of a skew brace \((B,\star,\cdot)\) is the function \(\gamma:B\to\operatorname{Perm}(B)\) defined by \(\,{}^{\gamma(b)}a=b^{-1}\star(b\cdot a)\). In fact, we have \(\gamma(b)\in\operatorname{Aut}(B,\star)\) for each \(b\in B\), and the map \(\gamma\) is a homomorphism from \((B,\cdot)\) to \(\operatorname{Aut}(B,\star)\). A _left ideal_ of a skew brace \((B,\star,\cdot)\) is a subset \(A\) of \(B\) that is a subgroup of \(B\) with respect to \(\star\), a subgroup of \(B\) with respect to \(\cdot\), and that satisfies \(\,{}^{\gamma(B)}A=A\) (the third condition, along with either of the subgroup conditions, implies the other). A left ideal \(A\) of \((B,\star,\cdot)\) is called a _strong left ideal_ if \((A,\star)\) is normal in \((B,\star)\), and is called an _ideal_ if \((A,\star)\) is normal in \((B,\star)\) and \((A,\cdot)\) is normal in \((B,\cdot)\). It is well known that quotients of skew braces by ideals are again skew braces. We can generalize this as follows:
**Proposition 2.4**.: Let \((B,\star,\cdot)\) be a skew brace and let \(A\) be a strong left ideal. Then \((B,\cdot,B/A,\star,\odot)\) is a skew bracoid, where \((B/A,\star)\) denotes the quotient group of \((B,\star)\) by \((A,\star)\) and \(\odot\) denotes left translation of cosets with respect to \(\cdot\).
Proof.: First we note that the cosets of \(A\) in \((B,\star)\) and \((B,\cdot)\) coincide: for \(b\in B\) we have
\[b\star A = \{b\star a\mid a\in A\}\] \[= \{b\star{}^{\gamma(b)}a\mid a\in A\}\text{ since }\gamma(b)\in \operatorname{Aut}(A,\star)\] \[= \{b\star b^{-1}\star(b\cdot a)\mid a\in A\}\] \[= \{b\cdot a\mid a\in A\}\] \[= b\odot A.\]
Thus it makes sense to consider the set \(B/A\) simultaneously as a quotient group with respect to \(\star\) and as a \((B,\cdot)\)-set via left translation of cosets, and in fact we may write \(b\star A=b\odot A=bA\) without ambiguity. It is clear that the action of \((B,\cdot)\) on \(B/A\) by left translation is transitive, so it only remains to verify that the skew
bracoid relation is satisfied. For \(b,c,d\in B\) we have
\[b\odot(cA\star dA) = b\odot((c\star d)A)\] \[= (b\cdot(c\star d))A\] \[= ((b\cdot c)\star b^{-1}\star(b\cdot d))A\] \[= (b\cdot c)A\star(bA)^{-1}\star(b\cdot d)A\] \[= (b\odot cA)\star(b\odot A)^{-1}\star(b\odot dA).\]
(Here \(b^{-1}\) denotes the inverse of \(b\) in \((B,\star)\), and \((bA)^{-1}\), \((b\odot A)^{-1}\) denote the inverses of these cosets in \((B/A,\star)\).) Thus the skew bracoid relation is satisfied, and so \((B,\cdot,B/A,\star,\odot)\) is a skew bracoid.
**Example 2.5**.: The skew bracoid \((G,N)\) constructed in Example 2.3 can be obtained via the procedure described in Proposition 2.4. We write the group \(G\) as \((G,\cdot)\), and define a new binary operation on \(G\) by \(r^{i}s^{j}\star r^{k}s^{\ell}=r^{i+k}s^{j+\ell}\); it is then easy to verify that \((G,\star)\) is a group isomorphic to \(C_{n}\times C_{2}\) and \((G,\star,\cdot)\) is a skew brace with \(\gamma\)-function given by \({}^{\gamma(r^{i}s^{j})}r^{k}s^{\ell}=r^{(-1)^{j}k}s^{\ell}\). It follows that the subgroup \(H=\langle r^{d},s\rangle\) of \((G,\star)\) is a strong left ideal of \((G,\star,\cdot)\). The quotient group \((G/H,\star)\) is cyclic of order \(d\), generated by the coset \(rH\). Now the action of \((G,\cdot)\) on \(G/H\) by left translation is given by
\[r^{i}s^{j}\odot(rH)^{k}=r^{i}s^{j}\odot r^{k}H=r^{i+(-1)^{j}k}H.\]
Thus \((G/H,\star)\) is isomorphic to the group \(N\) in Example 2.3 as a group and a \(G\)-set. We shall give a formal treatment of isomorphism of skew bracoids in Section 4.
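As a computational cross-check of Example 2.5, the sketch below (again with invented helper names, writing \(C_{n}\times C_{2}\) as pairs \((i,j)\)) verifies the skew brace relation (1) for \((G,\star,\cdot)\) and the stability of \(H=\langle r^{d},s\rangle\) under \(\gamma(G)\) for one choice of parameters.

```python
from itertools import product

n, d = 6, 3
G = [(i, j) for i in range(n) for j in range(2)]

star = lambda a, b: ((a[0] + b[0]) % n, (a[1] + b[1]) % 2)  # C_n x C_2
dot = lambda a, b: ((a[0] + (-1) ** a[1] * b[0]) % n, (a[1] + b[1]) % 2)
sinv = lambda a: ((-a[0]) % n, a[1])                        # star-inverse

# skew brace relation (1): a.(b*c) = (a.b) * a^{-1} * (a.c)
for a, b, c in product(G, G, G):
    assert dot(a, star(b, c)) == star(star(dot(a, b), sinv(a)), dot(a, c))

# H = <r^d, s> is stable under gamma(r^i s^j): r^k s^l -> r^{(-1)^j k} s^l
H = {(i, j) for i in range(0, n, d) for j in range(2)}
gamma = lambda a, x: (((-1) ** a[1] * x[0]) % n, x[1])
assert all(gamma(a, h) in H for a in G for h in H)
```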
**Remark 2.6**.: It is natural to ask whether every skew bracoid can be obtained via the procedure described in Proposition 2.4; at present we cannot answer this question.
In [17] Koch and the second author introduce the notion of the _opposite_ of a skew brace (the same construction is studied independently by Rump in [21]). Opposite skew braces have applications to the Yang-Baxter equation [17, Section 4], [16], [12], to enumeration and classification problems [2], [3], and in Hopf-Galois theory [18], [4]. In particular, they play an important role in the Stefanello-Trappeniers approach to the correspondence between skew braces and Hopf-Galois structures on Galois fields extensions [23]. The concept of opposites extends naturally to skew bracoids:
**Proposition 2.7**.: Let \((G,\cdot,N,\star,\odot)\) be a skew bracoid, and let \((N,\star^{op})\) be the opposite group to \(N\). Then \((G,\cdot,N,\star^{op},\odot)\) is a skew bracoid.
Proof.: It is clear that \(\odot\) gives a transitive action of \(G\) on \(N\). We must show that the skew bracoid relation holds for \((G,\cdot,N,\star^{op},\odot)\). Let \(g\in G\) and \(\eta,\mu\in N\). Then we have
\[g\odot(\eta\star^{op}\mu) = g\odot(\mu\star\eta)\] \[= (g\odot\mu)\star(g\odot e_{N})^{-1}\star(g\odot\eta)\] \[= (g\odot\eta)\star^{op}(g\odot e_{N})^{-1}\star^{op}(g\odot\mu).\]
Hence \((G,\cdot,N,\star^{op},\odot)\) is a skew bracoid.
Next we turn to the question of characterizing skew bracoids. We generalize results of Guarnieri and Vendramin ([15, Proposition 1.11 and Theorem 4.2]). They prove that, given a group \((B,\star)\), the following are equivalent:
* a binary operation \(\cdot\) on \(B\) such that \((B,\star,\cdot)\) is a skew brace;
* a regular subgroup of the _holomorph_\(\operatorname{Hol}_{\star}(B)\) (this is the normalizer of \(\lambda_{\star}(B)\) in \(\operatorname{Perm}(B)\), and is equal to the semidirect product of \(\lambda_{\star}(B)\) and \(\operatorname{Aut}_{\star}(B)\));
* a group \(G\) acting on \((B,\star)\) by automorphisms together with a bijective \(1\)-cocycle \(\pi:G\to B\).
In the case of skew bracoids, we have
**Theorem 2.8**.: Let \((G,\cdot),(N,\star)\) be groups. The following are equivalent:
* A transitive action \(\odot\) of \(G\) on \(N\) such that \((G,\cdot,N,\star,\odot)\) is a skew bracoid;
* a transitive subgroup \(A\) of \(\operatorname{Hol}(N)\) isomorphic to a quotient of \(G\);
* a homomorphism \(\gamma:G\to\operatorname{Aut}(N)\) and a surjective \(1\)-cocycle \(\pi:G\to N\).
Proof.: First suppose that (i) holds. Let \(\lambda_{\odot}:G\to\operatorname{Perm}(N)\) be the permutation representation of the action \(\odot\). Then \(\lambda_{\odot}(G)\) is a transitive subgroup of \(\operatorname{Perm}(N)\) isomorphic to a quotient of \(G\). To show that \(\lambda_{\odot}(G)\subseteq\operatorname{Hol}(N)\) we consider the functions \(\gamma(g)\in\operatorname{Perm}(N)\) defined by
\[\gamma(g)=\lambda_{\star}(g\odot e_{N})^{-1}\lambda_{\odot}(g),\]
so that
\[{}^{\gamma(g)}\eta=(g\odot e_{N})^{-1}\star(g\odot\eta)\text{ for all }g\in G \text{ and }\eta\in N.\]
We claim that \(\gamma(g)\in\operatorname{Aut}(N)\) for all \(g\in G\); if this holds then we quickly obtain
\[\lambda_{\odot}(g)=\lambda_{\star}(g\odot e_{N})\gamma(g)\in\lambda_{\star}(N )\operatorname{Aut}(N)=\operatorname{Hol}(N)\text{ for all }g\in G.\]
To prove the claim, let \(g\in G\) and \(\eta,\mu\in N\); then we have
\[{}^{\gamma(g)}(\eta\star\mu) = (g\odot e_{N})^{-1}\star(g\odot(\eta\star\mu))\] \[= (g\odot e_{N})^{-1}\star(g\odot\eta)\star(g\odot e_{N})^{-1} \star(g\odot\mu)\text{ by }(2)\] \[= \left({}^{\gamma(g)}\eta\right)\star\left({}^{\gamma(g)}\mu\right).\]
Thus \(\gamma(g)\) is a homomorphism. To show that it is bijective, suppose that \(\eta\in\ker(\gamma(g))\). Then
\[(g\odot e_{N})^{-1}\star(g\odot\eta)=e_{N}\] \[\Rightarrow (g\odot\eta)=(g\odot e_{N})\] \[\Rightarrow \eta=e_{N}.\]
Hence \(\ker(\gamma(g))\) is trivial and so, since \(N\) is finite, \(\gamma(g)\) is bijective. Therefore \(\gamma(g)\in\operatorname{Aut}(N)\) as claimed, so \(A=\lambda_{\odot}(G)\) is a transitive subgroup of \(\operatorname{Hol}(N)\) isomorphic to a quotient of \(G\), and so (ii) holds.
Next suppose that (ii) holds. Let \(\delta:G\to A\) be a surjective homomorphism, and for each \(g\in G\) write \(\delta(g)=\lambda_{\star}(\pi(g))\gamma(g)\) with \(\pi(g)\in N\) and \(\gamma(g)\in\operatorname{Aut}(N)\). Then
\(\gamma:G\to\operatorname{Aut}(N)\) is the composition of the homomorphism \(\delta\) with the projection onto the automorphism component; it is therefore a homomorphism, and so yields an action of \(G\) on \(N\) by automorphisms. Since \(A\) is a transitive subgroup of \(\operatorname{Hol}(N)\), we have
\[N=A[e_{N}]=\{\pi(g)\star\ ^{\gamma(g)}e_{N}\mid g\in G\}=\{\pi(g)\mid g\in G\},\]
so the map \(\pi:G\to N\) is surjective. Finally, the assumption that \(\delta\) is a homomorphism implies that for \(g,h\in G\) we have
\[\delta(gh)[e_{N}]=\delta(g)(\delta(h)[e_{N}])\] \[\Rightarrow \pi(gh)\star\ ^{\gamma(gh)}e_{N}=\pi(g)\star\ ^{\gamma(g)}[\pi(h)\star\ ^{ \gamma(h)}e_{N}]\] \[\Rightarrow \pi(gh)=\pi(g)\star\ ^{\gamma(g)}\pi(h).\]
Thus \(\pi\) is a surjective \(1\)-cocycle for the action of \(G\) on \(N\) by automorphisms via \(\gamma\), and so (iii) holds.
Finally, suppose that (iii) holds. For \(g\in G\) and \(\eta\in N\) define \(g\odot\eta\in N\) by
\[g\odot\eta=\pi(g)\star\ ^{\gamma(g)}\eta.\]
The assumption that \(\pi\) is a \(1\)-cocycle for the action of \(G\) on \(N\) by automorphisms via \(\gamma\) implies that \(\odot\) defines an action of \(G\) on \(N\): it is easy to see that \(\pi(e_{G})=e_{N}\), so \(e_{G}\odot\eta=\eta\) for all \(\eta\in N\), and for \(g,h\in G\) and \(\eta\in N\) we have
\[g\odot(h\odot\eta) = \pi(g)\star\ ^{\gamma(g)}[\pi(h)\star\ ^{\gamma(h)}\eta]\] \[= \pi(g)\star\ ^{\gamma(g)}\pi(h)\star\ ^{\gamma(g)\gamma(h)}\eta\] \[= \pi(gh)\star\ ^{\gamma(gh)}\eta\] \[= (gh)\odot\eta.\]
Furthermore, the assumption that \(\pi\) is surjective implies that this action is transitive: we have
\[G\odot e_{N}=\{\pi(g)\star\ ^{\gamma(g)}e_{N}\mid g\in G\}=\{\pi(g)\mid g\in G\}=N.\]
Finally, for \(g\in G\) and \(\eta,\mu\in N\) we have
\[g\odot(\eta\star\mu) = \pi(g)\star\ ^{\gamma(g)}[\eta\star\mu]\] \[= \pi(g)\star\ ^{\gamma(g)}\eta\star\ ^{\gamma(g)}\mu\] \[= \pi(g)\star\ ^{\gamma(g)}\eta\star\pi(g)^{-1}\star\pi(g)\star\ ^{ \gamma(g)}\mu\] \[= (g\odot\eta)\star(g\odot e_{N})^{-1}\star(g\odot\mu),\]
so (2) holds.
**Example 2.9**.: Recall the skew bracoid \((G,N)\) constructed in Example 2.3. Beginning with the action of \(G\) on \(N\) given in that example, we have
\[\lambda_{\odot}(r^{i}s^{j})[\eta^{k}]=\eta^{i+(-1)^{j}k}=\lambda_{\star}(\eta ^{i})\iota^{j}[\eta^{k}],\]
where \(\iota\in\operatorname{Aut}(N)\) is the automorphism given by inversion. Thus \(\lambda_{\odot}(G)=\lambda_{\star}(N)\langle\iota\rangle\subset\operatorname{ Hol}(N)\), and \(\gamma(r^{i}s^{j})=\iota^{j}\).
Secondly, (for example) the function \(\delta:G\to\lambda_{\star}(N)\langle\iota\rangle\) defined by \(\delta(r^{i}s^{j})=\lambda_{\star}(\eta^{i})\iota^{j}\) is a surjective homomorphism; the projection onto the \(\operatorname{Aut}(N)\) component
is \(\gamma(r^{i}s^{j})=\iota^{j}\), and the corresponding \(1\)-cocycle is given by \(\pi(r^{i}s^{j})=\eta^{i}\). Explicitly, we have
\[\pi(r^{i}s^{j}r^{k}s^{\ell}) = \pi(r^{i+(-1)^{j}k})\] \[= \eta^{i+(-1)^{j}k}\] \[= \eta^{i}\star\iota^{j}(\eta^{k})\] \[= \pi(r^{i}s^{j})^{\,\gamma(r^{i}s^{j})}\pi(r^{k}s^{\ell}).\]
Finally, from \(\gamma\) and \(\pi\) we may define a transitive action of \(G\) on \(N\) by
\[r^{i}s^{j}\odot\eta^{k} = \pi(r^{i}s^{j})\star\,^{\gamma(r^{i}s^{j})}\eta^{k}\] \[= \eta^{i}\eta^{(-1)^{j}k}\] \[= \eta^{i+(-1)^{j}k}.\]
We see that we recover the original action of \(G\) on \(N\), as in Example 2.3.
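The three descriptions in Theorem 2.8 can likewise be checked by machine for this example. The sketch below (helper names ours) verifies the \(1\)-cocycle identity \(\pi(gh)=\pi(g)\star\,^{\gamma(g)}\pi(h)\) and confirms that \(\pi(g)\star\,^{\gamma(g)}\eta^{k}\) reconstructs the action (3).

```python
from itertools import product

n, d = 8, 4
G = [(i, j) for i in range(n) for j in range(2)]

def mul(g, h):  # dihedral product in D_n
    (i, j), (k, l) = g, h
    return ((i + (-1) ** j * k) % n, (j + l) % 2)

def pi(g):      # the 1-cocycle pi(r^i s^j) = eta^i, written additively mod d
    return g[0] % d

def gamma(g):   # gamma(r^i s^j) = iota^j: identity or inversion on C_d
    return (lambda k: k % d) if g[1] == 0 else (lambda k: (-k) % d)

for g, h in product(G, G):
    assert pi(mul(g, h)) == (pi(g) + gamma(g)(pi(h))) % d  # cocycle identity
for g in G:
    for k in range(d):
        # pi(g) * gamma(g)(eta^k) recovers the action (3)
        assert (pi(g) + gamma(g)(k)) % d == (g[0] + (-1) ** g[1] * k) % d
```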
From the proof of Theorem 2.8 we extract the following, which will be used frequently in what follows.
**Definition 2.10**.: Let \((G,N)\) be a skew bracoid. The homomorphism \(\gamma:G\to\operatorname{Aut}(N)\) defined by
\[{}^{\gamma(g)}\eta=(g\odot e_{N})^{-1}\star(g\odot\eta)\text{ for all }g\in G \text{ and }\eta\in N\]
is called the _\(\gamma\)-function_ of the skew bracoid.
**Example 2.11**.: When a skew brace is viewed as a skew bracoid, the \(\gamma\)-function of the skew bracoid coincides with that of the skew brace.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then its \(\gamma\)-function is given by
\[{}^{\gamma(b)}(cA) = (b\odot e_{B}A)^{-1}\star(b\odot cA)\] \[= (bA)^{-1}\star((b\cdot c)A)\] \[= (b^{-1}\star(b\cdot c))A\] \[= (\,^{\gamma(b)}c)A.\]
**Example 2.12**.: The skew bracoid \((G,N)\) constructed in Example 2.3 has \(\gamma\)-function given by
\[{}^{\gamma(r^{i}s^{j})}\eta^{k} = (r^{i}s^{s}\odot e_{N})^{-1}\star(r^{i}s^{j}\odot\eta^{k})\] \[= \eta^{-i}\star\eta^{i+(-1)^{j}k}\] \[= \eta^{(-1)^{j}k}.\]
We record a useful consequence of the fact that the \(\gamma\)-function of a skew bracoid \((G,N)\) has values in \(\operatorname{Aut}(N)\).
**Proposition 2.13**.: Let \((G,N)\) be a skew bracoid. Then for all \(g\in G\) and \(\eta\in N\) we have
\[(g\odot e_{N})^{-1}\star(g\odot\eta^{-1})\star(g\odot e_{N})^{-1}=(g\odot \eta)^{-1}.\]
Proof.: Let \(g\in G\) and \(\eta\in N\). Since \(\gamma(g)\in\operatorname{Aut}(N)\) we have
\[\begin{array}{rl}&{}^{\gamma(g)}(\eta^{-1})=\left(\,{}^{\gamma(g)}\eta\right)^{-1}\\ \Rightarrow&(g\odot e_{N})^{-1}\star(g\odot\eta^{-1})=((g\odot e_{N})^{-1}\star(g\odot\eta))^{-1}\\ \Rightarrow&(g\odot e_{N})^{-1}\star(g\odot\eta^{-1})=(g\odot\eta)^{-1}\star(g\odot e_{N}),\end{array}\]
from which the identity follows immediately.
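Written additively in the running example, the identity of Proposition 2.13 reads \(-\mathrm{act}(g,0)+\mathrm{act}(g,-k)-\mathrm{act}(g,0)\equiv-\mathrm{act}(g,k)\pmod{d}\); the following quick check (notation ours) confirms it.

```python
n, d = 12, 6
act = lambda i, j, k: (i + (-1) ** j * k) % d  # the action (3), additively

for i in range(n):
    for j in range(2):
        for k in range(d):
            lhs = (-act(i, j, 0) + act(i, j, -k) - act(i, j, 0)) % d
            assert lhs == (-act(i, j, k)) % d  # Proposition 2.13
```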
If \((B,\star,\cdot)\) is a skew brace then the image of the left regular representation \(\lambda_{\bullet}:(B,\cdot)\to\operatorname{Perm}(B)\) is a regular subgroup of \(\operatorname{Hol}_{\star}(B)\) (see [15, Theorem 4.2]). The stabilizer of each element of \(B\) with respect to \(\cdot\) is trivial and so, _a fortiori_, \(\ker(\lambda_{\bullet})\) is trivial. In a skew bracoid \((G,\cdot,N,\star,\odot)\) the analogue of the first of these is almost always false: by the orbit-stabilizer theorem we have \(|G|=|\operatorname{Stab}_{G}(\eta)||N|\) for each \(\eta\in N\) so, unless \((G,N)\) is essentially a skew brace, each \(\operatorname{Stab}_{G}(\eta)\) is nontrivial (of course, these stabilizers are mutually conjugate in \(G\)). However, it may still happen that \(\ker(\lambda_{\odot})\) (the intersection of these stabilizers) is trivial, and so we make the following definition:
**Definition 2.14**.: We say that a skew bracoid \((G,N)\) is _reduced_ to mean that \(\lambda_{\odot}\) is injective or, equivalently, that the action of \(G\) on \(N\) via \(\odot\) is faithful.
**Example 2.15**.: A skew brace, viewed as a skew bracoid, is reduced.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then the stabilizer of the identity coset \(e_{B}A\) is precisely \(A\), and more generally the stabilizer of a coset \(b\odot A\) is \(b\cdot A\cdot b^{-1}\). It follows that \(\ker(\lambda_{\odot})=\bigcap_{b\in B}b\cdot A\cdot b^{-1}\), the normal core of \(A\) in \((B,\cdot)\). Therefore \((B,B/A)\) is reduced if and only if \((A,\cdot)\) is core-free.
**Example 2.16**.: Recall the skew bracoid \((G,N)\) constructed in Example 2.3. Since \(N\) is cyclic of order \(d\) and \(r^{i}s^{j}\odot\eta^{k}=\eta^{i+(-1)^{j}k}\), we have \(\ker(\lambda_{\odot})=\langle r^{d}\rangle\), so \((G,N)\) is reduced if and only if \(d=n\).
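The kernel in Example 2.16 can be computed directly; the sketch below (function name ours) enumerates the elements of \(D_{n}\) fixing every \(\eta^{k}\) and confirms that, for the parameters tested, this recovers \(\langle r^{d}\rangle\).

```python
def kernel(n, d):
    # elements r^i s^j of D_n acting trivially on every eta^k
    return sorted((i, j) for i in range(n) for j in range(2)
                  if all((i + (-1) ** j * k) % d == k % d for k in range(d)))

for n, d in [(6, 3), (8, 4), (12, 6)]:
    assert kernel(n, d) == sorted((i, 0) for i in range(0, n, d))  # <r^d>
```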
If \((G,N)\) is a skew bracoid that is not reduced then the elements of \(\ker(\lambda_{\odot})\) have no effect on the structure of \(N\) as a \(G\)-set, and so are, in some sense, superfluous. It is therefore tempting to revise our definition of a skew bracoid and insist that \(\odot\) should be a _faithful_ action of \(G\) on \(N\). However, we shall see in Section 3 that subskew bracoids and quotient skew bracoids of a reduced skew bracoid may not themselves be reduced; therefore the additional flexibility in Definition 2.1 is necessary. On the other hand, given a group \(N\), we clearly require a method for relating skew bracoids \((G,N)\) and \((G^{\prime},N)\) in which the actions of \(G\) and \(G^{\prime}\) on \(N\) are "essentially the same". This motivates the following:
**Definition 2.17**.: Two skew bracoids \((G,N)\) and \((G^{\prime},N^{\prime})\) are called _equivalent_ if \((N,\star)=(N^{\prime},\star^{\prime})\) and \(\lambda_{\odot}(G)=\lambda_{\odot^{\prime}}(G^{\prime})\subseteq\operatorname{ Hol}(N)\). This is denoted \((G,N)\sim(G^{\prime},N^{\prime})\).
The intuition behind this definition is the result from the theory of skew braces that if \((B,\star)\) is a group and \(\cdot,\cdot^{\prime}\) are two further binary operations on \(B\) such that
\((B,\star,\cdot)\) and \((B,\star,\cdot^{\prime})\) are skew braces, then we have \((B,\star,\cdot)=(B,\star,\cdot^{\prime})\) if and only if \(\lambda_{\bullet}(B)=\lambda_{\bullet^{\prime}}(B)\) inside \(\operatorname{Hol}_{\star}(B)\).
It is clear that equivalence of skew bracoids is an equivalence relation.
**Proposition 2.18**.: Let \((G,N)\) be a skew bracoid, let \(K=\ker(\lambda_{\odot})\), and let \(\overline{G}=G/K\). Then the group \(\overline{G}\) acts on \(N\) via \((gK)\odot\eta=g\odot\eta\), and \((\overline{G},N)\) is a reduced skew bracoid, called the _reduced form_ of \((G,N)\).
Proof.: If \(gK=hK\) for some \(g,h\in G\) then \(g^{-1}h\in K\), so \(g^{-1}h\odot\eta=\eta\) for all \(\eta\in N\), and so \(g\odot\eta=h\odot\eta\) for all \(\eta\in N\); therefore the rule in the statement of the proposition is well defined. It follows quickly from the fact that \((G,N)\) is a skew bracoid that this rule defines a transitive action of \(\overline{G}\) on \(N\) and that the skew bracoid relation is satisfied. Therefore \((\overline{G},N)\) is a skew bracoid. To show that it is reduced, suppose that \(gK\in\overline{G}\) is such that \((gK)\odot\eta=\eta\) for all \(\eta\in N\). Then \(g\odot\eta=\eta\) for all \(\eta\in N\), so \(g\in K\), so \(gK=eK\). Therefore \((\overline{G},N)\) is a reduced skew bracoid.
From the definition of equivalence we obtain immediately
**Corollary 2.19**.: A skew bracoid is equivalent to its reduced form.
**Example 2.20**.: Recall from Example 2.16 that for the skew bracoid \((G,N)\) constructed in Example 2.3 we have \(\ker(\lambda_{\odot})=\langle r^{d}\rangle\). Therefore for this skew bracoid we have \(\overline{G}=G/\langle r^{d}\rangle\cong D_{d}\).
Combining Proposition 2.18 with Theorem 2.8 we find
**Corollary 2.21**.: Given a group \(N\), there is a bijection between transitive subgroups of \(\operatorname{Hol}(N)\) and equivalence classes of skew bracoids \((G,N)\).
From the definition of the \(\gamma\)-function of a skew bracoid (Definition 2.10) we see that if \((G,N)\) is a skew bracoid then \(K=\ker(\lambda_{\odot})\subseteq\ker(\gamma)\). Therefore \(\gamma\) factors through \(G/\ker(\lambda_{\odot})\), and the \(\gamma\)-function of \((\overline{G},N)\) is given by \({}^{\overline{\gamma}(gK)}\eta={}^{\gamma(g)}\eta\) for all \(g\in G\) and \(\eta\in N\).
## 3. Substructures and quotients
The natural next step in our development of the theory of skew bracoids is to study substructures and quotients. Recall that if \((B,\star,\cdot)\) is a skew brace then a subset \(A\) of \(B\) is called a _subskew brace_ if it is closed under both \(\star\) and \(\cdot\), is called a _left ideal_ if it is a subskew brace satisfying \(\,{}^{\gamma(B)}A=A\), is called a _strong left ideal_ if it is a left ideal and is normal in \((B,\star)\), and is called an _ideal_ if it is a strong left ideal that is also normal in \((B,\cdot)\). If \(A\) is an ideal of \((B,\star,\cdot)\) then \((B/A,\star,\cdot)\) is a skew brace [15, Lemma 2.3]; we have already seen in Proposition 2.4 that if \(A\) is a strong left ideal of \((B,\star,\cdot)\) then \((B,\cdot,B/A,\star,\odot)\) is a skew bracoid.
To ease notation in this section, we frequently suppress the notation for the binary operations \(\cdot\) and \(\star\).
**Definition 3.1**.: A _subskew bracoid_ of a skew bracoid \((G,N)\) consists of a subgroup \(H\) of \(G\) and a subgroup \(M\) of \(N\) such that \((H,M)\) is a skew bracoid.
**Example 3.2**.: When a skew brace is viewed as a skew bracoid, the subskew bracoids are precisely the subskew braces.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4 and \(C\) is a subskew brace of \((B,\star,\cdot)\) that contains \(A\), then \((C,C/A)\) is a subskew bracoid of \((B,B/A)\).
**Example 3.3**.: Recall the skew bracoid \((G,N)\) constructed in Example 2.3. Let \(f\) be a positive divisor of \(d\), let \(M=\langle\eta^{f}\rangle\), and let \(H=\langle r^{f},s\rangle\). Then \((H,M)\) is a subskew bracoid of \((G,N)\).
A subskew bracoid of a reduced skew bracoid need not be reduced:
**Example 3.4**.: Consider the skew bracoid \((G,N)\) constructed in Example 2.3 in the particular case in which \(n=d=4\). By Example 2.16, \((G,N)\) is reduced. Let \(M=\langle\eta^{2}\rangle\) and \(H=\langle r^{2},s\rangle\); then \((H,M)\) is a subskew bracoid of \((G,N)\). But \((H,M)\) is not reduced: we have \(s\odot\eta^{2k}=\eta^{-2k}=\eta^{2k}\) for all \(k\), so \(\langle s\rangle\) is contained in (and in fact equals) the kernel of \(\lambda_{\odot}\) in \(H\).
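A direct computation (in the ad hoc additive encoding used above) confirms the claims of Example 3.4: \(H\) acts on \(M\), and \(s\) acts trivially on \(M\).

```python
n = d = 4
act = lambda i, j, k: (i + (-1) ** j * k) % d
H = [(i, j) for i in range(0, n, 2) for j in range(2)]   # <r^2, s>
M = [0, 2]                                               # <eta^2>, additively

assert all(act(i, j, k) in M for (i, j) in H for k in M)  # H acts on M
ker = [(i, j) for (i, j) in H if all(act(i, j, k) == k for k in M)]
assert ker == [(0, 0), (0, 1)]  # kernel is exactly <s>, so (H, M) is not reduced
```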
**Definition 3.5**.: A _left ideal_ of a skew bracoid \((G,N)\) is a subgroup \(M\) of \(N\) such that \(\,{}^{\gamma(G)}M=M\). An _ideal_ of \((G,N)\) is a left ideal which is normal in \(N\).
**Example 3.6**.: When a skew brace is viewed as a skew bracoid, the left ideals of the skew bracoid are precisely the left ideals of the skew brace. However, the ideals of the skew bracoid are the strong left ideals of the skew brace.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then we have seen in Example 2.11 that the \(\gamma\)-function of \((B,B/A)\) is given by \(\,{}^{\gamma(b)}(cA)=(\,{}^{\gamma(b)}c)A\). Therefore the left ideals (resp. ideals) of \((B,B/A)\) are the sets \(C/A\) where \(C\) is a left ideal (resp. strong left ideal) of \(B\) that contains \(A\).
**Example 3.7**.: Recall the skew bracoid \((G,N)\) constructed in Example 2.3. Let \(f\) be a positive divisor of \(d\), and let \(M=\langle\eta^{f}\rangle\). By Example 2.12 the \(\gamma\)-function of \((G,N)\) is given by \(\,{}^{\gamma(r^{i}s^{j})}\eta^{k}=\eta^{(-1)^{j}k}\); hence \(M\) is closed under \(\gamma(G)\) and is therefore a left ideal of \((G,N)\). In fact, since \(N\) is abelian, \(M\) is an ideal of \((G,N)\).
In the theory of skew braces we have the useful result that if \((B,\star,\cdot)\) is a skew brace and \(A\) is a subgroup of \((B,\star)\) such that \(\,{}^{\gamma(B)}A=A\) then \(A\) is necessarily a subgroup of \((B,\cdot)\), and therefore a left ideal of \((B,\star,\cdot)\). We have a skew bracoid analogue of this result:
**Proposition 3.8**.: Let \((G,N)\) be a skew bracoid and let \(M\) be a left ideal of \((G,N)\). Define
\[G_{M}=\{g\in G\mid g\odot\mu\in M\text{ for all }\mu\in M\}.\]
Then \((G_{M},M)\) is a subskew bracoid of \((G,N)\).
Proof.: It is clear that \(G_{M}\) acts on \(M\) and that the skew bracoid relation is satisfied. We need to show that the action of \(G_{M}\) on \(M\) is transitive. Define
\[G_{M}^{\prime}=\{g\in G\mid g\odot e_{N}\in M\}.\]
Since \(M\) is a left ideal of \((G,N)\), given \(g\in G\) we have
\[{}^{\gamma(g)}\mu=(g\odot e_{N})^{-1}(g\odot\mu)\in M\text{ for all }\mu\in M,\]
and so \(g\odot\mu\in M\) for all \(\mu\in M\) if and only if \(g\odot e_{N}\in M\). Therefore \(G_{M}=G_{M}^{\prime}\), and now it is clear that the action of \(G_{M}\) on \(M\) is transitive.
Note that the subgroup \(G_{M}\) associated to the left ideal \(M\) in Proposition 3.8 is, by construction, the largest subgroup of \(G\) that acts on \(M\).
**Example 3.9**.: When a skew brace \((B,\star,\cdot)\) is viewed as a skew bracoid, the subgroup \(B_{A}\) of \((B,\cdot)\) associated to a left ideal \(A\) is precisely \(A\) itself.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then we have seen in Example 3.6 that the left ideals of \((B,B/A)\) are the sets \(C/A\) where \(C\) is a left ideal of \(B\) that contains \(A\). The subgroup \(B_{C/A}\) of \((B,\cdot)\) associated to such a left ideal is \(C\).
**Example 3.10**.: Recall the skew bracoid \((G,N)\) constructed in Example 2.3. Let \(e\) be a positive divisor of \(d\), and let \(M=\langle\eta^{e}\rangle\). By Example 3.7 this is an ideal of \((G,N)\). We find that \(G_{M}=\langle r^{e},s\rangle\).
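The subgroup \(G_{M}\) of Proposition 3.8 is also easy to compute directly; the sketch below (names ours) confirms \(G_{M}=\langle r^{e},s\rangle\) for one choice of parameters in Example 3.10.

```python
n, d, e = 12, 6, 3
act = lambda i, j, k: (i + (-1) ** j * k) % d
M = set(range(0, d, e))  # <eta^e>, additively mod d

G_M = sorted((i, j) for i in range(n) for j in range(2)
             if all(act(i, j, k) in M for k in M))
assert G_M == sorted((i, j) for i in range(0, n, e) for j in range(2))  # <r^e, s>
```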
If \((B,\star,\cdot)\) is a skew brace and \(A\) is a subgroup of \((B,\cdot)\) such that \({}^{\gamma(B)}A=A\) then \(A\) is necessarily a subgroup of \((B,\star)\), and therefore a left ideal of \((B,\star,\cdot)\). We also have a skew bracoid analogue of this result:
**Proposition 3.11**.: Let \((G,N)\) be a skew bracoid, let \(H\) be a subgroup of \(G\) and let
\[M_{H}=\{h\odot e_{N}\mid h\in H\}.\]
Suppose that \({}^{\gamma(G)}M_{H}=M_{H}\). Then \(M_{H}\) is a left ideal of \((G,N)\).
Proof.: It is clear that \(H\) acts transitively on \(M_{H}\) and that the skew bracoid relation is satisfied. We need to show that \(M_{H}\) is a subgroup of \(N\). Let \(\eta,\mu\in M_{H}\) and write \(\eta=h\odot e_{N}\) and \(\mu=k\odot e_{N}\) with \(h,k\in H\). By the definition of \(M_{H}\), the element \(h^{-1}\odot\mu=(h^{-1}k)\odot e_{N}\) lies in \(M_{H}\), and since \({}^{\gamma(G)}M_{H}=M_{H}\), we have \({}^{\gamma(h)}(h^{-1}\odot\mu)\in M_{H}\). But we have
\[{}^{\gamma(h)}(h^{-1}\odot\mu) = (h\odot e_{N})^{-1}(h\odot(h^{-1}\odot\mu))\] \[= \eta^{-1}\mu.\]
Thus \(\eta^{-1}\mu\in M_{H}\), and so \(M_{H}\) is a subgroup of \(N\).
If \((G,N)\) is a skew bracoid and \(M\) is a subset of \(N\) then the structural properties of \(M\) are unaffected by reduction:
**Proposition 3.12**.: Let \((G,N)\) be a skew bracoid. A subset \(M\subseteq N\) is a subskew bracoid (resp. left ideal, ideal) of \((G,N)\) if and only if it is a subskew bracoid (resp. left ideal, ideal) of \((\overline{G},N)\).
Proof.: Recall from Proposition 2.18 that \(\overline{G}=G/K\) (with \(K=\ker(\lambda_{\odot})\)) and that the transitive action of \(\overline{G}\) on \(N\) is given by \((gK)\odot\eta=g\odot\eta\). If \((H,M)\) is a subskew bracoid of \((G,N)\) then without loss of generality we may assume that \(\ker(\lambda_{\odot})\subseteq H\);
if not, we may replace \(H\) with the subgroup \(H\ker(\lambda_{\odot})\), which still acts transitively on \(M\). Then \(\overline{H}\) acts transitively on \(M\), so \((\overline{H},M)\) is a subskew bracoid of \((\overline{G},N)\). Conversely, a subskew bracoid of \((\overline{G},N)\) has the form \((\overline{H},M)\) for some subgroup \(H\) of \(G\) containing \(K\), and we see that \((H,M)\) is a subskew bracoid of \((G,N)\).
The \(\gamma\)-function of \((\overline{G},N)\) is given by \(\,{}^{\overline{\gamma}(gK)}\eta=\,^{\gamma(g)}\eta\); hence we have \(\,{}^{\gamma(G)}M=M\) if and only if \(\,{}^{\overline{\gamma}(\overline{G})}M=M\), and so \(M\) is a left ideal of \((G,N)\) if and only if it is a left ideal of \((\overline{G},N)\).
Finally, a left ideal \(M\) of \((G,N)\) is an ideal if and only if it is a normal subgroup of \(N\); since this condition is unrelated to the action of \(G\) or \(\overline{G}\) this occurs if and only if \(M\) is an ideal of \((\overline{G},N)\).
Next we prove that the quotient of a skew bracoid by an ideal is a skew bracoid.
**Proposition 3.13**.: Let \((G,N)\) be a skew bracoid and let \(M\) be an ideal of \((G,N)\). Then \((G,\cdot)\) acts on the quotient group \(N/M\) via \(g\odot(\eta M)=(g\odot\eta)M\), and \((G,N/M)\) is a skew bracoid.
Proof.: First we show that formula for the action of \(G\) on \(N/M\) is well defined. Let \(g\in G\) and \(\eta,\kappa\in N\), and suppose that \(\eta M=\kappa M\). Then \(\eta^{-1}\kappa\in M\), and since \(M\) is an ideal of \((G,N)\) we have \(\,{}^{\gamma(g)}(\eta^{-1}\kappa)\in M\). That is:
\[(g\odot e_{N})^{-1}(g\odot(\eta^{-1}\kappa))\in M.\]
Applying the skew bracoid relation (2) we have
\[(g\odot e_{N})^{-1}(g\odot\eta^{-1})(g\odot e_{N})^{-1}(g\odot\kappa)\in M. \tag{4}\]
Now applying Proposition 2.13 to the first three terms of (4) we obtain
\[(g\odot\eta)^{-1}(g\odot\kappa)\in M,\]
and so
\[(g\odot\eta)M=(g\odot\kappa)M.\]
Therefore the formula for the action of \(G\) on \(N/M\) is well defined. Since \((G,N)\) is a skew bracoid, it follows quickly that the action of \(G\) on \(N/M\) is transitive and that the skew bracoid relation is satisfied.
**Example 3.14**.: When a skew brace is viewed as a skew bracoid, the ideals of the skew bracoid are the strong left ideals of the skew brace (see Example 3.6). Using such an ideal to form a quotient skew bracoid is precisely the process described in Proposition 2.4.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then we have seen in Example 3.6 that the ideals of \((B,B/A)\) are the sets \(C/A\) where \(C\) is a strong left ideal of \(B\) that contains \(A\). The corresponding quotient skew bracoid is \((B,(B/A)/(C/A))\).
It is easily shown (similarly to Example 3.6) that the \(\gamma\)-function of a quotient skew bracoid \((G,N/M)\) is given by the composition of the \(\gamma\) function of \((G,N)\) with the natural projection \(N\twoheadrightarrow N/M\), so that \(\,{}^{\gamma(g)}(\eta M)=(^{\gamma(g)}\eta)M\) for all \(\eta\in N\) and \(g\in G\). We have \(\mathrm{Stab}_{G}(e_{N}M)=G_{M}\) (see Proposition 3.8); it follows quickly from
this that we have \(\operatorname{Stab}_{G}(g\odot e_{N}M)=gG_{M}g^{-1}\), and that \(\ker(\lambda_{\odot})=\bigcap_{g\in G}gG_{M}g^{-1}\), the largest normal subgroup of \(G\) contained in \(G_{M}\). We note that a quotient of a reduced skew bracoid need not be reduced:
**Example 3.15**.: Consider the skew bracoid \((G,N)\) constructed in Example 2.3 in the particular case in which \(n=d=4\). Recall from Example 2.16 that \((G,N)\) is reduced, from Example 3.7 that \(M=\langle\eta^{2}\rangle\) is an ideal of \((G,N)\), and from Example 3.10 that \(G_{M}=\langle r^{2},s\rangle\). Hence \(\operatorname{Stab}_{G}(e_{N}M)=G_{M}\) is a normal subgroup of \(G\), and so the kernel of the action of \(G\) on \(N/M\) is also equal to \(G_{M}\). Therefore \((G,N/M)\) is not reduced.
We note that in Example 3.15 we have \(|G|=8,|N|=4,|M|=2\), and \(|G_{M}|=4\). Since \(G_{M}\) coincides with the kernel of the action of \(G\) on \(N/M\), the reduced form \((\overline{G},N/M)\) of this skew bracoid (see Proposition 2.18) satisfies \(|\overline{G}|=|N/M|=2\), and so is essentially a skew brace. To study this phenomenon further we make a definition:
**Definition 3.16**.: An _enhanced left ideal_ (resp. _enhanced ideal_) of a skew bracoid \((G,N)\) is a left ideal (resp. ideal) \(M\) such that \(G_{M}\) is normal in \(G\).
In the case of skew braces, an enhanced ideal of a skew brace \((B,\star,\cdot)\) is a left ideal \(A\) that is a normal subgroup of \((B,\cdot)\); we are not aware of a term for this construction elsewhere in the literature.
As in Proposition 3.12, a subset \(M\) of \(N\) is an enhanced left ideal, or enhanced ideal, of a skew bracoid \((G,N)\) if and only if it is such a substructure of \((\overline{G},N)\). Enhanced ideals play an important role in the connection between skew bracoids and Hopf-Galois structures described in Section 5.
**Proposition 3.17**.: Let \((G,N)\) be a skew bracoid and let \(M\) be an ideal of \((G,N)\). Then \(M\) is an enhanced ideal of \((G,N)\) if and only if the reduced form of the skew bracoid \((G,N/M)\) is essentially a skew brace.
Proof.: First suppose that \(M\) is an enhanced ideal of \((G,N)\), so that \(G_{M}\) is normal in \(G\). As noted above, in the skew bracoid \((G,N/M)\) we have \(\operatorname{Stab}_{G}(g\odot M)=gG_{M}g^{-1}\) for each \(g\in G\). Since \(G_{M}\) is normal in \(G\) these stabilizers all coincide, so \(\ker(\lambda_{\odot})=G_{M}\), and so the reduced form of \((G,N/M)\) is \((G/G_{M},N/M)\). Now let \(S=\operatorname{Stab}_{G}(e_{N})\), so that \(|G|=|S||N|\). By the definition of \(G_{M}\) (see Proposition 3.8) we have \(S\subseteq G_{M}\), and so \(|G_{M}|=|S||M|\). Therefore we have
\[\left|\frac{G}{G_{M}}\right|=\frac{|G|}{|G_{M}|}=\frac{|S||N|}{|S||M|}=\frac{| N|}{|M|}=\left|\frac{N}{M}\right|,\]
and so \((G/G_{M},N/M)\) is essentially a skew brace.
Conversely, suppose that the reduced form of \((G,N/M)\) is essentially a skew brace. Then, writing \(K\) for the kernel of the action of \(G\) on \(N/M\), we have \(|G|/|K|=|N|/|M|\), and so \(|K|=|G_{M}|\). But \(K\subseteq S\subseteq G_{M}\), so in fact \(K=G_{M}\). Therefore \(G_{M}\) is normal in \(G\), and so \(M\) is an enhanced ideal of \((G,N)\).
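Proposition 3.17 can be illustrated numerically on Example 3.15: the sketch below (names ours) checks that \(G_{M}=\langle r^{2},s\rangle\) is normal in \(D_{4}\) and that \(|G/G_{M}|=|N/M|\).

```python
n = d = 4
G = [(i, j) for i in range(n) for j in range(2)]
GM = {(i, j) for i in range(0, n, 2) for j in range(2)}  # G_M = <r^2, s>

def mul(g, h):  # dihedral product
    (i, j), (k, l) = g, h
    return ((i + (-1) ** j * k) % n, (j + l) % 2)

def inv(g):     # (r^i)^{-1} = r^{-i}; (r^i s)^{-1} = r^i s
    return ((-g[0]) % n, 0) if g[1] == 0 else g

# G_M is normal in G, so M is an enhanced ideal...
assert all({mul(mul(g, m), inv(g)) for m in GM} == GM for g in G)
# ... and |G/G_M| = |N/M| = 2, so the reduced form of (G, N/M) is
# essentially a skew brace.
assert len(G) // len(GM) == d // 2
```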
**Proposition 3.18**.: Let \((G,N)\) be a skew bracoid, and let \(M\) be an ideal of \((G,N)\). There is a bijective correspondence between left ideals, ideals, enhanced left ideals,
enhanced ideals of \((G,N/M)\) and the corresponding substructures of \((G,N)\) that contain \(M\).
Proof.: There is a bijective correspondence between subgroups of \(N/M\) and subgroups of \(N\) that contain \(M\). Let \(P\) be a subgroup of \(N\) that contains \(M\), with corresponding subgroup \(P/M\) of \(N/M\).
Recall that the \(\gamma\)-function of the skew bracoid \((G,N/M)\) is given by \(\,{}^{\gamma(g)}(\eta M)=(^{\gamma(g)}\eta)M\) for all \(\eta\in N\) and \(g\in G\). Therefore we have \(\,{}^{\gamma(g)}(\pi M)\in P/M\) for all \(\pi\in P\) if and only if \((\,{}^{\gamma(g)}\pi)M\in P/M\) for all \(\pi\in P\) if and only if \(\,{}^{\gamma(g)}\pi\in P\) for all \(\pi\in P\), and so \(P\) is a left ideal of \((G,N)\) if and only if \(P/M\) is a left ideal of \((G,N/M)\).
The correspondence between subgroups of \(N/M\) and subgroups of \(N\) that contain \(M\) respects and detects normality, so a left ideal of \((G,N)\) that contains \(M\) is an ideal if and only if the corresponding left ideal of \((G,N/M)\) is an ideal.
Finally, since the action of \(G\) on \(N/M\) is given by \(g\odot(\eta M)=(g\odot\eta)M\), we see that if \(P\) is a left ideal of \((G,N)\) then we have \(G_{P/M}=G_{P}\), and so \(P\) is enhanced if and only if \(P/M\) is enhanced.
## 4. Homomorphisms and Isomorphisms
A homomorphism of skew braces is simply a map between the underlying sets that respects both of the group operations. The kernel of a skew brace homomorphism is an ideal of the domain, the image is a subskew brace of the codomain, and a version of the first isomorphism theorem holds. In this section we generalize these results to skew bracoids.
**Definition 4.1**.: A _homomorphism_ of skew bracoids \((G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) is a pair of group homomorphisms \(\varphi:G\to G^{\prime}\) and \(\psi:N\to N^{\prime}\) such that
\[\psi(g\odot\eta)=\varphi(g)\odot^{\prime}\psi(\eta) \tag{5}\]
for all \(g\in G\) and \(\eta\in N\).
The presence of two different group homomorphisms in Definition 4.1 makes it appear rather unwieldy. In fact, skew bracoid homomorphisms are more restricted than they first appear:
**Proposition 4.2**.: Let \((G,N,\odot)\) and \((G^{\prime},N^{\prime},\odot^{\prime})\) be skew bracoids, let \(S=\operatorname{Stab}_{G}(e_{N})\), let \(S^{\prime}=\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})\), and let \(\varphi:G\to G^{\prime}\) be a group homomorphism.
If \(\varphi(S)\subseteq S^{\prime}\) and the map \(\varphi_{N}:N\to N^{\prime}\) defined by
\[\varphi_{N}(g\odot e_{N})=\varphi(g)\odot^{\prime}e_{N^{\prime}} \tag{6}\]
is a group homomorphism then \(\varphi,\varphi_{N}\) form a homomorphism of skew bracoids.
Conversely, if \(\psi:N\to N^{\prime}\) is a group homomorphism such that \(\varphi,\psi\) form a homomorphism of skew bracoids then \(\varphi(S)\subseteq S^{\prime}\) and \(\psi=\varphi_{N}\).
Proof.: First suppose that \(\varphi(S)\subseteq S^{\prime}\) and that \(\varphi_{N}:N\to N^{\prime}\) is a group homomorphism. (Note that the assumption that \(\varphi(S)\subseteq S^{\prime}\) and the fact that \(G\) acts
transitively on \(N\) ensure that \(\varphi_{N}\) is well defined.) Since \(G\) acts transitively on \(N\), given \(\eta\in N\) we may write \(\eta=h\odot e_{N}\) for some \(h\in G\), and then we have
\[\varphi_{N}(g\odot\eta) = \varphi_{N}(g\odot(h\odot e_{N}))\] \[= \varphi_{N}(gh\odot e_{N})\] \[= \varphi(gh)\odot^{\prime}e_{N^{\prime}}\text{ by (6)}\] \[= \varphi(g)\odot^{\prime}(\varphi(h)\odot^{\prime}e_{N^{\prime}}) \text{ since }\varphi\text{ is a homomorphism}\] \[= \varphi(g)\odot^{\prime}\varphi_{N}(h\odot e_{N})\text{ by (6)}\] \[= \varphi(g)\odot^{\prime}\varphi_{N}(\eta).\]
Thus (5) is satisfied, and so \(\varphi,\varphi_{N}\) form a homomorphism of skew bracoids. Conversely, suppose that \(\psi:N\to N^{\prime}\) is a group homomorphism such that \(\varphi,\psi\) form a homomorphism of skew bracoids. If \(g\in S\) then we have
\[\varphi(g)\odot^{\prime}e_{N^{\prime}} = \varphi(g)\odot^{\prime}\psi(e_{N})\] \[= \psi(g\odot e_{N})\] \[= \psi(e_{N})\] \[= e_{N^{\prime}}.\]
Thus \(\varphi(S)\subseteq S^{\prime}\), and so \(\varphi_{N}\) is well defined. Since \(G\) acts transitively on \(N\), given \(\eta\in N\) we may write \(\eta=h\odot e_{N}\) for some \(h\in G\), and then we have
\[\psi(\eta) = \psi(h\odot e_{N})\] \[= \varphi(h)\odot^{\prime}e_{N^{\prime}}\] \[= \varphi_{N}(h\odot e_{N})\] \[= \varphi_{N}(\eta).\]
Thus \(\psi=\varphi_{N}\), which completes the proof.
In light of Proposition 4.2 we will adopt the following notation for homomorphisms of skew bracoids:
**Notation 4.3**.: We will write \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) to denote the homomorphism of skew bracoids comprised of the group homomorphisms \(\varphi:G\to G^{\prime}\) and \(\varphi_{N}:N\to N^{\prime}\).
**Example 4.4**.: If we view skew braces as skew bracoids then a homomorphism of skew braces is a homomorphism of skew bracoids in which the two maps coincide.
If \((B,B/A)\) is a skew bracoid arising from a skew brace \((B,\star,\cdot)\) and a strong left ideal \(A\), as described in Proposition 2.4, then we obtain a natural homomorphism of skew bracoids \((B,B)\to(B,B/A)\) as follows: let \(\varphi:B\to B\) be the identity map. In the notation of Proposition 4.2 we have \(S=\{e_{B}\}\) and \(S^{\prime}=A\), so \(\varphi(S)\subseteq S^{\prime}\). The map \(\varphi_{B}:B\to B/A\) is given by
\[\varphi_{B}(b)=b\odot A=b\star A,\]
which is the natural projection \((B,\star)\twoheadrightarrow(B/A,\star)\). Therefore \(\varphi_{B}\) is a homomorphism, and so \(\varphi,\varphi_{B}\) form a homomorphism of skew bracoids.
More generally, if \((G,N)\) is a skew bracoid and \(M\) is an ideal of \((G,N)\) then choosing \(\varphi:G\to G\) to be the identity map we find that \(\varphi_{N}:N\twoheadrightarrow N/M\) is the natural projection and \(\varphi,\varphi_{N}\) form a homomorphism of skew bracoids.
Next we show that kernels and images of skew bracoid homomorphisms have the properties we would naturally expect.
**Proposition 4.5**.: Let \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) be a homomorphism of skew bracoids. Then
* \(\ker(\varphi_{N})\) is an ideal of \((G,N)\);
* \((\operatorname{Im}(\varphi),\operatorname{Im}(\varphi_{N}))\) is a subskew bracoid of \((G^{\prime},N^{\prime})\).
Proof.:
* It is clear that \(\ker(\varphi_{N})\) is a normal subgroup of \(N\). If \(\eta\in\ker(\varphi_{N})\) and \(g\in G\) then we have \[\varphi_{N}(\,{}^{\gamma(g)}\eta) = \varphi_{N}((g\odot e_{N})^{-1}\star(g\odot\eta))\] \[= \varphi_{N}(g\odot e_{N})^{-1}\star^{\prime}\varphi_{N}(g\odot\eta)\] \[= (\varphi(g)\odot^{\prime}\varphi_{N}(e_{N}))^{-1}\star^{\prime} (\varphi(g)\odot^{\prime}\varphi_{N}(\eta))\] \[= (\varphi(g)\odot^{\prime}e_{N^{\prime}})^{-1}\star^{\prime}( \varphi(g)\odot^{\prime}e_{N^{\prime}})\] \[= e_{N^{\prime}}.\] Therefore \(\ker(\varphi_{N})\) is an ideal of \((G,N)\).
* It is clear that \(\operatorname{Im}(\varphi)\) is a subgroup of \(G^{\prime}\) and that \(\operatorname{Im}(\varphi_{N})\) is a subgroup of \(N^{\prime}\). If \(g\in G\) and \(\eta\in N\) then we have \[\varphi(g)\odot^{\prime}\varphi_{N}(\eta)=\varphi_{N}(g\odot\eta)\in \operatorname{Im}(\varphi_{N}),\] so \(\operatorname{Im}(\varphi)\) acts on \(\operatorname{Im}(\varphi_{N})\). To show that this action is transitive, let \(\varphi_{N}(\eta)\in\operatorname{Im}(\varphi_{N})\), with \(\eta\in N\). Since \(G\) acts transitively on \(N\) there exists \(g\in G\) such that \(g\odot e_{N}=\eta\), and we have \[\varphi_{N}(\eta) = \varphi_{N}(g\odot e_{N})\] \[= \varphi(g)\odot^{\prime}\varphi_{N}(e_{N})\] \[= \varphi(g)\odot^{\prime}e_{N^{\prime}}.\] Therefore \(\operatorname{Im}(\varphi)\) acts transitively on \(\operatorname{Im}(\varphi_{N})\). The skew bracoid relation is satisfied because it holds in \((G^{\prime},N^{\prime})\). Therefore \((\operatorname{Im}(\varphi),\operatorname{Im}(\varphi_{N}))\) is a subskew bracoid of \((G^{\prime},N^{\prime})\).
**Example 4.6**.: Let \((G,N)\) be a skew bracoid, let \(M\) be an ideal of \((G,N)\), and consider the skew bracoid \((G,N/M)\). Let \(\varphi:G\to G\) be the identity map, so that \(\varphi_{N}:N\twoheadrightarrow N/M\) is the natural projection. Then \(\ker(\varphi_{N})=M\) and \((\operatorname{Im}(\varphi),\operatorname{Im}(\varphi_{N}))=(G,N/M)\).
**Definition 4.7**.: A homomorphism of skew bracoids \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) is called an _isomorphism_ if \(\varphi\) and \(\varphi_{N}\) are isomorphisms.
**Example 4.8**.: Consider the skew bracoid \((G,N)\) constructed in Example 2.3. We can formalize our observation from Example 2.5 that this skew bracoid can be obtained as the quotient of a skew brace by a strong left ideal.
Let \((G,\star,\cdot)\) be the skew brace described in Example 2.5, so that \((G,\cdot)\cong D_{n}\) and \((G,\star)\cong C_{n}\times C_{2}\), and consider the strong left ideal \(H=\langle r^{d},s\rangle\) and the skew bracoid \((G,G/H)\). We construct an isomorphism of skew bracoids \(\boldsymbol{\varphi}:(G,N)\to(G,G/H)\) as follows.
Let \(\varphi:G\to G\) be the identity map. Note that \(\operatorname{Stab}_{G}(e_{N})=\operatorname{Stab}_{G}(e_{G}H)=\langle r^{d},s\rangle\) so, in the notation of Proposition 4.2 we have \(\varphi(S)\subseteq S^{\prime}\). Therefore \(\varphi\) induces a map \(\varphi_{N}:N\to G/H\), which is given by
\[\varphi_{N}(\eta^{k})=\varphi_{N}(r^{k}\odot e_{N})=\varphi(r^{k})\odot e_{G} H=r^{k}\odot e_{G}H=r^{k}H=(rH)^{k}.\]
Therefore \(\varphi_{N}\) is an isomorphism from \(N\) to \(G/H\), and so \(\boldsymbol{\varphi}:(G,N)\to(G,G/H)\) is an isomorphism of skew bracoids.
The following Proposition strengthens Proposition 4.2 in the case of isomorphisms.
**Proposition 4.9**.: Let \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) be a homomorphism of skew bracoids.
If \(\boldsymbol{\varphi}\) is an isomorphism of skew bracoids then for each \(\eta\in N\) we have \(\operatorname{Stab}_{G^{\prime}}(\varphi_{N}(\eta))=\varphi(\operatorname{ Stab}_{G}(\eta))\).
Conversely, if \(\varphi:G\to G^{\prime}\) is an isomorphism and \(\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})=\varphi(\operatorname{Stab}_{G}(e_{N}))\) then \(\boldsymbol{\varphi}\) is an isomorphism of skew bracoids.
Proof.: First suppose that \(\boldsymbol{\varphi}\) is an isomorphism. Since \(\varphi\) and \(\varphi_{N}\) are isomorphisms, for \(g\in G\) and \(\eta\in N\) we have
\[g\odot\eta=\eta\] \[\Leftrightarrow \varphi_{N}(g\odot\eta)=\varphi_{N}(\eta)\] \[\Leftrightarrow \varphi(g)\odot^{\prime}\varphi_{N}(\eta)=\varphi_{N}(\eta).\]
Thus \(\operatorname{Stab}_{G^{\prime}}(\varphi_{N}(\eta))=\varphi(\operatorname{ Stab}_{G}(\eta))\), as claimed.
Now suppose that \(\varphi:G\to G^{\prime}\) is an isomorphism and \(\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})=\varphi(\operatorname{Stab}_ {G}(e_{N}))\). We shall show that the homomorphism \(\varphi_{N}:N\to N^{\prime}\) is also an isomorphism. We note that
\[|N|=\frac{|G|}{|\operatorname{Stab}_{G}(e_{N})|}=\frac{|\varphi(G)|}{|\varphi( \operatorname{Stab}_{G}(e_{N}))|}=\frac{|G^{\prime}|}{|\operatorname{Stab}_{ G^{\prime}}(e_{N^{\prime}})|}=|N^{\prime}|.\]
Now let \(\eta\in N\), and write \(\eta=g\odot e_{N}\) with \(g\in G\). Then
\[\varphi_{N}(\eta)=\varphi_{N}(g\odot e_{N})=\varphi(g)\odot^{\prime}\varphi_ {N}(e_{N})=\varphi(g)\odot^{\prime}e_{N^{\prime}},\]
so \(\eta\in\ker(\varphi_{N})\) if and only if \(\varphi(g)\in\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})\). Since \(\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})=\varphi(\operatorname{ Stab}_{G}(e_{N}))\), this occurs if and only if \(\eta=e_{N}\). Therefore \(\varphi_{N}\) is injective, hence bijective. This completes the proof.
Since \(\ker(\lambda_{\odot})=\bigcap_{\eta\in N}\operatorname{Stab}_{G}(\eta)\), and similarly for \(\ker(\lambda_{\odot^{\prime}})\), we obtain
**Corollary 4.10**.: If \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) is an isomorphism of skew bracoids then \(\ker(\lambda_{\odot^{\prime}})=\varphi(\ker(\lambda_{\odot}))\).
By combining many of the results of this section, we prove a version of the First Isomorphism Theorem for skew bracoids.
**Theorem 4.11**.: Suppose that \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) is a homomorphism of skew bracoids, and let \(M=\ker(\varphi_{N})\). Then the reduced forms of \((G,N/M)\) and \((\operatorname{Im}(\varphi),\operatorname{Im}(\varphi_{N}))\) are isomorphic skew bracoids.
Proof.: By Proposition 4.5, \((\operatorname{Im}(\varphi),\operatorname{Im}(\varphi_{N}))\) is a subskew bracoid of \((G^{\prime},N^{\prime})\); to ease notation we will relabel (without loss of generality) so that \(\operatorname{Im}(\varphi)=G^{\prime}\) and \(\operatorname{Im}(\varphi_{N})=N^{\prime}\).
With this relabelling, the group homomorphism \(\varphi_{N}:N\to N^{\prime}\) is a surjection, and induces an isomorphism \(N/M\to N^{\prime}\); we denote this by \(\varphi_{N/M}\).
By Proposition 4.5 we see that \(M\) is an ideal of \((G,N)\), and so \((G,N/M)\) is a skew bracoid (Proposition 3.13). For all \(g\in G\) and \(\eta M\in N/M\) we have
\[\varphi(g)\odot^{\prime}\varphi_{N/M}(\eta M)=\varphi(g)\odot^{\prime}\varphi _{N}(\eta)=\varphi_{N}(g\odot\eta)=\varphi_{N/M}(g\odot\eta M),\]
so (abusing notation) we obtain a homomorphism of skew bracoids \(\boldsymbol{\varphi}:(G,N/M,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) in which \(\varphi:G\to G^{\prime}\) is surjective and \(\varphi_{N/M}:N/M\to N^{\prime}\) is an isomorphism.
Now let \(K\) denote the kernel of the action of \(G\) on \(N/M\) and let \(K^{\prime}\) denote the kernel of the action of \(G^{\prime}\) on \(N^{\prime}\).
Let \(\theta:G\to G^{\prime}/K^{\prime}\) be the composition of \(\varphi:G\to G^{\prime}\) with the natural projection; recalling our relabelling \(\operatorname{Im}(\varphi)=G^{\prime}\), we see that \(\theta\) is a surjection. We have
\[g\in\ker(\theta) \Leftrightarrow \varphi(g)\in K^{\prime}\] \[\Leftrightarrow \varphi(g)\odot^{\prime}\varphi_{N/M}(\eta M)=\varphi_{N/M}(\eta M) \text{ for all }\eta M\in N/M\] \[\Leftrightarrow \varphi_{N/M}(g\odot\eta M)=\varphi_{N/M}(\eta M)\text{ for all }\eta M\in N/M\] \[\Leftrightarrow g\odot\eta M=\eta M\text{ for all }\eta M\in N/M\text{ since }\varphi_{N/M}\text{ is an isomorphism}\] \[\Leftrightarrow g\in K.\]
Hence the surjection \(\theta:G\to G^{\prime}/K^{\prime}\) induces an isomorphism \(\theta:G/K\to G^{\prime}/K^{\prime}\). Since \(\varphi:G\to G^{\prime}\) maps \(\operatorname{Stab}_{G}(e_{N}M)\) into \(\operatorname{Stab}_{G^{\prime}}(e_{N^{\prime}})\), the map \(\theta\) also has this property; it therefore induces a map \(\theta_{N/M}:N/M\to N^{\prime}\). This is given by
\[\theta_{N/M}(g\odot e_{N}M)=\theta(g)\odot^{\prime}e_{N^{\prime}}=\varphi(g) \odot^{\prime}e_{N^{\prime}}=\varphi_{N/M}(g\odot e_{N}M).\]
Since \(\varphi_{N/M}:N/M\to N^{\prime}\) is an isomorphism, we see that \(\theta_{N/M}\) is an isomorphism, and so \(\theta,\theta_{N/M}\) form an isomorphism of skew bracoids \((\overline{G},N/M)\to(\overline{G^{\prime}},N^{\prime})\).
**Corollary 4.12**.: If \((G,N,\odot)\cong(G^{\prime},N^{\prime},\odot^{\prime})\) then \((\overline{G},N,\odot)\cong(\overline{G^{\prime}},N^{\prime},\odot^{\prime})\).
Proof.: If \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N^{\prime},\odot^{\prime})\) is an isomorphism then \(\varphi_{N}:N\to N^{\prime}\) is an isomorphism, so in the notation of Theorem 4.11 we have \(M=\{e_{N}\}\). Now Theorem 4.11 implies that \((\overline{G},N,\odot)\cong(\overline{G^{\prime}},N^{\prime},\odot^{\prime})\).
We have already recalled the result that if \((B,\star)\) is a group and \(\cdot,\cdot^{\prime}\) are two further binary operations on \(B\) such that \((B,\star,\cdot)\) and \((B,\star,\cdot^{\prime})\) are skew braces, then we have \((B,\star,\cdot)=(B,\star,\cdot^{\prime})\) if and only if \(\lambda_{\cdot}(B)=\lambda_{\cdot^{\prime}}(B)\) inside \(\operatorname{Hol}_{\star}(B)\); this motivated our definition of equivalence of skew bracoids (Definition 2.17).
We now recall that, furthermore, we have \((B,\star,\cdot)\cong(B,\star,\cdot^{\prime})\) if and only if there exists \(\theta\in\operatorname{Aut}_{\star}(B)\) such that \(\lambda_{\cdot^{\prime}}(B)=\theta\lambda_{\cdot}(B)\theta^{-1}\). We can detect isomorphisms between reduced skew bracoids in a similar way.
**Proposition 4.13**.: Let \(N\) be a group and let \((G,N,\odot)\) and \((G^{\prime},N,\odot^{\prime})\) be reduced skew bracoids. Then \((G,N,\odot)\cong(G^{\prime},N,\odot^{\prime})\) if and only if there exists \(\theta\in\operatorname{Aut}(N)\) such that
\[\lambda_{\odot^{\prime}}(G^{\prime})=\theta\lambda_{\odot}(G)\theta^{-1} \subseteq\operatorname{Hol}(N).\]
Proof.: First suppose that \(\boldsymbol{\varphi}:(G,N,\odot)\to(G^{\prime},N,\odot^{\prime})\) is an isomorphism of skew bracoids. Then \(\varphi_{N}:N\to N\) is an automorphism of \(N\), and for all \(g\in G\) and \(\eta\in N\) we have
\[\varphi(g)\odot^{\prime}\varphi_{N}(\eta)=\varphi_{N}(g\odot\eta).\]
Thus for all \(g\in G\) we have
\[\varphi_{N}^{-1}\lambda_{\odot^{\prime}}(g)\varphi_{N}=\lambda_{\odot}(g),\]
and so, choosing \(\theta=\varphi_{N}\in\operatorname{Aut}(N)\), we have \(\lambda_{\odot^{\prime}}(G^{\prime})=\theta\lambda_{\odot}(G)\theta^{-1}\).
Conversely suppose that there exists \(\theta\in\operatorname{Aut}(N)\) such that \(\lambda_{\odot^{\prime}}(G^{\prime})=\theta\lambda_{\odot}(G)\theta^{-1}\). Then for each \(g\in G\) there exists a unique element \(\varphi(g)\in G^{\prime}\) such that
\[\theta\lambda_{\odot}(g)\theta^{-1}=\lambda_{\odot^{\prime}}(\varphi(g)).\]
We see that \(\varphi:G\to G^{\prime}\) is an isomorphism and that \(\operatorname{Stab}_{G^{\prime}}(e_{N})=\varphi(\operatorname{Stab}_{G}(e_{N}))\), so \(\varphi\) induces a bijection \(\varphi_{N}:N\to N\) defined by \(\varphi_{N}(g\odot e_{N})=\varphi(g)\odot^{\prime}e_{N^{\prime}}\). Now for each \(g\in G\) we have
\[\varphi_{N}(g\odot e_{N}) = \varphi(g)\odot^{\prime}e_{N^{\prime}}\] \[= \theta\lambda_{\odot}(g)\theta^{-1}[e_{N}]\] \[= \theta(g\odot e_{N}).\]
Thus \(\varphi_{N}=\theta\in\operatorname{Aut}(N)\), and so \(\varphi,\varphi_{N}\) form an isomorphism of skew bracoids. Hence \((G,N,\odot)\cong(G^{\prime},N,\odot^{\prime})\).
**Corollary 4.14**.: Let \(N\) be a group and let \((G,N,\odot)\) be a reduced skew bracoid. Then the number of equivalence classes of skew bracoids that are isomorphic to \((G,N,\odot)\) is equal to
\[\frac{|\operatorname{Aut}(N)|}{|\operatorname{Aut}_{\odot}(N)|},\]
where \(\operatorname{Aut}_{\odot}(N)=\{\theta\in\operatorname{Aut}(N)\mid\theta \text{ normalizes }\lambda_{\odot}(G)\}\).
**Corollary 4.15**.: Let \(N\) be a group and let \((G,N,\odot)\) and \((G^{\prime},N,\odot^{\prime})\) be skew bracoids. Then the reduced forms of \((G,N,\odot)\) and \((G^{\prime},N,\odot^{\prime})\) are isomorphic if and only if there exists \(\theta\in\operatorname{Aut}(N)\) such that
\[\lambda_{\odot^{\prime}}(G^{\prime})=\theta\lambda_{\odot}(G)\theta^{-1} \subseteq\operatorname{Hol}(N).\]
We conclude this section by exploring some more interactions between isomorphism and equivalence of skew bracoids.
**Proposition 4.16**.: Let \(N\) be a group. Then two skew bracoids \((G,N,\odot)\) and \((G^{\prime},N,\odot^{\prime})\) are equivalent if and only if there is an isomorphism \(\boldsymbol{\varphi}:(\overline{G},N,\odot)\to(\overline{G^{\prime}},N,\odot^{ \prime})\) such that \(\varphi_{N}:N\to N\) is the identity map.
Proof.: First suppose that \((G,N,\odot)\) and \((G^{\prime},N,\odot^{\prime})\) are equivalent, so that \(\lambda_{\odot}(G)=\lambda_{\odot^{\prime}}(G^{\prime})\subseteq\operatorname {Hol}(N)\). Passing to the reduced forms we also have \(\lambda_{\odot}(\overline{G})=\lambda_{\odot^{\prime}}(\overline{G^{\prime}}) \subseteq\operatorname{Hol}(N)\), with each of \(\lambda_{\odot}\) and \(\lambda_{\odot^{\prime}}\) now being injective. The map \(\varphi:\overline{G}\to\overline{G^{\prime}}\) defined by \(\varphi=\lambda_{\odot^{\prime}}^{-1}\lambda_{\odot}\) is therefore an isomorphism, and for all \(\overline{g}\in\overline{G}\) we have
\[\varphi(\overline{g})\odot^{\prime}e_{N}=\overline{g}\odot e_{N}.\]
Therefore \(\varphi(\overline{g})\) stabilizes \(e_{N}\) if and only if \(\overline{g}\) does, and so \(\varphi\) induces a map \(\varphi_{N}:N\to N\), which is given by
\[\varphi_{N}(\overline{g}\odot e_{N})=\varphi(\overline{g})\odot^{\prime}e_{N} =\overline{g}\odot e_{N}\text{ for all }\overline{g}\in\overline{G}.\]
Therefore \(\varphi_{N}:N\to N\) is the identity map, and so \(\boldsymbol{\varphi}\) is an isomorphism of skew bracoids of the form given in the proposition.
Conversely, suppose that there is an isomorphism \(\boldsymbol{\varphi}:(\overline{G},N,\odot)\to(\overline{G^{\prime}},N,\odot ^{\prime})\) such that \(\varphi_{N}:N\to N\) is the identity map. Then for all \(\overline{g}\in\overline{G}\) and all \(\eta\in N\) we have
\[\overline{g}\odot\eta=\varphi_{N}(\overline{g}\odot\eta)=\varphi(\overline{g} )\odot^{\prime}\varphi_{N}(\eta)=\varphi(\overline{g})\odot^{\prime}\eta,\]
and so \(\lambda_{\odot}(\overline{G})=\lambda_{\odot^{\prime}}(\overline{G^{\prime}}) \subseteq\operatorname{Hol}(N)\). But \(\lambda_{\odot}(\overline{G})=\lambda_{\odot}(G)\) by the definition of \(\overline{G}\), and similarly \(\lambda_{\odot^{\prime}}(\overline{G^{\prime}})=\lambda_{\odot^{\prime}}(G^{ \prime})\). Therefore \(\lambda_{\odot}(G)=\lambda_{\odot^{\prime}}(G^{\prime})\), and so \((G,N)\) and \((G^{\prime},N)\) are equivalent.
## 5. Connecting skew bracoids with Hopf-Galois structures on separable extensions
In Section 1 we briefly summarized the connection between skew braces and Hopf-Galois structures on Galois field extensions. In this section we generalize this to a connection between skew bracoids and Hopf-Galois structures on (finite) separable extensions. We begin by describing in more detail the results we generalize.
A theorem of Greither and Pareigis [14] classifies the Hopf-Galois structures admitted by a finite separable extension of fields \(L/K\), as follows: let \(\widetilde{L}\) be the Galois closure of \(L/K\), let \(J=(J,\cdot)=\operatorname{Gal}(\widetilde{L}/K)\), let \(J^{\prime}=\operatorname{Gal}(\widetilde{L}/L)\), and consider the left coset space \(J/J^{\prime}\). Consider the left translation map \(\lambda_{\odot}:J\to\operatorname{Perm}(J/J^{\prime})\) defined by \(\lambda_{\odot}(j)[xJ^{\prime}]=jxJ^{\prime}\), and the action of \(J\) on \(\operatorname{Perm}(J/J^{\prime})\) by \(\,{}^{j}\eta=\lambda_{\odot}(j)\eta\lambda_{\odot}(j)^{-1}\) for all \(j\in J\) and \(\eta\in\operatorname{Perm}(J/J^{\prime})\). There is a bijection between regular subgroups of \(\operatorname{Perm}(J/J^{\prime})\) stable under this action of \(J\) (\(J\)_-stable regular subgroups_) and Hopf-Galois structures on \(L/K\).
If \(L/K\) is a Galois extension then \(J^{\prime}\) is trivial, so the Greither-Pareigis theorem implies that there is a bijection between \(J\)-stable regular subgroups of \(\operatorname{Perm}(J)\) and Hopf-Galois structures on \(L/K\). There are various approaches to showing that \(J\)-stable regular subgroups of \(\operatorname{Perm}(J)\) are connected with skew braces; we follow
Stefanello and Trappeniers [23]. There is a bijection between binary operations \(\star\) on \(J\) such that \((J,\star)\) is a group and regular subgroups of \(\operatorname{Perm}(J)\), given by
\[\star\leftrightarrow\rho_{\star}(J),\text{ where }\rho_{\star}(j)[x]=x\star j^{-1}. \tag{7}\]
Recalling that \(\cdot\) denotes the original binary operation on \(J\), we find that \((J,\star,\cdot)\) is a skew brace if and only if \(\rho_{\star}(J)\) is \(J\)-stable, which occurs if and only if it yields a Hopf-Galois structure on \(L/K\). In this way Stefanello and Trappeniers obtain a bijection between binary operations \(\star\) on \(J\) such that \((J,\star,\cdot)\) is a skew brace and Hopf-Galois structures on \(L/K\).
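The correspondence (7) is concrete enough to experiment with computationally. The following Python sketch is our own illustration (the encoding is hypothetical, not from [23]): it takes \(J=\mathbb{Z}/4\mathbb{Z}\) with \(\star\) given by addition modulo 4 and verifies that \(\rho_{\star}(J)\) is a regular subgroup of \(\operatorname{Perm}(J)\).

```python
from itertools import product

n = 4
J = list(range(n))  # J = Z/4Z, with x * y = (x + y) mod n

def rho(j):
    # Equation (7): rho_star(j)[x] = x * j^{-1} = (x - j) mod n,
    # encoded as a tuple whose i-th entry is the image of i
    return tuple((x - j) % n for x in J)

N = {rho(j) for j in J}

def comp(s, t):
    # composition of tuple-encoded permutations: (s t)[x] = s[t[x]]
    return tuple(s[t[x]] for x in J)

# Closed under composition, hence a subgroup of Perm(J)
assert all(comp(s, t) in N for s, t in product(N, N))

# Regular: for each pair (x, y) exactly one element of N sends x to y
assert all(sum(s[x] == y for s in N) == 1 for x, y in product(J, J))

print("rho_star(J) is a regular subgroup of Perm(J) of order", len(N))
```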
Returning to the case in which \(L/K\) is separable, but possibly non-normal, we observe that we may replace the Galois closure \(\widetilde{L}\) in the statement of the Greither-Pareigis theorem with any (finite) Galois extension \(E\) of \(K\) that contains \(L\). To see this, let \(G=\operatorname{Gal}(E/K)\), \(\widetilde{G}=\operatorname{Gal}(E/\widetilde{L})\), and \(G^{\prime}=\operatorname{Gal}(E/L)\) (note that \(\widetilde{G}\) is a normal subgroup of \(G\)). Then \(J\cong G/\widetilde{G}\), \(J^{\prime}\cong G^{\prime}/\widetilde{G}\), and there is a natural identification of coset spaces \(G/G^{\prime}\to J/J^{\prime}\) given by \(gG^{\prime}\mapsto(g\widetilde{G})J^{\prime}\). Moreover, the left translation map \(\lambda_{\odot}:G\to\operatorname{Perm}(G/G^{\prime})\) has kernel \(\widetilde{G}\), and so factors through \(J\). Therefore there is a bijection between \(G\)-stable regular subgroups of \(\operatorname{Perm}(G/G^{\prime})\) and \(J\)-stable regular subgroups of \(\operatorname{Perm}(J/J^{\prime})\).
Therefore, throughout this section we denote by \(E/K\) a Galois extension of fields with Galois group \((G,\cdot)\), by \(L\) an intermediate field of \(E/K\) with corresponding subgroup \(G^{\prime}\subseteq G\), and by \(X\) the left coset space \(G/G^{\prime}\), in which we write \(\overline{x}\) for the coset \(xG^{\prime}\).
We then obtain the following generalization of the result of Stefanello and Trappeniers [23]:
**Theorem 5.1**.: There are bijections between
* binary operations \(\star\) on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid, where \(\odot\) denotes left translation of cosets;
* \(G\)-stable regular subgroups of \(\operatorname{Perm}(X)\);
* Hopf-Galois structures on \(L/K\).
Proof.: The fact that there is a bijection between the objects given in (ii) and (iii) is the content of the theorem of Greither and Pareigis, combined with the discussion above.
Now suppose that \(\star\) is a binary operation on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid. Let \(\rho_{\star}:X\to\operatorname{Perm}(X)\) denote the right regular representation of the group \((X,\star)\), so that \(\rho_{\star}(\overline{x})[\overline{y}]=\overline{y}\star\overline{x}^{-1}\) for all \(\overline{x},\overline{y}\in X\). Then \(\rho_{\star}(X)\) is clearly a regular subgroup of \(\operatorname{Perm}(X)\). To show that it is \(G\)-stable, let \(g\in G\) and \(\overline{x},\overline{y}\in X\). Then
\[\lambda_{\odot}(g)\rho_{\star}(\overline{x})\lambda_{\odot}(g^{-1})[\overline{y}] = g\odot((g^{-1}\odot\overline{y})\star\overline{x}^{-1})\] \[= (g\odot(g^{-1}\odot\overline{y}))\star(g\odot\overline{e})^{-1}\star(g\odot\overline{x}^{-1})\text{ by the skew bracoid relation (2)}\] \[= \overline{y}\star(g\odot\overline{x})^{-1}\star(g\odot\overline{e})\text{ since }(g\odot\overline{e})^{-1}\star(g\odot\overline{x}^{-1})=(g\odot\overline{x})^{-1}\star(g\odot\overline{e})\] \[= \rho_{\star}((g\odot\overline{e})^{-1}\star(g\odot\overline{x}))[\overline{y}]\] \[= \rho_{\star}(\,{}^{\gamma(g)}\overline{x})[\overline{y}].\]
Thus \(\rho_{\star}(X)\) is \(G\)-stable; in fact, we have shown that
\[\lambda_{\odot}(g)\rho_{\star}(\overline{x})\lambda_{\odot}(g^{-1})=\rho_{\star} (\,{}^{\gamma(g)}\overline{x}) \tag{8}\]
for all \(g\in G\) and \(\overline{x}\in X\). Thus we obtain a \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\).
Next suppose that \(N\) is a \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\). Then the map \(a:N\to X\) defined by \(a(\eta)=\eta^{-1}[\overline{e}]\) is a bijection. Using this, we define a binary operation \(\star\) on \(X\) by the rule
\[a(\eta)\star a(\mu)=a(\eta\mu).\]
Then \((X,\star)\) is a group isomorphic to \(N\). It is clear that the action of \((G,\cdot)\) on \(X\) by left translation of cosets is transitive, so in order to show that \((G,\cdot,X,\star,\odot)\) forms a skew bracoid it only remains to show that the skew bracoid relation (2) is satisfied. To do this, note first that the assumption that \(N\) is \(G\)-stable implies that for each \(g\in G\) the map \(\theta_{g}:N\to N\) defined by \(\theta_{g}(\eta)=\lambda_{\odot}(g)\eta\lambda_{\odot}(g^{-1})\) is an automorphism of \(N\). For \(\eta\in N\) we have
\[g\odot a(\eta) = g\odot\eta^{-1}[\overline{e}]\] \[= \theta_{g}(\eta^{-1})\kappa^{-1}[\overline{e}],\mbox{ where } \kappa\in N\mbox{ satisfies }\kappa^{-1}[\overline{e}]=\overline{g}\] \[= a(\kappa\theta_{g}(\eta)).\]
Therefore for \(\eta,\mu\in N\) we have
\[g\odot(a(\eta)\star a(\mu)) = g\odot a(\eta\mu)\] \[= a(\kappa\theta_{g}(\eta\mu))\] \[= a(\kappa\theta_{g}(\eta)\theta_{g}(\mu))\] \[= a(\kappa\theta_{g}(\eta)\kappa^{-1}\kappa\theta_{g}(\mu))\] \[= a(\kappa\theta_{g}(\eta))\star a(\kappa)^{-1}\star a(\kappa\theta_{g}(\mu))\] \[= (g\odot a(\eta))\star(g\odot\overline{e})^{-1}\star(g\odot a(\mu)).\]
Therefore the skew bracoid relation (2) holds, and so \((G,\cdot,X,\star,\odot)\) forms a skew bracoid.
The processes described above are mutually inverse: if \(N\) is a \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\) and \(\star\) is the corresponding binary operation on \(X\) then for all \(\eta,\mu\in N\) we have
\[\rho_{\star}(a(\mu))[a(\eta)] = a(\eta)\star a(\mu)^{-1}\] \[= a(\eta\mu^{-1})\] \[= \mu\eta^{-1}[\overline{e}]\] \[= \mu[a(\eta)],\]
so \(\rho_{\star}(X)=N\subseteq\operatorname{Perm}(X)\). Similarly, if \(\star\) is a binary operation on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid and \(N=\rho_{\star}(X)\) is the corresponding \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\) then the bijection \(a:N\to X\) is given by
\[a(\rho_{\star}(\overline{x}))=\rho_{\star}(\overline{x})^{-1}[\overline{e}]=\overline{x},\]
and the binary operation \(\widehat{\star}\) on \(X\) arising from \(N\) is given by
\[\overline{x}\:\widehat{\star}\:\overline{y}=a(\rho_{\star}(\overline{x}))\: \widehat{\star}\:a(\rho_{\star}(\overline{y}))=a(\rho_{\star}(\overline{x}\star \overline{y}))=\overline{x}\star\overline{y}.\]
Hence \(\widehat{\star}=\star\).
This completes the proof that there is a bijection between the objects described in (ii) and those in (iii).
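Theorem 5.1 can also be checked by brute force in small cases. The sketch below is our own illustration with hypothetical labels: it takes \(G=S_{3}\) and \(G^{\prime}\) of order 2 (the Greither-Pareigis data of a non-normal cubic extension such as \(\mathbb{Q}(\sqrt[3]{2})/\mathbb{Q}\)) and enumerates the \(G\)-stable regular subgroups of \(\operatorname{Perm}(X)\); it finds exactly one, namely \(A_{3}\cong C_{3}\), so such an extension carries a unique Hopf-Galois structure and hence a unique skew bracoid operation \(\star\) on \(X\).

```python
from itertools import permutations, product

def comp(a, b):
    # composition of tuple-encoded permutations: (a b)[i] = a[b[i]]
    return tuple(a[i] for i in b)

def inv(a):
    return tuple(a.index(i) for i in range(len(a)))

G = list(permutations(range(3)))          # G = S_3
Gp = {(0, 1, 2), (1, 0, 2)}               # G' = <(0 1)>, not normal in G

cosets = []                               # X = G/G', the three left cosets
for x in G:
    c = frozenset(comp(x, h) for h in Gp)
    if c not in cosets:
        cosets.append(c)

def lam(g):                               # lambda(g): left translation on X
    return tuple(cosets.index(frozenset(comp(g, x) for x in c)) for c in cosets)

found = []                                # subgroups N <= Perm(X) with |N| = |X| = 3
for c3 in permutations(range(3)):
    N = {(0, 1, 2), c3, comp(c3, c3)}
    if len(N) != 3 or any(comp(a, b) not in N for a, b in product(N, N)):
        continue                          # not a subgroup of order 3
    regular = all(sum(s[x] == y for s in N) == 1
                  for x in range(3) for y in range(3))
    stable = all(comp(comp(lam(g), s), lam(inv(g))) in N for g in G for s in N)
    if regular and stable and N not in found:
        found.append(N)

print("G-stable regular subgroups:", len(found))   # 1 (namely A_3)
```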
We note that the bijection \(a\) defined in the proof of Theorem 5.1 is not a direct generalization of the bijection used in the corresponding step of the argument connecting skew braces with Hopf-Galois structures on Galois field extensions (see [17, Subsection 2.2], for example). The direct generalization would be \(a(\eta)=\eta[\overline{e}]\); we choose \(a(\eta)=\eta^{-1}[\overline{e}]\) over this since it yields an isomorphism between \((X,\star)\) and \(\rho_{\star}(X)\).
If, instead of beginning with a Galois extension \(E\) of \(K\) and an intermediate field \(L\) we begin with a separable extension \(L/K\), then by Theorem 5.1 each Hopf-Galois structure on \(L/K\) yields multiple skew bracoids, corresponding to different choices of the (finite) Galois extension \(E\) of \(K\) containing \(L\). However, we have
**Proposition 5.2**.: Let \(\star\) be a binary operation on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid. Then \(\ker(\lambda_{\odot})=\operatorname{Gal}(E/\widetilde{L})\), so the reduced form of \((G,X)\) is \((\overline{G},X)\) with \(\overline{G}=\operatorname{Gal}(\widetilde{L}/K)\).
Proof.: Recall that the action \(\odot\) of \(G\) on \(X\) is by left translation of cosets. We have \(\operatorname{Stab}_{G}(\overline{e})=G^{\prime}\), so for each \(x\in G\) we have \(\operatorname{Stab}_{G}(\overline{x})=xG^{\prime}x^{-1}\), and so \(\ker(\lambda_{\odot})=\bigcap_{x\in G}xG^{\prime}x^{-1}\), the largest normal subgroup of \(G\) contained in \(G^{\prime}\). By Galois theory, the fixed field of this subgroup is the smallest subfield of \(E\) that contains \(L\) and is a Galois extension of \(K\); that is, the Galois closure of \(L/K\). Thus \(\ker(\lambda_{\odot})=\operatorname{Gal}(E/\widetilde{L})\). The remaining statement follows by the definition of the reduced form of \((G,X)\) (Proposition 2.18).
We obtain immediately
**Corollary 5.3**.: The skew bracoid \((G,X)\) is reduced if and only if \(E\) is the Galois closure of \(L/K\).
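Proposition 5.2 identifies \(\ker(\lambda_{\odot})\) with the normal core \(\bigcap_{x\in G}xG^{\prime}x^{-1}\) of \(G^{\prime}\) in \(G\), which is easy to compute directly. The sketch below is our own; \(G=D_{4}\) with \(G^{\prime}\) generated by a reflection stands in for \(\operatorname{Gal}(E/K)\) and \(\operatorname{Gal}(E/L)\) (the familiar configuration \(E=\mathbb{Q}(\sqrt[4]{2},i)\), \(L=\mathbb{Q}(\sqrt[4]{2})\)). The core is trivial, so the skew bracoid is reduced and, by Corollary 5.3, \(E\) is already the Galois closure of \(L/K\).

```python
def comp(a, b):
    # composition of tuple-encoded permutations: (a b)[i] = a[b[i]]
    return tuple(a[i] for i in b)

def inv(a):
    return tuple(a.index(i) for i in range(len(a)))

def generate(gens):
    # closure of a generating set under composition
    group = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {comp(a, b) for a in group for b in group} - group
    return group

r = (1, 2, 3, 0)          # rotation of the square
s = (0, 3, 2, 1)          # a reflection
G = generate([r, s])      # D_4, of order 8
Gp = generate([s])        # G' = <s>, of order 2

# ker(lambda) = normal core of G' in G: intersection of all conjugates
core = set(Gp)
for x in G:
    core &= {comp(comp(x, h), inv(x)) for h in Gp}

print(len(G), len(Gp), len(core))   # 8 2 1 -> the core is trivial
```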
In Greither and Pareigis's original paper [14] they show that if \(N\) is a \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\) then so too is \(N^{op}=\operatorname{Cent}_{\operatorname{Perm}(X)}(N)\). The interactions between the Hopf-Galois structures corresponding to \(N\) and \(N^{op}\) have been explored in, for example, [24] and [17]. In particular, [17, Proposition 3.4] shows that the skew brace corresponding to the subgroup \(N^{op}\) is the opposite of the skew brace corresponding to \(N\). This result generalizes to skew bracoids:
**Proposition 5.4**.: Let \(\star\) be a binary operation on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid, and let \((G,\cdot,X,\star^{op},\odot)\) be the opposite skew bracoid. Then \(\rho_{\star^{op}}(X)=\rho_{\star}(X)^{op}\).
Proof.: For \(\overline{x},\overline{y},\overline{z}\in X\) we have
\[\rho_{\star^{op}}(\overline{x})\rho_{\star}(\overline{y})[ \overline{z}] = \rho_{\star^{op}}(\overline{x})[\overline{z}\star\overline{y}^{-1}]\] \[= \overline{x}^{-1}\star\overline{z}\star\overline{y}^{-1}\] \[= \rho_{\star}(\overline{y})[\overline{x}^{-1}\star\overline{z}]\] \[= \rho_{\star}(\overline{y})\rho_{\star^{op}}(\overline{x})[ \overline{z}].\]
Hence \(\rho_{\star^{op}}(X)\subseteq\rho_{\star}(X)^{op}\). But these groups have equal order, so in fact they are equal.
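Proposition 5.4 can be verified by brute force in the smallest nonabelian case. The sketch below is our own illustration with hypothetical labels: it realizes \((X,\star)=S_{3}\), builds \(\rho_{\star}(X)\) and \(\rho_{\star^{op}}(X)\) inside \(\operatorname{Perm}(X)\), and confirms that the latter coincides with the centralizer \(\operatorname{Cent}_{\operatorname{Perm}(X)}(\rho_{\star}(X))=\rho_{\star}(X)^{op}\).

```python
from itertools import permutations

elems = list(permutations(range(3)))      # (X, *) = S_3, the smallest nonabelian group
idx = {x: i for i, x in enumerate(elems)}

def mul(a, b):
    return tuple(a[i] for i in b)         # group law: composition of permutations

def inv(a):
    return tuple(a.index(i) for i in range(3))

# rho_star(x)[y] = y * x^{-1}  and  rho_{star^op}(x)[y] = x^{-1} * y,
# both realized in Perm(X) with X labelled 0..5 via the enumeration above
rho = {tuple(idx[mul(y, inv(x))] for y in elems) for x in elems}
rho_op = {tuple(idx[mul(inv(x), y)] for y in elems) for x in elems}

def comp(s, t):
    return tuple(s[i] for i in t)

# Brute-force centralizer of rho_star(X) inside Perm(X) (720 permutations)
cent = {s for s in permutations(range(6))
        if all(comp(s, t) == comp(t, s) for t in rho)}

print(rho_op == cent)                     # True: rho_{star^op}(X) = rho_star(X)^op
```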
The connection between skew braces and Hopf-Galois structures on Galois extensions \(L/K\) employed in (for example) [19], [17], [16], and [18] is expressed in terms of _isomorphism classes_ of skew braces; in this case multiple Hopf-Galois structures on \(L/K\) can yield a single isomorphism class of skew braces. This is precisely quantified in [19, Proposition 2.1] and [18, Proposition 3.1]: two \(J\)-stable regular subgroups \(N,N^{\prime}\) of \(\operatorname{Perm}(J)\) yield isomorphic skew braces if and only if \(N^{\prime}=\varphi N\varphi^{-1}\) for some \(\varphi\in\operatorname{Aut}(J)\). This result generalizes to skew bracoids:
**Proposition 5.5**.: Let \(N,N^{\prime}\) be \(G\)-stable regular subgroups of \(\operatorname{Perm}(X)\). The corresponding skew bracoids \((G,\cdot,X,\star,\odot)\) and \((G,\cdot,X,\star^{\prime},\odot)\) are isomorphic if and only if there exists \(\varphi\in\operatorname{Aut}(G)\) such that \(\varphi(G^{\prime})=G^{\prime}\) and \(N^{\prime}=\varphi_{X}N\varphi_{X}^{-1}\).
Proof.: First suppose that \(\boldsymbol{\varphi}:(G,\cdot,X,\star,\odot)\to(G,\cdot,X,\star^{\prime},\odot)\) is an isomorphism. Then \(\varphi\in\operatorname{Aut}(G)\), and \(\varphi_{X}:(X,\star)\to(X,\star^{\prime})\) is an isomorphism. By Proposition 4.9 we have \(\varphi(\operatorname{Stab}_{G}(\overline{e}))=\operatorname{Stab}_{G}( \overline{e})\), i.e. \(\varphi(G^{\prime})=G^{\prime}\). For \(\overline{x},\overline{y}\in X\) we have
\[\rho_{\star^{\prime}}(\overline{x})[\overline{y}] = \overline{y}\star^{\prime}\overline{x}^{-1}\] \[= \varphi_{X}\left(\varphi_{X}^{-1}(\overline{y})\star\varphi_{X}^{ -1}(\overline{x})^{-1}\right)\] \[= \varphi_{X}\rho_{\star}(\varphi_{X}^{-1}(\overline{x}))[\varphi_{ X}^{-1}(\overline{y})]\] \[= \varphi_{X}\rho_{\star}(\overline{z})[\varphi_{X}^{-1}(\overline{ y})]\]
with \(\overline{z}=\varphi_{X}^{-1}(\overline{x})\in X\). Thus \(\rho_{\star^{\prime}}(X)=\varphi_{X}\rho_{\star}(X)\varphi_{X}^{-1}\).
Conversely suppose that there exists \(\varphi\in\operatorname{Aut}(G)\) such that \(\varphi(G^{\prime})=G^{\prime}\) and \(N^{\prime}=\varphi_{X}N\varphi_{X}^{-1}\). Recall from Theorem 5.1 that the binary operation \(\star\) on \(X\) is defined by
\[a(\eta)\star a(\mu)=a(\eta\mu),\]
where \(a:N\to X\) is the bijection defined by \(a(\eta)=\eta^{-1}[\overline{e}]\). The corresponding bijection \(a^{\prime}:N^{\prime}\to X\) is given by
\[a^{\prime}(\varphi_{X}\eta\varphi_{X}^{-1}) = \varphi_{X}\eta^{-1}\varphi_{X}^{-1}[\overline{e}]\] \[= \varphi_{X}\eta^{-1}[\overline{e}]\] \[= \varphi_{X}a(\eta),\]
and the binary operation \(\star^{\prime}\) on \(X\) is given by
\[a^{\prime}(\varphi_{X}\eta\varphi_{X}^{-1})\star^{\prime}a^{\prime}(\varphi_{X} \mu\varphi_{X}^{-1})=a^{\prime}(\varphi_{X}\eta\mu\varphi_{X}^{-1}).\]
Rewriting in terms of \(a\) we find
\[\varphi_{X}a(\eta)\star^{\prime}\varphi_{X}a(\mu) = \varphi_{X}a(\eta\mu)\] \[= \varphi_{X}(a(\eta)\star a(\mu)),\]
and so \(\varphi_{X}:(X,\star)\to(X,\star^{\prime})\) is a homomorphism. Since \(\varphi\) is an automorphism of \(G\) that stabilizes \(G^{\prime}\) we see that \(\varphi_{X}\) is a bijection, hence an isomorphism. Therefore \(\boldsymbol{\varphi}:(G,\cdot,X,\star,\odot)\to(G,\cdot,X,\star^{\prime},\odot)\) is an isomorphism.
**Corollary 5.6**.: The number of \(G\)-stable regular subgroups of \(\operatorname{Perm}(X)\) arising from an isomorphism class of skew bracoids is
\[\frac{|\operatorname{Aut}_{G^{\prime}}(G)|}{|\operatorname{Aut}_{G^{\prime}, \star}(G)|},\]
where \(\operatorname{Aut}_{G^{\prime}}(G)=\{\varphi\in\operatorname{Aut}(G)\mid \varphi(G^{\prime})=G^{\prime}\}\) and \(\operatorname{Aut}_{G^{\prime},\star}(G)\) is the subgroup of \(\operatorname{Aut}_{G^{\prime}}(G)\) consisting of those automorphisms \(\varphi\) for which \(\varphi_{X}\in\operatorname{Aut}(X,\star)\).
Next we turn to the properties of the Hopf-Galois structure corresponding to a \(G\)-stable regular subgroup \(N\) of \(\operatorname{Perm}(X)\). Greither and Pareigis show that the Hopf algebra giving the corresponding Hopf-Galois structure is \(\widetilde{L}[N]^{J}\), where \(J\) acts on \(\widetilde{L}\) via the Galois action and on \(N\) via \(\,{}^{j}\eta=\lambda_{\odot}(j)\eta\lambda_{\odot}(j)^{-1}\), and that the action of this Hopf algebra on \(L\) is given by
\[\left(\sum_{\eta\in N}c_{\eta}\eta\right)[t]=\sum_{\eta\in N}c_{\eta}\eta^{-1} (eJ^{\prime})[t]. \tag{9}\]
In the case that \(L/K\) is Galois with Galois group \((J,\cdot)\), Stefanello and Trappeniers reinterpret this by showing that the Hopf algebra giving the Hopf-Galois structure corresponding to a binary operation \(\star\) on \(J\) can be written as \(L[J,\star]^{J}\), with \(J\) acting on \(L\) via the Galois action and on \((J,\star)\) via the \(\gamma\)-function of the skew brace, and that the action of this Hopf algebra on \(L\) is given by
\[\left(\sum_{j\in J}c_{j}j\right)[t]=\sum_{j\in J}c_{j}j[t]. \tag{10}\]
We generalize this formulation to skew bracoids.
**Proposition 5.7**.: Let \(\star\) be a binary operation on \(X\) such that \((G,\cdot,X,\star,\odot)\) is a skew bracoid. The Hopf algebra giving the corresponding Hopf-Galois structure on \(L/K\) is \(E[X,\star]^{G}\), where \(G\) acts on \(E\) as Galois automorphisms and on \((X,\star)\) via the \(\gamma\)-function of the skew bracoid. The action of this \(K\)-Hopf algebra on \(L\) is given by
\[\left(\sum_{\overline{x}\in X}c_{\overline{x}}\overline{x}\right)[t]=\sum_{ \overline{x}\in X}c_{\overline{x}}\overline{x}(t)\text{ for all }t\in L. \tag{11}\]
Proof.: By Theorem 5.1, \(\rho_{\star}(X)\) is a \(G\)-stable regular subgroup of \(\operatorname{Perm}(X)\); the Hopf algebra giving the corresponding Hopf-Galois structure is \(E[\rho_{\star}(X)]^{G}\). The
isomorphism of groups \(\rho_{\star}:(X,\star)\to\rho_{\star}(X)\) yields an isomorphism of \(E\)-Hopf algebras \(E[X,\star]\cong E[\rho_{\star}(X)]\). By Equation (8) we have
\[\rho_{\star}(\,{}^{(\gamma(g)}\overline{x})=\lambda_{\odot}(g)\rho_{\star}( \overline{x})\lambda_{\odot}(g^{-1})\]
for all \(g\in G\) and \(\overline{x}\in X\), so this isomorphism is \(G\)-equivariant. Therefore by Galois descent we obtain \(E[X,\star]^{G}\cong E[\rho_{\star}(X)]^{G}\) as \(K\)-Hopf algebras.
By the theorem of Greither and Pareigis the action of the Hopf algebra \(E[\rho_{\star}(X)]^{G}\) on \(L\) is given by
\[\left(\sum_{\overline{x}\in X}c_{\overline{x}}\rho_{\star}( \overline{x})\right)[t] = \sum_{\overline{x}\in X}c_{\overline{x}}\rho_{\star}(\overline{x} )^{-1}(\overline{e})[t]\] \[= \sum_{\overline{x}\in X}c_{\overline{x}}\overline{x}[t]\text{ for all }t\in L.\]
(Note that the expression \(\overline{x}[t]\) is well defined because \(\overline{x}=xG^{\prime}\) and \(t\in L=E^{G^{\prime}}\).) Transporting this to an action of \(E[X,\star]^{G}\) on \(L\) via the inverse of the isomorphism \(E[X,\star]^{G}\cong E[\rho_{\star}(X)]^{G}\), we find
\[\left(\sum_{\overline{x}\in X}c_{\overline{x}}\overline{x}\right)[t]=\sum_{ \overline{x}\in X}c_{\overline{x}}\overline{x}(t)\text{ for all }t\in L,\]
as claimed.
The connection between skew braces and Hopf-Galois structures on Galois extensions has been fruitfully applied to questions concerning the _Hopf-Galois correspondence_. If a Hopf algebra \(H\) gives a Hopf-Galois structure on an extension of fields \(L/K\) then each Hopf subalgebra \(H^{\prime}\) of \(H\) has a corresponding "fixed field"
\[L^{H^{\prime}}=\{x\in L\mid h(x)=\varepsilon(h)x\text{ for all }h\in H^{\prime}\}, \tag{12}\]
where \(\varepsilon:H\to K\) is the counit map of \(H\) (see [7] or [11, Chapter 7]). The resulting correspondence between Hopf subalgebras of \(H\) and intermediate fields of the extension \(L/K\) is inclusion reversing and injective, but not surjective in general. We say that an intermediate field having the form \(L^{H^{\prime}}\) for some Hopf subalgebra \(H^{\prime}\) of \(H\) is _realizable with respect to \(H\)_.
In analogy with the usual Galois correspondence, we always obtain a natural Hopf-Galois structure on the extension \(L/L^{H^{\prime}}\), given by the \(L^{H^{\prime}}\)-Hopf algebra \(L^{H^{\prime}}\otimes_{K}H^{\prime}\). The existence of a quotient Hopf algebra, and a natural quotient Hopf-Galois structure on \(L^{H^{\prime}}/K\), depend upon whether \(H^{\prime}\) is a _normal_ Hopf subalgebra of \(H\).
Turning to the Hopf algebras arising in Greither-Pareigis theory, it is well known that the Hopf subalgebras of a group algebra \(E[N]\) are precisely the sets \(E[P]\) with \(P\) a subgroup of \(N\), and that \(E[P]\) is a normal Hopf subalgebra of \(E[N]\) if and only if \(P\) is a normal subgroup of \(N\) (in this case the quotient Hopf algebra identifies naturally with \(E[N/P]\)). If \(G\) acts on \(N\) by automorphisms and on \(E\) via the Galois action then by the theory of Galois descent the Hopf subalgebras of \(E[N]^{G}\) are precisely the sets \(E[P]^{G}\) with \(P\) a subgroup of \(N\) that is stable under the action of \(G\), with normality of this Hopf subalgebra again equivalent to normality of \(P\) in
\(N\) (in this case we obtain a natural action of \(G\) on \(N/P\), and the quotient Hopf algebra identifies naturally with \(E[N/P]^{G}\)). See [23] for a recent exposition of these ideas.
In the case that \(L/K\) is Galois with Galois group \((J,\cdot)\), Childs [9] (using an earlier version of the connection between Hopf-Galois structures and skew braces) characterizes the Hopf subalgebras of a Hopf algebra of the form \(L[N]^{J}\) (and hence the intermediate fields realizable with respect to the corresponding Hopf-Galois structure) in terms of certain substructures of the corresponding skew brace. In the Stefanello-Trappeniers formulation, the Hopf subalgebras of a Hopf algebra of the form \(L[J,\star]^{J}\) correspond to left ideals of \((J,\star,\cdot)\), with the normal Hopf subalgebras corresponding to ideals. Moreover, they show that if \(I\) is a left ideal of \((J,\star,\cdot)\) then the fixed field of the Hopf subalgebra \(L[I,\star]^{J}\) coincides with the fixed field \(L^{I}\) obtained by viewing \(I\) as a subgroup of the Galois group.
We generalize this formulation to skew bracoids. First we note that the left ideals of a skew bracoid of the form \((G,X)\) have a particular form:
**Proposition 5.8**.: Let \(Y\) be a left ideal of the skew bracoid \((G,X)\). Then \(Y=G_{Y}/G^{\prime}\).
Proof.: Recall from Proposition 3.8 that
\[G_{Y} = \{g\in G\mid g\odot\overline{y}\in Y\text{ for all }\overline{y}\in Y\}\] \[= \{g\in G\mid g\odot\overline{e}\in Y\},\]
and that \((G_{Y},Y)\) is a subskew bracoid of \((G,X)\). In this case the action \(\odot\) of \(G\) on \(X\) is simply left translation of cosets, so \(g\odot\overline{e}=\overline{g}\) for each \(g\in G\), and so we have
\[Y=G_{Y}\odot\overline{e}=\{\overline{g}\mid g\in G_{Y}\}=G_{Y}/G^{\prime}.\]
**Theorem 5.9**.: Let \((G,\cdot,X,\star,\odot)\) be a skew bracoid and let \(H=E[X,\star]^{G}\) give the corresponding Hopf-Galois structure on \(L/K\).
1. There is a bijection between intermediate fields \(F\) of \(L/K\) that are realizable with respect to \(H\) and left ideals \(Y\) of \((G,X)\).
2. Writing \(L^{Y}\) for the intermediate field corresponding to a left ideal \(Y\), we have \(L^{Y}=E^{G_{Y}}\).
3. The Hopf-Galois structure given by \(H\) on \(L/K\) yields a quotient Hopf-Galois structure on \(L^{Y}/K\) if and only if \(Y\) is an ideal of \((G,X)\).
4. The extension \(L^{Y}/K\) is a Galois extension if and only if \(Y\) is an enhanced left ideal of \((G,X)\).
Proof.:
1. By the discussion above there is a bijection between intermediate fields \(F\) of \(L/K\) in the image of the Hopf-Galois correspondence with respect to \(H\) and Hopf subalgebras of \(H\). By Proposition 5.7 we have \(H=E[X,\star]^{G}\) with \(G\) acting on \(E\) as Galois automorphisms and on \(X\) via \(\gamma\). By the discussion above the Hopf subalgebras of \(E[X,\star]^{G}\) correspond to the subgroups of \((X,\star)\) that are stable under \(\gamma(G)\), which are precisely the left ideals of \((G,X)\) (see Definition 3.5).
2. If \(Y\) is a left ideal of \((G,X)\) then the corresponding Hopf subalgebra of \(E[X,\star]^{G}\) is \(E[Y,\star]^{G}\), and the corresponding fixed field is \(L^{Y}=L^{E[Y,\star]^{G}}\), defined as in (12). Recall from Proposition 5.8 that \(Y=G_{Y}/G^{\prime}\); in particular, we have \(G^{\prime}\subseteq G_{Y}\), so \(E^{G_{Y}}\subseteq E^{G^{\prime}}=L\). Now using the action of \(E[X,\star]^{G}\) on \(L\) given in Proposition 5.7 we see that if \(t\in E^{G_{Y}}\) and \(h=\sum_{\overline{y}\in Y}c_{\overline{y}}\overline{y}\in E[Y,\star]^{G}\) then \[h(t)=\sum_{\overline{y}\in Y}c_{\overline{y}}\overline{y}(t)=\sum_{\overline{y}\in Y}c_{\overline{y}}t=\varepsilon(h)t,\] so \(t\in L^{Y}\). Hence \(E^{G_{Y}}\subseteq L^{Y}\). But we have \[[L:L^{Y}]=\dim_{K}(E[Y,\star]^{G})=|Y|=\frac{|G_{Y}|}{|G^{\prime}|}=\frac{[E:E^{G_{Y}}]}{[E:L]}=[L:E^{G_{Y}}].\] Therefore \(L^{Y}=E^{G_{Y}}\), as claimed.
3. As described above, we obtain a quotient Hopf-Galois structure on \(L^{Y}/K\) if and only if \(E[Y,\star]^{G}\) is a normal Hopf subalgebra of \(E[X,\star]^{G}\), which occurs if and only if the left ideal \(Y\) of \((G,X)\) is also a normal subgroup of \(X\), which is precisely the condition for \(Y\) to be an ideal of \((G,X)\) (see Definition 3.5).
4. Since \(L^{Y}=E^{G_{Y}}\), the extension \(L^{Y}/K\) is a Galois extension if and only if the subgroup \(G_{Y}\) attached to the left ideal \(Y\) is a normal subgroup of \(G\), which is precisely the condition for \(Y\) to be an enhanced left ideal of \((G,X)\) (see Definition 3.16).
Finally, we consider the Hopf algebras giving Hopf-Galois structures on the various subextensions.
In [23] it is shown that if \((J,\star,\cdot)\) is a skew brace with corresponding Hopf-Galois structure \(L[J,\star]^{J}\) on a Galois extension \(L/K\) with Galois group \((J,\cdot)\) then, given a left ideal \(I\) of \((J,\star,\cdot)\), the skew brace \((I,\star,\cdot)\) corresponds to the Hopf-Galois structure given by \(L^{I}\otimes_{K}L[I,\star]^{J}\) on \(L/L^{I}\). Moreover, if \(I\) is an ideal then the skew brace \((J/I,\star,\cdot)\) corresponds to the Hopf-Galois structure given by \(L[J/I,\star]^{J}\) on \(L^{I}/K\). It is also observed that the Hopf algebra \(L[J/I,\star]^{J}\) gives a Hopf-Galois structure on \(L^{I}/K\) in the case that \(I\) is a strong left ideal, but not an ideal, of \((J,\star,\cdot)\). In this case \(L^{I}/K\) is a non-normal extension, so there is no corresponding skew brace.
From the details of the proof of Theorem 5.9 we see that if \((G,\cdot,X,\star,\odot)\) is a skew bracoid with corresponding Hopf-Galois structure \(E[X,\star]^{G}\) on a separable extension \(L/K\) and \(Y\) is a left ideal of \((G,X)\) then the skew bracoid \((G_{Y},Y)\) corresponds to the Hopf-Galois structure given by \(L^{Y}\otimes_{K}E[Y,\star]^{G}\) on \(L/L^{Y}\). Moreover, if \(Y\) is an ideal then the skew bracoid \((G,X/Y)\) corresponds to the Hopf-Galois structure given by \(E[X/Y,\star]^{G}\) on \(L^{Y}/K\). Note that this description is valid regardless of whether \(L^{Y}/K\) is a Galois extension or not. In particular, returning to the Galois case for a moment, the Hopf-Galois structure given by \(L[J/I,\star]^{J}\) on the non-normal extension \(L^{I}/K\) in the case that \(I\) is a strong left ideal, but not an ideal, of \((J,\star,\cdot)\)
corresponds under our theory to the skew bracoid \((J,\cdot,J/I,\star,\odot)\) (a quotient of a skew brace as described in Proposition 2.4).
In fact, we have:
**Proposition 5.10**.: Let \((G,\cdot,X,\star,\odot)\) be a skew bracoid and let \(H=E[X,\star]^{G}\) give the corresponding Hopf-Galois structure on \(L/K\). The following are equivalent:
* the Hopf-Galois structure given by \(H\) arises as the quotient of a Hopf-Galois structure on the Galois extension \(E/K\) by some normal Hopf subalgebra;
* \((G,X)\) arises as the quotient of a skew brace by a strong left ideal.
Proof.: First suppose that (i) holds. We may express the Hopf-Galois structure on \(E/K\) as \(E[G,\star]^{G}\) for some binary operation \(\star\) on \(G\) such that \((G,\star,\cdot)\) is a skew brace, and the normal Hopf subalgebra as \(E[G^{\prime},\star]^{G}\) with \((G^{\prime},\star,\cdot)\) a strong left ideal of \((G,\star,\cdot)\). Note that the underlying set of this strong left ideal is necessarily equal to \(G^{\prime}\), since the corresponding fixed field is assumed to be \(L\). The resulting quotient Hopf-Galois structure on \(L/K\) (which coincides with \(H\) by hypothesis) is given by \(E[G/G^{\prime},\star]^{G}\), where \((G/G^{\prime},\star)\) is a quotient group of \((G,\star)\). By Theorem 5.1 this Hopf-Galois structure corresponds to the skew bracoid \((G,\cdot,X,\star,\odot)\); hence this skew bracoid arises as the quotient of a skew brace by a strong left ideal, and so (ii) holds.

Conversely, suppose that (ii) holds. Then there exists some binary operation \(\star\) on \(G\) such that \((G,\star,\cdot)\) is a skew brace, and \(G^{\prime}\) is a strong left ideal, so that \((G,X)\) is obtained via the process described in Proposition 2.4. Note that the underlying sets of the skew brace and strong left ideal are necessarily equal to \(G\) and \(G^{\prime}\), since the sets appearing in the skew bracoid are \(G\) and \(X=G/G^{\prime}\). The skew brace corresponds to a Hopf-Galois structure on \(E/K\), given by \(E[G,\star]^{G}\), and the strong left ideal corresponds to a normal Hopf subalgebra \(E[G^{\prime},\star]^{G}\). The resulting quotient Hopf-Galois structure is given by \(E[G/G^{\prime},\star]^{G}=E[X,\star]^{G}\), which coincides with the Hopf-Galois structure that corresponds to the skew bracoid \((G,\cdot,X,\star,\odot)\); hence (i) holds.
|
2303.10466 | Fragmentation from group interactions: A higher-order adaptive voter model | The adaptive voter model allows for studying the interplay between homophily, the tendency of like-minded individuals to attract each other, and social influence, the tendency for connected individuals to influence each other. However, it relies on graphs, and thus, it only considers pairwise interactions. We develop a minimal extension of the adaptive voter model to hypergraphs to study the interactions of groups of arbitrary sizes using a threshold parameter. We study $S$-uniform hypergraphs as initial configurations. With numerical simulations, we find new phenomena not found in the counterpart pairwise models, such as the formation of bands in the magnetization and the lack of an equilibrium state. Finally, we develop an analytical model using a sparse hypergraph approximation that accurately predicts the bands' boundaries and height. | Nikos Papanikolaou, Renaud Lambiotte, Giacomo Vaccario | 2023-03-18T18:13:13Z | http://arxiv.org/abs/2303.10466v1 |
predicts the bands' boundaries and height. | Nikos Papanikolaou, Renaud Lambiotte, Giacomo Vaccario | 2023-03-18T18:13:13Z | http://arxiv.org/abs/2303.10466v1 | # Fragmentation from group interactions: A higher-order adaptive voter model
###### Abstract
The adaptive voter model allows for studying the interplay between homophily, the tendency of like-minded individuals to attract each other, and social influence, the tendency for connected individuals to influence each other. However, it relies on graphs, and thus, it only considers pairwise interactions. We develop a minimal extension of the adaptive voter model to hypergraphs to study the interactions of groups of arbitrary sizes using a threshold parameter. We study \(S\)-uniform hypergraphs as initial configurations. With numerical simulations, we find new phenomena not found in the counterpart pairwise models, such as the formation of bands in the magnetization and the lack of an equilibrium state. Finally, we develop an analytical model using a sparse hypergraph approximation that accurately predicts the bands' boundaries and height.
keywords: opinion dynamics, network science, group interactions, co-evolution model, hypergraphs +
Footnote †: journal: Physica
## 1 Introduction
How collective phenomena can be explained from their micro-constituents is at the core of many disciplines. For example, in statistical mechanics, the Lenz-Ising model explains the spontaneous magnetization of materials by considering the local interactions between adjacent atomic dipoles [1]. Similarly, in socio-physics, the adaptive voter model describes the emergence of consensus and fragmentation in social networks by modelling interactions among individuals [2]. In this model, each individual \(i\) is characterized by a degree of freedom \(s_{i}\in\{0,1\}\) representing whether the individual is in favour
(\(s_{i}=1\)) or against (\(s_{i}=0\)) a given issue. Then, individuals are connected to each other and interact according to a simple rule: an individual can either adopt the opinion of a neighbour or drop this connection and create a new one with an individual having the same opinion. Despite the simplicity of this dynamics, it exhibits two totally different final states: consensus (i.e., all individuals have the same opinion) or fragmentation, where the social network splits into two separate components with opposite opinions.
We extend this type of model by considering _group interactions_ and study how they affect fragmentation. Group interactions are interactions that involve more than two individuals. In opinion dynamics, examples are group messaging, group discussions or emails with multiple recipients. Studies have shown that complex mechanisms based on group interactions are often required to describe the dynamics in a social group [3; 4; 5; 6]. Examples of such mechanisms are peer pressure [7] and reinforcement [8]. Another mechanism is advanced by Social Impact Theory, stating that groups modulate the impact of a source on a target individual [9; 10]. Moreover, group interactions have been very relevant in diverse fields ranging from physics [11; 12], neural networks [13], and ecology [14].
To model group interactions, we use hyperedges [15]. A hyperedge of size \(k\geq 2\) represents a group interaction among \(k\) individuals. By combining hyperedges, we obtain a hypergraph. This mathematical object is a powerful tool successfully used to study diverse group interactions, such as multi-protein interactions in cellular biology [16], species interactions in theoretical and experimental ecology [17; 18], and academic teams in co-authorship networks [19].
One can also use simplicial complexes to model group interactions [20]. Simplicial complexes are hypergraphs with additional constraints [21]. An important one for this discussion is that their hyperedges are closed under inclusion. This requirement means that all the individuals of a group are assumed to also interact with each other, pairwise or in small groups. We instead use hyperedges to depict group interactions. This choice allows describing arbitrarily large social groups whose members do not necessarily interact pairwise or through smaller groups with all other members.
Our model is based on the adaptive voter model of [22] and has the following dynamics. At each time step, we choose a hyperedge \(e\) and check its size \(n_{e}\). If \(n_{e}=2\), i.e., a pairwise interaction, then we apply the rules of the adaptive voter model [2]. If \(n_{e}\geq 3\), either the _influence_ or _split-merge_ process occurs. The influence process assumes that the minority in
the group adopts the majority's opinion with a certain probability. The split-merge process instead assumes that the minority splits from the group and merges with another group sharing the same majority opinion. A threshold parameter \(\gamma\) determines the critical size of the minority at which either the influence or split-merge process occurs.
There are two main differences from [22]. First, [22] studies the system in a heterogeneous mean-field regime (HMF) where the _group size distribution_ is preserved. Hence, they studied how group interactions affect the dynamics towards total consensus, but fragmentation could not emerge as the final state. Here, we instead explore how fragmentation is affected by group interactions. To this end, we consider a system far away from an HMF. Second, [22] considers that when a group splits, both subgroups merge into other groups. We instead assume that only the minority group merges. This change implies that we now preserve the number of groups over time; hence, the importance of groups also stays constant during the dynamics.
We study how fragmentation is affected by the threshold parameter \(\gamma\) and the initial mean degree, i.e., the average number of groups to which each individual belongs. In general, we find that fragmentation decreases with \(\gamma\) (i.e., the importance of group influence) and initial mean degree (i.e., the system's connectivity). Moreover, we find a striking difference compared to the adaptive voter model without group interactions. We find fragmentation bands, i.e., equilibrium states with different degrees of fragmentation depending on \(\gamma\). As the threshold parameter varies, the transition between these states is discontinuous. We also provide an analytic explanation for these bands and their discontinuity when the hypergraphs are sparse.
The remainder of this paper is divided into three sections. In Sect. 2, we present our model's dynamics and define the observables. In Sect. 3, we present the results: the effects of group interactions on fragmentation (Subsect. 3.1), a comparison adaptive voter model without group interactions (Subsect. 3.2), and an analytic model under a sparsity approximation (Subsect. 3.3). Finally, in Sect. 4, we summarize the results and discuss future work.
## 2 Hypergraph Adaptive Voter Model
We model individuals as nodes on a hypergraph. Each node \(i\) is described by a state variable \(s_{i}(t)\) which represents its opinion at a specific time \(t\) and can take value either \(1\) or \(0\). Let \(N\) be the number of nodes.
### 2.1 Initialization
At the beginning of each simulation, nodes are assigned the opinion \(1\) with probability \(A_{s}\). We generally set \(A_{s}=0.5\), unless stated otherwise. Also, we initialize the system (at \(t=0\)) as an \(S\)-uniform hypergraph \(\mathcal{H}_{0}\) given by \((V,E)\), where \(V\) is the vertex set and \(E\) is the edge set, containing only edges of size \(S\). For the \(S\)-uniform hypergraph, the mean degree of \(\mathcal{H}_{0}\) is \(\langle d(\mathcal{H}_{0})\rangle=\frac{Sn}{N}\), where \(n\) is the number of edges in \(E\). In summary, the parameters for initializing the model are \(N\), \(A_{s}\), \(n\), and \(S\).
In the next section, we define the dynamics of the model. For the dynamics, there are two parameters: a _probability of rewiring_\(p\) and a _threshold parameter_\(\gamma\in[0,0.5]\).
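A minimal Python sketch of this initialization is given below; it is our own illustration, and the sampling scheme (each hyperedge drawn as \(S\) distinct nodes chosen uniformly at random, which gives the binomial degree distribution assumed later for \(\mathcal{H}_{0}\)) is an assumption, since the exact generator is not pinned down here.

```python
import random

def init_hypergraph(N, n, S, A_s=0.5, seed=None):
    """Sample an S-uniform hypergraph H_0 = (V, E) with N nodes and n hyperedges,
    assigning each node opinion 1 with probability A_s (and opinion 0 otherwise)."""
    rng = random.Random(seed)
    edges = [rng.sample(range(N), S) for _ in range(n)]   # n edges, each of size S
    opinions = [1 if rng.random() < A_s else 0 for _ in range(N)]
    return edges, opinions

edges, opinions = init_hypergraph(N=1000, n=5000, S=4, seed=1)
# Mean degree <d(H_0)> = S*n/N, since the n edges contribute S memberships each
print(sum(len(e) for e in edges) / 1000)                  # 4 * 5000 / 1000 = 20.0
```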
### 2.2 Dynamics of the model
We call an edge \(e\) active (inactive) if the opinions of the nodes in \(e\) are not all the same (all the same). This definition applies to both simple edges and hyperedges. The fraction of nodes with opinion \(1\) in an edge \(e\) is denoted by:
\[f_{e}(t)=\frac{1}{n_{e}}\sum_{i\in e}s_{i}(t) \tag{1}\]
where \(n_{e}\) is the size of the edge \(e\), i.e., the number of nodes belonging to the group. At each time step, we sample an edge \(e\) from \(E(t)\):
* if \(e\) is a simple edge (i.e. \(n_{e}=2\)) then
* if \(e\) is active then:
* with probability \(p\), _rewiring_ occurs. This means that each node in \(e\) rewires to a random node from the network with the same opinion. This process changes the edge set \(E\).
* with probability \(1-p\), _adaptation_ occurs. This means that one of the nodes is randomly chosen and it adopts the opinion of the other.
* if \(e\) is inactive then nothing happens.
* if \(e\) is a hyperedge (i.e., \(n_{e}\geq 3\)) then:
* if \(f_{e}(t)\leq\gamma\) or \(f_{e}(t)\geq 1-\gamma\) then _influence_ occurs. This means that each node with the minority opinion changes its opinion with
probability proportional to \(f_{e}(t)\) if the majority opinion is \(1\) and \(1-f_{e}(t)\) otherwise. In case of a tie, the "minority" opinion is chosen randomly. This is an extension of the adaptation for group interactions.
* if \(\gamma<f_{e}(t)<1-\gamma\) then _splitting and merging_ occurs. Splitting means that the hyperedge \(e\) separates into two inactive edges with opposite opinions. Merging means that the smaller split edge integrates with another edge randomly chosen from the hypergraph whose majority opinion is the same as in the split edge.1 This process changes the edge set \(E\) and is an extension of the rewiring for group interactions: it models homophily between groups.
Footnote 1: If the smaller split edge cannot find another edge sharing the same majority, then the larger edge is integrated with another edge. This keeps the number of hyperedges constant.
This procedure is repeated until equilibrium is reached. We define equilibrium as the state in which all edges are inactive. When an edge \(e\) has been selected at a timestep, we graphically depict its dynamics in Figure 1; a code sketch of a single update step follows the figure.
Figure 1: Schematic representation of the model dynamics. The red circles are nodes with opinion \(0\) and the blue circles are nodes with opinion \(1\). Depending on whether it is a simple edge (a) or a hyperedge (b), we apply different rules. In (a), we consider an active simple edge and have either rewiring or adaptation occurring with probability \(p\) or \(1-p\), respectively. In (b), we consider a hyperedge with fraction of nodes with opinion \(1\) (blue) equal to \(\frac{3}{5}\). Influence occurs if \(\frac{3}{5}\geq 1-\gamma\); then each (red) node with the minority opinion may change its opinion with probability \(\frac{3}{5}\). Else, splitting and merging occur: the active hyperedge splits into two. The split edge with the minority opinion (red) is merged into a second edge with the same majority opinion, chosen randomly from the rest of the network.
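The rules above translate directly into code. The sketch below is our own minimal implementation of a single update step, under stated assumptions: it uses the list-based representation from the initialization sketch, takes the influence probability equal to \(f_{e}(t)\) (or \(1-f_{e}(t)\)) rather than merely proportional to it, reads the pairwise rewiring rule literally (the active pair is dropped and each endpoint attaches to a random same-opinion partner), and implements footnote 1 by letting the majority part merge whenever the minority part finds no host edge.

```python
import random

def f(edge, opinions):
    """Fraction of opinion-1 nodes in edge e (Eq. (1))."""
    return sum(opinions[i] for i in edge) / len(edge)

def majority(edge, opinions, rng):
    fe = f(edge, opinions)
    if fe == 0.5:
        return rng.choice([0, 1])            # tie: "majority" chosen at random
    return 1 if fe > 0.5 else 0

def step(edges, opinions, p, gamma, rng=random):
    """One update of the model; edges are mutable lists of node indices."""
    e = rng.choice(edges)
    fe = f(e, opinions)
    if len(e) == 2:                          # simple edge
        if opinions[e[0]] == opinions[e[1]]:
            return                           # inactive: nothing happens
        if rng.random() < p:                 # rewiring (our literal reading: drop
            edges.remove(e)                  # the active pair, attach each node
            for u in e:                      # to a random same-opinion partner)
                same = [v for v in range(len(opinions))
                        if opinions[v] == opinions[u] and v != u]
                if same:
                    edges.append([u, rng.choice(same)])
        else:                                # adaptation
            k = rng.choice((0, 1))
            opinions[e[k]] = opinions[e[1 - k]]
    elif fe <= gamma or fe >= 1 - gamma:     # influence (n_e >= 3)
        maj = majority(e, opinions, rng)
        prob = fe if maj == 1 else 1 - fe    # proportionality constant taken as 1
        for i in e:
            if opinions[i] != maj and rng.random() < prob:
                opinions[i] = maj
    else:                                    # splitting and merging
        maj = majority(e, opinions, rng)
        part = {maj: [i for i in e if opinions[i] == maj],
                1 - maj: [i for i in e if opinions[i] != maj]}
        # the minority part merges into a random edge with the same majority
        # opinion; per footnote 1, if none exists the majority part merges instead
        for op in (1 - maj, maj):
            hosts = [g for g in edges
                     if g is not e and majority(g, opinions, rng) == op]
            if hosts:
                e[:] = part[1 - op]          # the other part remains as edge e
                rng.choice(hosts).extend(part[op])
                return
```

Iterating `step` until every edge is inactive (see the equilibrium test sketched at the end of the magnetization subsection) reproduces the stopping rule.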
Based on this dynamics, hyperedges are treated differently from simple edges, i.e., their update mechanisms are fundamentally different. The motivation behind this is that hyperedges model group interactions, while simple edges model pairwise interactions. For the former, the concepts of majority and minority emerge, unlike for the latter. The presence of a majority and a minority can create biases towards one opinion. In our model, we focus on the case in which the majority opinion is preferred above a certain threshold. Note that the proposed dynamics for group interactions cannot be described as multiple pairwise interactions. This impossibility of decomposing the group interactions introduces new possible phenomena that could not be observed using models of pairwise interactions [23].
### 2.3 Differences from previous models
Even though these dynamics are quite similar to those presented in the introduction, there are important differences. Firstly, we use hypergraphs instead of simplicial complexes. The main reason is that, as explained in the previous sections, hypergraphs are less constraining for modelling large groups, since it is not assumed that each subgroup or pair of individuals in a group is connected, as is the case in simplicial complexes (which are closed under inclusion).
Moreover, in our model, influence and splitting/merging take place deterministically depending on the threshold parameter and not on a probability \(q\). This choice can be justified by the fact that we would expect influence to occur with different probabilities for different group sizes. Also, unlike for groups of size 3, for large groups the fraction of minority opinions can take multiple values, and we would like large values to be treated differently from smaller ones. For example, in a hyperedge of size 5, an edge with only one node of opinion 0 has a different impact on the opinion dynamics than an edge with two nodes of opinion 0. This distinction would not be possible using a hyperparameter probability \(q\), and it would be cumbersome to introduce one for every possible size and minority fraction. The concept of a threshold in group interactions is also sociologically grounded in threshold models of collective behaviour.
Another important difference is that at each timestep whole groups can be picked, unlike in the previous model where only simple edges were randomly chosen. In this way, we decouple the group interactions from the pairwise interactions. This is because the dynamics of the selected hyperedge is determined by the threshold parameter and the fraction of minority opinions in
that hyperedge independently of parameters describing pairwise interactions.
### Edge-based magnetization
Similarly to the classical Voter Model, the quantities of interest are the time to reach equilibrium and the magnetization. The former is useful to investigate whether group interactions accelerate or delay the evolution of the system towards its equilibrium. The latter quantifies whether the system at equilibrium has reached consensus, or its degree of fragmentation.
We distinguish two kinds of magnetization: edge-based magnetization and node-based magnetization. Both kinds of magnetization for finite systems can be used to distinguish whether total consensus or fragmentation occurs. If the magnetization at equilibrium is equal to 1 (-1), then the system has reached total consensus with opinion 1 (0). Otherwise it is fragmented. In formulas, the node-based magnetization at time \(t\) is defined as:
\[m(t)=\frac{\sum_{i\in V}(2s_{i}(t)-1)}{N}, \tag{2}\]
where \(s_{i}\) is the opinion of node \(i\). This means that the node-based magnetization is equal to the fraction of nodes with opinion 1 minus the fraction of nodes with opinion 0 in the node set \(V\). The edge-based magnetization at time \(t\) sums over all the nodes of all the edges and is defined as:
\[m(t)=\frac{1}{\sum_{e\in E(t)}n_{e}}\sum_{\begin{subarray}{c}i\in e\\ e\in E(t)\end{subarray}}(2s_{i}(t)-1) \tag{3}\]
where \(s_{i}\) is the opinion of node \(i\). The edge-based magnetization is related to degree-weighted moments, because the nodes with the highest degrees contribute the most to the sum.
In our study, choosing the node-based over the edge-based magnetization makes a negligible difference, since the initial hypergraph is chosen to have a binomial degree distribution. Therefore, the standard deviation of the degree is relatively small, and hence each node is weighted almost equally in the edge-based magnetization. For this study, we chose the edge-based magnetization, as it is usually preferred in the complex network literature.
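With the same data structures as in the earlier sketch, Eqs. (2) and (3) translate into a few lines (names are ours):

```python
def node_magnetization(opinions):
    # Eq. (2): fraction of nodes with opinion 1 minus fraction with opinion 0
    return sum(2 * s - 1 for s in opinions.values()) / len(opinions)

def edge_magnetization(opinions, edges):
    # Eq. (3): each node is counted once per edge it belongs to,
    # so high-degree nodes contribute more to the sum
    total_size = sum(len(e) for e in edges)
    return sum(2 * opinions[v] - 1 for e in edges for v in e) / total_size
```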
## 3 Results
### Fragmentation of adaptive voters on hypergraphs
#### 3.1.1 Low values of \(\gamma\) lead to fragmentation
In Fig. 2 (a), we report the absolute edge magnetization as a function of \(\gamma\) and the mean initial degree of the nodes. In general, we observe consensus for high values of \(\gamma\), i.e., when the dynamics is dominated by influence. At lower values of \(\gamma\), we instead have fragmentation, i.e., the final state is composed of groups containing nodes with both opinions. This occurs because at low \(\gamma\) it is more likely that an active group splits into two groups holding opposite opinions. As a result, a fragmented state emerges and consensus is out of reach.
#### 3.1.2 High mean degrees lead to total consensus
In Fig. 2 (a), we also observe that the fragmentation depends on the initial mean degree. The mean degree describes the average number of groups to which a node belongs. We explore mean degree values ranging from one to 100. We find that at low initial mean degree we have more fragmentation, while at high mean degree we have less. This occurs because, when decreasing the mean degree, a node belongs to very few groups (one or two). If one of these
Figure 2: Colormaps of mean degree versus \(\gamma\). (a) The color represents the average absolute magnetization. (b) The color represents the average time to convergence. The maximum number of simulated steps was chosen to be 7000. If a trajectory does not converge then this number is assigned for computing the average. The parameters are: \(p=0.55\), \(N=100\), \(S=10\), \(A_{S}=0.5\). For each mean degree and \(\gamma\), we simulate 20 trajectories.
groups splits, it is very unlikely that its nodes will go through the influence dynamics, and fragmentation appears. For high mean degree, nodes belong to more groups and hence groups overlap. Thanks to this overlap, the majority opinion of the system can propagate, and the system can reach total consensus.
#### 3.1.3 Time of convergence
In Fig. 2 (b), we report the time needed to converge to a stable state, i.e., a state without active groups. We find that the time of convergence varies considerably with \(\gamma\) and the initial mean degree. In particular, at fixed initial mean degree, the time of convergence is not monotonic in \(\gamma\). It first increases with \(\gamma\) until a threshold value \(\gamma_{t}\) that depends on the mean degree. Then there is a \(\gamma\)-range in which the time of convergence stays high, and finally it decreases again.
At very low \(\gamma\), fragmentation is a stable state from which the system does not move, see Fig. 2 (a). This state is also quick to reach, as almost every sampled group splits into two. Hence, the time of convergence is of the order of the number of initial groups (Fig. 2 (b)). When increasing \(\gamma\), groups undergo the influence dynamics, which pushes the system towards consensus. However, at these intermediate values of \(\gamma\), influence is not strong enough and does not manage to push the system to consensus. The system gets instead trapped in \(k\)-orbits and never reaches a state without active groups.
A simple example of a \(k\)-orbit is when a node \(i\) is in two groups with opposite majority opinions: a first group active and a second one inactive. When influence acts on the active group and changes the opinion of the node \(i\), the first group becomes inactive and the second one active. This type of dynamics can repeat itself, locking the system on a 2-orbit. Above \(\gamma_{t}\) and the range with the \(k\)-orbits, consensus is the final equilibrium of the system, see Fig. 2 (a). The larger \(\gamma\) is, the more likely influence occurs, and hence the time to reach consensus decreases, see Fig. 2 (b).
### Comparison to the adaptive voter model without group interactions
#### 3.2.1 Predicting the final state
In the adaptive voter model without group interactions by [2], the magnetization (i.e., fraction of ones, \(m\)) and the density of active edges (i.e., fraction of 0-1 edges, \(\rho_{e}\)) describe the phase transition between fragmentation
and consensus. The magnetization denotes whether the system is in fragmentation or total consensus. The density of active edges denotes whether the system has reached equilibrium. In [2], the authors show that the density of active edges has a quadratic form (a concave parabola) in the magnetization during the system's evolution, i.e., \(\rho_{e}(t)\sim-m(t)^{2}\). Hence, by fitting a parabola to the time sequence of (\(m(t)\), \(\rho_{e}(t)\)), the intersections between the fitted parabola and the \(x\)-axis predict the final states that the system eventually reaches as \(t\to\infty\). Prompted by this result, we ask whether it also applies in the presence of group interactions.
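In practice this prediction amounts to a quadratic fit; a minimal sketch using numpy (the function name is ours):

```python
import numpy as np

def predicted_final_magnetizations(m_traj, rho_traj):
    """Fit rho_e ~ a*m^2 + b*m + c along one trajectory and return the
    real roots of the parabola, i.e. the predicted final magnetizations."""
    a, b, c = np.polyfit(m_traj, rho_traj, deg=2)
    roots = np.roots([a, b, c])
    return roots[np.isreal(roots)].real
```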
In Fig. 3, we plot the density of active edges versus the edge-based magnetization for different values of \(\gamma\). We find that parabolas fit the trajectories poorly, and the results of [2] do not generalize to the dynamics of the presented model. The reason is that there is a congestion of trajectories at high values of the density of active edges. This congestion happens because a group is active if there is at least one node with a different opinion. Therefore, in each group multiple nodes need to change their opinion to turn the group from active to inactive. Since we initialize nodes with random opinions and also group them at random, the density of active edges starts close to 1. Then, there are large time periods during which the magnetization can change while the density of active edges stays constant. This process creates a congestion of trajectories at high values of the density of active edges.
Figure 3: The edge-based magnetization for 10 trajectories for 5 values of the threshold parameter \(\gamma\), versus the density of active edges. The parameters are \(p=0.55\), \(N=500\), \(S=32\), \(n=125\) with mean degree \(\langle d(\mathcal{H}_{0})\rangle=8\). The curves cannot be fitted with parabolas due to a congestion of trajectories around a density of active edges equal to 1.
#### 3.2.2 Multiple fragmented states
In Fig. 4(a), we show the absolute final edge-based magnetization versus \(\gamma\). We find the occurrence of bands, i.e., the absolute magnetization is approximately a step function between the initial value of the magnetization \(2A_{s}-1\) and \(1\). In other words, we have _multiple_ fragmented states when varying the strength of influence. This phenomenon is not observed in the adaptive voter model without group interactions, which has a single phase transition. Precisely, when increasing the probability to rewire active edges, the final possible states are _only two_: a state with the fraction of minority equal to zero (total consensus) or a state with the fraction of minority equal to the initial (minority) fraction.
The presence of bands with constant magnetization is an effect of influence alone. Indeed, they are more noticeable if the merging and rewiring mechanism is absent and only influence and splitting act (see Fig. 4(b)). This means that merging and rewiring are unrelated to the existence of the bands. The effect of these two mechanisms is to smooth out the curve, especially for low values of \(\gamma\) where they are more likely to occur.
In Sect. 3.3, we derive the height (i.e., the absolute magnetization value) and the location (i.e., the \(\gamma\) ranges) of these bands when the merging and
Figure 4: Absolute final _edge-based_ magnetization versus threshold parameter, \(\gamma\). In (a), the merging and rewiring mechanism is active, while in (b) it is not. The initial configuration is an \(S\)-uniform hypergraph with edge size \(S=10\), number of nodes \(N=100\), number of initial hyperedges \(n=20\) (thus the mean degree is 2) and probability of rewiring \(p=0.55\). Opinions 1 and 0 were initially assigned to the nodes with equal probability. The points and the shaded area are the mean and the standard deviation of the absolute magnetization for 20 trajectories for each value of \(\gamma\). In both (a) and (b), there are bands where the average absolute magnetization stays constant and then increases abruptly. These bands are more noticeable in (b).
rewiring process is switched off and the hypergraph is sparse.
#### 3.2.3 Convergence to the adaptive voter model without group interactions
For a large number of groups \(n\), the presented model behaves similarly to the adaptive voter model without group interactions. In Fig. 5, we plot the absolute magnetization vs \(\gamma\) for mean degree 10 and 100. The absolute magnetization for most of the values of \(\gamma\) is either equal to \(2A_{s}-1\) (with \(A_{s}=0.55\)) or equal to 1. This phenomenon occurs also in the classical adaptive voter model.
Note also that there is a small region of \(\gamma\) values for which the absolute magnetization has intermediate values. When increasing the mean degree, the "width" of the bands decreases and the system asymptotically approaches a sharp transition.
To better understand the boundaries of the bands and how the absolute magnetization changes, we now analytically study our model. We consider the case where the mean degree is low since the bands are more prominent in this regime (see Fig. 2 (a)).
### Sparse hypergraph approximation
We develop an analytical expression to describe the boundaries and the height of the bands, i.e., we characterize the multiple fragmented states. For this analysis, we assume that the hypergraph is sparse. The sparsity
Figure 5: Absolute final edge-based magnetization versus \(\gamma\). (a) Mean degree equal to 10 (\(N=100\), \(n=100\), \(S=10\)). (b) Mean degree equal to 100 (\(N=100\), \(n=1000\), \(S=100\)). The points and the shaded area are the mean and the standard deviation over 10 trajectories for each of the 100 values of \(\gamma\) used. The other parameters are \(p=0.55\) and \(A_{s}=0.5\).
assumption allows us to ignore the overlap between edges. Also, we neglect the merging and rewiring mechanism since bands still exist without this mechanism.
#### 3.3.1 The boundaries of the bands
In the Appendix, we formally prove the existence of the bands and their boundaries with respect to the parameter \(\gamma\). To do this, we calculate the master equation for \(N(k,l,t)\), the number of edges of size \(l\) with \(k\) nodes of opinion 1 at time \(t\). We recursively solve this equation and find that the boundaries of the bands are the rational numbers \(\frac{k}{S}\) where \(k=1,2,...,S\) such that \(N(k,S,0)\) is larger than zero.
A heuristic proof is the following. Without loss of generality, let us assume that \(A_{s}>0.5\). Then, at \(t=0\), the majority of edges of size \(S\) that do not split will on average become inactive edges of opinion 1 due to the influence mechanism. These edges increase the absolute edge-based magnetization, and this increase is preserved in time by the sparsity of the hypergraph. Precisely, the sparsity hypothesis implies that there is little overlap between edges, and hence inactive edges stay inactive as they cannot be re-activated by nodes belonging to other edges. On the other hand, if an initial hyperedge splits, it creates two inactive edges of opposite opinions. These inactive edges do not change the absolute edge-based magnetization and their state stays frozen. Hence, the final absolute edge-based magnetization depends on the _initial_ fraction of edges that is susceptible to influence or splitting. A hyperedge is susceptible to influence or splitting depending on the value of \(\gamma\) and its minority fraction, which takes the discrete values \(\frac{k}{S}\) with \(k=1,2,..,S\). For example, by varying \(\gamma\), edges start splitting when \(\gamma<\frac{k}{S}\). Hence, at the critical values \(\frac{k}{S}\) with \(k=1,2,..,S\), we have a different number of edges susceptible to influence or splitting and a different final magnetization.
The final result is that the boundaries of the bands for sparse hypergraphs are given by the following theorem.
**Theorem 1**.: _Let an \(S\)-uniform hypergraph with \(S>2\), \(N\) nodes, \(n\) edges, \(p\) the probability of rewiring, \(\gamma\) the threshold parameter evolve following the dynamics described in Sect. 2, without the merging and rewiring mechanism. For low mean degrees of the initial hypergraph (e.g., \(\frac{Sn}{N}\approx 1\)), the magnetization discontinuously changes at the following critical values of \(\gamma\)_
\[\gamma_{c}=\frac{k}{S}, \tag{4}\]
_where \(k\in\mathbb{Z}\) such that \(N(k,S,0)>0\) and \(\frac{k}{S}\leq\frac{1}{2}\) or:_
\[\gamma_{c}=1-\frac{k}{S}, \tag{5}\]
_where \(k\in\mathbb{Z}\) such that \(N(k,S,0)>0\) and \(\frac{k}{S}>\frac{1}{2}\)_
In Figure 6, we show that the analytical predictions match the simulation results for mean degree equal to \(2\). For systems with high mean degrees (approximately higher than \(4\)), the previous analysis does not work because inactive edges can still get re-activated due to overlap. However, the boundaries of the bands still occur at the rational values \(\frac{k}{S}\) of \(\gamma\). We argue that this occurs because the trajectories in which inactive edges get re-activated are rare and do not contribute significantly to the final state.
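As a sketch (names are ours), the critical values of Theorem 1 can be read off directly from the initial configuration:

```python
from collections import Counter

def critical_gammas(opinions, edges, S):
    """Band boundaries predicted by Theorem 1: one critical gamma per
    occupied class N(k, S, 0) > 0, mapped to k/S or 1 - k/S."""
    counts = Counter(sum(opinions[v] for v in e) for e in edges)
    gammas = set()
    for k in counts:
        frac = k / S
        if 0 < frac <= 0.5:
            gammas.add(frac)
        elif 0.5 < frac < 1:
            gammas.add(1 - frac)
    return sorted(gammas)

# Example of Fig. 6: occupied classes k in {2, 3, 5, 6, 7, 8} with S = 10
# yield gamma_c in {2/10, 3/10, 4/10, 5/10}.
```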
#### 3.3.2 The height of the bands
We calculate the height of the bands, i.e., the final absolute magnetization at equilibrium for sparse hypergraphs. To do so, we first characterize initial configurations according to their fraction of edges having a certain
Figure 6: Illustration of Theorem 1, which calculates the locations of the bands. Parameters: \(N=100\), \(p=0.5\), \(S=10\), \(n=20\) edges. Subfigure 6(a) shows the frequency of \(N(k,10,0)\) for \(k\in\{0,..,10\}\), where \(N(k,l,t)\) is the number of edges of size \(l\) with \(k\) nodes with opinion \(1\) at time \(t\). The values of \(k\) with non-zero \(N(k,10,0)\) are \(k=\{2,3,5,6,7,8\}\). Based on Theorem 1, we calculate the minority fraction of the edges \((k,10)\) with \(k=\{2,3,5,6,7,8\}\), and this gives the critical values of the bands \(\gamma_{c}=\{\frac{2}{10},\frac{3}{10},\frac{4}{10},\frac{5}{10}\}\), which match the simulations in Subfigure 6(b). In Subfigure 6(b), the blue vertical lines are the positions of the bands calculated by the previous theorem.
majority. Then, we use this information to calculate the probability of different initial configurations. Finally, we compute the final expected magnetization by computing the expected final state of each hyperedge based on its initial majority. The expected final state of each edge is obtained under the sparse hypergraph assumption.
To estimate the initial expected magnetization, recall that for a given initial \(A_{s}\) there are many different possible initial configurations. For example, if \(A_{s}=0.55\), with high probability about half of the nodes have opinion 1, but with low probability all the nodes may have opinion 0. To account for these different initial configurations, we consider the following binomial probability for observing \(k\) nodes with opinion 1:
\[p(k,A_{s})=\binom{N}{k}A_{s}^{k}(1-A_{s})^{N-k} \tag{6}\]
Then, the probability to observe an edge of size \(S\) with \(\lambda\) nodes with opinion 1 at fixed initial fraction \(\alpha=k/N\) is:2
Footnote 2: To write this last equation, we assume that the system is large enough, as we are using sampling with replacement. The exact formula is instead: \(q(\lambda,k,N)=\frac{\binom{k}{\lambda}\binom{N-k}{S-\lambda}}{\binom{N}{S}}\)
\[p(\lambda,k/N)=\binom{S}{\lambda}\left(\frac{k}{N}\right)^{\lambda}\left(1-\frac{k}{N}\right)^{S-\lambda} \tag{7}\]
By taking the product of \(p(\lambda,k/N)\) and \(p(k,A_{s})\), we obtain the probability to observe an edge with \(\lambda\) nodes with opinion 1 in an initial configuration with \(k\) nodes with opinion 1. Using this probability, we can compute the initial expected magnetization:
\[\langle m(0,A_{s})\rangle=\sum_{k=0}^{N}\sum_{\lambda=0}^{\min{(S,k)}}p(k,A_{s} )p(\lambda,k/N)n\cdot m(\lambda) \tag{8}\]
where \(m(\lambda)=\frac{1}{Sn}\left(2\lambda-S\right)\) is the magnetization of an edge with \(\lambda\) nodes with opinion 1 and \(n\) is the number of hyperedges.
From (8), we obtain the final expected magnetization by recalling that when a hypergraph is sparse, the evolution of its edges is independent and hence determined by the magnitude of the initial majority:
1. if \(\gamma<\frac{\lambda}{S}<1-\gamma\) then \(e\) splits and the final magnetization does not change,
2. if \(\frac{\lambda}{S}>1-\gamma\) then all its nodes with opinion \(0\) will eventually get opinion \(1\),
3. if \(\frac{\lambda}{S}<\gamma\) then all its nodes with opinion \(1\) will eventually get opinion \(0\).
By applying these three conditions to (8), we compute the expected final magnetization:
\[\begin{split}\langle m(\infty,A_{s})\rangle=\sum_{k=0}^{N}p(k,A_{s})\left[\sum_{\lambda=0}^{S\gamma}p(\lambda,k/N)m(0)+\right.\\ +\sum_{\lambda=S\gamma}^{S-S\gamma}p(\lambda,k/N)m(\lambda)+\\ +\left.\sum_{\lambda=S-S\gamma}^{S}p(\lambda,k/N)m(S)\right]n\end{split} \tag{9}\]
Figure 7: Comparison of the analysis with the sparsity approximation (Equation 9) and the simulations for mean degree \(1\) and \(1.5\). Parameters: \(p=0.55\), \(N=500\), \(S=10\), \(n=50\) (for Subfigure 7(a)), \(n=75\) (for Subfigure 7(b)), \(A_{S}=0.55\). The points and the shaded area are the mean and the standard deviation of the trajectories, respectively, for \(1000\) initial configurations. The black line is the analytical magnetization based on Equation 9.
In Figure 7, we compare (9) with the simulations for mean degree equal to \(1\) and \(1.5\). The analytic predictions for the final absolute magnetization are compatible with the values coming from the simulations. We observe that the match is better for mean degree equal to \(1\). This is expected, as when the mean degree is low the sparsity assumption is less violated. In Fig. 8, we have a very close match between the analytic formula and the simulations for the limit case of mean degree equal to \(1\). Precisely, we obtain a relative error lower than \(10\%\), which decreases with increasing sample size (see left panels in Fig. 8). Also, note that the relative error is higher for larger \(\gamma\). This possibly occurs because the number of possible final states increases with \(\gamma\) and hence we require more simulations to explore them.
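For reference, a direct evaluation of Eqs. (6)-(9) can be written as the following sketch (names are ours; boundary cases \(\lambda/S=\gamma\) are resolved by the strict inequalities of the three conditions above):

```python
from math import comb

def expected_final_magnetization(N, n, S, A_s, gamma):
    """Sparse-hypergraph estimate of <m(infinity, A_s)>, Eqs. (6)-(9):
    each edge independently splits (magnetization frozen) or is driven
    by influence to all-0 or all-1."""
    total = 0.0
    for k in range(N + 1):
        pk = comb(N, k) * A_s**k * (1 - A_s)**(N - k)                # Eq. (6)
        alpha = k / N
        for lam in range(S + 1):
            pl = comb(S, lam) * alpha**lam * (1 - alpha)**(S - lam)  # Eq. (7)
            if lam / S < gamma:
                lam_final = 0       # condition 3: all nodes end with opinion 0
            elif lam / S > 1 - gamma:
                lam_final = S       # condition 2: all nodes end with opinion 1
            else:
                lam_final = lam     # condition 1: split, magnetization frozen
            total += pk * pl * (2 * lam_final - S)
    return total / S                # n * m(lambda) = (2*lambda - S) / S
```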
When increasing the mean degree, the analytic prediction based on (9) significantly underestimates the simulated values. The reason is that the analytic prediction is valid under the assumption that there is no overlap between edges. For high mean degree, the overlap is instead significant and allows the initial global majority opinion to better diffuse in the system.
Figure 8: Comparison of the analysis with the sparsity approximation (Equation 9) and the simulations for mean degree \(1\). Parameters: \(p=0.55\), \(N=100\), \(S=10\), \(n=10\), \(A_{S}=0.55\). The points are the mean over XXX trajectories. On the right, we have the convergence between the simulations and the analytical absolute magnetization when increasing the sample size.
Precisely, recall that it is more likely to sample a hyperedge whose local majority is equal to the global majority. In this sampled edge, nodes might change their opinion to the global majority opinion. Because of the overlap, these changes increase the number of nodes holding the global majority opinion not only in the sampled edge but also in its overlapping edges. Thus, the expected number of edges with the local majority opinion equal to the global majority opinion increases. In other words, overlap increases the absolute final magnetization.
## 4 Conclusion
We have extended the adaptive voter model by including group interactions. We have shown that the inclusion of group interactions drastically changes the dynamics and can lead to fragmentation bands at equilibrium. Specifically, fragmentation bands appear at equilibrium because the final global majority can reach different values. This final value depends on the critical size a majority should have in a group to convince the minority to change their opinion. This type of final state is not present in the absence of group interactions, i.e., with only pairwise interactions.
Second, different groups may share individuals, and this group overlap might create geometrical frustration. The presence of this frustration creates \(k\)-orbits as final states. Therefore, unlike the classical adaptive voter model, the system does not always reach a single equilibrium configuration. Instead, the system may get trapped in oscillations where some individuals change their opinions periodically. Although this finding might not have direct applications, it shows how group interactions enrich the set of possible final states from a mathematical point of view.
For analytical tractability, we have assumed an \(S\)-uniform hypergraph as the initial topology. This is a simplification, since real-world social hypergraphs are heterogeneous: individuals interact in groups of different sizes and with non-random connectivity. Recently, in [24; 5], it was shown that the heterogeneity of the initial configuration in the presence of higher-order interactions can significantly affect the system dynamics. Based on our results, we expect instead no drastic changes in the dynamics, but only an increase in the number of fragmentation bands. It remains open to study the interplay between group interactions and heterogeneous initial configurations.
Finally, it would be interesting to extend the model to consider individuals with different importance. Some individuals might be more influential because of their status [25], reputation [26], or position in a hierarchical structure [27]. It would be straightforward to account for the importance of each individual when computing group majorities. This extension would allow for modelling scenarios where a _silent majority_ adopts the opinion of a _louder minority_.
## Acknowledgements
The authors thank Frank Schweitzer and Luca Verginer for providing useful suggestions about the notation and visualizations.
|
2302.07233 | S-Motzkin paths with catastrophes and air pockets | So called $S$-Motzkin paths are combined with the concepts `catastrophes' and `air pockets'. The enumeration is done by properly set up bivariate generating functions which can be extended using the kernel method. | Helmut Prodinger | 2023-02-14T18:23:37Z | http://arxiv.org/abs/2302.07233v1 | # \(S\)-Motzkin paths with catastrophes and air pockets (very early version)
###### Abstract
So called \(S\)-Motzkin paths are combined with the concepts 'catastrophes' and 'air pockets'. The enumeration is done by properly set up bivariate generating functions which can be extended using the kernel method.
**Keywords:**\(S\)-Motzkin path; catastrophe; kernel method; air pocket.
**2020 Mathematics Subject Classification:** 05A15.
## 1 Introduction
Dyck paths consist of up-steps \((1,1)\) and down-steps \((1,-1)\), start at the origin and do not go below the \(x\)-axis; they appear in many texts, and we just give one major reference, [12]. Typically, the path returns to the \(x\)-axis at the end, but we also consider the scenario of open paths, where the paths end at level \(j\), say. A popular variation of Dyck paths are Motzkin paths; the difference is just that now a horizontal step \((1,0)\) is also allowed.
In this paper, we concentrate on \(S\)-Motzkin paths, which is a subfamily of all Motzkin paths: all three steps (up, level, down) must appear \(n\) times, and, ignoring the down-steps, the sequence is \((1,0)(1,1)(1,0)(1,1)(1,0)(1,1)\ldots(1,0)(1,1)\). The following figure shows how these paths are recognized: the two layers enforce that the flat and up steps are interlaced. Only paths that end in the origin are \(S\)-Motzkin, but we consider all paths wherever they end.
Here is an example of such an \(S\)-Motzkin path with 15 steps:
This subfamily of Motzkin paths originated from a question in a student competition; see [10] and [7] for history and analysis. In the following we will combine this family with _catastrophes_ and _air pockets_, both originating in papers by Jean-Luc Baril and his team [5], [3]; the older paper by Banderier and Wallner [2] might be called the standard reference for lattice paths with catastrophes. The very recent papers [4, 5] contain some bijective aspects. Our own paper [9] investigates the situation in the context of skew Dyck paths.
Dyck (and other lattice) paths with catastrophes are characterized by additional steps ('catastrophes') that bring the path back to the \(x\)-axis in just one step from any level \(j\geq 2\). For \(S\)-Motzkin paths the definition is similar, and the graphical description in Figure 2 is easiest to understand; the catastrophes are drawn in special colors.
In the last section, \(S\)-Motzkin paths with air pockets will be discussed. Briefly, down steps of any length are now allowed, but no two down steps may follow each other.
Figure 1: Graph to recognize \(S\)-Motzkin paths; they start and end at the special state (origin).
## 2 \(S\)-Motzkin paths with catastrophes
In the sequel, we analyze the paths as in Fig. 2.
We introduce generating functions \(f_{i}=f_{i}(z)\), where the coefficient of \(z^{n}\) counts the number of paths starting at the origin (=the big circle) and ending after \(n\) steps at state \(i\) (=level \(i\)) in the upper layer. Similarly, in the generating functions \(g_{i}=g_{i}(z)\), the coefficient of \(z^{n}\) counts the number of paths starting at the origin and ending after \(n\) steps at state \(i\) (=level \(i\)) in the lower layer.
The following recursions are easy to see:
\[f_{0} =1+z(f_{1}+f_{2}+f_{3}+f_{4}+\cdots),\] \[f_{i} =zg_{i-1}+zf_{i+1},\ i\geq 1,\] \[g_{0} =zf_{0}+z(g_{1}+g_{2}+g_{3}+g_{4}+\cdots),\] \[g_{i} =zf_{i}+zg_{i+1},\ i\geq 1.\]
Since \(f_{0}\) and \(g_{0}\) are somewhat special, we leave them out for the moment and compute the other ones, \(f_{i}\), \(g_{i}\), \(i\geq 1\). Eventually we will solve the equations for \(f_{0}\) and \(g_{0}\), which will turn out to be just linear. Therefore we introduce the bivariate generating functions
\[F(u)=F(u,z)=\sum_{i\geq 1}u^{i-1}f_{i},\quad G(u)=G(u,z)=\sum_{i\geq 1}u^{i-1}g_{i}\]
and we treat \(f_{0}\) and \(g_{0}\) as parameters. Summing the recursions over all possible values of \(i\),
\[F(u)=zg_{0}+zuG(u)+\frac{z}{u}[F(u)-f_{1}],\quad G(u)=zF(u)+\frac{z}{u}[G(u)-g_ {1}].\]
Note that \(f_{1}=F(0)\) and \(g_{1}=G(0)\). We compute
\[F(u)=\frac{z(-u^{2}g_{0}+zug_{0}+uf_{1}-zf_{1}+zu^{2}g_{1})}{z^{2 }u^{3}-u^{2}+2zu-z^{2}},\] \[G(u)=\frac{z(-zu^{2}g_{0}+ug_{1}+zuf_{1}-zg_{1})}{z^{2}u^{3}-u^{2 }+2zu-z^{2}}.\]
To factor the denominator, we set \(u=zv\), and also \(z^{3}=x=t(1-t)^{2}\) to get
\[z^{2}(vt-1)(v^{2}t^{2}-2tv^{2}+vt+v^{2}-2v+1).\]
Therefore the three roots (expressed again in the variable \(u\)) are given by
\[u_{1}=\frac{z}{t},\qquad u_{2}=-z\frac{t-2+\sqrt{4t-3t^{2}}}{2(1-t)^{2}}, \qquad u_{3}=-z\frac{t-2-\sqrt{4t-3t^{2}}}{2(1-t)^{2}}\]
and so
\[z^{2}u^{3}-u^{2}+2zu-z^{2}=z^{2}(u-u_{1})(u-u_{2})(u-u_{3}).\]
These three roots appear already in [7], where more details are provided. Therefore
\[F(u)=\frac{-u^{2}g_{0}+zug_{0}+uf_{1}-zf_{1}+zu^{2}g_{1}}{z(u-u_{1})(u-u_{2})( u-u_{3})}\quad\text{and}\quad G(u)=\frac{-zu^{2}g_{0}+ug_{1}+zuf_{1}-zg_{1}}{z(u -u_{1})(u-u_{2})(u-u_{3})}.\]
Cancelling the bad factors \((u-u_{2})(u-u_{3})\) out, we get
\[F(u)=\frac{-g_{0}+zg_{1}}{z(u-u_{1})}\quad\text{and}\quad G(u)=\frac{-g_{0}}{(u -u_{1})}.\]
Figure 2: Graph to recognize \(S\)-Motzkin paths with catastrophes. Purple arrows lead to the initial state. Olive arrows lead to the level 0 state in the second layer.
As a general remark, factors are bad if \(\frac{1}{u-\overline{u}}\) has no power series expansion around \(z=0\), \(u=0\). This is part of the kernel method, see [6] for a user-friendly collection of examples. Plugging in \(u=0\), we get
\[f_{1}=\frac{g_{0}-zg_{1}}{zu_{1}}\quad\text{and}\quad g_{1}=\frac{g_{0}}{u_{1}}= \frac{g_{0}t}{z}\quad\text{and thus}\quad f_{1}=g_{0}\frac{1-\frac{z}{u_{1}}}{zu _{1}}=g_{0}\frac{t(1-t)}{z^{2}}.\]
Now we can solve for \(f_{0}\) and \(g_{0}\):
\[f_{0}=1+z(f_{1}+f_{2}+f_{3}+f_{4}+\cdots)=1+zF(1)=1+\frac{-g_{0}+ zg_{1}}{1-u_{1}}\] \[g_{0}=zf_{0}+z(g_{1}+g_{2}+g_{3}+g_{4}+\cdots)=zf_{0}+\frac{-zg_ {0}}{1-u_{1}},\]
Therefore
\[f_{0}=\frac{-t+z-zt}{-t+z-2zt+zt^{2}}\quad\text{and}\quad g_{0}=\frac{z(z-t)}{ -t+z-2zt+zt^{2}}.\]
Using the Lagrange inversion formula (or contour integration), we get the expansion
\[t=\sum_{n\geq 1}\frac{1}{n}\binom{3n-2}{n-1}x^{n}=\sum_{n\geq 1}\frac{1}{n} \binom{3n-2}{n-1}z^{3n}.\]
\[f_{0}=1+z^{3}+z^{5}+3z^{6}+z^{7}+7z^{8}+13z^{9}+11z^{10}+43z^{11}+70z^{12}+89z^ {13}+264z^{14}+424z^{15}+650z^{16}+1657z^{17}+\cdots\]
\[g_{0}=z+2z^{4}+2z^{6}+7z^{7}+2z^{8}+15z^{9}+32z^{10}+23z^{11}+96z^{12}+174z^{13 }+192z^{14}+604z^{15}+1048z^{16}+1434z^{17}+\cdots\]
The coefficients are not 'nice', in the sense that there are no simple expressions available for them. Consequently, \(f_{k}\) and \(g_{k}\) also do not have nice coefficients, although the factor \(\frac{1}{u-u_{1}}\) leads to nice coefficients, as can be seen from [7].
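The expansions of \(f_{0}\) and \(g_{0}\) can be cross-checked by dynamic programming on the automaton of Fig. 2; the following Python sketch encodes our reading of the recursions:

```python
def smotzkin_catastrophe_counts(nmax):
    """Coefficients of f0 (paths of length n ending at the origin),
    by stepping through the two-layer automaton of Fig. 2."""
    H = nmax + 2                   # level cap; levels grow by at most 1 per step
    f = [0] * H                    # upper layer: f[i] = paths ending at level i
    g = [0] * H                    # lower layer
    f[0] = 1
    coeffs = [1]
    for _ in range(nmax):
        nf, ng = [0] * H, [0] * H
        nf[0] = sum(f[1:])         # catastrophes back to the origin
        ng[0] = f[0] + sum(g[1:])  # flat step from f0, or catastrophes to g0
        for i in range(1, H):
            nf[i] = g[i - 1] + (f[i + 1] if i + 1 < H else 0)  # up / down
            ng[i] = f[i] + (g[i + 1] if i + 1 < H else 0)      # flat / down
        f, g = nf, ng
        coeffs.append(f[0])
    return coeffs

# smotzkin_catastrophe_counts(9) == [1, 0, 0, 1, 0, 1, 3, 1, 7, 13],
# matching the series for f0 above.
```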
Now we move to **asymptotics**.
As can be seen from the discussion in [10], the asymptotic enumeration of \(S\)-Motzkin paths is driven by a square-root type singularity, as it often happens in the enumeration of trees and lattice paths:
\[t\sim\frac{1}{3}-\frac{2}{3\sqrt{3}}\Big{(}1-\frac{27x}{4}\Big{)}^{1/2},\]
and the closest singularity (in \(x\)) is at \(\frac{4}{27}\). Switching to the \(z\)-notation, as we have to in the context of catastrophes, we must look at the three roots closest to origin of modulus \(\left(\frac{4}{27}\right)^{1/3}=0.5291336839\). Consequently, the exponential growth of \(S\)-Motzkin paths is given by the reciprocal: \(1.88988157485^{n}\). The exponent \(n\) refers to the length \(n\) of the \(S\)-Motzkin path. There are only paths when \(n\) is divisible by 3, but that is of no concern.
For the case of catastrophes, we get a closer singularity. We need the dominant zero of the denominator \(-t+z-2zt+zt^{2}\). A computer provides the value \(\overline{z}=0.5248885986\dots\) and the corresponding value \(\overline{t}=0.2755080409\dots\). As we can see, the value is slightly smaller: \(0.5248885986<0.5291336839\). Consequently this number leads to a _simple_ pole, and the exponential growth is larger, as is not too surprising, considering the additional steps that are possible. The calculations are as follows:
We must expand \(f_{0}\) and \(g_{0}\) at the simple pole \(z=\overline{z}\). First note that
\[\frac{dt}{dz}=\frac{dx}{dz}\frac{dt}{dx}=3z^{2}\frac{1}{(1-t)(1-3t)}\quad\text{and}\quad\frac{dt}{dz}\Big{|}_{z=\overline{z},t=\overline{t}}=\frac{3\overline{z}^{2}}{(1-\overline{t})(1-3\overline{t})}.\]
\[-t+z-2zt+zt^{2}\sim\frac{d}{dz}(-t+z-2zt+zt^{2})\Big{|}_{z=\overline{z}}(z-\overline{z})=\Big{(}-\frac{dt}{dz}+1-2t-2z\frac{dt}{dz}+t^{2}+2zt\frac{dt}{dz}\Big{)}\Big{|}_{z=\overline{z}}(z-\overline{z})\]
\[\sim-11.0530836206(z-\overline{z})\sim 21.0579609634(1-1.905166167z),\]
and further
\[f_{0}=\frac{-t+z-zt}{-t+z-2zt+zt^{2}}\sim 0.0049752931\frac{1}{1-1.905166167z}.\]
Since \(f_{0}(z)\) is the generating function of \(S\)-Motzkin paths with catastrophes, we obtain an asymptotic equivalent for the number of such paths of length \(n\) via \([z^{n}]f_{0}(z)\sim 0.0049752931(1.905166167)^{n}\).
A similar computation leads to \([z^{n}]g_{0}(z)\sim 0.0062160344(1.905166167)^{n}\). We continue with
\[F(u)=\frac{-g_{0}+zg_{1}}{z(u-u_{1})}=\frac{g_{0}(1-t)t}{z^{2}(1-u\frac{t}{z})}\]
and therefore
\[[u^{k}]F(u)=\frac{(z-t)(1-t)}{-t+z-2zt+zt^{2}}\frac{t^{k+1}}{z^{k+1}}.\]
Similarly,
\[[u^{k}]G(u)=[u^{k}]\frac{t(z-t)}{-t+z-2zt+zt^{2}}\frac{1}{(1-\frac{u}{z})}=\frac {t(z-t)}{-t+z-2zt+zt^{2}}\frac{t^{k}}{z^{k}}.\]
We note that \(G(1)=\frac{zt}{-t+z-2zt+zt^{2}}\). Now we move to partial \(S\)-Motzkin paths with arbitrary endpoint. In terms of generating functions, this just means setting \(u:=1\), and we find the generating function
\[f_{0}(z)+F(1,z)+g_{0}(z)+G(1,z) =\frac{1}{-t+z-2zt+zt^{2}}\Big{[}(-t+z-zt)+(1-t)t+z(z-t)+zt\Big{]}\] \[=\frac{z+z^{2}-zt-t^{2}}{-t+z-2zt+zt^{2}}\] \[=1+z+z^{2}+2z^{3}+3z^{4}+5z^{5}+10z^{6}+16z^{7}+30z^{8}+58z^{9}+98 z^{10}+189z^{11}+\cdots.\]
The asymptotic behaviour of the coefficients is also of the form \(\mathsf{const.}(1.905166167)^{n}\).
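The dominant pole can be recomputed numerically; a small sketch using scipy, assuming the branch of \(t\) with \(t(0)=0\):

```python
from scipy.optimize import brentq

def t_of_z(z):
    # invert x = t(1 - t)^2 with x = z^3, on the branch t in [0, 1/3]
    return brentq(lambda t: t * (1 - t)**2 - z**3, 0.0, 1.0 / 3.0)

def denominator(z):
    t = t_of_z(z)
    return -t + z - 2 * z * t + z * t**2

z_bar = brentq(denominator, 0.4, 0.529)
print(z_bar, t_of_z(z_bar), 1 / z_bar)
# approx 0.5248885986, 0.2755080409, 1.905166167 as quoted above
```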
## 3 Right-to-left S-Motzkin paths with catastrophes
We use similar generating functions as before, namely \(a_{i}(z)\) for the top layer, and \(b_{i}(z)\) for the bottom layer. Then
\[a_{0}=1+zb_{0},\quad a_{1}=zb_{1}+za_{0},\quad a_{i}=zb_{i}+za_{i -1}+za_{0},\ i\geq 2,\] \[b_{0}=za_{1},\quad b_{1}=zb_{0}+za_{2},\quad b_{i}=za_{i+1}+zb_{ i-1}+zb_{0},\ i\geq 2.\]
As before, we introduce
\[A(u)=\sum_{i\geq 1}u^{i-1}a_{i},\quad B(u)=\sum_{i\geq 1}u^{i-1}b_{i}.\]
We compute
\[A(u) =a_{1}+\sum_{i\geq 2}u^{i-1}a_{i}=a_{1}+z\sum_{i\geq 2}u^{i-1}[b_{ i}+a_{i-1}+a_{0}]\] \[=a_{1}+z\sum_{i\geq 2}u^{i-1}b_{i}+z\sum_{i\geq 2}u^{i-1}a_{i-1}+z \sum_{i\geq 2}u^{i-1}a_{0}\] \[=a_{1}+zB(u)-zb_{1}+zuA(u)+\frac{zu}{1-u}a_{0}\]
and
\[B(u) =b_{1}+\sum_{i\geq 2}u^{i-1}b_{i}=b_{1}+z\sum_{i\geq 2}u^{i-1}[a _{i+1}+b_{i-1}+b_{0}]\] \[=b_{1}+z\sum_{i\geq 2}u^{i-1}a_{i+1}+z\sum_{i\geq 2}u^{i-1}b_{i-1}+ z\sum_{i\geq 2}u^{i-1}b_{0}\] \[=b_{1}+\frac{z}{u}\sum_{i\geq 0}u^{i}a_{i+1}-\frac{z}{u}a_{1}-za_ {2}+zuB(u)+\frac{zu}{1-u}b_{0}\] \[=\frac{z}{u}A(u)-\frac{z}{u}a_{1}+zuB(u)+\frac{z}{1-u}b_{0}.\]
Figure 3: Graph to recognize \(S\)-Motzkin paths with catastrophes from right-to-left.
We rewrite this system in the form
\[A(u) =zuA(u)+zB(u)+\Phi(u),\quad\Phi(u)=a_{1}-zb_{1}+\frac{zu}{1-u}a_{0},\] \[B(u) =\frac{z}{u}A(u)+zuB(u)+\Psi(u),\quad\Psi(u)=-\frac{z}{u}a_{1}+ \frac{z}{1-u}b_{0}.\]
The solution is
\[A(u)=\frac{z(zub_{0}-zu^{2}a_{0}-za_{1}+zua_{1}+ua_{0})}{(z^{2}u^{3}-2zu^{2}+u- z^{2})(1-u)},\quad B(u)=\frac{z(-zu^{2}b_{0}-zu^{2}a_{1}+a_{1}u+b_{0}u+zua_{1}+za_ {0}-a_{1})}{(z^{2}u^{3}-2zu^{2}+u-z^{2})(1-u)}.\]
Recall that if
\[u_{1}=\frac{z}{t},\qquad u_{2}=-z\frac{t-2+\sqrt{4t-3t^{2}}}{2(1-t)^{2}}, \qquad u_{3}=-z\frac{t-2-\sqrt{4t-3t^{2}}}{2(1-t)^{2}};\]
then
\[z^{2}u^{3}-2zu^{2}+u-z^{2}=z^{2}\Big{(}u-\frac{1}{u_{1}}\Big{)}\Big{(}u-\frac{ 1}{u_{2}}\Big{)}\Big{(}u-\frac{1}{u_{3}}\Big{)};\]
this can be checked directly, compare also [7]. Since \(\frac{1}{u_{1}}=\frac{t}{z}\sim z^{2}\), the factor \(\big{(}u-\frac{1}{u_{1}}\big{)}\) is 'bad' and must cancel out. Plugging \(u=\frac{t}{z}\) into the numerators, we must get \(0\). Dividing out the factor \(u-\frac{t}{z}\), the solutions now look like
\[A(u)=\frac{z(zb_{0}+za_{1}-uza_{0}-a_{0}t+a_{0})}{z^{2}\big{(}u-\frac{1}{u_{2} }\big{)}\big{(}u-\frac{1}{u_{3}}\big{)}\big{(}1-u\big{)}},\quad B(u)=\frac{z(- zub_{0}+za_{1}-zua_{1}+a_{1}+b_{0}-tb_{0}-ta_{1})}{z^{2}\big{(}u-\frac{1}{u_{2}} \big{)}\big{(}u-\frac{1}{u_{3}}\big{)}(1-u)}.\]
Note that
\[\Big{(}u-\frac{1}{u_{2}}\Big{)}\Big{(}u-\frac{1}{u_{3}}\Big{)}=u^{2}+\frac{t- 2}{z}u+\frac{z}{t}.\]
Now we can plug in \(u=0\) to get
\[A(0)=a_{1}=\frac{z(zb_{0}+za_{1}-a_{0}t+a_{0})}{(1-t)^{2}},\quad\text{and} \quad a_{1}=\frac{z(zb_{0}+a_{0}-a_{0}t)}{-z^{2}+1-2t+t^{2}}.\]
Since \(a_{0}=1+zb_{0}\), we get \(a_{1}=\frac{z(2zb_{0}+1-t-ztb_{0})}{-z^{2}+1-2t+t^{2}}\). We also get
\[B(0)=b_{1}=\frac{z(za_{1}+a_{1}+b_{0}-tb_{0}-ta_{1})}{(1-t)^{2}}.\]
From \(b_{0}=za_{1}\) we find
\[b_{0}=\frac{z^{2}(zb_{0}+a_{0}-a_{0}t)}{-z^{2}+1-2t+t^{2}}=\frac{t(1-t)}{-t+z- 2zt+zt^{2}},\quad\text{and}\quad a_{0}=1+zb_{0}=f_{0}=\frac{-t+z-zt}{-t+z-2zt+zt ^{2}}.\]
In principle, one could also write formulae for general \(a_{k}\) and \(b_{k}\), by using partial fraction decomposition. Since the results look very complicated and do not provide extra insight, we refrain from giving such explicit results.
A brief comment about asymptotics: Since the denominators are the same as in the left-right instance, we get again an exponential behaviour, with the same rate as before. The concept of open end does not make sense here since there are infinitely many such paths of a given length \(n\geq 1\).
While one might be tempted to attack the current question using some bijective tricks, it is worthwhile to note that our approach is very flexible, and, e. g., subsets of the catastrophes may be considered, with little extra effort.
## 4 \(S\)-Motzkin paths and air pockets
Now we move to another model popularized by Baril, namely introducing air pockets. These are maximal chains of down-steps, which are this time counted as only one step. Since the main issue of \(S\)-Motzkin paths is to keep the pattern flat, up, flat, up, \(\ldots\) alive, the down-steps live their own life, and we managed to construct a graph with 4 layers of states, describing all possible scenarios. Note that the wavy edges represent transitions without reading a symbol. The generating functions for the four layers, reaching level \(i\), can be read off from the diagram; note that the wavy edge is labelled by 1, not by \(z\), since there is no step done.
\[a_{0}=1,\quad a_{i}=zd_{i-1},\ i\geq 1,\quad b_{i}=a_{i}+z\sum_{j>i}a_{j},\]
\[c_{i}=zb_{i},\quad d_{i}=c_{i}+z\sum_{j>i}c_{j}.\]
The bivariate generating functions are \(A(u)=\sum_{i\geq 0}u^{i}a_{i}\), \(B(u)=\sum_{i\geq 0}u^{i}b_{i}\), etc. Summing the recursions over all values of \(i\), we find the system
\[A(u)=1+zuD(u),\quad B(u)=A(u)+\frac{z}{1-u}[A(1)-A(u)],\] \[C(u)=zB(u),\quad D(u)=C(u)+\frac{z}{1-u}[C(1)-C(u)].\]
The system can be reduced to two equations
\[C(u)=zA(u)+\frac{z^{2}}{1-u}[A(1)-A(u)],\] \[\frac{A(u)-1}{u}=zC(u)+\frac{z^{2}}{1-u}[C(1)-C(u)];\]
and so, by solving,
\[A(u)=\frac{-z^{2}uC(1)+z^{3}u^{2}A(1)-1+2u+z^{4}uA(1)-u^{2}+z^{2}u^{2}C(1)-z^{3}uA(1)}{-1+2u-u^{2}+z^{2}u-2z^{2}u^{2}-2z^{3}u+z^{2}u^{3}+2z^{3}u^{2}+z^{4}u},\] \[C(u)=\frac{\left(z^{2}u^{2}C(1)-u^{2}-zu+2u+zA(1)u+z^{3}uC(1)-z^{2}uC(1)-1+z-zA(1)\right)z}{-1+2u-u^{2}+z^{2}u-2z^{2}u^{2}-2z^{3}u+z^{2}u^{3}+2z^{3}u^{2}+z^{4}u}.\]
Plugging in \(u=1\) gives the void equations \(A(1)=A(1)\) and \(C(1)=C(1)\). Therefore the denominator has to be investigated. We find that \(-1+2u-u^{2}+z^{2}u-2z^{2}u^{2}-2z^{3}u+z^{2}u^{3}+2z^{3}u^{2}+z^{4}u=z^{2}(u- \rho)(u-\sigma)(u-\tau)\); the explicit forms provided by Maple are useless, but fortunately gfun (in Maple) allows manipulations with the relevant series:
\[\rho=z^{-2}-2z^{3}-z^{4}-2z^{5}-6z^{6}-4z^{7}-15z^{8}-22z^{9}-33z^{10}-86z^{11}-115z^{12}-256z^{13}-486z^{14}-804z^{15}-1783z^{16}-3074z^{17}\] \[-6049z^{18}-12104z^{19}-21902z^{20}-44918z^{21}-85235z^{22}-165124z^{23}-331137z^{24}-631740z^{25}-1261785z^{26}-2477694z^{27}+\cdots\]
The other roots are ugly, but we compute the simpler \((u-\sigma)(u-\tau)=u^{2}+Ku+L\) with
\[K=-2z^{2}-2z^{5}-z^{6}-2z^{7}-6z^{8}-4z^{9}-15z^{10}-22z^{11}-33z ^{12}-86z^{13}-115z^{14}-256z^{15}-486z^{16}+\cdots,\] \[L=z^{2}+2z^{5}+2z^{7}+5z^{8}+2z^{9}+14z^{10}+16z^{11}+27z^{12}+74 z^{13}+86z^{14}+222z^{15}+395z^{16}+\cdots.\]
So we must divide this term out from numerator and denominator. Therefore
\[A(u)=\frac{z^{3}A(1)+z^{2}C(1)-1}{z^{2}(u-\rho)},\quad\text{and}\quad C(u)= \frac{z^{2}C(1)-1}{z(u-\rho)},\]
Figure 4: Four layers of states.
and by \(u=1\) and solving,
\[A(1)=\frac{-1+\rho}{z^{2}(-1+\rho+z)^{2}},\quad\text{and}\quad C(1)=\frac{1}{z(-1 +\rho+z)}.\]
Since these values are known, we find
\[A(u)=\frac{\rho}{\rho-u}\quad\text{and}\quad C(u)=\frac{1-\rho}{z(1-\rho-z)( \rho-u)},\]
where the identity \(1-2\rho+\rho^{2}-z^{2}\rho+2z^{2}\rho^{2}+2z^{3}\rho-z^{2}\rho^{3}-2z^{3}\rho^{2}-z^{4}\rho=0\) was used for simplification. One sees \(A(0)=1\), which is clear for combinatorial reasons. Further
\[a_{k}=[u^{k}]A(u)=\rho^{-k}\quad\text{and}\quad c_{k}=[u^{k}]C(u)=\frac{1-\rho }{z(1-\rho-z)}\rho^{-k-1}.\]
The other quantities are then \(b_{k}=\frac{1}{z}c_{k}\) and \(d_{k}=\frac{1}{z}a_{k+1}\), for any \(k\geq 0\).
We leave the analysis of this air pocket model from right to left, as well as other parameters, to the interested reader. The factorization \((u-\rho^{-1})(u-\sigma^{-1})(u-\tau^{-1})\) will play a role here, and only one factor is bad, namely \((u-\rho^{-1})\).
It is possible to consider catastrophes and air pockets at the same time; we leave such considerations to enthusiastic young researchers.
|
2303.06204 | Electrically Controlled Anomalous Hall Effect and Orbital Magnetization
in Topological Magnet MnBi2Te4 | In this work, we propose an intrinsic mechanism to understand the even-odd
effect, namely the opposite signs of the anomalous Hall resistance and the
different shapes of hysteresis loops for even and odd septuple layers (SLs), of
MBE-grown MnBi2Te4 thin films with electron doping. In particular, we show that
the non-zero hysteresis loops in the anomalous Hall and magnetic circular
dichroism measurements for even-SLs MnBi2Te4 films are originated from two
different anti-ferromagnetic (AFM) states with opposite magnetoelectric
coefficients that give rise to different energies of zeroth Landau levels of
the surface states in this model. The complex form of the anomalous Hall
hysteresis loop in even-SLs MnBi2Te4 films can be understood from two magnetic
transitions, a transition from one AFM state to the other AFM state followed by
a second transition to the ferromagnetic state. Our model also provides a
microscopic understanding of the electrical switching between two AFM states
via the axion electrodynamics in even-SL MnBi2Te4 films. We further study
orbital magnetization and magnetoelectric coefficient in MnBi2Te4 films, and
find an even-odd oscillation behavior of the magnetoelectric coefficient. | Ruobing Mei, Yi-Fan Zhao, Chong Wang, Yafei Ren, Di Xiao, Cui-Zu Chang, Chao-Xing Liu | 2023-03-10T20:45:32Z | http://arxiv.org/abs/2303.06204v4 | Electrically Controlled Anomalous Hall Effect and Orbital Magnetization in Topological Magnet MnBi\({}_{2}\)Te\({}_{4}\)
###### Abstract
In this work, we propose an intrinsic mechanism to understand the even-odd effect, namely the opposite signs of the anomalous Hall resistance and the different shapes of hysteresis loops for even and odd septuple layers (SLs), of MBE-grown MnBi\({}_{2}\)Te\({}_{4}\) thin films with electron doping. In particular, we show that the non-zero hysteresis loops in the anomalous Hall and magnetic circular dichroism measurements for even-SLs MnBi\({}_{2}\)Te\({}_{4}\) films are originated from two different anti-ferromagnetic (AFM) states with opposite magnetoelectric coefficients that give rise to different energies of zeroth Landau levels of the surface states in this model. The complex form of the anomalous Hall hysteresis loop in even-SLs MnBi\({}_{2}\)Te\({}_{4}\) films can be understood from two magnetic transitions, a transition from one AFM state to the other AFM state followed by a second transition to the ferromagnetic state. Our model also provides a microscopic understanding of the electrical switching between two AFM states via the axion electrodynamics in even-SL MnBi\({}_{2}\)Te\({}_{4}\) films. We further study orbital magnetization and magnetoelectric coefficient in MnBi\({}_{2}\)Te\({}_{4}\) films, and find an even-odd oscillation behavior of the magnetoelectric coefficient.
## I Introduction
The recent discovery of MnBi\({}_{2}\)Te\({}_{4}\) (MBT) [1; 2; 3; 4; 5; 6; 7; 8], a tetradymite-type anti-ferromagnetic compound with a topologically non-trivial electronic band structure, provides an excellent platform to explore the interplay between topological physics and magnetism [9; 10]. Exotic magnetic topological phases, including the quantum anomalous Hall (QAH) state [11; 12], the axion insulator [11; 12; 13] and a higher-order Mobius insulator [14], have been theoretically predicted in this compound. For the bulk materials, the A-type anti-ferromagnetism, namely ferromagnetic coupling within one septuple layer (SL) and anti-ferromagnetic (AFM) coupling between two adjacent SLs, has been unambiguously established through magnetic susceptibility measurements [3; 4] and neutron diffraction experiments [2]. The non-trivial band structure has also been demonstrated through the observation of Dirac surface states with angle-resolved photoemission spectroscopy [3; 15; 16; 17; 18], though the existence of a magnetic gap of the surface state is still under debate [4; 15; 19; 20]. These experiments confirmed unambiguously the coexistence of magnetic order and topological band structure in bulk MBT.
The situation in MBT thin films, however, is subtle. Theoretically, an even-odd effect was predicted for _insulating_ MBT films [11; 12; 13]. With an odd number of SLs, the magnetizations of the top and bottom SLs are parallel, leading to a nonzero net magnetization and the QAH state with a quantized Hall resistance. With an even number of SLs, the magnetizations of the top and bottom SLs are anti-parallel, leading to zero net magnetization and the axion insulator state [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], in which the resistance shows a zero Hall plateau [11; 12]. Later experiments however challenged this scenario through the reflective magnetic circular dichroism (RMCD) and anomalous Hall (AH) measurements in exfoliated MBT flakes [37; 38; 39; 40; 41; 42]. The hysteresis loops from the two measurements are not synchronized. Specifically, with an odd number of SLs, RMCD signals exhibit a clear hysteresis loop whereas the AH hysteresis loop is almost invisible. With an even number of SLs, a small RMCD signal around zero external field was reported, whereas a clear anomalous Hall resistance hysteresis loop is found. These experimental findings indicate the complexity of real materials, where complex chemical and magnetic environments that depend on individual sample quality might play a role.
More recently, another type of even-odd effect was found in the _metallic_ MBT films grown by molecular beam epitaxy (MBE) method in the electron doping regime [43]. Although the metallic samples with both even and odd SLs show hysteresis loops in AH resistance, the shapes of the loops are clearly distinct. In both cases, the AH resistance hysteresis loops can be decomposed into two AH components. One component behaves the same for the even and odd SLs, which is a trivial component with a smaller coercive field from the minor Mn-doped Bi\({}_{2}\)Te\({}_{3}\) phase. The other AH component is expected to be from the dominant MBT phase and shows interesting even-odd effect: (1) The signs of AH resistance at zero magnetic field are opposite for even and odd SLs; (2) For even SLs, the AH sign reverses twice around the spin-flop transition between the AFM state and canted AFM state, as shown in Fig. S1(a), which reproduces the transport data in Fig. 3g of Ref. [43] (one can also find similar data in Fig. 4e of Ref. [37] and Fig. 2b of Ref. [39] for even-SL MBT films), while no such behavior occurs for odd SLs. We notice that such patterns have also been observed in even SLs of MBT films that are fabricated by flux method followed by mechanical exfoliation [37; 39; 44]. In Ref. [37], the opposite AH signs of MBT films with even and odd SLs were attributed to the competition between intrinsic (Berry curvature contribution) and extrinsic (impurity
scattering contribution) AH effects [45]. However, as samples prepared by different approaches (MBE or mechanical exfoliation) have very different disorder levels, this motivates us to explore an intrinsic mechanism for this even-odd effect.
In this work, we provide a theoretical understanding of the AH hysteresis loop observed in MBT films based on a two-surface-state model and a four-band thin film model. We show that for the electron doping case, the intrinsic AH signs in even and odd SLs are opposite. In particular, with even SLs, we argue that there are two AFM states (see Fig. S1(b)) that are degenerate in the presence of inversion symmetry. The inversion symmetry breaking from the substrate and external gates splits these two AFM states, leading to the AH hysteresis loop in even SL MBT films. Around the spin-flop transition, there are actually two magnetic transitions: the system first transits from one AFM state to the other AFM state and then to the ferromagnetic (FM) state, causing the double sign change in the AH response (see Fig. S1(a)). In contrast, with hole doping, we show that the AH signs in even and odd SLs are the same.
We then turn to the orbital magnetization. In ferromagnetic materials, local magnetic moments usually come from the electron spin and are much larger than the orbital magnetic moments that originate from the electron's orbital motion [46]. Orbital ferromagnetism in twisted bilayer graphene is an exception, due to the lack of spin local magnetic moments of carbon atoms [47; 48; 49]. In A-type AFM materials, such as MBT, the cancellation of the spin magnetization in adjacent SLs makes the orbital magnetization important in the magnetoelectric properties of the materials. We calculate the layer number dependence of the orbital magnetization in MBT films and find that the orbital magnetization in even SLs has a linear dependence on the electric field, while it remains constant under a varying electric field in odd SLs. We thus can extract the magnetoelectric coefficient, which approaches the bulk value determined by the axion parameter as the layer thickness increases for even SLs and stays zero for odd SLs, revealing an even-odd oscillation behavior. We also clarify the relation and distinction between the axion parameter that features the axion insulator phase and the magnetoelectric coefficient that characterizes the magnetoelectric response and can be quantized for the topological magnetoelectric effect.
## II Even-odd effect of hysteresis loop in MBT films
We start from a symmetry consideration of the AFM states in even SL MBT films. For even SLs, there are two possible AFM configurations, labelled as AFM1 and AFM2, and the magnetization in each SL for AFM1 is opposite to that for AFM2, as shown in Fig. S1(b). Without any external fields, both AFM ground states spontaneously break inversion \(\hat{I}\) and time-reversal symmetry \(\hat{T}\). AFM1 is related to AFM2 by either the \(\hat{I}\) or the \(\hat{T}\) operation, and thus they have the same ground state energy. Similarly, in the presence of an external magnetic field (electric field), these two AFM ground states are still energetically degenerate, since they can be related to each other by \(\hat{I}\) (\(\hat{T}\)). Breaking both \(\hat{I}\) and \(\hat{T}\) explicitly is essential to energetically distinguish these two AFM states. In experiments, the degeneracy can be lifted in the presence of both a magnetic field and an effective electric field from asymmetric substrates or electric gates [41; 44].
To make the above argument more concrete, let us consider a simple model for the MBT films with two surface states, and the corresponding Hamiltonian can be expressed as
\[H=H_{M}+H_{e}+H_{e-M}, \tag{1}\]
where \(H_{M}\) is the magnetization part, \(H_{e}\) is the electron part and \(H_{e-M}\) is the coupling between electrons and magnetization. Specifically,
\[H_{M}=J\mathbf{m}_{s1}\cdot\mathbf{m}_{s2}-M_{s}\mathbf{B}\cdot(\mathbf{m}_{s1}+\mathbf{m}_{s2})- \frac{K}{2}(m_{s1,z}^{2}+m_{s2,z}^{2}), \tag{2}\]
[50; 51; 39] where \(\mathbf{m}_{si}=(\sin\theta_{i}\cos\phi_{i},\sin\theta_{i}\sin\phi_{i},\cos\theta_{i})\) is the magnetization vector with polar and azimuthal angles \(\theta\) and \(\phi\), the index \(i=1,2\) labels the magnetization at the top and bottom surfaces, \(M_{s}\) is the saturation magnetization, \(J\) labels the effective exchange coupling between the magnetizations at the two surfaces (\(J>0\) for AFM and \(J<0\) for FM), \(\mathbf{B}\) is the external magnetic field and \(K\) is the easy-axis anisotropy. Here we have absorbed the magnitude of the magnetization into the definition of the parameters \(J\), \(K\) and \(M_{s}\), so \(\mathbf{m}_{si}\) is a unit vector (\(|\mathbf{m}_{si}|=1\)). For \(H_{e}\), we consider the two-surface-state model with the electron Hamiltonian
\[H_{e}=v_{f}((k_{x}+\frac{e}{\hbar}A_{x})\sigma_{y}-(k_{y}+\frac{e}{\hbar}A_{y })\sigma_{x})\tau_{z}+V_{0}\tau_{z}/2, \tag{3}\]
where \(\sigma\) and \(\tau\) are Pauli matrices in the spin and layer sub-space, \(v_{f}\) is the Fermi velocity, \(V_{0}\) is the asymmetric potential between two surfaces and the Landau gauge is chosen as \(\mathbf{A}=(0,A_{y},0)=(0,Bx,0)\) for the out-of-plane magnetic field \(\mathbf{B}=(0,0,B)\). The electron-magnetization coupling Hamiltonian depends on magnetic configurations, and takes the form
\[H_{e-M}=gM_{s}\begin{pmatrix}m_{s1,z}\sigma_{z}&0\\ 0&m_{s2,z}\sigma_{z}\end{pmatrix} \tag{4}\]
where \(g\) is the exchange coupling coefficient between electrons and magnetic moments, and the two blocks of the above Hamiltonian correspond to the two surfaces. Here the exchange coupling to the in-plane magnetization is dropped because it only shifts the locations of the surface Dirac points and can generally be absorbed into the gauge potential \(\mathbf{A}\). The direct Zeeman coupling between the electron spin and the external magnetic field is dropped, as it is much smaller than the exchange coupling in the low magnetic field limit. The magnetic simulation in Ref. [13] and
[39] suggests that the ground state of \(H_{M}\) is given by the out-of-plane AFM configurations with \(\mathbf{m}_{s1}=(0,0,\pm 1)\) and \(\mathbf{m}_{s2}=-\mathbf{m}_{s1}\) at zero or low external magnetic fields, which just corresponds to the AFM1 and AFM2 shown in Fig. S1(b). Thus, the magnetization energy for these two AFM states is \(E_{M}=-J-K\). The energy of the AFM states is independent of the magnetic field \(B\) due to the vanishing total magnetization \(\mathbf{m}_{s1}+\mathbf{m}_{s2}=0\). On the other hand, the FM state has the magnetization energy \(E_{M}=J-K\pm 2M_{s}B\), where the sign is chosen such that the FM state with magnetization vectors aligned with \(B\) is the favored configuration. Thus, for \(J>0\), the AFM states have lower energy, while the FM state can be energetically favored at a large \(B\). To distinguish the ground state energy for the two AFM states, we need to further include the electron energy from \(H_{e}+H_{e-M}\), which involves the Landau level (LL) spectrum in the presence of magnetic fields (see Appendix Sec. I.A). For Dirac surface states, besides the normal LLs, given by \(\varepsilon^{N}_{\mu,\nu}=\mu\sqrt{2v_{f}^{2}N/l_{c}^{2}+(gM_{s})^{2}}+\nu V_{0}/2\), where \(\mu,\nu=\pm 1\), \(N=1,2,\cdots\) representing the \(N\)th LL and \(l_{c}=\sqrt{\frac{\hbar}{eB}}\) is the magnetic length, there are additional zeroth LLs (zLLs), given by \(\varepsilon^{0}_{1,\lambda}=\lambda V_{0}/2+\lambda gM_{s}\) for the AFM1 state and \(\varepsilon^{0}_{2,\lambda}=\lambda V_{0}/2-\lambda gM_{s}\) for the AFM2 state under positive magnetic field, where \(\lambda=+\) (\(\lambda=-\)) corresponds to the zLL on the top (bottom) surface. We first notice that all the higher LLs \(\varepsilon^{N}\) (\(N>0\)) of these two surface states are equivalent for the two AFM configurations, even in the presence of both electric and magnetic fields. Thus, all the energy difference between the two AFM states comes from the zLLs. The energies of the zLLs depend on the signs of the magnetic gaps (the sign of \(g\) and \(m_{si,z}\) in \(H_{e-M}\)). When \(g>0\), for the AFM1 (AFM2) configuration, the zLL for the top surface state with \(\lambda=+\) has the energy of the conduction band bottom (valence band top) while that for the bottom surface state with \(\lambda=-\) is at the valence band top (conduction band bottom), as shown in Fig. S1(c). At zero electric field \(V_{0}=0\), the occupied zLL has the same energy for the two AFM configurations, \(\varepsilon^{0}_{1,-}=\varepsilon^{0}_{2,+}=-gM_{s}\), but this degeneracy will be broken by an external electric field, which shifts the energies \(\varepsilon^{0}_{1,-}\) and \(\varepsilon^{0}_{2,+}\) oppositely. As a result, for \(V_{0}>0\), the occupied zLL of AFM1 decreases in energy (\(\varepsilon^{0}_{1,-}=-V_{0}/2-gM_{s}\)) while that of AFM2 increases (\(\varepsilon^{0}_{2,+}=V_{0}/2-gM_{s}\)) (Fig. S1(d)), and the energy difference between them is \(\Delta\varepsilon=\varepsilon^{0}_{1,-}-\varepsilon^{0}_{2,+}=-V_{0}<0\). Therefore, AFM1 is energetically favored for \(V_{0}>0\), which corresponds to the anti-parallel alignment of external electric and magnetic fields (we choose \(V_{0}=-eEL\) with \(E\) representing the electric field and \(L\) the film thickness). For \(V_{0}<0\) (parallel alignment of electric and magnetic fields), AFM2 has a lower ground state energy than AFM1 as \(\Delta\varepsilon=-V_{0}>0\).
Therefore, we have shown microscopically that the energy difference between the two AFM states arises from the energy shift of the zeroth LLs of the Dirac surface states under electric fields.
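To make the zLL bookkeeping above concrete, the following minimal numerical sketch evaluates the LL spectrum and the zLL energy difference; the parameter values (\(v_f\), \(gM_s\), \(V_0\), \(B\)) are illustrative placeholders, not values fitted to MBT.

```python
import numpy as np

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

def landau_levels(B, v_f, g_Ms, V0, N_max=3):
    """Surface-state LLs of the two-surface model (energies in eV).

    B in T, v_f in m/s, g_Ms and V0 in eV.  Returns the normal LLs
    eps^N_{mu,nu} (N = 1..N_max) and the zeroth LLs for AFM1 and AFM2.
    """
    l_c = np.sqrt(HBAR / (E_CHARGE * B))  # magnetic length
    normal = []
    for N in range(1, N_max + 1):
        cyc = HBAR * v_f * np.sqrt(2 * N) / l_c / E_CHARGE  # eV
        for mu in (+1, -1):
            for nu in (+1, -1):
                normal.append(mu * np.sqrt(cyc**2 + g_Ms**2) + nu * V0 / 2)
    # zeroth LLs: lambda = +1 (top surface), -1 (bottom surface)
    z_afm1 = {lam: lam * V0 / 2 + lam * g_Ms for lam in (+1, -1)}
    z_afm2 = {lam: lam * V0 / 2 - lam * g_Ms for lam in (+1, -1)}
    return normal, z_afm1, z_afm2

# Occupied zLLs for V0 > 0: AFM1 is lower than AFM2 by exactly V0
_, z1, z2 = landau_levels(B=5.0, v_f=4e5, g_Ms=0.03, V0=0.02)
print(z1[-1] - z2[+1])  # eps^0_{1,-} - eps^0_{2,+} = -V0 = -0.02
```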
After identifying the lower-energy AFM configuration under both external magnetic and electric fields, we next study the sign of the anomalous Hall (AH) conductance for the two-surface-state model of the MBT films. According to Fig. 3 in Ref. [43], the MBT samples are heavily electron-doped, so that the ordinary Hall resistance shows a negative slope. For odd SL MBT films, when reducing the magnetic field from a large value to zero, the zero-field AH resistance shares the same sign as the ordinary Hall resistance without any sign change. We find that only the Landau level spectrum for \(g>0\) is consistent with the above experimental observations within our current definition of the parameters, and thus we can fix the exchange coupling sign as positive (see Appendix Sec.I.B for more details). In contrast, for even SL MBT films, the zero-field AH resistance and the ordinary Hall resistance have opposite signs. The sign of the AH contribution of each band is fixed by the magnetization vectors \(m_{s1,z}\) and \(m_{s2,z}\). We first consider the odd SL with \(m_{s1,z}=m_{s2,z}=1\), which are aligned with a positive magnetic field. To be consistent with the experiments, we determine that the valence bands of both surfaces contribute a negative AH sign while the conduction bands contribute a positive AH sign (see Appendix Fig. S1(b)). For even SL, the AH signs of the bands are reversed for the surface whose magnetization is flipped compared to that of the odd SL. Therefore, the valence band of the top (bottom) surface should contribute a positive (negative) sign to the AH conductance for AFM1, and vice versa for AFM2, as illustrated in Fig. S1(c), where blue and red stand for positive and negative AH signs, respectively. When the Fermi level is within the magnetic gap (i.e., at the charge neutral point), the valence bands of the top and bottom surfaces give exactly opposite contributions, which leads to zero overall AH conductance. For the system with \(E>0\) (\(V_{0}<0\)) at electron doping, the favored AFM configuration is AFM2, so the conduction band of the top surface appears near the Fermi energy and contributes a positive sign to the overall AH conductance (Fig. S1(e)). When \(E<0\) (\(V_{0}>0\)), the favored AFM configuration is AFM1, which also exhibits a positive AH conductance at electron doping (Fig. S1(d)). Therefore, at electron doping, the energetically favored AFM state of even-SL MBT films always possesses a positive sign of the AH conductance under a positive magnetic field, regardless of the direction of the electric field. Under a negative external magnetic field, a similar analysis suggests that the odd SL MBT film displays positive AH, while the even SL shows negative AH. We conclude that the odd and even SL films always have opposite AH signs at electron doping, independent of the alignment between electric and magnetic fields. These results explain the observations of the even-odd AH effect in both exfoliated and MBE-grown samples, where the systems are electron-doped.
The situation changes dramatically for hole doping. For \(B>0\) and \(E>0\), the Fermi level now crosses the valence band of the bottom surface in AFM2, so that all the occupied states contribute a negative AH conductance, which is the same sign as the AH conductance in odd SL. A similar situation occurs for \(B>0\) and \(E<0\). Therefore, the odd SL and even SL both exhibit a negative (positive) AH sign under a positive (negative) magnetic field, again regardless of the electric field direction. This is consistent with the experiment in Ref. [44], where the 5 SL sample was shown to exhibit negative AH at both electron and hole doping, while the 6 SL sample displays negative and positive AH resistance at hole and electron doping, respectively.
To buttress our arguments, we also investigate a thin film model which includes both surface and bulk states, and perform numerical calculations of the energies of the zLLs and the corresponding AH conductivity for 2 SL MBT films (see Appendix Sec.I.B). Although we only show the results for the 2 SL MBT film for simplicity, the conclusions can be applied to thicker even SL films. For 2 SL MBT, there are four possible AFM and FM configurations, as shown in Fig. S2(a). We denote the ground state energy densities of the AFM1 and AFM2 configurations as \(\varepsilon_{1}\) and \(\varepsilon_{2}\), respectively, and calculate the energy density difference \(\Delta\varepsilon=\varepsilon_{1}-\varepsilon_{2}\) as a function of the magnetic field \(B\) and the asymmetric potential \(V_{0}=-eEL\) induced by the out-of-plane electric field \(E\), with \(L\) being the film thickness (Fig. S2(b)). As expected, for \(\mathbf{B}\cdot\mathbf{E}<0\), we find \(\Delta\varepsilon<0\), so that the AFM1 configuration is energetically favored, while for \(\mathbf{B}\cdot\mathbf{E}>0\), \(\Delta\varepsilon>0\) and the AFM2 configuration is preferred. We then compare the Hall conductivity in 2 SL and 3 SL MBT films at both electron and hole doping with carrier density \(n=\pm 2\times 10^{12}\,cm^{-2}\) for a positive magnetic field, as shown in Fig. S2(c). For 3 SL MBT films, the top and bottom SLs have parallel magnetization, and the middle SL has opposite magnetization due to the AFM coupling (see Appendix Fig. S2(a)).
Figure 1: (a) Experimental measurement of the AH resistance \(\rho_{yx}\) as a function of magnetic field \(\mu_{0}H\) in a 2 SL MBT film. The spin-flop field is around 2.3 T, indicated by the black arrow. Here only the contribution to the Hall resistance from the intrinsic AH effect in MBT is included, while other contributions, e.g. the AH contributions from Mn-doped (Bi,Sb)\({}_{2}\)Te\({}_{3}\) and the ordinary Hall effect, have been excluded. The intrinsic AH contribution for other even SL MBT films is similar to that of 2 SL MBT films. See Ref. [43] for more details. (b) Magnetization configurations of even SL MBT films. Under an external magnetic field, there are two possible AFM configurations related by inversion symmetry, which are energetically equivalent. (c) Illustration of the two-surface-state model for \(g>0\) and \(B>0\), where "t" and "b" stand for the top and bottom surfaces, respectively. Each band is labeled with blue or red color, representing a positive or negative AH sign. The zeroth Landau levels are shown in red and the other Landau levels are shown in green. (d) When an electric field \(E\) is present, the top and bottom surfaces split in energy. For a negative \(E\) (\(V_{0}>0\)), the occupied zLL in AFM1 is lower in energy than that in AFM2, and therefore AFM1 becomes the favored configuration. At electron doping, the conduction band of the bottom surface contributes a positive AH sign (blue color). (e) For a positive \(E\) (\(V_{0}<0\)), AFM2 is the favored state, also with a positive AH sign at electron doping.
At the asymmetric potential \(V_{0}=0\), the 2 SL MBT presents zero Hall conductance as the top and bottom surface states cancel out each other's contributions, while the 3 SL MBT has a negative AH conductance for both electron and hole doping. When \(V_{0}\) is nonzero, the electric field can induce a nonzero AH response in 2 SL, which is positive (negative) in the electron-doped (hole-doped) regime. On the other hand, the doping does not have any influence on the sign of the AH conductance in 3 SL MBT, which only weakly varies with the electric field. In summary, at electron doping, the 2 SL and 3 SL MBT show opposite AH signs, while at hole doping, they have the same sign of AH conductance, which is consistent with the analysis based on the two-surface-state model.
We further investigate the hysteresis loop for even SL MBT films. In Fig. S2(d), we show the Hall conductivity \(\sigma_{xy}\) for all the AFM and FM states at electron density \(n=2\times 10^{12}\,cm^{-2}\) for \(V_{0}>0\) and sketch the expected favored states at different \(B\) by the red line. To determine the favored state between the FM and AFM states, we refer to the magnetization energy \(E_{M}\). We estimate the spin-flop transition to occur around the field \(B_{c}^{\pm}=\pm B_{c}\approx\pm J/M_{s}\) (as the in-plane magnetization is not included here, we did not study the canted AFM phase during the spin-flop transition), and for \(|B|>B_{c}\), the FM states have the lower magnetization energy and the magnetization of both SLs tends to align with the magnetic field, which means that FM1 is favored at \(B>B_{c}^{+}\) and FM2 is favored at \(B<B_{c}^{-}\). For \(|B|<B_{c}\), the AFM states have lower energy, and at \(0<B<B_{c}^{+}\) AFM1 is the favored state, while at \(B_{c}^{-}<B<0\) AFM2 is the favored state for \(V_{0}>0\), as demonstrated previously. As shown in Fig. S1(a), the typical value of the spin-flop field \(B_{c}\) for the 2 SL MBT film in the experiment is around 2.3 T [43]. When the magnetic field is swept from positive to negative, the favored state for 2 SL films goes through FM1 \(\rightarrow\) AFM1 \(\rightarrow\) AFM2 \(\rightarrow\) FM2, and correspondingly the sign of the Hall conductivity \(\sigma_{xy}\) varies as \(-\rightarrow+\rightarrow-\rightarrow+\), as shown by the red lines in Fig. S2(d). Since the phase transition between the AFM1 and AFM2 configurations around \(B\approx 0\) is of the first order, a hysteresis loop can form at small magnetic fields before the spin-flop transition that occurs at larger magnetic fields, as shown by the black dashed lines in Fig. S2(d), which gives rise to the form of the AH hysteresis loop observed in experiments in Fig. S1(a). The double sign change of the observed hysteresis loop in experiments can be understood as a two-step phase transition: a first-step transition between the two AFM states followed by a second-step transition between the AFM and FM states.
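The state-selection logic of this field sweep can be encoded in a few lines of Python; this is a schematic restatement of the analysis above (the spin-flop field and the AH signs follow the text for the \(V_{0}>0\) case), not an energy calculation.

```python
def favored_state(B, V0, B_c=2.3):
    """Schematic ground-state selection for an even-SL film during a
    field sweep at fixed V0 > 0 (B_c ~ 2.3 T is the experimental
    spin-flop field).  Returns the favored configuration and the sign
    of sigma_xy at electron doping."""
    assert V0 > 0, "this sketch encodes the V0 > 0 case of Fig. S2(d)"
    if B > B_c:
        return "FM1", "-"
    if B < -B_c:
        return "FM2", "+"
    # AFM window: for V0 > 0, AFM1 is favored at B > 0, AFM2 at B < 0
    return ("AFM1", "+") if B > 0 else ("AFM2", "-")

# Sweeping B from positive to negative reproduces the - + - + sequence
print([favored_state(B, V0=0.02) for B in (4.0, 1.0, -1.0, -4.0)])
```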
The dependence of the ground state energy of the AFM configurations on the electric field implies the possibility of electrical control of the AH effect near the AFM transition point. For a positive magnetic field, the Hall conductance of the AFM1 (AFM2) configuration is positive (negative) for \(V_{0}>0\), negative (positive) for \(V_{0}<0\), and zero at \(V_{0}=0\). If we sweep the electric field from positive to negative, as illustrated by the blue curve in Fig. S2(e), the favored configuration changes from AFM1 to AFM2 according to Fig. S2(b), so the Hall conductance of the favored state should always be positive. However, due to hysteresis, the system stays in AFM1 for a short moment after the electric field flips before transitioning to AFM2. As a result, the Hall conductivity first changes from positive to negative momentarily and then back to positive, so the sign change in the AH due to the hysteresis loop is an indicator of the transition between the two AFM states. This physical picture of electric control of the Hall conductance potentially corresponds to the recent experimental observations of the switching of the AH sign by scanning the electric field back and forth [44]. That observation was interpreted as a consequence of phenomenological axion electrodynamics; our zLL-based understanding is equivalent but provides a microscopic physical picture.
## III Orbital magnetization and magnetoelectric effect in MBT films
In this section, we discuss an alternative viewpoint on the magnetic transition between the different AFM states, based on the orbital magnetization created by the magnetoelectric effect in the MBT films. Magnetic moments have two origins: spin and orbital. Normally, the spin contribution to the magnetic moment is much larger than the orbital one. For odd-SL MBT films, there is an uncompensated net spin magnetization, and thus the orbital magnetization is negligible. For even SLs, however, the spin magnetization cancels out in the AFM configuration, and thus the orbital magnetization can play a role. The even-SL MBT films are expected to host the magnetoelectric effect, which means that an electric field can create a magnetization, given by \(\mathbf{M}=\alpha\mathbf{E}\), where \(\alpha\) is the magnetoelectric coefficient. The magnetoelectric effect has been previously studied in 2D magnetic materials [52; 53]. As shown below, in the MBT films this electric-field-induced orbital magnetization depends on the AFM configuration and has opposite signs for AFM1 and AFM2, so that it provides an alternative understanding of the energy difference between the two AFM configurations under an external magnetic field.
The orbital magnetization usually contains two parts, a trivial part and a topological part [54; 55; 56; 57],
\[\begin{split} m_{\text{total}}&=m_{\text{trivial}}+m_{\text{topological}},\\ m_{\text{trivial}}&=-\frac{ie}{2\hbar}\langle\nabla_{k}u|\times[H(k)-\epsilon(k)]|\nabla_{k}u\rangle,\\ m_{\text{topological}}&=-\frac{ie}{2\hbar}\langle\nabla_{k}u|\times 2[\epsilon(k)-\mu]|\nabla_{k}u\rangle,\end{split} \tag{5}\]
where \(\epsilon(k)\) and \(|u\rangle\) are the eigenvalues and eigenstates of the Hamiltonian \(H(k)=H_{e}+H_{e-M}\) of the system, and \(\mu\) is the Fermi energy.
Figure 2: (a) Illustration of the thin film model for 2 SL MBT films. Each SL has a thickness of \(d\). For the AFM states, the adjacent layers have anti-parallel magnetization, and for the FM states the two layers have parallel magnetization. (b) The energy density difference between AFM1 and AFM2, \(\Delta\varepsilon=\varepsilon_{1}-\varepsilon_{2}\), as a function of the asymmetric potential \(V_{0}\) and magnetic field \(B\) at electron density \(n=2\times 10^{12}\,cm^{-2}\). When \(B\cdot V_{0}>0\), \(\Delta\varepsilon<0\), and thus AFM1 is the favored state; when \(B\cdot V_{0}<0\), \(\Delta\varepsilon>0\), so AFM2 becomes the favored configuration. (c) Numerically calculated Hall conductance \(\sigma_{xy}\) as a function of \(V_{0}\) for the favored states of 2 SL and 3 SL samples at electron and hole doping under positive \(B\) with carrier density \(n=\pm 2\times 10^{12}\,cm^{-2}\). The 2 SL with electron doping exhibits the opposite AH sign compared to both the 2 SL with hole doping and the 3 SL. (d) Hall conductance \(\sigma_{xy}\) as a function of \(B\) at electron density \(n=2\times 10^{12}\,cm^{-2}\) for positive \(V_{0}\). The red line marks the expected favored state at different \(B\). The spin-flop transition field is around 2.3 T for even SL MBT films. The hysteresis loop is expected around \(B=0\) T before the spin-flop transition, as illustrated by the dashed black lines. (e) Electric control of the Hall conductivity for 2 SL. The yellow circles and green triangles stand for the Hall conductivity of AFM1 and AFM2, respectively. AFM1 and AFM2 are the favored states for \(V_{0}>0\) and \(V_{0}<0\), and they both display zero AH conductance at zero \(V_{0}\). As \(V_{0}\) sweeps from positive to negative (blue line), the Hall conductivity experiences a sign change as AFM1 persists through small negative \(V_{0}\) until AFM2 takes over.
Intuitively, the trivial part comes from the self-rotation of a wave packet around its center of mass, while the topological part originates from the center-of-mass motion in the presence of boundary potentials, which relates to the Berry curvature.
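Since the films are two-dimensional, only the out-of-plane component of Eq. (5) survives. Writing out the cross product (our rewriting, using only the definitions above) gives

\[m_{z}^{\text{trivial}}=\frac{e}{\hbar}\,\text{Im}\langle\partial_{k_{x}}u|[H(k)-\epsilon(k)]|\partial_{k_{y}}u\rangle,\qquad m_{z}^{\text{topological}}=\frac{2e}{\hbar}\,\text{Im}\langle\partial_{k_{x}}u|[\epsilon(k)-\mu]|\partial_{k_{y}}u\rangle,\]

a form convenient for numerical evaluation over the occupied states; note that the second expression reduces to a Berry-curvature weight since \(\epsilon(k)-\mu\) is a scalar.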
Figure S3(a)-(d) show the orbital magnetic moment as a function of the asymmetric potential \(V_{0}\) for 2-5 SLs in the thin film model at chemical potential \(\mu=0\). Figures S3(a) and (c) show the orbital magnetic moment for the 2 and 4 SLs in the AFM1 configuration, which vanishes at \(V_{0}=0\) and increases linearly with \(V_{0}\). This orbital magnetic moment is thus induced by the out-of-plane electric field, corresponding to the magnetoelectric effect. The signs of all the magnetic moment components are reversed for AFM2. In the presence of external magnetic fields, the electric-field-induced orbital magnetization can lead to the energy difference \(\Delta\varepsilon=\varepsilon_{AFM1}-\varepsilon_{AFM2}=\mathbf{M}_{orb,1}\cdot\mathbf{B}-\mathbf{M}_{orb,2}\cdot\mathbf{B}\) between the two AFM states, where \(\mathbf{M}_{orb,1}\) and \(\mathbf{M}_{orb,2}\) are the orbital magnetizations of AFM1 and AFM2, respectively. Our calculations show that a negative electric field, namely \(V_{0}>0\), induces a negative (positive) total orbital magnetization \(\mathbf{M}_{orb,1}=-M_{0}\hat{z}\) (\(\mathbf{M}_{orb,2}=M_{0}\hat{z}\)) for the AFM1 (AFM2) configuration, where \(M_{0}\) is a positive number and \(\hat{z}\) is the unit vector along the z axis. Thus \(\Delta\varepsilon=-2M_{0}B<0\) at positive magnetic field \(B\), which means that AFM1 is the favored configuration, and \(\Delta\varepsilon=2M_{0}B>0\) at negative magnetic field \(-B\), for which AFM2 is favored. Therefore, we arrive at the same conclusion as that from the perspective of the zLLs, namely AFM1 is the favored state for \(\mathbf{B}\cdot\mathbf{E}<0\) and vice versa.
We notice that in even SLs both the trivial and topological parts have non-zero slopes, while they remain constant in odd SLs, which is consistent with the results of the two-surface-state model (see Appendix Sec.III). We estimate the magnitude of the calculated orbital magnetization and compare it to the Bohr magneton. Using the Bohr magneton \(\mu_{B}=\frac{e\hbar}{2m_{e}}=\frac{2\pi e}{\hbar}R_{y}a_{0}^{2}\) with the Rydberg constant \(R_{y}\simeq 13.6\,eV\) and the Bohr radius \(a_{0}\simeq 5.29\times 10^{-11}\,m\), we derive \(\frac{e}{\hbar}=\frac{\mu_{B}}{2\pi a_{0}^{2}R_{y}}\simeq 4.18\,\mu_{B}/(nm^{2}\cdot eV)\). For an electric field strength \(E=0.1\,V/nm\), the orbital magnetic moment in the even SL is about \(10^{-1}\frac{e}{\hbar}\cdot eV\sim 0.4\,\mu_{B}/nm^{2}\). The Mn ions have a magnetic moment of \(\sim 5\mu_{B}\) and the in-plane lattice constant of MBT is \(a\simeq 0.43\,nm\) [2], so the spin magnetization is around \(27\,\mu_{B}/nm^{2}\). Hence the orbital magnetization is approximately two orders of magnitude smaller than the spin magnetization, and can thus only play a role when the spin magnetization is cancelled, e.g., in the AFM configuration.
We further extract the magnetoelectric coefficient \(\alpha\), defined by \(\mathbf{M}=\alpha\mathbf{E}\), from the orbital magnetization [58; 59; 60]. As shown in Fig. S3(e), we find that for even SLs, the trivial part of \(\alpha\) goes to zero and the topological part approaches the quantized value \(e^{2}/2h\) as the layer number increases at \(\mu=0\). On the other hand, the odd SLs always exhibit zero \(\alpha\), although the orbital magnetization is nonzero. Thus, there exists an even-odd effect of the magnetoelectric coefficient \(\alpha\), which oscillates between zero and nonzero values as the layer number of the MBT films alternates between odd and even, as shown in Fig. S3(e). The behavior of \(\alpha\) for a nonzero chemical potential \(\mu\) is shown in Appendix Sec.III.
We should distinguish the magnetoelectric coefficient \(\alpha\) from the axion parameter \(\theta\), a three-dimensional (3D) bulk quantity that characterizes axion electrodynamics in topological insulators (TIs) [13; 58; 59; 60; 61; 62]. In the field theory description of 3D TIs, a topological \(\theta\) term \(\theta e^{2}\mathbf{E}\cdot\mathbf{B}/2\pi h\) is added to the ordinary Maxwell electromagnetic Lagrangian, in which \(\mathbf{E}\) and \(\mathbf{B}\) are the conventional electric and magnetic fields inside an insulator, and \(\theta\) is a pseudo-scalar called the axion parameter. When time reversal symmetry is present, \(\theta\) can only take two quantized values, \(0\) and \(\pi\), corresponding to trivial insulators and TIs, respectively [62]. The axion parameter \(\theta\) can be directly connected to the magnetoelectric coefficient \(\alpha\) (an experimental observable) as \(\alpha=\theta e^{2}/2\pi h\) when all the surface states of the 3D TI are gapped. This can only be achieved in magnetic TIs with the AFM alignment of magnetization between the top and bottom surfaces, so that all surface states are gapped out. For the MBT films, this corresponds to even SLs in the large thickness limit, and the magnetoelectric coefficient \(\alpha\) approaches \(e^{2}/2h\) (Fig. S3(e)) as \(\theta=\pi\) in bulk MBT, namely the topological magnetoelectric effect of the axion insulator phase. On the other hand, for thick MBT films with odd SLs (parallel magnetization alignment between the two surfaces), our calculations suggest that the magnetoelectric coefficient \(\alpha\) is zero, different from the bulk axion parameter value (\(\theta=\pi\)). As its bulk \(\theta\) value is quantized to \(\pi\), such a phase in the 3D limit has previously been studied as an axion insulator with higher-order topology [63; 64; 65; 23; 24; 66; 25; 26], in which metallic modes exist at the hinges of finite samples.
Our results also suggest that even though the spin magnetization is zero in even SLs, the orbital magnetization can have an impact on magnetization measurements, e.g., the magnetic circular dichroism (MCD) experiment [42]. In fact, early RMCD experiments show a non-zero hysteresis loop around small magnetic fields, which was assumed to come from magnetic domains/disorders in the system [39; 40]. Our study of the orbital magnetization here provides an intrinsic mechanism for these observations. A sizable RMCD signal may also come from the p-d transition of the magnetic ions, while the orbital magnetization discussed here may be more sensitive to a small photon energy that matches the topological gap. Thus, examining the frequency dependence of the RMCD may provide information on the origins of the RMCD signals. We also notice that a recent work suggests that the RMCD signals for even-SL MBT can exist because the reflections from the top and bottom surface states are not identical, leading to a finite Kerr rotation at a certain photon wavelength in the Kerr experiment configuration, while the transmission part of the MCD (Faraday rotation) remains zero [67]. Therefore, while the RMCD signals may have contributions from both mechanisms, the transmission MCD signals will be dominated by the orbital magnetization discussed here.
Electric control of the orbital magnetization in even SL can also be realized by measuring the MCD signals while sweeping the electric field, as illustrated in Fig. S3(f). Following the blue curve, at positive \(V_{0}\), AFM1 is the favored configuration with negative orbital magnetization, and the magnetization vanishes at \(V_{0}=0\). As \(V_{0}\) turns to a small negative value, the system remains in AFM1 with a positive orbital magnetization, as the AFM1-AFM2 transition is of the first order, giving rise to the hysteresis loop, similar to the Hall resistance hysteresis loop discussed in Fig. S2(e). The transition from AFM1 to AFM2 is expected to occur at a certain negative value of \(V_{0}\), and the orbital magnetization changes to negative beyond this value of \(V_{0}\). Therefore, the orbital magnetization is expected to vary from negative to positive and then back to negative as the electric potential \(V_{0}\) sweeps from positive to negative.
## IV Conclusion and discussion
In summary, we use a two-surface-state model and a thin-film model to study the MBT films, and demonstrate that the presence of electric and magnetic fields can select a favored AFM configuration in even SL through the effect of the zLLs, which leads to a nonzero AH response and orbital magnetization in the system. Our results provide a possible explanation for the hysteresis loops of even and odd SL MBT observed in experiments. For real experimental samples, disorder and magnetic domains are inevitable. For example, antiferromagnetic domain walls have been imaged in MBT via cryogenic magnetic force microscopy [68]. Thus, the transition between the two AFM states discussed here corresponds to the enlargement and shrinkage of two opposite AFM domains in real samples. Furthermore, bulk states, in addition to surface states, may also play a role due to the chemical potential inhomogeneity.
Figure 3: Calculated orbital magnetic moment \(m\) as a function of \(V_{0}\) for (a) 2 SL, (b) 3 SL, (c) 4 SL and (d) 5 SL in the thin film model at chemical potential \(\mu=0\). The blue, red and yellow lines are for the trivial, topological and total magnetic moment, respectively. In even SLs, \(m\) displays a non-zero slope versus \(V_{0}\), while in odd SLs, \(m\) is constant under varying \(V_{0}\). (e) The trivial and topological parts of \(\alpha\) and the total \(\alpha\) as a function of SL number. (f) Illustration of the electric control of the orbital magnetic moment in even SL MBT.
Consequently, we expect more complicated behaviors due to the interplay between intrinsic and extrinsic mechanisms in real materials [45], and our prediction here is more applicable to high-quality samples with low carrier concentrations. We further propose that the orbital magnetization induced by an electric field in even SL can result in nonzero MCD signals in the AFM regime. More experimental studies are necessary to validate our prediction of the AH effect in even and odd SL MBT at electron and hole doping, as well as the possibility of electric control of the orbital magnetization in even SL MBT films.
## Acknowledgement
We would like to acknowledge Binghai Yan for helpful discussions. R.B.M. and C.-X.L. acknowledge the support through the Penn State MRSEC-Center for Nanoscale Science via NSF award DMR-2011839. Y.-F.Z. and C.-Z.C. acknowledge the support from the ARO Award (W911NF2210159) and the Gordon and Betty Moore Foundation's EPiQS Initiative (Grant GBMF9063 to C.-Z.C.). D.X. is supported by AFOSR MURI 2D MAGIC (FA9550-19-1-0390).
|
2310.11640 | Free-text Keystroke Authentication using Transformers: A Comparative
Study of Architectures and Loss Functions | Keystroke biometrics is a promising approach for user identification and
verification, leveraging the unique patterns in individuals' typing behavior.
In this paper, we propose a Transformer-based network that employs
self-attention to extract informative features from keystroke sequences,
surpassing the performance of traditional Recurrent Neural Networks. We explore
two distinct architectures, namely bi-encoder and cross-encoder, and compare
their effectiveness in keystroke authentication. Furthermore, we investigate
different loss functions, including triplet, batch-all triplet, and WDCL loss,
along with various distance metrics such as Euclidean, Manhattan, and cosine
distances. These experiments allow us to optimize the training process and
enhance the performance of our model. To evaluate our proposed model, we employ
the Aalto desktop keystroke dataset. The results demonstrate that the
bi-encoder architecture with batch-all triplet loss and cosine distance
achieves the best performance, yielding an exceptional Equal Error Rate of
0.0186%. Furthermore, alternative algorithms for calculating similarity scores
are explored to enhance accuracy. Notably, the utilization of a one-class
Support Vector Machine reduces the Equal Error Rate to an impressive 0.0163%.
The outcomes of this study indicate that our model surpasses the previous
state-of-the-art in free-text keystroke authentication. These findings
contribute to advancing the field of keystroke authentication and offer
practical implications for secure user verification systems. | Saleh Momeni, Bagher BabaAli | 2023-10-18T00:34:26Z | http://arxiv.org/abs/2310.11640v1 | Free-text Keystroke Authentication using Transformers: A Comparative Study of Architectures and Loss Functions
###### Abstract
Keystroke biometrics is a promising approach for user identification and verification, leveraging the unique patterns in individuals' typing behavior. In this paper, we propose a Transformer-based network that employs self-attention to extract informative features from keystroke sequences, surpassing the performance of traditional Recurrent Neural Networks. We explore two distinct architectures, namely bi-encoder and cross-encoder, and compare their effectiveness in keystroke authentication. Furthermore, we investigate different loss functions, including triplet, batch-all triplet, and WDCL loss, along with various distance metrics such as Euclidean, Manhattan, and cosine distances. These experiments allow us to optimize the training process and enhance the performance of our model. To evaluate our proposed model, we employ the Aalto desktop keystroke dataset. The results demonstrate that the bi-encoder architecture with batch-all triplet loss and cosine distance achieves the best performance, yielding an exceptional Equal Error Rate of 0.0186%. Furthermore, alternative algorithms for calculating similarity scores are explored to enhance accuracy. Notably, the utilization of a one-class Support Vector Machine reduces the Equal Error Rate to an impressive 0.0163%. The outcomes of this study indicate that our model surpasses the previous state-of-the-art in free-text keystroke authentication. These findings contribute to advancing the field of keystroke authentication and offer practical implications for secure user verification systems.
## 1 Introduction
Keystroke authentication is a type of biometric authentication that relies on the distinct typing patterns of an individual to verify their identity. With online security becoming increasingly crucial and the number of cyber-attacks on the rise, keystroke authentication has emerged as a promising solution to strengthen authentication systems [26]. Compared to traditional authentication methods like passwords and PINs, keystroke authentication offers several benefits. For one, it eliminates the need for individuals to remember complicated passwords, which can be hard to create and recall. Moreover, it provides an extra layer of security that is difficult to duplicate or steal since it is based on an individual's unique typing behavior. Additionally, keystroke authentication is non-intrusive and does not require any physical contact with the user, making it more user-friendly than other forms of biometric authentication. Given its potential to provide a high level of security without requiring additional hardware or software, keystroke authentication has gained significant attention in recent years [34, 23].
Keystroke identification technology involves analyzing the timing and duration of individual keystrokes, as well as the general layout during a typing session. These patterns depend on several factors, such as the size and shape of the individual's hands, the way they position their fingers on the keyboard, and the speed and rhythm of their typing [8]. By analyzing these patterns, it is possible to construct a profile of an individual's typing behavior, which can be utilized to authenticate them with great accuracy.
There are various applications of keystroke identification, such as controlling access to secure facilities and systems, authenticating online financial transactions, and verifying remote employees' identities [28]. Additionally, the technology can be utilized for continuous authentication, enabling the constant verification of users throughout their session instead of solely during the initial login [18].
Keystroke authentication can be classified into two main categories: fixed-text and free-text authentication. Fixed-text authentication entails using a predetermined text, such as a password or a passphrase, that the user needs to enter. On the other hand, free-text authentication permits the user to enter any text of their preference. Fixed-text authentication is typically more precise because the user types the same text each time, enabling easier identification of any irregularities or inconsistencies in typing behavior [25].
Conversely, free-text authentication can be more user-friendly, as it allows users to choose any text they desire, making it simpler to recall and type quickly.
While keystroke authentication offers numerous benefits, it does have some limitations that need to be addressed. The primary challenge lies in developing algorithms that can accurately analyze an individual's typing behavior, while accounting for variations in patterns caused by factors like stress, fatigue, injury, or keyboard design [24]. Furthermore, privacy concerns arise from the collection and storage of sensitive biometric data [12]. Additionally, keystroke authentication is vulnerable to attacks like replay attacks, where an attacker records an individual's keystrokes and gains unauthorized access [13]. Thus, it is crucial to develop secure and robust keystroke authentication systems that can withstand such attacks.
Despite the challenges, keystroke identification is still a vibrant area of research and development. As the demand for secure and user-friendly authentication methods grows, keystroke identification may play an increasingly pivotal role in the future of biometric authentication. This authentication method can be applied to a broad spectrum of devices, including desktops, laptops, smartphones, and tablets. Moreover, keystroke identification can be integrated with other authentication techniques like passwords and tokens to enhance security [2].
The literature on keystroke biometrics is vast and contains many promising approaches. However, it is important to acknowledge its limitations. Much of the previous work in keystroke authentication has concentrated on predetermined texts for authentication purposes. This fixed-text approach fails to accurately reflect genuine typing behavior, as users typically type diverse texts of varying lengths [20]. Furthermore, several earlier works have overlooked the importance of scalability in their approaches, relying instead on small datasets derived from limited user populations or contexts. This limitation severely hampers their practical applicability in real-world scenarios, where a vast amount of data is readily accessible [1].
This paper presents an innovative approach to keystroke authentication that overcomes limitations and substantially improves the accuracy and efficiency of the authentication process. Our approach is specifically designed for free-text keystroke authentication scenarios that more closely reflect real-world typing behavior. To achieve scalability and handle large amounts of data, we utilize a transformer neural network architecture [33], which has demonstrated excellent performance in processing vast amounts of information. Additionally, our work draws on a comprehensive dataset, meticulously selected to encompass a broad range of users, typing behaviors, and text variations. This extensive dataset enables our model to identify subtle patterns and nuances in individual typing styles, enhancing the accuracy and robustness of our keystroke authentication system. To summarize, our contributions include:
1. We evaluate the efficacy of two different architectures, specifically the bi-encoder and cross-encoder, in the context of keystroke authentication to determine the architecture that yields the highest effectiveness for this particular task.
2. In order to enhance the accuracy of our keystroke authentication model, we employ a range of contrastive learning techniques. This entails experimenting with different loss functions and distance metrics, followed by thorough comparisons to determine their effectiveness.
3. We explore multiple anomaly detection algorithms with the objective of improving the calculation of similarity scores when comparing queries to the enrollment set. Through this investigation, our aim is to enhance the identification of genuine and imposter keystrokes.
4. Our method attains a significantly lower Equal Error Rate (EER) compared to the previous literature on the widely-known and accessible Aalto keystroke dataset [8], highlighting the efficiency of our model for the purpose of keystroke authentication.
The remainder of this paper is organized as follows: Section 2 offers an in-depth review of the relevant literature on keystroke biometrics. Section 3 outlines our proposed methodology, including the transformer architecture and contrastive learning techniques employed. The experimental setup and the dataset used for training and evaluation are outlined in Section 4. The results and analysis are provided in Section 5. Finally, we conclude our study and delve into future prospects in Section 6.
## 2 Related Works
Keystroke authentication has witnessed significant advancements and contributions throughout its evolution. Monrose and Rubin [22] were pioneers in this field, introducing a groundbreaking algorithm for free-text keystroke authentication. Their approach utilized mean latency and standard deviation of digraphs to compare an unknown input against reference profiles. Building upon their work, Gunetti and Picardi [11] extended the algorithm to n-graphs, further improving its effectiveness.
Subsequent studies have explored different aspects of keystroke biometrics using similar methodologies. For instance, Huang et al. [15] investigated the impact of data size on free-text keystroke performance, while Crawford and Ahmadzadeh [7] examined the influence of user movement during typing on the effectiveness of mobile keystroke dynamics, determining user position before authentication.
Statistical learning algorithms have proven to be highly effective in analyzing keystroke dynamics. Researchers have employed various techniques to model and classify keystroke sequences. Hidden Markov Models (HMM) were used by Jiang et al. [16] to capture the timing information, and this approach was extended to Partially Observable Hidden Markov Models (POHMM) by Monaco and Tappert [21]. In a different study, Ayotte et al. [3] employed a Random Forest (RF) classifier to identify the most significant features in digraph-based algorithms. Additionally, other studies, such as those by Saevancee and Bhatarakosol [27], and Zahid et al. [36], demonstrated promising results using k-Nearest Neighbor (KNN) and fuzzy logic, respectively.
Among the various approaches, Support Vector Machine (SVM) has emerged as a popular choice in keystroke biometrics. Ceker and Upadhyaya [4] proposed a combination of the existing digraphs method for feature extraction and an SVM classifier for user authentication. Cilia and Inguanez [6] conducted an extensive study focusing on differentiating typing modes (one or two hands) and user activity (standing or moving) using an SVM-based keystroke verification system. Furthermore, SVM was employed by Gascon et al. [10] in conjunction with mobile sensor data, incorporating the user's motion information while entering text into the smartphone. Regardless of the classifier used, the fusion of keystroke dynamics with simultaneous movement sensor data from mobile devices yielded significant improvements in authentication results [30, 18].
Benchmark evaluations have played a crucial role in comparing and assessing the performance of different keystroke biometric algorithms. Killourhy and Maxion [17] collected a comprehensive keystroke-dynamics dataset and conducted a thorough comparison of various algorithms. Similarly, Kim et al. [19] performed a benchmark study on algorithms such as Gaussian and Parzen Window Density Estimation, one-class SVM, KNN, and k-means.
In recent years, the field of keystroke biometrics has witnessed remarkable progress in authentication performance, thanks to the emergence of deep learning techniques. Ceker and Upadhyaya [5] explored the applicability of deep learning by utilizing Convolutional Neural Networks (CNN) and Gaussian data augmentation on three diverse datasets. Recurrent Neural Networks (RNN) have also exhibited impressive results [9], while Multi-Layer Perceptron (MLP) architectures have been extensively explored [31].
Building upon the applicability of RNNs in capturing temporal patterns, Xiaofeng et al. [34] introduced a fusion of convolutional and recurrent neural networks to extract features for keystroke authentication. RNN variations have also demonstrated their effectiveness in keystroke biometrics. Li et al. [20] introduced a unique method for feature engineering, which involved generating image-like matrices using a hybrid model that combined a CNN with a Gated Recurrent Unit (GRU). Meanwhile, Acien et al. [1] developed TypeNet, a Siamese Long-Short-Term-Memory (LSTM) model for large-scale keystroke authentication in free-text scenarios. Additionally, Stragapede et al. [29] further advanced the field of keystroke authentication by incorporating a Transformer architecture and leveraging the power of Gaussian range encoding.
Notably, the study by Acien et al. [1] holds particular relevance to this paper, as they employed the same dataset and experimental protocol, achieving state-of-the-art results in free-text desktop keystroke authentication. Thus, their work serves as a significant benchmark for the current study, enabling a direct comparison of the proposed system.
## 3 Proposed Method
In this study, we focus on evaluating the effectiveness of two different architectures, namely the bi-encoder and cross-encoder, for keystroke authentication. Figure 1 presents the pipeline of the proposed architectures. Our goal is to identify the architecture that yields the highest effectiveness for this particular task. To enhance the accuracy of our keystroke authentication model, we employ a comprehensive range of contrastive learning techniques. To begin, we experiment with various loss functions and distance metrics. This allows us to explore different approaches to measure the similarity between keystroke patterns and distinguish genuine users from impostors. By systematically varying the loss functions and distance metrics, we can assess their impact on the performance.
### Pre-processing & Feature Extraction
The raw keystroke data consists mainly of a timestamp indicating when a key is pressed and released, along with the corresponding ASCII code. Keystroke sequences in free-text form can vary in length. In order to maintain a consistent input size, these variable-length sequences are either truncated or padded to a fixed size.
For each sequence, we extract five temporal features: hold latency (HL), press latency (PL), release latency (RL), inner key latency (IL), and outer key latency (OL), as illustrated in Figure 2. These temporal feature values are normalized to have a mean of 0 and a standard deviation of 1 before being used as input for the model. Additionally, we incorporate five spatial features that represent the key positions on the keyboard. Each key is assigned an embedding, which is learned during the training process. The spatial features are derived from the key embeddings of two consecutive keycodes. To achieve this, we apply a one-dimensional convolutional layer with a kernel size of 2 and stride of 1 to the key embeddings matrix. The number of channels in this convolutional layer is set to 5 to match the size of the temporal features. The final input sequence
provided to the model has 10 channels, obtained by concatenating the temporal and spatial features.
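As an illustration, a minimal NumPy sketch of the temporal-feature extraction could look as follows; the exact pairing of press/release events for IL and OL should be read off Figure 2, and the per-sequence normalization here is a simplification (the statistics could equally be computed over the training set).

```python
import numpy as np

def temporal_features(press, release):
    """Five temporal features per keystroke (timestamps in ms):
      HL = release_i    - press_i      (hold latency)
      PL = press_{i+1}  - press_i      (press latency)
      RL = release_{i+1} - release_i   (release latency)
      IL = press_{i+1}  - release_i    (inner key latency)
      OL = release_{i+1} - press_i     (outer key latency)
    The last keystroke has no successor, so inter-key features
    are padded with 0."""
    press = np.asarray(press, dtype=float)
    release = np.asarray(release, dtype=float)
    hl = release - press
    pl = np.append(press[1:] - press[:-1], 0.0)
    rl = np.append(release[1:] - release[:-1], 0.0)
    il = np.append(press[1:] - release[:-1], 0.0)
    ol = np.append(release[1:] - press[:-1], 0.0)
    feats = np.stack([hl, pl, rl, il, ol], axis=-1)  # (seq_len, 5)
    # z-normalize each feature to mean 0, std 1, as in the text
    return (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
```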
### Bi-Encoder
The bi-encoder model is characterized by its ability to independently process two input sequences and compare them. During the encoding process, there is no direct interaction between the two inputs, and a vector embedding is generated for each sequence. These embeddings capture the essential information from the input data and serve as a basis for calculating a similarity score using a distance metric. This enables the network to effectively determine the similarity between the given input sequences.
To obtain the vector embedding of the keystroke sequence, the extracted features are fed to the bi-encoder. Initially, these features are passed through a linear layer, projecting them to a higher dimension that aligns with the hidden size of the model. In order to preserve positional information during encoding, learnable positional embeddings are incorporated into the sequence. The bi-encoder adopts the widely used transformer architecture, which comprises stacked transformer layers. Each transformer layer consists of a self-attention layer and a feedforward layer each followed by a normalization layer and residual connections. The attention mechanism functions by mapping a set of queries, keys, and values to an output, which is computed as a weighted average of the values. These weights are determined based on the similarity between the key and its corresponding query:
\[\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{d}})V \tag{1}\]
Figure 1: Comparison between the two proposed architectures: The bi-encoder independently processes each keystroke, generating separate vector representations for each sequence. It utilizes a distance metric to assess the similarity between various sequences. In contrast, the cross-encoder takes both sequences as input and produces an output representation that captures their relationship. This representation is then passed through a classifier to determine the final score.
Figure 2: Example of the distinct temporal features extracted between two successive keystrokes.
Where \(Q\), \(K\), and \(V\) are the matrices of the queries, keys, and values respectively, and \(d\) is the dimension of the input vector. To derive the keys, queries, and values from the input sequence, the self-attention layer utilizes linear projection. Rather than employing a single attention function, each attention module employs multiple heads with distinct parameters. This enables the model to gather information from various representation subspaces:
\[\begin{split}\text{MultiHead}(Q,K,V)&=\text{ Concat}(head_{1},\dots,head_{h})W^{O}\\ \text{where }head_{i}&=\text{Attention}(QW_{i}^{Q},KW_{ i}^{K},VW_{i}^{V})\end{split} \tag{2}\]
Additionally, padded tokens are masked at this stage to prevent them from being attended to by the transformer. Employing self-attention allows the transformer model to efficiently capture dependencies among tokens in the input sequence, enhancing its ability to generate precise and contextually aware representations. Once the sequence has passed through the transformer, the final representation is obtained by applying mean-pooling to the unmasked tokens and passing the resultant vector through a feedforward layer, which further transforms the representation into a suitable space for contrastive learning.
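A minimal PyTorch sketch of this branch is given below, with the layer sizes taken from Section 4.2 (6 layers, hidden size 256, 8 heads, feedforward size 512, output size 64); the key-embedding convolution is omitted and `max_len` is a placeholder.

```python
import torch
import torch.nn as nn

class KeystrokeBiEncoder(nn.Module):
    """Sketch of the bi-encoder branch: project features, add learnable
    positional embeddings, encode, mean-pool unmasked tokens, project."""
    def __init__(self, in_ch=10, max_len=100, d=256, out=64):
        super().__init__()
        self.proj = nn.Linear(in_ch, d)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d))
        layer = nn.TransformerEncoderLayer(
            d_model=d, nhead=8, dim_feedforward=512,
            dropout=0.1, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Linear(d, out)

    def forward(self, x, pad_mask):
        # x: (batch, seq, 10); pad_mask: (batch, seq), True where padded
        h = self.proj(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h, src_key_padding_mask=pad_mask)
        keep = (~pad_mask).unsqueeze(-1).float()
        pooled = (h * keep).sum(1) / keep.sum(1).clamp(min=1.0)
        return self.head(pooled)  # embedding used by the distance metric
```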
#### 3.2.1 Distance Metrics
In contrastive learning, distance metrics are pivotal for assessing the similarity or dissimilarity between sample pairs. These metrics quantify the separation between embeddings in the learned representation space. We utilize three distinct distance metrics:
**Euclidean Distance:** This fundamental metric calculates the straight-line distance between two points in the embedding space. It is defined as the square root of the sum of squared coordinate differences. Euclidean distance is sensitive to both the magnitude and direction of vector differences.
**Manhattan Distance:** Also known as L1 distance, this metric measures the distance between two points in a grid-like system. Unlike Euclidean distance, which represents the straight-line distance, Manhattan distance follows a path along the grid's edges.
**Cosine Similarity:** This metric gauges the cosine of the angle between two vectors, providing a measure of their similarity rather than their distance. It is a common choice in contrastive learning. Cosine similarity is calculated as the dot product of the two vectors divided by the product of their magnitudes. Its range spans from -1 (indicating opposite directions) to 1 (representing parallel directions). Similarly, cosine distance can be defined by subtracting the cosine similarity from 1 and dividing the result by 2, ranging from 0 (for identical vectors) to 1 (for vectors pointing in opposite directions).
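The three metrics can be computed directly on embedding batches; a small sketch, using the cosine-distance convention defined above:

```python
import torch
import torch.nn.functional as F

def pairwise_distance(a, b, metric="cosine"):
    """Distances between embedding batches a, b of shape (n, d)."""
    if metric == "euclidean":
        return (a - b).pow(2).sum(-1).sqrt()
    if metric == "manhattan":
        return (a - b).abs().sum(-1)
    # cosine distance in [0, 1]: (1 - cosine similarity) / 2
    return (1.0 - F.cosine_similarity(a, b, dim=-1)) / 2.0
```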
#### 3.2.2 Training & Loss Function
To train the bi-encoder model, the embeddings are optimized to minimize the dissimilarity between positive pairs (i.e., pairs of similar sequences) and to maximize the dissimilarity between negative pairs (i.e., pairs of dissimilar sequences). To achieve this, we utilize a contrastive loss function. Specifically, we employ three distinct loss functions in the training process: triplet loss, batch-all triplet loss, and Weighted Decoupled Contrastive Learning (WDCL) loss. Each of these loss functions is defined as follows:
**Triplet Loss:** The triplet loss function is a common choice for training Siamese networks. It involves the selection of a triplet of samples, including an anchor sample, a positive sample (similar to the anchor), and a negative sample (dissimilar to the anchor). The objective is to maximize the distance between the anchor and negative sequences while minimizing the distance between the anchor and positive sequences. The triplet loss is defined as:
\[\text{Triplet Loss}=\max\{|z_{a}-z_{p}|-|z_{a}-z_{n}|+\text{margin},0\} \tag{3}\]
Here, \(z\) represents the embedding of the input, \(|.|\) denotes the distance function, and the margin is a hyperparameter that controls the separation between positive and negative pairs. The model learns to project similar sequences closer together while pushing dissimilar sequences apart through iteratively sampling and optimizing these triplets.
**Batch-all Triplet Loss:** In contrast to the triplet loss, which selects a single triplet at a time, the batch-all triplet loss considers all possible triplets within a training batch [14]. It exhaustively evaluates the loss for each triplet, encouraging the model to find the hardest triplets in the batch. This approach ensures that the model optimizes across the entire batch rather than relying on a single triplet at each iteration. The loss is defined as:
\[\text{Batch-all Triplet Loss}=\sum_{i}\max\{|z_{a}-z_{p}^{(i)}|-|z_{a}-z_{n}^{(i)}| +\text{margin},0\} \tag{4}\]
The sum is taken over all possible triplets in the batch, and the max function ensures that only the triplets violating the margin condition contribute to the loss.
**Weighted Decoupled Contrastive Learning (WDCL):** Decoupled Contrastive Learning (DCL) loss is employed to enhance the discriminative capabilities of the learned representations by decoupling the positive and negative samples during the training process [35]. Traditional contrastive learning encourages positive pairs to be closer together while pushing negative pairs further apart. In large-scale datasets, the number of negative samples can significantly outnumber positive samples, causing an imbalance that can affect the learning process. DCL addresses this issue by decoupling the positive and negative samples in the loss computation, providing more flexibility and control over the
learning process. We generalize the loss function to WDCL by introducing a weighting function:
\[\text{WDCL}=-w(z_{a},z_{p})e^{<z_{a},z_{p}>}+\log(\sum_{i}e^{<z_{a},z_{n}^{(i)}>}) \tag{5}\]
Here, \(w\) is the weight function, \(<\).\(>\) denotes the similarity function, and the sum is taken over all negative samples for the anchor. The weight function can be determined using a function that assigns higher weights to pairs with smaller similarities. We can choose \(w\) to be a negative von Mises-Fisher weighting function:
\[w(x,y)=\frac{e^{-<x,y>/k}}{\mathbb{E}[e^{-<x,y>/k}]} \tag{6}\]
Where \(k\) is a hyperparameter controlling the strength of the weighting. The intuition behind the weight function is that in practice, the data may exhibit varying degrees of similarity, and treating all pairs equally may not be optimal. The weight function addresses this issue by assigning higher weights when a positive pair of samples are far from each other.
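For concreteness, minimal sketches of Eq. (3) and Eqs. (5)-(6) are given below. The embeddings are assumed L2-normalized so that the inner product is the cosine similarity, and detaching the weight (treating it as a constant factor) is our implementation choice, not specified above.

```python
import torch
import torch.nn.functional as F

def triplet_loss(za, zp, zn, dist, margin=0.25):
    """Eq. (3) averaged over a batch of (anchor, positive, negative)."""
    return F.relu(dist(za, zp) - dist(za, zn) + margin).mean()

def wdcl_loss(za, zp, zn, k=0.1):
    """Eq. (5) with the negative-vMF weight of Eq. (6).
    za, zp: (n, d) L2-normalized positive pairs; zn: (n, m, d) negatives."""
    sim_p = (za * zp).sum(-1)                    # <z_a, z_p>
    w = torch.exp(-sim_p / k)
    w = (w / w.mean()).detach()                  # Eq. (6), batch average
    sim_n = torch.einsum("nd,nmd->nm", za, zn)   # <z_a, z_n^(i)>
    return (-w * torch.exp(sim_p)
            + torch.logsumexp(sim_n, dim=-1)).mean()
```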
### Cross-Encoder
In the cross-encoder architecture, the input consists of a pair of keystroke sequences: a source keystroke and a target keystroke. The cross-encoder model takes both sequences as input and encodes them together into a joint representation. It considers the interaction between them and produces a single output representation that captures their relationship.
In order to calculate the similarity score, the cross-encoder utilizes the extracted features from both the source and target keystrokes. Similar to the previous approach, these features are processed through a linear layer, which maps them to a higher dimension equivalent to the hidden size of the transformer. Positional embeddings are introduced to the sequences as well. At this point, the two sequences are padded to enable simultaneous processing by the cross-encoder. Additionally, token type embeddings are incorporated to differentiate between the two sequences within the transformer. The resulting sequence is then passed through the transformer to obtain the joint representation. The cross-encoder structure is similar to the bi-encoder, except that the cross-encoder utilizes self-attention to attend to both sequences. Following the transformer, the source and target sequences are separated and the representation for each sequence is obtained by applying mean-pooling and passing the resulting vector through a feedforward layer. Ultimately, the two representations are concatenated and fed into a linear layer with softmax activation, which converts the joint representation into a two-class probability distribution. The components of this vector represent the probability scores that indicate the level of similarity and dissimilarity between the two sequences.
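A corresponding sketch of the cross-encoder, again with the Section 4.2 sizes and with padding masks omitted for brevity:

```python
import torch
import torch.nn as nn

class KeystrokeCrossEncoder(nn.Module):
    """Sketch of the cross-encoder: the two projected sequences are
    concatenated with token-type embeddings, jointly encoded, pooled
    per sequence, and classified."""
    def __init__(self, d=256):
        super().__init__()
        self.type_emb = nn.Embedding(2, d)       # 0 = source, 1 = target
        layer = nn.TransformerEncoderLayer(
            d_model=d, nhead=8, dim_feedforward=512,
            dropout=0.1, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.cls = nn.Linear(2 * d, 2)           # similar vs. dissimilar

    def forward(self, src, tgt):                 # each (batch, L, d)
        types = torch.cat([
            torch.zeros(src.size(1), dtype=torch.long, device=src.device),
            torch.ones(tgt.size(1), dtype=torch.long, device=tgt.device)])
        h = torch.cat([src, tgt], dim=1) + self.type_emb(types)
        h = self.encoder(h)
        h_src = h[:, : src.size(1)].mean(dim=1)  # mean-pool each half
        h_tgt = h[:, src.size(1):].mean(dim=1)
        # logits; a softmax at inference yields the two-class scores
        return self.cls(torch.cat([h_src, h_tgt], dim=-1))
```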
#### 3.3.1 Training & Loss Function
The cross-encoder model is trained using a supervised learning approach, where the model is provided with a pair of inputs along with a label indicating their similarity or dissimilarity. The model is optimized to predict the correct label for a given keystroke pair. For this purpose, we employ the cross-entropy loss function.
Cross-entropy loss is a commonly used loss function in classification tasks. It measures the dissimilarity between the predicted probability distribution and the true probability distribution of the target variables. In the context of binary classification, where there are two possible classes (0 and 1), the cross-entropy loss can be defined mathematically as follows:
\[\text{Cross\_Entropy}(y,\hat{y})=-[y\cdot log(\hat{y})+(1-y)\cdot log(1-\hat{y})] \tag{7}\]
Here, \(y\) represents the true label and \(\hat{y}\) denotes the predicted probability. In this equation, the first term \(y\cdot\log(\hat{y})\) accounts for the loss when the true label \(y\) is 1, while the second term \((1-y)\cdot\log(1-\hat{y})\) captures the loss when the true label \(y\) is 0.
## 4 Experiment Setup
### Dataset
All the experiments in this paper are conducted using the Aalto desktop keystroke dataset [8]. This dataset is a comprehensive collection of keystroke biometric data obtained from physical keyboards, comprising over 5GB of information gathered from 168,000 participants. To collect the data, participants were instructed to memorize English sentences and type them as quickly and accurately as possible. The sentences were randomly selected from a pool of 1,525 sentences derived from the Enron mobile email corpus and the English Gigaword newswire corpus. The selected sentences range from a minimum of 3 words to a maximum of 70 characters. It is important to note that participants may have made errors while typing, resulting in keystrokes exceeding the 70-character limit as characters could be deleted. Each participant in the Aalto desktop dataset completed 15 sessions, with each session involving typing a single sentence. The captured raw data from each session includes a time series with three dimensions: keycodes, press times, and release times of the keystroke sequence. The timestamps are recorded in UTC format with millisecond precision, and the keycodes range from 0 to 255, corresponding to the ASCII code.
### Implementation Details
In order to ensure a fair comparison with previous studies, we followed the protocol introduced by Acien et al. [1] for our training process. We utilized 30,000 subjects from
the Aalto dataset exclusively for training purposes. The remaining portion of this dataset was only utilized for evaluating the model, ensuring that there was no overlap between the subjects used for training and evaluation.
We utilize a consistent batch size of 512 across all loss functions. The triplet loss integrates 512 anchors, along with positive and negative pairs, within each batch. Likewise, the batch-all triplet loss employs 64 unique subjects, with 8 samples selected from each subject in every batch. For the WDCL loss batch, we include 512 positive pairs chosen from different subjects, treating each as a negative sample for all other subjects. In the case of the softmax loss, the batch consists of 256 positive pairs and 256 negative pairs. The pairing selection is randomized for all loss functions.
When applying the triplet or batch-all triplet loss, we set the margin to 1 when using the Euclidean and Manhattan distances, and 0.25 when using the cosine distance. Each keycode was represented by a vector embedding of size 16. The transformer architecture consists of 6 layers with a hidden size of 256. We used 8 attention heads and set the intermediate size in the feedforward layers to 512. The transformer incorporates the gelu activation function along with a dropout rate of 0.1. Following the mean-pooling stage, the final feedforward layer reduces the representation size to 64.
All models were implemented in PyTorch, and we employed the Adam optimizer for training. The models underwent a total of 75,000 training steps, starting with an initial learning rate of 0.001 and diminishing by a factor of 0.1 after every 25,000 steps, following a step decay schedule.
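The hyper-parameters above map directly onto PyTorch's standard modules; the following sketch (variable names are ours, and the training-loop body is elided) wires them together:

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(256, 16)          # keycodes 0..255 -> 16-dim vectors
layer = nn.TransformerEncoderLayer(
    d_model=256, nhead=8, dim_feedforward=512,
    dropout=0.1, activation="gelu", batch_first=True,
)
encoder = nn.TransformerEncoder(layer, num_layers=6)
head = nn.Linear(256, 64)                  # final feedforward after mean-pooling

params = list(embedding.parameters()) + list(encoder.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25_000, gamma=0.1)

for step in range(75_000):
    # ... forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()                       # lr: 1e-3 -> 1e-4 -> 1e-5
```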
### Evaluation Metric
The Equal Error Rate (EER) serves as a widely accepted metric for evaluating the efficacy of keystroke authentication systems. It delineates the point at which the False Acceptance Rate (FAR) and the False Rejection Rate (FRR) intersect. The FAR is the rate at which impostor (non-matching) samples are incorrectly accepted as genuine, while the FRR is the rate at which genuine (matching) samples are incorrectly rejected. The computation of the EER entails a comparison of the model's output scores against a predetermined threshold. Should a user's score surpass this threshold, they are accepted; otherwise, they are rejected. Varying the threshold permits an analysis of the trade-off between FAR and FRR. A lower EER indicates a more precise and dependable authentication system. We investigate two distinct scenarios for EER calculation, and sketch the underlying computation in code after their descriptions:
**Adaptive EER:** This approach involves the selection of an individualized threshold for each subject, resulting in a subject-specific EER value. The final EER value is derived by averaging these subject-specific EER values. The adaptive EER approach bestows the advantage of superior EER performance, as the system tailors a specific threshold for each subject. Nevertheless, a potential drawback is the necessity of a substantial number of samples from each subject to accurately calibrate the threshold. If an insufficient number of samples is available, it becomes challenging to calculate the threshold accurately.
**Global EER:** In this approach, a single threshold is chosen to be applied universally across all subjects. When the system is trained offline, deploying it with a fixed, predetermined threshold proves to be more convenient and obviates the need for collecting an extensive number of samples from individual users.
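Both variants reduce to the same threshold sweep, applied either per subject (adaptive, averaging the per-subject results) or once to the pooled scores of all subjects (global); a minimal NumPy sketch with made-up score arrays:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed similarity scores and
    return the operating point where FAR and FRR are closest
    (higher score = more likely genuine)."""
    far_frr = []
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        far_frr.append((abs(far - frr), far, frr))
    _, far, frr = min(far_frr)
    return (far + frr) / 2

genuine  = np.array([0.92, 0.88, 0.95, 0.81])
impostor = np.array([0.30, 0.45, 0.85, 0.20])
print(equal_error_rate(genuine, impostor))   # 0.25
```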
In the present study, we consider both the adaptive EER and global EER scenarios within our experimental framework. By exploring and comparing both approaches, we aim to provide a comprehensive understanding of the system's performance and evaluate the effectiveness of each method.
### Evaluation Protocol
The authentication process involves comparing a given query with the subject's enrollment set. To evaluate the system's adaptability to varying amounts of enrollment data, we conducted experiments using different configurations of enrollment sessions. Each subject provided a total of 15 keystroke sequences. For our experiments, we randomly selected the enrollment data from the initial 10 samples of each subject, reserving the remaining 5 samples for testing purposes.
Queries can emanate from either the same individual (a genuine match) or a different subject (an impostor match). We treat this as an anomaly detection task to differentiate between impostor and genuine samples. When presented with a query, our objective is to determine whether it corresponds to the subject's enrollment set. To achieve this, we employed various anomaly detection algorithms, described below and sketched in code after the descriptions, and assessed their performance.
**Average Distance:** This method, involving the measurement of the distance between a given query and the enrollment set, proves to be a highly effective and straightforward technique for detecting anomalies. The final score is computed based on the average of these distances.
**Angle Based Outlier Detection (ABOD):** ABOD stands as a potent technique for detecting anomalous data by assessing their angles in relation to a reference set. It quantifies the angular deviation of a query from the center of the enrollment set, identifying anomalies as queries with significantly distinct angles compared to the enrollment set.
**Local Outlier Factor (LOF):** LOF is a highly regarded and powerful anomaly detection algorithm. Its primary objective is to evaluate the local density deviation of a query concerning its neighboring data. Intuitively, when a query
exhibits a significantly lower density compared to the rest of the enrollment data, it is highly probable to be classified as an anomaly.
**One-class Support Vector Machine:** One-class SVM is a widely adopted machine learning algorithm for anomaly detection. The basic idea behind a one-class SVM is to create a boundary or hyperplane that encloses the majority of data in a high-dimensional space. The goal is to find a region that maximizes the margin around the enrollment set while minimizing the number of data points outside that region. This region is referred to as the normal region, and any query outside this region is considered an anomaly.
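A possible wiring of these detectors on top of the learned embeddings is sketched below with scikit-learn (ABOD is typically taken from a separate package such as pyod and is therefore omitted); the embedding data and hyper-parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
enroll = rng.normal(size=(10, 64))   # hypothetical 64-dim enrollment embeddings
query = rng.normal(size=(1, 64))     # one test embedding

# Average distance: lower value = more likely genuine
avg_dist = np.linalg.norm(enroll - query, axis=1).mean()

# One-class SVM: decision_function > 0 means the query falls inside
# the learned "normal" region
svm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(enroll)
svm_score = svm.decision_function(query)[0]

# LOF in novelty mode, so it can score samples not seen during fitting
lof = LocalOutlierFactor(n_neighbors=5, novelty=True).fit(enroll)
lof_score = lof.decision_function(query)[0]

print(avg_dist, svm_score, lof_score)
```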
## 5 Experiment Results
In this section, we present a comprehensive analysis of the results obtained from our experiments, accompanied by an extensive ablation study of our models. Our objective is to delve into the influence of each model component on the overall performance and understand its impact on the task at hand. We provide insights on several aspects, including a comparison between the bi-encoder and cross-encoder architectures, the effects of different loss functions and distance metrics, the impact of enrollment data volume on performance, the effectiveness of anomaly detection algorithms used for similarity score calculation, and a comparison with previous state-of-the-art models.
### Architecture Comparison
One of the most critical factors that significantly influences the performance of a neural network is the training process. In this section, we thoroughly examine the impact of different loss functions, distance measures, and model architectures on the overall performance. Among the various loss functions used for training authentication models, the triplet loss stands out as one of the most commonly employed approaches. Therefore, we consider the triplet loss as our baseline and compare the performance of other methods against it. In Table 1, we present a comprehensive overview of the adaptive EER exhibited by different methods using varying numbers of enrollment samples (E) for a keystroke length of 50. Notably, the triplet loss attains an EER of 1.682%, 1.463%, and 0.773% when using Euclidean distance, Manhattan distance, and cosine distance, respectively, with only 1 enrollment sample. This outcome emphasizes the significance of the distance function for the model's performance. Consequently, we opt for the cosine distance to train the remaining models, as it yields superior performance.
Next, we compare the performance of the other methods against the triplet loss benchmark. For the bi-encoder architecture, the EER is 0.0756% and 0.0790% for the batch-all triplet loss and the WDCL loss, respectively, when using 1 enrollment sample. The cross-encoder architecture, in contrast, exhibits a higher EER of 0.852%, contrary to our initial expectations. Typically, cross-encoders tend to outperform bi-encoders due to their ability to leverage both reference and target sequences during the encoding process. However, our findings indicate that the cross-encoder architecture performs comparably to the triplet loss but falls significantly behind the batch-all triplet loss and the WDCL loss. This leads us to conclude that the superior performance achieved by the batch-all triplet loss and WDCL loss is not solely attributed to the bi-encoder architecture, but rather to the specific loss functions utilized. These loss functions enable the model to derive a more effective representation of the input by comparing multiple samples during training. The clear winner is the batch-all triplet loss, achieving the lowest EER of 0.0756% using only 1 enrollment sample. Additionally, due to the utilization of the bi-encoder architecture in this approach, the computational cost is lower compared to the cross-encoder architecture. As a result, the subsequent experiments conducted in this paper exclusively employ the batch-all triplet loss with the bi-encoder architecture as their foundation.
### Impact of Enrollment Data Quantity
The performance of keystroke authentication models is significantly affected by the available amount of enrollment data per subject. In this study, we investigate the
\begin{table}
\begin{tabular}{c|c|c|c c c c}
\hline
\multirow{2}{*}{Architecture} & \multirow{2}{*}{Loss Function} & \multirow{2}{*}{Distance Metric} & \multicolumn{4}{c}{Equal Error Rate (\%)} \\
 & & & E = 1 & E = 3 & E = 5 & E = 10 \\
\hline
\multirow{5}{*}{Bi-encoder} & Triplet & Euclidean & 1.6827 & 0.8945 & 0.7596 & 0.6549 \\
 & Triplet & Manhattan & 1.4638 & 0.6782 & 0.5519 & 0.4644 \\
 & Triplet & Cosine & 0.7737 & 0.3587 & 0.2791 & 0.2362 \\
 & WDCL & Cosine & 0.0790 & 0.0314 & 0.0263 & 0.0196 \\
 & Batch-all Triplet & Cosine & **0.0756** & **0.0305** & **0.0249** & **0.0186** \\
\hline
Cross-encoder & Softmax & - & 0.8521 & 0.3020 & 0.1878 & 0.1372 \\
\hline
\end{tabular}
\end{table}
Table 1: Comparing the effectiveness of different architectures, loss functions, and distance metrics in terms of EER for a keystroke length of 50 characters.
impact of enrollment data quantity from two perspectives: the number of enrollment samples and the length of each keystroke sequence. Figure 3 illustrates adaptive EER of the model across various numbers of enrollment samples and keystroke lengths.
We observed a notable decrease in EER of 0.1148% when increasing the length of the keystroke sequence from 20 to 100, while utilizing 5 enrollment samples. Within this reduction, a significant improvement of 0.1038% was observed when transitioning from a sequence length of 20 to 40. However, the improvement gradually diminishes in subsequent stages. A similar trend is evident when considering the number of enrollment samples. Increasing the number of enrollment samples from 1 to 10 resulted in an EER reduction of 0.0570% for an input sequence length of 50. Notably, a reduction of 0.0451% was observed when progressing from 1 to 3 samples. In general, increasing the amount of enrollment data enhances the model's performance. The most favorable outcome was achieved by utilizing all 10 enrollment samples with a length of 100, resulting in an EER of 0.0180%. These findings underscore the importance of having a sufficient amount of enrollment data for accurate keystroke authentication. Longer sequences and a higher number of enrollment samples contribute to improved model performance, ultimately enhancing the effectiveness and reliability of the authentication system.
### Feature Embeddings Analysis
A query is accepted in keystroke authentication based on its similarity with enrollment samples. The feature embeddings play a crucial role in this process, as an efficient keystroke authentication model should ensure that samples belonging to the same class (subject) are represented closely together while maintaining a significant distance from samples of other classes.
To evaluate the effectiveness of our approach, we provide feature embeddings of various keystroke samples from 12 distinct subjects. The embedding vectors in our proposed model consist of 64 dimensions. To visualize the distribution of these embeddings, we employ the t-SNE method [32], which allows us to map the high-dimensional vectors onto a two-dimensional space. The resulting visualization is depicted in Figure 4. Notably, the samples belonging to each subject are distinctly grouped together, exhibiting clear separation from the samples of other classes. This demonstrates the ability of our model to effectively discriminate between different subjects based on their keystroke patterns.
### Importance of Adaptive Threshold
The selection of an appropriate threshold is crucial as it directly impacts both the security and usability of the system. In this section, we will delve into the significance of employing an adaptive threshold and its impact on the performance of the authentication model, comparing it to the use of a global threshold. Figure 5 depicts the Receiver Operating Characteristic (ROC) curve obtained when employing a global threshold. This curve demonstrates the relationship between the FAR and the True Accept Rate (TAR) as the threshold of the model is adjusted. The EER corresponds to the point on the curve where it intersects with the diagonal line spanning from the top-left to the bottom-right corners.
A thorough comparison of employing adaptive threshold
Figure 4: Feature embeddings derived from keystroke data of various subjects, represented in a 2-dimensional space using t-SNE.
Figure 3: Influence of enrollment sample count and keystroke length on the model's EER.
versus global threshold is presented in Table 2. Upon analysis, we discover that by utilizing a global threshold, the model achieves an EER of 0.1342% using 10 enrollment samples of length 50, which is considerably higher than the adaptive mode's EER of only 0.0186%. This discrepancy underscores the significance of determining the appropriate threshold for each individual subject. However, it is important to note that in this study, we calculate the adaptive EER by testing different threshold values for each subject. In practical applications, calibrating the threshold for each subject presents numerous challenges. It requires a substantial number of enrollment samples, making it impractical in scenarios where only a few enrollment samples are available per subject.
### Influence of Scoring Algorithm
The similarity score of a given query is determined by measuring its distance from the enrollment set of a subject. In our approach to calculating this score, we treat the problem as an anomaly detection task. Our aim is to find the most suitable scoring algorithm, and thus, we explore various options and compare their results. Figure 6 provides a visual representation of how these algorithms behave when applied to a set of data points.
For each subject, we train the algorithm using the enrollment set and subsequently predict the similarity score for a new query. This score serves as the basis for determining whether to accept or reject different queries. We present the results obtained by each algorithm in Table 3. Notably, we observe that both the ABOD and LOF algorithms exhibit inferior performance when compared to simply averaging the distance between the query and the enrollment samples. We attribute this decline in performance to the large size of the embedding vector and the limited number of training samples available per subject.
However, amidst these results, we discovered that the one-class SVM algorithm stands out as the most promising one. It not only outperforms the average distance approach but also reduces the EER from 0.0186% to 0.0163% when trained on all 10 enrollment samples. Remarkably, even with fewer training samples, this algorithm manages to maintain its strong performance and yields results slightly better than the average distance approach. These findings
\begin{table}
\begin{tabular}{c|c c c c}
\hline
\multirow{2}{*}{Scenario} & \multicolumn{4}{c}{Equal Error Rate (\%)} \\
 & E = 1 & E = 3 & E = 5 & E = 10 \\
\hline
Global Threshold & 0.4634 & 0.2407 & 0.1754 & 0.1342 \\
Adaptive Threshold & **0.0756** & **0.0305** & **0.0249** & **0.0186** \\
\hline
\end{tabular}
\end{table}
Table 2: Comparing adaptive threshold to global threshold for varying numbers of enrollment data for a keystroke length of 50 characters.
Figure 5: ROC curve for the global threshold scenario obtained by iteratively changing the threshold value and evaluating the model, with varying enrollment sample sizes and a keystroke length of 50 characters.
\begin{table}
\begin{tabular}{c|c c c c}
\hline
\multirow{2}{*}{Method} & \multicolumn{4}{c}{Equal Error Rate (\%)} \\
 & E = 1 & E = 3 & E = 5 & E = 10 \\
\hline
Average Distance & **0.0756** & 0.0305 & 0.0249 & 0.0186 \\
Angle Based Outlier Detector & - & 0.3826 & 0.0402 & 0.0289 \\
Local Outlier Factor & - & 0.0480 & 0.0274 & 0.0192 \\
One-class SVM & - & **0.0292** & **0.0214** & **0.0163** \\
\hline
\end{tabular}
\end{table}
Table 3: Comparing the performance of different anomaly detection algorithms with varying numbers of enrollment samples for a keystroke length of 50 characters.
Figure 6: Illustrating the unique behaviors exhibited by diverse anomaly detection algorithms when applied to a collection of data points.
emphasize the significance of selecting an appropriate scoring algorithm for accurate similarity assessment.
### Comparison with the State-of-the-art
In this section, we present a comprehensive comparison between our obtained results and the findings of previous studies. To assess the performance of our proposed model, we evaluate its effectiveness in terms of EER across varying numbers of enrollment samples and keystroke lengths (L), while comparing it to the widely recognized TypeNet model. TypeNet is an LSTM-based architecture that has established itself as a benchmark by achieving state-of-the-art performance on the Aalto keystroke dataset. We selected TypeNet for comparison due to its utilization of the same dataset and experimental protocol, providing a fair and direct assessment of our approach. Table 4 illustrates the performance of our proposed model alongside TypeNet. Remarkably, our model outperforms TypeNet by a considerable margin, confirming the efficacy of the presented methodology. These results demonstrate the superior performance of our model, emphasizing its potential for enhancing keystroke authentication.
## 6 Conclusion and Future Work
In conclusion, this paper presented a comprehensive investigation into keystroke biometrics, with a focus on developing a highly effective system for free-text keystroke authentication. The research explored two transformer-based architectures, the bi-encoder and cross-encoder, and conducted experiments with various loss functions and distance metrics to optimize model training and enhance performance.
The evaluation of the proposed model on the Aalto desktop dataset shows promising results. The combination of a bi-encoder architecture with batch-all triplet loss and cosine distance yields exceptional outcomes. Remarkably, it achieves an EER of 0.0186% with 10 enrollment samples and maintains a commendable EER of 0.0756% even with just one enrollment sample. These findings highlight the model's effectiveness in accurately verifying users based on their unique keystroke patterns, making it a reliable tool for free-text keystroke authentication. Furthermore, we explored various algorithms to calculate similarity scores for queries from the enrollment set, including the implementation of a one-class SVM. This approach resulted in an outstanding EER of 0.0163% with 10 enrollment samples. These achievements present a significant advancement in free-text keystroke authentication, surpassing previous state-of-the-art approaches.
This work provides a highly effective model for free-text keystroke authentication, and the proposed methodologies can serve as a solid foundation for the development of advanced keystroke authentication systems with enhanced accuracy and security. Furthermore, the study's contributions go beyond its model performance by providing valuable insights for researchers and developers in the field. The exploration of various architectures, loss functions, and distance metrics gives a deeper understanding of their impact on performance, enabling others to build upon this work and devise even more efficient keystroke authentication systems.
However, the study also has some limitations. To address them, we suggest that future works explore additional datasets to assess the model's generalization to different populations and diverse scenarios. Additionally, including an analysis of potential adversarial attacks would ensure the proposed system's robustness and security in real-world deployment. Exploring hybrid approaches that combine keystroke biometrics with other authentication methods, such as password-based or behavioral biometrics, could offer enhanced security and usability.
|
2310.00995 | FMplex: A Novel Method for Solving Linear Real Arithmetic Problems | In this paper we introduce a novel quantifier elimination method for
conjunctions of linear real arithmetic constraints. Our algorithm is based on
the Fourier-Motzkin variable elimination procedure, but by case splitting we
are able to reduce the worst-case complexity from doubly to singly exponential.
The adaption of the procedure for SMT solving has strong correspondence to the
simplex algorithm, therefore we name it FMplex. Besides the theoretical
foundations, we provide an experimental evaluation in the context of SMT
solving. | Jasper Nalbach, Valentin Promies, Erika Ábrahám, Paul Kobialka | 2023-10-02T08:58:04Z | http://arxiv.org/abs/2310.00995v1 | # FMplex: A Novel Method for Solving Linear Real Arithmetic Problems
###### Abstract
In this paper we introduce a novel quantifier elimination method for conjunctions of _linear real arithmetic_ constraints. Our algorithm is based on the _Fourier-Motzkin variable elimination_ procedure, but by case splitting we are able to reduce the worst-case complexity from doubly to singly exponential. The adaption of the procedure for SMT solving has strong correspondence to the _simplex algorithm_, therefore we name it _FMplex_. Besides the theoretical foundations, we provide an experimental evaluation in the context of SMT solving.
## 1 Introduction
_Linear real arithmetic (LRA)_ is a powerful first-order theory with strong practical relevance. We focus on checking the satisfiability of _conjunctions_ of LRA constraints, which is needed e.g. for solving quantifier-free LRA formulas using _satisfiability modulo theories (SMT) solvers_. The problem is known to be solvable in _polynomial_ worst-case complexity but, surprisingly, the _ellipsoid_ method [14] proposed in 1980 by Khachiyan is still the only available algorithm that implements this bound. However, this method is seldom used in practice due to its high average-case effort. Instead, most approaches employ the _simplex_ algorithm introduced by Dantzig in 1947, which has a _singly exponential_ worst-case complexity, but which is quite efficient in practice. A third available solution is the _Fourier-Motzkin variable elimination (FM)_ method, proposed in 1827 by Fourier [10] and re-discovered in 1936 by Motzkin [24]. In contrast to the other two approaches, FM admits quantifier elimination, but it has a _doubly exponential_ worst-case complexity, even though there have been various efforts to improve its efficiency by recognizing and avoiding redundant computations (e.g. [12, 13]).
In this paper, we propose a novel method, which is derived from the FM method, but which turns out to have striking resemblance to the simplex algorithm. This yields interesting theoretical insights into the relation of the two established methods and the nature of the problem itself. More precisely, our contributions include:
* The presentation of _FMplex_, a new variable elimination method based on a divide-and-conquer approach. We show that it does not contain certain redundancies Fourier-Motzkin might generate and it lowers the overall complexity from _doubly_ to _singly_ exponential.
* An adaptation of FMplex for SMT solving, including methods to prune the search tree based on structural observations.
* A theorem formalizing connections between FMplex and the simplex algorithm.
* An implementation of the SMT adaptation and its experimental evaluation.
After recalling necessary preliminaries in Section 2, we introduce our novel FMplex method first for quantifier elimination in Section 3 and then for SMT solving in Section 4. We present related work and compare FMplex with other methods, first qualitatively in Section 5, and then experimentally in Section 6. We discuss future work and conclude the paper in Section 7.
An extended version of this paper including more detailed proofs can be found on arXiv [26].
## 2 Preliminaries
Let \(\mathbb{R}\), \(\mathbb{Q}\) and \(\mathbb{N}\) denote the set of real, rational respectively natural (\(0\notin\mathbb{N}\)) numbers. For \(k\in\mathbb{N}\) we define \([k]:=\{1,\ldots,k\}\). Throughout this paper, we fix \(n\in\mathbb{N}\), a set \(X=\{x_{1},\ldots,x_{n}\}\) and a corresponding vector \(\boldsymbol{x}=(x_{1},\ldots,x_{n})^{T}\) of \(\mathbb{R}\)-valued variables.
Matrices. For \(m\in\mathbb{N}\) let \(E^{(m)}\in\mathbb{Q}^{m\times m}\) be the identity matrix, and \(\boldsymbol{0}^{(m)}=(0\;\cdots\;0)^{T}\in\mathbb{Q}^{m\times 1}\). The \(i\)th component of \(\boldsymbol{f}\in\mathbb{Q}^{m\times 1}\cup\mathbb{Q}^{1\times m}\) is denoted by \(f_{i}\) and the component-wise comparison to zero by \(\boldsymbol{f}\geq 0\). For \(A\in\mathbb{Q}^{m\times n}\), \(\boldsymbol{a}_{i,\cdot}\in\mathbb{Q}^{1\times n}\) and \(\boldsymbol{a}_{\cdot,i}\in\mathbb{Q}^{m\times 1}\) denote the \(i\)th row respectively column vector of \(A\). Furthermore, for \(I\subseteq[m]\), \(A[I]\) denotes the sub-matrix of \(A\) containing only the rows with indices in \(I\). For \(\boldsymbol{f}\in\mathbb{Q}^{1\times m}\), \(\boldsymbol{f}A\) is a _linear combination_ of the rows \(i\in[m]\) of \(A\) with \(f_{i}\neq 0\). We call \(A\) _linearly independent_ if none of its rows is a linear combination of its other rows, and _linearly dependent_ otherwise. The _rank_ \(\text{rank}(A)\) of \(A\) is the size of a maximal \(I\subseteq[m]\) with \(A[I]\) linearly independent.
Linear Constraints. Let \(\boldsymbol{a}=(a_{1},\ldots,a_{n})\in\mathbb{Q}^{1\times n}\), \(b\in\mathbb{Q}\) and \(\sim\in\{=,\leq,<,\neq\}\) a _relation symbol_. We call \(\boldsymbol{ax}\) a _linear term_ and \(\boldsymbol{ax}\sim b\) a _linear constraint_, which is _weak_ if \(\sim\in\{=,\leq\}\) and _strict_ otherwise. A _system of linear constraints_, or short a _system_, is a non-empty finite set of linear constraints. For most of this paper, we only consider constraints of the form \(\boldsymbol{ax}\leq b\). We can write every system \(C=\{\boldsymbol{a}_{i,\cdot}\boldsymbol{x}\leq b_{i}\mid i\in[m]\}\) of such constraints in _matrix representation_ \(A\boldsymbol{x}\leq\boldsymbol{b}\) with suitable \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Conversely, every row \(\boldsymbol{a}_{i,\cdot}\boldsymbol{x}\leq b_{i},\ i\in[m]\) of \(A\boldsymbol{x}\leq\boldsymbol{b}\) is a linear constraint. Thus, the representations are mostly interchangeable; however, the matrix representation allows redundant rows in contrast to the set notation. As the latter will play a role later on, we will stick to the matrix representation.
Variable Assignment. An _assignment_ is a function \(\alpha:Y\to\mathbb{R}\) with domain \(dom(\alpha)=Y\subseteq X\). The _extension_ \(\alpha[x_{i}\mapsto r]\) is the assignment with domain \(dom(\alpha)\cup\{x_{i}\}\) such that \(\alpha[x_{i}\mapsto r](x_{j})=\alpha(x_{j})\) for all \(x_{j}\in dom(\alpha)\setminus\{x_{i}\}\) and \(\alpha[x_{i}\mapsto r](x_{i})=r\). For \(Z\subseteq Y\), the _restriction_ \(\alpha|_{Z}\) is the assignment with domain \(Z\) such that \(\alpha|_{Z}(x_{i})=\alpha(x_{i})\) for all \(x_{i}\in Z\). We extend these notations to sets of assignments accordingly.
The standard _evaluation_ of a linear term \(t\) under \(\alpha\) is written \(\alpha(t)\). We say that \(\alpha\)_satisfies_ (or is a solution of) a constraint \(c=(\boldsymbol{ax}\sim b)\) if \(\alpha(a_{1}x_{1}+\ldots a_{n}x_{n})\sim b\) holds, and denote this fact by \(\alpha\models c\). All solutions of \(c\) build its _solution set_\(sol(c)\). Similarly, \(\alpha\models(A\boldsymbol{x}\leq\boldsymbol{b})\) denotes that \(\alpha\) is a common solution of all linear constraints in the system \(A\boldsymbol{x}\leq\boldsymbol{b}\). A system is _satisfiable_ if it has a common solution, and _unsatisfiable_ otherwise. Note that each satisfiable system has also a rational-valued solution.
We will also make use of the following two well-known results.
**Theorem 1** (Farkas' Lemma [9]).: _Let \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Then the system \(A\boldsymbol{x}\leq\boldsymbol{b}\) is satisfiable if and only if for all \(\boldsymbol{f}\in\mathbb{Q}^{1\times m}\) with \(\boldsymbol{f}\geq 0\) and \(\boldsymbol{f}A=(0,\ldots,0)\in\mathbb{Q}^{1\times n}\) it holds \(\boldsymbol{f}\boldsymbol{b}\geq 0\)._
**Theorem 2** (Fundamental Theorem of Linear Programming, as in [22]).: _Let \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\). Then \(A\boldsymbol{x}\leq\boldsymbol{b}\) is satisfiable if and only if there exists a subset \(I\subseteq[m]\) such that \(A[I]\) is linearly independent, \(|I|=\text{rank}(A)\), and there exists an assignment \(\alpha:X\to\mathbb{R}\) with \(\alpha\models(A[I]\boldsymbol{x}=\boldsymbol{b}[I])\) and \(\alpha\models(A\boldsymbol{x}\leq\boldsymbol{b})\)._
### Fourier-Motzkin Variable Elimination
The _Fourier-Motzkin variable elimination_ (FM) [10, 24] method allows one to eliminate any \(x_{j}\in X\) from a system \(A\mathbf{x}\leq\mathbf{b}\) by computing \(A^{\prime}\mathbf{x}\leq\mathbf{b^{\prime}}\) with \(\mathbf{a^{\prime}}_{\cdot,j}=0\) and such that an assignment \(\alpha\) is a solution of \(A^{\prime}\mathbf{x}\leq\mathbf{b^{\prime}}\) if and only if there is \(r\in\mathbb{Q}\) so that \(\alpha[x_{j}\mapsto r]\) is a solution of \(A\mathbf{x}\leq\mathbf{b}\). Graphically, the solution set of \(A^{\prime}\mathbf{x}\leq\mathbf{b^{\prime}}\) is the projection of the solutions of \(A\mathbf{x}\leq\mathbf{b}\) onto \(X\setminus\{x_{j}\}\).
The idea of the FM method is as follows. For each \(i\in[m]\) with \(a_{i,j}\neq 0\), the constraint \(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}\) can be rewritten as either a _lower bound_ or an _upper bound_ on \(x_{j}\), denoted in both cases as \(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i})\):
\[\Big{(}\sum_{k\in[n]\setminus\{j\}}-\frac{a_{i,k}}{a_{i,j}}\cdot x_{k}\Big{)} +\frac{b_{i}}{a_{i,j}}\leq x_{j},\ \ \text{if}\ a_{i,j}<0,\ \ \ \ \ \ \ \text{resp.}\ \ \ \ \ \ \ x_{j}\leq\Big{(}\sum_{k\in[n]\setminus\{j\}}-\frac{a_{i,k}}{a_{i,j}}\cdot x_ {k}\Big{)}+\frac{b_{i}}{a_{i,j}},\ \ \text{if}\ a_{i,j}>0.\]
**Definition 1**.: _For \(A\in\mathbb{Q}^{m\times n}\), we define the index sets_
\[I_{j}^{-}(A):=\{i\in[m]\mid a_{i,j}<0\},\ \ \ I_{j}^{+}(A):=\{i\in[m]\mid a_{i,j}>0\}, \ \ \ \text{and}\ \ \ I_{j}^{0}(A):=\{i\in[m]\mid a_{i,j}=0\}.\]
\(I_{j}^{-}(A)\), \(I_{j}^{+}(A)\) and \(I_{j}^{0}(A)\) indicate the rows of \(A\mathbf{x}\leq\mathbf{b}\) which induce lower bounds, upper bounds and no bounds on \(x_{j}\), respectively. Due to the density of the reals, there exists a value for \(x_{j}\) that satisfies all bounds if and only if each lower bound is less than or equal to each upper bound. However, since in general the involved bounds are symbolic and thus their values depend on the values of other variables, we cannot directly check this condition. To express this, we let \(A^{\prime}\mathbf{x}\leq\mathbf{b^{\prime}}\) be defined by the constraint set
\[\{bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x}\leq b_{\ell})\leq bnd_{j}(\mathbf{a}_{u,\cdot }\mathbf{x}\leq b_{u})\mid(\ell,u)\in I_{j}^{-}(A)\times I_{j}^{+}(A)\}\ \ \ \ \cup\ \ \ \ \{\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}\mid i\in I_{j}^{0}(A)\}.\]
In matrix representation, the FM method applies the following transformation:
**Definition 2** (Fourier-Motzkin Variable Elimination).: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), and \(j\in[n]\). Let further \(m^{\prime}=|I_{j}^{-}(A)|\cdot|I_{j}^{+}(A)|+|I_{j}^{0}(A)|\) and \(F\in\mathbb{Q}^{m^{\prime}\times m}\) be a matrix consisting of exactly the following rows:1_
Footnote 1: Remember that we use lower case letters for rows of matrices with the respective upper case letter as name. Thus, \(\boldsymbol{e}_{i,\cdot}^{(m)}\) denotes the \(i\)th row vector of the identity matrix \(E^{(m)}\).
\[-\frac{1}{a_{\ell,j}}\cdot\mathbf{e}_{\ell,\cdot}^{(m)}+\frac{1}{a_{u,j}}\cdot\bm {e}_{u,\cdot}^{(m)}\ \ \ \text{for every pair}\ \ (\ell,u)\in I_{j}^{-}(A)\times I_{j}^{+}(A)\ \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \ \mathbf{e}_{i,\cdot}^{(m)}\ \ \ \text{for every}\ \ i\in I_{j}^{0}(A).\]
_Then the Fourier-Motzkin variable elimination \(\mathtt{FM}_{j}(A\mathbf{x}\leq\mathbf{b})\) of \(x_{j}\) from the system \(A\mathbf{x}\leq\mathbf{b}\) is defined as the system \(F\mathbf{x}\leq F\mathbf{b}\)._
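For illustration, one FM elimination step can be written compactly as follows (our sketch, using exact rationals); each constraint is a pair `(a, b)` encoding \(\boldsymbol{a}\boldsymbol{x}\leq b\):

```python
from fractions import Fraction
from itertools import product

def fm_eliminate(rows, j):
    """One Fourier-Motzkin step: eliminate variable j. Every lower bound is
    combined with every upper bound; rows not mentioning x_j are kept."""
    lower = [r for r in rows if r[0][j] < 0]
    upper = [r for r in rows if r[0][j] > 0]
    zero = [r for r in rows if r[0][j] == 0]

    out = list(zero)
    for (al, bl), (au, bu) in product(lower, upper):
        cl = Fraction(-1) / al[j]            # -1/a_{l,j} > 0
        cu = Fraction(1) / au[j]             # +1/a_{u,j} > 0
        a = [cl * x + cu * y for x, y in zip(al, au)]
        out.append((a, cl * bl + cu * bu))   # coefficient of x_j is now 0
    return out

# e.g. x <= 3 and x >= 1:
# fm_eliminate([([1], 3), ([-1], -1)], 0) -> [([Fraction(0, 1)], Fraction(2, 1))], i.e. 0 <= 2
```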
The consistency of \(A\mathbf{x}\leq\mathbf{b}\) can be checked by successively eliminating variables \(x_{n},\ldots,x_{1}\), obtaining intermediate systems \(A^{(n-1)}\mathbf{x}\leq\mathbf{b}^{(n-1)},\ldots,A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)}\). All entries of the transformation matrix \(F\) in the definition above are non-negative, and thus for any \(k\in\{0,\ldots,n-1\}\) and any row \(i^{\prime}\) in \(A^{(k)}\mathbf{x}\leq\mathbf{b}^{(k)}\), there exists \(0\leq\mathbf{f}\in\mathbb{Q}^{1\times m}\) s.t. \(\mathbf{f}A=\mathbf{a}_{i^{\prime},\cdot}^{(k)}\) and \(\mathbf{f}\mathbf{b}=b_{i^{\prime}}^{(k)}\), or in short: \(\sum_{i\in[m]}f_{i}\cdot(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i})=(\mathbf{a}_{i^{\prime},\cdot}^{(k)}\mathbf{x}\leq b_{i^{\prime}}^{(k)})\). We call this kind of linear combinations _conical combinations_. By Farkas' Lemma (Theorem 1), if \(A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)}\) is unsatisfiable, then so is \(A\mathbf{x}\leq\mathbf{b}\). If it is satisfiable, then it is satisfied by the empty assignment, which can be extended successively to a model of \(A^{(1)}\mathbf{x}\leq\mathbf{b}^{(1)},\ldots,A^{(n-1)}\mathbf{x}\leq\mathbf{b}^{(n-1)}\) and \(A\mathbf{x}\leq\mathbf{b}\).
A major drawback of the Fourier-Motzkin variable elimination is its doubly exponential complexity in time and space w.r.t. the number of eliminated variables. Moreover, many of the generated rows are redundant because they are linear combinations of the other rows, i.e. they could be omitted without changing the solution set of the system. Redundancies might already be contained in the input system, or they arise during the projection operation. While removing all redundancies is expensive, there are efficient methods for removing some redundancies of the latter type, for example Imbert's acceleration theorems [11, 12, 13].
**Lemma 1** (Redundancy by Construction).: _Let \(A\in\mathbb{Q}^{m\times n},\mathbf{b}\in\mathbb{Q}^{m\times 1}\) and \(F\in\mathbb{Q}^{m^{\prime}\times m}\). Let furthermore \(A^{\prime}=FA\), \(\mathbf{b}^{\prime}=F\mathbf{b}\) and \(i\in[m^{\prime}]\). If there exists \(\mathbf{r}\in\mathbb{Q}^{1\times m^{\prime}}\) with \(\mathbf{r}\geq 0\), \(r_{i}=0\) and \(\mathbf{r}F=\mathbf{f}_{i,\cdot}\) (i.e. the \(i\)th row of \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) is a conical combination \(\mathbf{r}FA\mathbf{x}\leq\mathbf{r}F\mathbf{b}\) of the other rows), then that row is redundant in \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\), i.e. the solution set does not change when omitting it: \(sol(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime})=sol(A^{\prime}[[m^{\prime}]\setminus \{i\}]\mathbf{x}\leq\mathbf{b}^{\prime}[[m^{\prime}]\setminus\{i\}])\)._
## 3 FMplex as Variable Elimination Procedure
The FM method encodes that none of the lower bounds on some variable \(x_{j}\) in a system \(A\mathbf{x}\leq\mathbf{b}\) is larger than any of its upper bounds. In our _FMplex_ method, instead of considering all lower-upper bound combinations at once, we _split the problem into a set of sub-problems_ by case distinction either on _which of the lower bounds is the largest_ or alternatively on _which of the upper bounds is the smallest_. For splitting on lower bounds, for each lower bound on \(x_{j}\) we consider solutions where this lower bound is maximal under all lower bounds, and at the same time not larger than any of the upper bounds. The upper bound case is analogous. Then \(A\mathbf{x}\leq\mathbf{b}\) is satisfiable if and only if there exists a solution in one of these sub-problems. Asymptotically, these sub-problems are significantly smaller than the systems produced by FM, so that in total our approach produces _at most exponentially_ many constraints after iterated application, in contrast to the doubly exponential effort of the FM method.
Formally, if there are no upper or no lower bounds on \(x_{j}\), then there is no need for case splitting and we follow FM using \(\exists x_{j}.\ A\mathbf{x}\leq\mathbf{b}\equiv A[I_{j}^{0}(A)]\mathbf{x}\leq\mathbf{b}[I_{j}^{0}(A)]\). Otherwise, we split into the following restricted projections, one for each candidate for the largest lower (or smallest upper) bound.
**Definition 3** (Restricted Projection).: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\) and \(j\in[n]\)._
* _If_ \(I_{j}^{-}(A)\neq\mathbf{\emptyset}\) _and_ \(I_{j}^{+}(A)\neq\mathbf{\emptyset}\)_, then for any_ \(i\in I_{j}^{-}(A)\cup I_{j}^{+}(A)\) _we fix_ \(F\in\mathbb{Q}^{(m-1)\times m}\) _arbitrarily but deterministically to consist of exactly the following rows:_ \[\frac{1}{a_{i,j}}\cdot\mathbf{e}_{i,\cdot}^{(m)}-\frac{1}{a_{i^{ \prime},j}}\cdot\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{-}(A) \setminus\{i\},\] \[-\frac{1}{a_{i,j}}\cdot\mathbf{e}_{i,\cdot}^{(m)}+\frac{1}{a_{i^{ \prime},j}}\cdot\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{+}(A) \setminus\{i\},\qquad\text{ and }\qquad\mathbf{e}_{i^{\prime},\cdot}^{(m)}\text{ for every }i^{\prime}\in I_{j}^{0}(A).\] _Then the_ restricted projection \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) _of_ \(x_{j}\) _w.r.t. the row_ \(i\) _from the system_ \(A\mathbf{x}\leq\mathbf{b}\) _is defined as the system_ \(FA\mathbf{x}\leq F\mathbf{b}\)_. We call_ \(F\) _the_ projection matrix _corresponding to_ \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\)_._
* _If_ \(I_{j}^{-}(A)=\mathbf{\emptyset}\) _or_ \(I_{j}^{+}(A)=\mathbf{\emptyset}\)_, then we define the projection matrix_ \(F\in\mathbb{Q}^{|I_{j}^{0}(A)|\times m}\) _to have exactly one row_ \(\mathbf{e}_{i^{\prime},\cdot}^{(m)}\) _for each_ \(i^{\prime}\in I_{j}^{0}(A)\)_, and define_ \(P_{j,\perp}(A\mathbf{x}\leq\mathbf{b})\) _as_ \(FA\mathbf{x}\leq F\mathbf{b}\)_._
The following lemma states a crucial result for our method: The solutions of the restricted projections for all lower (or all upper) bounds of a variable exactly cover the projection of the entire solution set.
**Lemma 2**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\) and \(I\in\{I_{j}^{-}(A),I_{j}^{+}(A)\}\). If \(I_{j}^{-}(A)\neq\mathbf{\emptyset}\) and \(I_{j}^{+}(A)\neq\mathbf{\emptyset}\), then_
\[sol(A\mathbf{x}\leq\mathbf{b})|_{X\setminus\{x_{j}\}}=\bigcup_{i\in I}sol(P_{j,i}(A \mathbf{x}\leq\mathbf{b})).\]
_Otherwise (\(I_{j}^{-}(A)=\mathbf{\emptyset}\) or \(I_{j}^{+}(A)=\mathbf{\emptyset}\)), it holds \(sol(A\mathbf{x}\leq\mathbf{b})|_{X\setminus\{x_{j}\}}=sol(P_{j,\perp}(A\mathbf{x}\leq\bm {b}))\)._
Proof.: The case \(I_{j}^{-}(A)=\mathbf{\emptyset}\) or \(I_{j}^{+}(A)=\mathbf{\emptyset}\) follows from the correctness of FM. Assume \(I=I_{j}^{-}(A)\), the case \(I=I_{j}^{+}(A)\) is analogous.
\(\supseteq\)**:**: Let \(i\in I_{j}^{-}(A)\) and \(\alpha\models P_{j,i}(A\mathbf{x}\leq\mathbf{b})\), then for all \(\ell\in I_{j}^{-}(A)\), \(u\in I_{j}^{+}(A)\) it holds \(\alpha(bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x}\leq b_{\ell}))\leq\alpha(bnd_{j}( \mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))\leq\alpha(bnd_{j}(\mathbf{a}_{u,\cdot}\mathbf{x} \leq b_{u}))\). Thus, \(\alpha[x_{j}\mapsto\alpha(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))]\models A \mathbf{x}\leq\mathbf{b}\).
\(\subseteq\)**:**: Let \(\alpha\models A\mathbf{x}\leq\mathbf{b}\) and \(i=\arg\max_{\ell\in I_{j}^{-}(A)}(\alpha(bnd_{j}(\mathbf{a}_{\ell,\cdot}\mathbf{x} \leq b_{\ell})))\), then for all \(u\in I_{j}^{+}(A)\) it holds
\(\alpha(bnd_{j}(\mathbf{a}_{i,\cdot}\mathbf{x}\leq b_{i}))\leq\alpha(bnd_{j}(\mathbf{a}_{u,\cdot}\mathbf{x}\leq b_{u}))\) and thus \(\alpha\models P_{j,i}(A\mathbf{x}\leq\mathbf{b})\).
**Definition 4** (FMplex Variable Elimination).: _For \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\) and \(*\in\{-,+\}\), we define_
\[\textsc{FMP}_{j}^{*}(A\mathbf{x}\leq\mathbf{b})=\begin{cases}\{P_{j,i}(A\mathbf{x}\leq\mathbf{ b})\mid i\in I_{j}^{*}(A)\}&\text{ if }I_{j}^{-}(A)\neq\mathbf{\emptyset}\text{ and }I_{j}^{+}(A)\neq\mathbf{\emptyset}\\ \{P_{j,\perp}(A\mathbf{x}\leq\mathbf{b})\}&\text{ otherwise}.\end{cases}\]
The FMplex elimination defines a set of restricted projections which can be composed to the full projection according to Lemma 2. Lifting this from sets to logic naturally results in the following theorem which demonstrates the usage of our method.
**Theorem 3**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), and \(j\in[n]\). Then_
\[\exists x_{j}.\,A\mathbf{x}\leq\mathbf{b}\quad\equiv\quad\bigvee_{S\in\textsc{FMP}_{j}^ {+}(A\mathbf{x}\leq\mathbf{b})}S\quad\equiv\quad\bigvee_{S\in\textsc{FMP}_{j}^{-}(A\bm {x}\leq\mathbf{b})}S.\]
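**Example 1**.: _Consider the system \(A\boldsymbol{x}\leq\boldsymbol{b}\) given by the constraints \(c_{1}:(-x_{1}-x_{2}\leq-4)\), \(c_{2}:(-2x_{2}\leq-2)\), \(c_{3}:(-2x_{1}+x_{2}\leq 1)\) and \(c_{4}:(x_{2}\leq 5)\), so that \(I_{2}^{-}(A)=\{1,2\}\) and \(I_{2}^{+}(A)=\{3,4\}\). Splitting on which lower bound on \(x_{2}\) is the largest yields \(\textsc{FMP}_{2}^{-}(A\boldsymbol{x}\leq\boldsymbol{b})=\{P_{2,1}(A\boldsymbol{x}\leq\boldsymbol{b}),P_{2,2}(A\boldsymbol{x}\leq\boldsymbol{b})\}\) with_

\[P_{2,1}(A\boldsymbol{x}\leq\boldsymbol{b})=(x_{1}\leq 3\wedge-3x_{1}\leq-3\wedge-x_{1}\leq 1)\quad\text{and}\quad P_{2,2}(A\boldsymbol{x}\leq\boldsymbol{b})=(-x_{1}\leq-3\wedge-2x_{1}\leq 0\wedge 0\leq 4).\]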
For eliminating multiple variables, we iteratively apply \(\textsc{FMP}^{-}\) or \(\textsc{FMP}^{+}\) to each restricted projection resulting from the previous elimination step. Note that we can choose the next variable to be eliminated as well as the variant independently in every branch.
**Example 2**.: _We continue Example 1, from which we eliminated \(x_{2}\) and now want to eliminate \(x_{1}\):_
\[\exists x_{1}.\,\exists x_{2}.\,A\mathbf{x}\leq\mathbf{b} \equiv \exists x_{1}.\bigvee_{S\in\mathbb{FMP}^{-}_{2}(A\mathbf{x}\leq\mathbf{b})}S\] \[\equiv \exists x_{1}.\,\,(x_{1}\leq 3\wedge-3x_{1}\leq-3\wedge-x_{1}\leq 1 )\,\vee\exists x_{1}.\,\,(-x_{1}\leq-3\wedge-2x_{1}\leq 0\wedge 0\leq 4)\]
_We eliminate the two quantifiers for \(x_{1}\) separately, using_
\[\mathbb{FMP}^{-}_{1}\left(x_{1}\leq 3\wedge-3x_{1}\leq-3\wedge-x_{1} \leq 1\right)=\{(0\leq 2\wedge 0\leq 2),(0\leq-2\wedge 0\leq 4)\}\text{ and }\] \[\mathbb{FMP}^{-}_{1}\left(-x_{1}\leq-3\wedge-2x_{1}\leq 0\wedge 0 \leq 4\right)=\{(0\leq 4)\}\]
_giving us the final result \(\exists x_{1}.\,\exists x_{2}.\,A\mathbf{x}\leq\mathbf{b}\,\,\equiv\,\,((0\leq 2\wedge 0 \leq 2)\vee(0\leq 4\wedge 0\leq-2))\vee(0\leq 4)\)._
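The restricted projection of Definition 3 admits an equally small sketch (ours, reusing the `(a, b)` row encoding from the FM sketch above); applied to Example 1 with 0-based indices, it reproduces the first branch of Example 2:

```python
from fractions import Fraction

def restricted_projection(rows, j, i):
    """P_{j,i}: eliminate variable j, designating row i (a lower bound,
    rows[i][0][j] < 0) as the largest lower bound on x_j."""
    ai, bi = rows[i]
    ci = Fraction(1) / ai[j]
    out = []
    for k, (ak, bk) in enumerate(rows):
        if k == i:
            continue
        if ak[j] == 0:                        # no bound on x_j: keep as is
            out.append((ak, bk))
        elif ak[j] < 0:                       # other lower bound: bnd_k <= bnd_i
            ck = Fraction(1) / ak[j]
            out.append(([ci * x - ck * y for x, y in zip(ai, ak)], ci * bi - ck * bk))
        else:                                 # upper bound: bnd_i <= bnd_k
            ck = Fraction(1) / ak[j]
            out.append(([-ci * x + ck * y for x, y in zip(ai, ak)], -ci * bi + ck * bk))
    return out

rows = [([-1, -1], -4), ([0, -2], -2), ([-2, 1], 1), ([0, 1], 5)]   # c1..c4
print(restricted_projection(rows, j=1, i=0))   # j=1 is x2, i=0 is c1 (0-based)
# -> x1 <= 3, -3*x1 <= -3, -x1 <= 1   (cf. the first branch of Example 1)
```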
We analyze the complexity in terms of the number of new rows (or constraints) that are constructed during the elimination of all variables:
**Theorem 4** (Complexity of \(\mathbb{FMP}\)).: _Let \(A\in\mathbb{Q}^{m\times n}\), and \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\). When eliminating \(n\) variables from \(A\mathbf{x}\leq\mathbf{b}\), the \(\mathbb{FMP}^{-}\) method constructs \(\mathcal{O}(n\cdot m^{n+1})\) new rows._
Proof.: The number \(N(m,n)\) of constructed rows is maximal if the system consists only of lower bounds and one upper bound. Then, \(\mathbb{FMP}^{-}\) yields \(m-1\) new systems of size \(m-1\), from which \(n-1\) variables need to be eliminated; thus \(N(m,n)\leq(m-1)\cdot((m-1)+N(m-1,n-1))\). With \(k=\min(n,m)\), we obtain \(N(m,n)\leq\sum\limits_{i=1}^{k}(m-i)\cdot\prod\limits_{j=1}^{i}(m-j)\leq n \cdot m^{n+1}\).
While still exponential, this bound is considerably better than the theoretical doubly exponential worst-case complexity of the FM method. Shortly speaking, FMplex trades one exponential step at the cost of the result being a decomposition into multiple partial projections. However, there are systems for which FMplex produces strictly more rows than the FM method: In the worst case from the above proof, FM obtains a single system of the same size as each of the sub-problems computed by \(\mathbb{FMP}^{-}\). Although in this case, we could simply employ \(\mathbb{FMP}^{+}\) instead, it is unclear whether there exists a rule for employing \(\mathbb{FMP}^{-}\) or \(\mathbb{FMP}^{+}\) that never produces more constraints than FM.
Like FM, FMplex keeps redundancies from the input throughout the algorithm, thus there might be identical rows in the same or across different sub-problems. But in contrast to FM, FMplex does not introduce any redundancies by construction in the sense of Lemma 1.
**Theorem 5**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\) and \(k\in[m]\). Assume \((A^{(0)}\mathbf{x}\leq\mathbf{b}^{(0)})=(A\mathbf{x}\leq\mathbf{b})\) and for all \(j\in[k]\), let \((A^{(j)}\mathbf{x}\leq\mathbf{b}^{(j)})\in\mathbb{FMP}^{-}_{j}(A^{(j-1)}\mathbf{x}\leq\mathbf{ b}^{(j-1)})\cup\mathbb{FMP}^{+}_{j}(A^{(j-1)}\mathbf{x}\leq\mathbf{b}^{(j-1)})\). Let \(F^{(1)},\ldots,F^{(k)}\) be the respective projection matrices, and \(F=F^{(k)}\cdot\ldots\cdot F^{(1)}\). Then \(F\) is linearly independent._
Proof.: By definition, the projection matrices are linearly independent, and thus so is their product \(F\).
## 4 FMplex as Satisfiability Checking Procedure
A formula is satisfiable if and only if eliminating all variables (using any quantifier elimination method such as FM or FMplex) yields a tautology. However, FMplex computes smaller sub-problems whose satisfiability implies the satisfiability of the original problem. Therefore, we do not compute the whole projection at once, but explore the decomposition using a depth-first search. The resulting search tree has the original system as root, and each node has as children the systems resulting from restricted projections. The original system is satisfiable if and only if a leaf without any trivially false constraints exists.
An example is depicted in Figure 2. We start with a basic version of the algorithm and then examine how the search tree can be pruned, resulting in two variants; all versions are given in Algorithm 1.
An important observation is that we can decide independently for each node of the search tree, which variable to eliminate next and whether to branch on lower or on upper bounds.
**Definition 5** (Branch Choices).: _The set of branch choices for a system \(A\boldsymbol{x}\leq\boldsymbol{b}\) is_
\[\text{branch\_choices}(A\boldsymbol{x}\leq\boldsymbol{b})= \{\{(x_{j},i)\mid i\in I_{j}^{-}(A)\}\mid j\in[n]\wedge I_{j}^{-} (A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},i)\mid i\in I_{j}^{+}(A)\}\mid j\in[n]\wedge I_{j}^ {-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},\bot)\}\mid j\in[n]\wedge(I_{j}^{-}(A)=\emptyset \lor I_{j}^{+}(A)=\emptyset)\}.\]
For an initial input \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\) with \(\widehat{m}\) rows, we define the depth-first search using the recursive method \(\mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}};A \boldsymbol{x}\leq\boldsymbol{b},F)\) in Algorithm 1a where \(A\boldsymbol{x}\leq\boldsymbol{b}\) is the currently processed sub-problem in the recursion tree. We track the relation of \(A\boldsymbol{x}\leq\boldsymbol{b}\) to \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\) in terms of linear combinations using the parameter \(F\). The initial call is defined as \(\mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}})= \mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}; \widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}},E^{(\widehat{m})})\). We allow that \(A\boldsymbol{x}\leq\boldsymbol{b}\) contains identical rows when they are obtained in different ways (which is reflected by \(F\)). We need to keep these duplicates for proving the results of this section.
Solutions. If a trivially satisfiable node is found, the algorithm constructs an assignment starting with the empty assignment and extends it in the reverse order in which the variables were eliminated. For every variable \(x_{j}\), a value is picked above all lower and below all upper bounds on \(x_{j}\) evaluated at the underlying assignment. By the semantics of the projection, the value of the designated (largest lower or smallest upper) bound on \(x_{j}\) is suitable.
Conflicts. We distinguish inconsistencies in \(A\boldsymbol{x}\leq\boldsymbol{b}\) by the following notions: We call a row \(i\) of \(A\boldsymbol{x}\leq\boldsymbol{b}\) a _conflict_ if it is of the form \(\boldsymbol{a}_{i,\cdot}=\boldsymbol{0}^{(n)}\) with \(b_{i}<0\). We call the conflict _global_ if \(\boldsymbol{f}_{i,\cdot}\geq 0\) and _local_ otherwise. In case of a global conflict, Farkas' Lemma allows to deduce the unsatisfiability of \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\), thus stopping the search before the whole search tree is generated. Then a set of conflicting rows \(K\) of the input system corresponding to \(\boldsymbol{f}_{i,\cdot}\) is returned. In particular, the set \(\{\widehat{\boldsymbol{a}}_{j,\cdot}\ \boldsymbol{x}\leq\widehat{b}_{j}\mid f_{i,j}\neq 0\}\) is a minimal unsatisfiable subset of the constraints in \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\). In case of a local conflict, we simply continue to explore the search tree. The algorithm returns _PARTIAL-UNSAT_ to indicate that \(A\boldsymbol{x}\leq\boldsymbol{b}\) is unsatisfiable, but the unsatisfiability of \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\) cannot be derived. This approach, formalized in Algorithm 1a, guarantees that the initial call will never return _PARTIAL-UNSAT_; we always find either a global conflict or a solution.
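This classification can be read off directly from the tracked linear combination; a minimal sketch (our helper, not part of the paper's pseudocode):

```python
def classify_conflict(a_row, b_i, f_row):
    """A row '0 <= b_i' with b_i < 0 is a conflict; it is global (a Farkas
    certificate for the input system) iff its linear combination f_row of
    the original rows is non-negative, and local otherwise."""
    if any(c != 0 for c in a_row) or b_i >= 0:
        return None                       # not a conflict
    return "global" if all(c >= 0 for c in f_row) else "local"
```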
The correctness and completeness of \(\mathtt{FMplex}\) follows from Theorem 3 and Theorem 6.
**Theorem 6**.: _Let \(\widehat{A}\in\mathbb{Q}^{\widehat{m}\times n}\) and \(\widehat{\boldsymbol{b}}\in\mathbb{Q}^{\widehat{m}\times 1}\). Then \(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}}\) is unsatisfiable if and only if the call \(\mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}})\) to Algorithm 1a terminates with a global conflict._
Figure 2: The search tree corresponding to Example 2. The very first leaf (bottom left) is already satisfiable, meaning that the rest would not need to be computed.
**Algorithm 1a** consists of the plain lines below; **Algorithm 1b** additionally uses the lines marked with "(1b)".

```
FMplex(Âx ≤ b̂; Ax ≤ b, F, N, I, lvl, bt_lvl)

Data:   Â ∈ Q^(m̂×n), b̂ ∈ Q^(m̂×1)
Input:  A ∈ Q^(m×n), b ∈ Q^(m×1), F ∈ Q^(m×m̂) s.t. FÂ = A and Fb̂ = b,
        N ⊆ [m̂], I ⊆ [m̂], lvl ∈ [n] ∪ {0}, bt_lvl : [m] → [n] ∪ {0}
Output: (SAT, α) with α ⊨ Ax ≤ b, or (UNSAT, K) with K ⊆ [m̂],
        or (PARTIAL-UNSAT, l, K) with l ∈ [n] ∪ {0} and K ⊆ [m̂]

 1  if A = 0 and b ≥ 0 then return (SAT, ())
 2  if ∃ i ∈ [m]. a_{i,·} = 0 ∧ b_i < 0 ∧ f_{i,·} ≥ 0 then      # global conflict
 3      return (UNSAT, { i' | f_{i,i'} ≠ 0 })
 4  if ∃ i ∈ [m]. a_{i,·} = 0 ∧ b_i < 0 then                    # local conflict (1b)
 5      i := argmin_{i ∈ [m]} { bt_lvl(i) | a_{i,·} = 0 ∧ b_i < 0 }
 6      return (PARTIAL-UNSAT, bt_lvl(i) - 1, { i' | f_{i,i'} ≠ 0 })
 7  K := ∅
 8  choose V ∈ branch_choices(Ax ≤ b, { B_{N,F}^{-1}(i) | i ∈ I })   # 2nd argument: ignored rows (1b)
 9  foreach (x_j, i) ∈ V do
10      compute A'x ≤ b' := P_{j,i}(Ax ≤ b) with projection matrix F'
                            and backtrack levels bt_lvl'
11      N' := N ∪ { B_{N,F}(i) } if i ≠ ⊥, else N' := N
12      switch FMplex(Âx ≤ b̂; A'x ≤ b', F'F, N', I, lvl+1, bt_lvl') do
13          case (UNSAT, K'):  return (UNSAT, K')
14          case (SAT, α):     return (SAT, α[x_j ↦ r]) for a suitable r ∈ Q
15          case (PARTIAL-UNSAT, l, K'):
16              if l < lvl then return (PARTIAL-UNSAT, l, K')   # backjump (1b)
17              else K := K ∪ K'
18      I := I ∪ { B_{N,F}(i) }                                 # (1b)
19  return (PARTIAL-UNSAT, lvl - 1, K)
```
Proof Idea for Theorem 6.: If \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) is unsatisfiable, then there exists a minimal unsatisfiable subset \(\widehat{K}\) of the corresponding constraints. We construct a path in the search tree induced by Algorithm 1a yielding a conflict that is a linear combination of \(\widehat{K}\). As \(\widehat{K}\) is minimal, the linear combination is positive, i.e. the conflict is global. The other direction of the equivalence follows immediately with Farkas' Lemma. Consult the extended version for a detailed proof.
### Avoiding Redundant Checks
We observe that each row \(i\) in a sub-problem \(A\mathbf{x}\leq\mathbf{b}\) in the recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) corresponds to a row \(\hat{\imath}\) in \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) in the sense that it is a linear combination of the rows \(\{\hat{\imath}\}\cup\mathcal{N}\) of \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\), where \(\mathcal{N}\subseteq[\widehat{m}]\) corresponds to the lower/upper bounds designated as largest/smallest one to compute \(A\mathbf{x}\leq\mathbf{b}\):
**Theorem 7**.: _Let \(\widehat{A}\in\mathbb{Q}^{\widehat{m}\times n}\) and \(\widehat{\boldsymbol{b}}\in\mathbb{Q}^{\widehat{m}\times 1}\). Let \(\mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}};A\boldsymbol{x}\leq\boldsymbol{b},F)\) be a call in the recursion tree of the call \(\mathtt{FMplex}(\widehat{A}\boldsymbol{x}\leq\widehat{\boldsymbol{b}})\) to Algorithm 1a, where \(A\in\mathbb{Q}^{m\times n}\) and \(\boldsymbol{b}\in\mathbb{Q}^{m\times 1}\) (by construction \(m\leq\widehat{m}\))._
_Then there exists a set \(\mathcal{N}\subseteq[\widehat{m}]\) such that_
1. \(A\mathbf{x}\leq\mathbf{b}\) _is satisfiable if and only if_ \((\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\wedge(\widehat{A}|\mathcal{N}|\mathbf{x}= \widehat{\mathbf{b}}[\mathcal{N}])\) _is satisfiable,_
2. _there exists an injective mapping_ \(\mathcal{B}_{\mathcal{N},\mathcal{F}}:[m]\to[\widehat{m}],i\mapsto\hat{\imath}\) _with_ \(\{\hat{\imath}\}=\{\hat{\imath}^{\prime}\in[\widehat{m}]\mid f_{i,\hat{\imath }^{\prime}}\neq 0\}\setminus\mathcal{N}\)_._
Proof Idea.: The statement follows by a straightforward induction over the elimination steps, where the original row corresponding to the chosen bound is added to \(\mathcal{N}\), and \(\mathcal{B}_{\mathcal{N},F}\) keeps track of which constraint corresponds to which original row. Consult the extended version for a detailed proof.
We call the above defined set \(\mathcal{N}\) the _non-basis_, inspired from the analogies to the simplex algorithm (discussed in Section 5.1). By the above theorem, the order in which a non-basis is constructed has no influence on the satisfiability of the induced sub-problem. In particular:
**Theorem 8**.: _Let \(A\in\mathbb{Q}^{m\times n}\), \(\mathbf{b}\in\mathbb{Q}^{m\times 1}\), \(j\in[n]\), and let \(i,i^{\prime}\in[m]\) be row indices with \(a_{i,j}\neq 0\) and \(a_{i^{\prime},j}\neq 0\). If \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) is unsatisfiable, then \(P_{j,i^{\prime}}(A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable._
Proof.: By Theorem 7, if \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) is unsatisfiable, then \((A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable, and trivially \((A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\wedge(\mathbf{a}_{i^{\prime},\cdot}\mathbf{x}=b_{i^{\prime}})\) is unsatisfiable as well. Using Theorem 7 in the other direction yields that \(P_{j,i^{\prime}}(A\mathbf{x}\leq\mathbf{b})\wedge(\mathbf{a}_{i,\cdot}\mathbf{x}=b_{i})\) is unsatisfiable.
This suggests that if \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) with non-basis \(\mathcal{N}\) has a child call for row \(i\) which does not return \(\mathit{SAT}\), then no other call in the recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\) where the corresponding non-basis contains \(\mathcal{B}_{\mathcal{N},F}(i)\) will return \(\mathit{SAT}\) either. Hence, we can ignore \(\mathcal{B}_{\mathcal{N},F}(i)\) as designated bound in the remaining recursion tree of \(\mathtt{FMplex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},F)\).
**Example 3**.: _Consider the system from Example 1, with an additional constraint \(c_{5}:(-x_{2}\leq 0)\). If \(c_{5}\) is tried first as greatest lower bound on \(x_{2}\), then the combination with \(c_{2}:(-2x_{2}\leq-2)\) yields the local conflict \(\frac{1}{2}c_{2}-c_{5}=(0\leq-1)\). Thus, this branch and, due to Theorem 8, any non-basis containing row \(5\) yields an unsatisfiable system._

_Next, we try \(c_{1}\) as greatest lower bound on \(x_{2}\), resulting in the combinations \(\frac{1}{2}c_{2}-c_{1}=(x_{1}\leq 3)\), \(c_{5}-c_{1}=(x_{1}\leq 4)\), \(c_{1}+c_{3}=(-3x_{1}\leq-3)\) and \(c_{1}+c_{4}=(-x_{1}\leq 1)\), with corresponding non-basis \(\{1\}\)._

_If we now choose \((x_{1}\leq 4)\) as smallest upper bound on \(x_{1}\), leading to the non-basis \(\{1,5\}\), another local conflict occurs: \((x_{1}\leq 3)-(x_{1}\leq 4)=(0\leq-1)\). As \(5\) is contained in the non-basis, we could know beforehand that this would happen and thus avoid computing this branch._
We update the \(\mathtt{FMplex}\) algorithm as shown in Algorithm 1b using the following definition:
**Definition 6**.: _The set of branch choices for \(A\mathbf{x}\leq\mathbf{b}\) with \(m\) rows w.r.t. \(I\subseteq[m]\) is_
\[\text{branch\_choices}(A\mathbf{x}\leq\mathbf{b},I)= \{\{(x_{j},i)\mid i\in I_{j}^{-}(A)\setminus I\}\mid j\in[n]\wedge I _{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},i)\mid i\in I_{j}^{+}(A)\setminus I\}\mid j\in[n] \wedge I_{j}^{-}(A)\neq\emptyset\wedge I_{j}^{+}(A)\neq\emptyset\}\] \[\cup\{\{(x_{j},\bot)\}\mid j\in[n]\wedge(I_{j}^{-}(A)=\emptyset \lor I_{j}^{+}(A)=\emptyset)\}.\]
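As a hedged illustration (not the SMT-RAT implementation), Definition 6 can be transcribed directly, assuming that \(I_{j}^{-}(A)\) and \(I_{j}^{+}(A)\) collect the rows whose coefficient of \(x_{j}\) is negative and positive, respectively, and encoding \(\bot\) as `None`:

```python
# A minimal sketch of Definition 6; assumes I_j^-(A) / I_j^+(A) are the
# rows with negative / positive coefficient of x_j, and encodes bottom
# (unrestricted elimination) as None.
import numpy as np

def branch_choices(A, ignored=frozenset()):
    m, n = A.shape
    choices = []
    for j in range(n):
        lower = {i for i in range(m) if A[i, j] < 0}   # I_j^-(A)
        upper = {i for i in range(m) if A[i, j] > 0}   # I_j^+(A)
        if lower and upper:
            # branch over all lower bounds, or over all upper bounds, on x_j
            choices.append({(j, i) for i in lower - ignored})
            choices.append({(j, i) for i in upper - ignored})
        else:
            # x_j has no lower or no upper bounds: eliminate without branching
            choices.append({(j, None)})
    return choices
```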
It is easy to see that this modification prevents visiting the same non-basis twice in the following sense:
**Theorem 9**.: _Let \(\text{FMPlex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A\mathbf{x}\leq\mathbf{b},\_, \mathcal{N},\_\,)\) and \(\text{FMPlex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}};A^{\prime}\mathbf{x}\leq\mathbf{b }^{\prime},\_,\mathcal{N}^{\prime},\_\,)\) be two calls in the recursion tree of a call to Algorithm 1b. Then either \(\mathcal{N}\neq\mathcal{N}^{\prime}\) or one of the systems occurs in the subtree below the other and only unbounded variables are eliminated between them (i.e. one results from the other by deleting some rows). _
Theorem 10 states that, still, Algorithm 1b always terminates with _SAT_ or a global conflict. This follows by a slight modification of the proof of Theorem 6, presented in the extended version of this paper.
**Theorem 10**.: _Let \(\widehat{A}\in\mathbb{Q}^{\widehat{m}\times n}\), and \(\widehat{\mathbf{b}}\in\mathbb{Q}^{\widehat{m}\times 1}\). Then \(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}}\) is unsatisfiable if and only if the call \(\text{FMPlex}(\widehat{A}\mathbf{x}\leq\widehat{\mathbf{b}})\) to Algorithm 1b terminates with a global conflict. _
### Backtracking of Local Conflicts
So far, we ignored local conflicts that witness the unsatisfiability of a given sub-problem. In this section, we will cut off parts of the search tree based on local conflicts and examine the theoretical implications.
We applied Farkas' Lemma on conflicting rows in some sub-problem that are positive linear combinations of rows from the input system. We can also apply Farkas' Lemma to conflicting rows which are positive linear combinations of some _intermediate_ system to conclude the unsatisfiability of the latter. Whenever such a conflict occurs, we can backtrack to the parent system of that unsatisfiable system. Instead of tracking the linear combinations of every row in terms of the rows of each preceding intermediate system, we can do an incomplete check: If a conflicting row was computed only by addition operations, then it is a positive linear combination of the involved rows. Thus, we assign to every intermediate system a level, representing its depth in the search tree and store for every row the level where the last subtraction was applied to the row (i.e. a lower (upper) bound was subtracted from another lower (upper) bound). If a row is conflicting, we can conclude that the intermediate system at this level is unsatisfiable, thus we can jump back to its parent.
Assume the current system is \(A\mathbf{x}\leq\mathbf{b}\) at level \(\mathtt{lvl}\) with \(m\) rows whose backtracking levels are stored in \(\mathtt{bt\_lvl}:[m]\rightarrow([n]\cup\{0\})\). If \(\mathtt{lvl}=0\), then \(\mathtt{bt\_lvl}\) maps all values to \(0\). When computing \(P_{j,i}(A\mathbf{x}\leq\mathbf{b})\) for some \(x_{j}\) and \(i\) with projection matrix \(F\), the backtracking levels of the rows in the resulting system \(FA\mathbf{x}\leq F\mathbf{b}\) are stored in \(\mathtt{bt\_lvl}^{\prime}\) where for each row \(i^{\prime\prime}\)
\[\mathtt{bt\_lvl}^{\prime}(i^{\prime\prime}):=\begin{cases}\max\{\mathtt{bt\_ lvl}(i),\mathtt{bt\_lvl}(i^{\prime})\}&\text{ if }f_{i^{\prime\prime},i},f_{i^{\prime\prime},i^{\prime}}>0\text{ and }f_{i^{\prime\prime},k}=0,\;k\notin\{i,i^{\prime}\}\\ \mathtt{lvl}&\text{otherwise.}\end{cases}\]
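The following sketch shows one plausible transcription of this bookkeeping; it additionally assumes that rows carried over unchanged keep their stored level, which matches the levels assigned in Example 4 below.

```python
# A hedged sketch of the backtrack-level bookkeeping; F is the projection
# matrix of the elimination step, so child row i'' is the combination
# F[i'',:] @ (Ax <= b). Rows carried over unchanged are assumed to keep
# their stored level.
import numpy as np

def child_bt_levels(F, bt_lvl, lvl):
    new_lvl = []
    for row in F:
        support = np.flatnonzero(row)
        if len(support) == 1 and row[support[0]] == 1:
            new_lvl.append(bt_lvl[support[0]])        # row retained as-is
        elif all(row[k] > 0 for k in support):
            # purely positive combination: inherit the parents' latest level
            new_lvl.append(max(bt_lvl[k] for k in support))
        else:
            new_lvl.append(lvl)   # a same-kind bound was subtracted here
    return new_lvl
```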
The backtracking scheme is given in Algorithm 1c, which returns additional information in the _PARTIAL-UNSAT_ case, that is the backtrack level \(l\) of the given conflict, and a (possibly non-minimal) unsatisfiable subset \(K\).
**Theorem 11**.: _Let \(\text{FMPlex}(\_;A\mathbf{x}\leq\mathbf{b},\_,\_,\_,\_,\_,\text{\_lvl},\_\,)\) be a call to Algorithm 1c, and consider a second call \(\text{FMPlex}(\_;A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime},\_,\_,\_,\text{\_lvl},\_ \text{\_lvl},\_)\) in the recursion tree of the first call. If \(A^{\prime}\mathbf{x}\leq\mathbf{b}^{\prime}\) has a local conflict in a row \(i\) with \(\mathtt{bt\_lvl}^{\prime}(i)=\mathtt{lvl}\), then \(A\mathbf{x}\leq\mathbf{b}\) is unsatisfiable._
Proof.: By construction of bt_lvl', \(\mathbf{a}^{\prime}_{i,\cdot}\mathbf{x}\leq b^{\prime}_{i}\) is a positive sum of rows from \(A\mathbf{x}\leq\mathbf{b}\), i.e. there exists an \(\mathbf{f}\in\mathbb{Q}^{1\times m}\) such that \((\mathbf{f}A\mathbf{x}\leq\mathbf{f}\mathbf{b})=(\mathbf{a}^{\prime}_{i,\cdot}\mathbf{x}\leq b^{\prime}_{i})\). Then by Farkas' Lemma, \(A\mathbf{x}\leq\mathbf{b}\) is unsatisfiable.
While it is complete and correct, Algorithm 1c does not always terminate with a _global_ conflict (i.e. Theorem 6 does not hold any more), even if we do not ignore any rows (i.e. omit Line 17):
**Example 4**.: _We use Algorithm 1c to eliminate variables with the static order \(x_{3},x_{2},x_{1}\) from the system on the right, always branching on lower bounds. We first choose row \(1\) as greatest lower bound on \(x_{3}\). Rows \(3\) and \(4\) are retained as they do not contain \(x_{3}\) and the combination of row \(1\) with row \(5\) is positive, so these constraints have backtrack level \(0\). The combination with row \(2\) has backtrack level \(1\) because both rows are lower bounds. Using this constraint as greatest lower bound on \(x_{2}\) and combining it with row \(4\) leads to a local conflict with backtrack level \(1\). This means that the call at level \(1\) is unsatisfiable and thus we backjump to level \(0\)._
_The second branch is visited, leading to the non-basis \(\mathcal{N}=\{2,5,1\}\) after three steps, where a local conflict lets us backjump to level \(0\) again. As there are no more lower bounds on \(x_{3}\), the algorithm returns UNSAT without finding a global conflict._
## 5 Relation to Other Methods
### Simplex Algorithm
The simplex method [7, 19] is an algorithm for linear optimization over the reals and is able to solve _linear programs_. The _general simplex_[8] is an adaptation for checking the satisfiability of systems of linear constraints. We illustrate its idea for the weak case.
Recall that given a system \(A\mathbf{x}\leq\mathbf{b}\) with \(m\) rows, by the fundamental theorem of linear programming (Theorem 2), \(A\mathbf{x}\leq\mathbf{b}\) is satisfiable if and only if there exists some maximal subset \(\mathcal{N}\subseteq[m]\) such that \(A[\mathcal{N}]\) is linearly independent and \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}]\mathbf{x}=\mathbf{b}[\mathcal{N}])\) is satisfiable; the latter can be checked algorithmically using Gaussian elimination, resulting in a system where each variable is replaced by bounds induced by the rows \(\mathcal{N}\). This system, along with the information which element in \(\mathcal{N}\) was used to eliminate which variable, is called a _tableau_. The idea of the simplex method is to do a local search on the set \(\mathcal{N}\) (called _non-basis_), that is, we replace some \(i\in\mathcal{N}\) (_leaving variable_) by some \(i^{\prime}\in[m]\setminus\mathcal{N}\) (_entering variable_), obtaining \(\mathcal{N}^{\prime}:=\mathcal{N}\cup\{i^{\prime}\}\setminus\{i\}\) such that \(A[\mathcal{N}^{\prime}]\) is still linearly independent. The key point is that the tableau representing \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}]\mathbf{x}=\mathbf{b}[\mathcal{N}])\) can be efficiently transformed into \((A\mathbf{x}\leq\mathbf{b})\wedge(A[\mathcal{N}^{\prime}]\mathbf{x}=\mathbf{b}[\mathcal{N}^{\prime}])\) (called a _pivot operation_), and progress of the local search can be achieved by the choice of \(i\) and \(i^{\prime}\). These local search steps are performed until either a satisfying solution or a conflict is found. Conflicts are detected using Farkas' Lemma (Theorem 1), i.e. a row in the tableau induces a trivially false constraint and is a positive linear combination of some input rows.
As suggested by Theorem 7, there is a strong correspondence between a tableau of the simplex algorithm and the intermediate systems constructed in FMplex. More precisely, if a non-basis of a simplex tableau is equal to the non-basis of a leaf system of Algorithm 1a, then the tableau is satisfiable if and only if the FMplex system is satisfiable. In fact, we could use the same data structure to represent the algorithmic states. Comparing the two algorithms structurally, FMplex explores the search space in a tree-like structure using backtracking, while simplex can jump between neighbouring leaves directly.
The idea for Algorithm 1b that excludes visiting the same non-basis in fact arose from the analogies between the two methods. Further, we observe a potential advantage of FMplex: Simplex has more non-bases reachable from a given initial state than the leaves of the search tree of FMplex, as FMplex only needs to explore all lower or all upper bounds of a variable while simplex does local improvements blindly. Heuristically, simplex cuts off large parts of its search space and we expect it often visits fewer non-bases than FMplex; however, as the pruning done by FMplex is built into the algorithm, we believe that there might be combinatorially hard instances on which it is more efficient than simplex.
### Virtual Substitution Method
_Virtual substitution_[21, 28] admits quantifier elimination for real arithmetic formulas. Here, we consider its application on existentially quantified conjunctions of linear constraints.
The underlying observation is that the satisfaction of a formula changes at the zeros of its constraints and is invariant between the zeros. Thus, the idea is to collect all _symbolic zeros_\(\mathrm{zeros}(\varphi)\) of all constraints in some input formula \(\varphi\). If all these constraints are weak, then a variable \(x_{j}\) is eliminated by plugging every zero and an arbitrarily small value \(-\infty\) into the formula, i.e. \(\exists x_{j}\). \(\varphi\) is equivalent to \(\varphi[-\infty/x_{j}]\vee\bigvee_{\xi\in\mathrm{zeros}(\varphi)}\varphi[\xi/ x_{j}]\). The formula \(\varphi[t/x_{j}]\) encodes the semantics of substituting the term \(t\) for \(x_{j}\) into the formula \(\varphi\) (which is a disjunction of conjunctions). As we can pull existential quantifiers into disjunctions, we can iteratively eliminate multiple variables by handling each case separately.
The resulting algorithm for quantifier elimination is singly exponential; further optimizations ([27] even proposes to consider only lower or upper bounds for the test candidates) lead to a procedure very similar to the FMplex quantifier elimination: substituting a test candidate into the formula is equivalent to computing the restricted projection w.r.t. a variable bound. However, our presentation allows us to exploit the correspondence with the FM method.
Virtual substitution can also be adapted for SMT solving [4] to a depth-first search similar to FMplex. A conflict-driven search for virtual substitution on conjunctions of weak linear constraints has been introduced in [16], which tracks intermediate constraints as linear combinations of the input constraints similarly to FMplex. Their conflict analysis is a direct generalization of the global conflicts in FMplex and is thus slightly stronger than our notion of local conflicts. However, their method requires storing and maintaining a lemma database, while FMplex stores all the information for pruning the search tree locally. The approaches have strong similarities, although they originate from quite different methods. Further, our presentation shows the similarities to simplex, is easily adaptable for strict constraints, and naturally extensible to work incrementally.
### Sample-Based Methods
There exist several depth-first search approaches, including McMillan et al. [23], Cotton [6] and Korovin et al. [17, 18], which maintain and adapt a concrete (partial) variable assignment. They share the advantage that combinations of constraints are only computed to guide the assignment away from an encountered conflict, thus avoiding many unnecessary combinations which FM would compute.
Similar to FMplex, these methods perform a search with branching, backtracking and learning from conflicting choices. However, they branch on variable assignments, with infinitely many possible choices in each step. Interestingly, the bounds learned from encountered conflicts implicitly partition the search space into a finite number of regions to be tried, similar to what FMplex does explicitly. In fact, we deem it possible that [17] or [18] try and exclude assignments from exactly the same regions that FMplex would visit (even in the same order). However, the sample-based perspective offers different possibilities for
heuristic improvements than FMplex: choosing the next assigned value vs. choosing the next lower bound; deriving constant variable bounds vs. structural exploits using Farkas' Lemma; possibility of very quick solutions vs. more control and knowledge about the possible choices.
Moreover, these methods offer no straightforward adaptation for quantifier elimination, while FMplex does. However, [23] and [6] can handle not only conjunctions, but any quantifier-free LRA formula in conjunctive normal form.
## 6 Experimental Evaluation
We implemented several heuristic variants of the FMplex algorithm, as well as the generalized _simplex_ and the _FM_ methods as non-incremental DPLL(T) theory backends in our SMT-RAT solver [5] and compared their performance in the context of satisfiability checking. Using the transformation given in [25] and case splitting as in [3], we extended the method to also handle strict and not-equal-constraints.
The base version of FMplex (Algorithm 1a) was tested with two different heuristics for the choice of the eliminated variable and for the order in which the branches are checked. These choices may strongly influence the size of the explored search tree; in the best case, the very first path leads to a satisfiable leaf or to a global conflict.
**Min-Fanout.** We greedily minimize the number of children: for any \(A\boldsymbol{x}\leq\boldsymbol{b}\) and \(I\), we choose \(V\in\textit{branch\_choices}(A\boldsymbol{x}\leq\boldsymbol{b},I)\) such that \(|V|\) is minimal; in case that this minimum is 1, we prefer choices \(V=\{(x_{j},\bot)\}\) for a \(j\in[n]\) over the other choices.
We prefer rows with a lower (earlier) backtrack level, motivated by finding a global conflict through trying positive linear combinations first. Moreover, if backtracking is used then we expect this heuristic to allow for backtracking further back on average.
**Min-Column-Length.** A state-of-the-art heuristic for simplex in the context of SMT solving is the _minimum column length_[15]: we choose the variables for leaving and entering the non-basis such that the number of necessary row operations is minimized. We resemble this heuristic in FMplex as follows: we prefer choices \(\{(x_{j},\bot)\}\) and if there is no such \(j\), we take the \(j\in[n]\) with minimal \(|I_{j}^{-}(A)|+|I_{j}^{+}(A)|\) and take the smallest choice between \(I_{j}^{-}(A)\) and \(I_{j}^{+}(A)\).
We first choose the rows which have the fewest non-zero coefficients (i.e. contain the fewest variables) to prefer sparse sub-problems. This can be understood as _Min-Row-Length_. A minimal sketch of the Min-Fanout selection is given below.
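The sketch reuses the `branch_choices` helper sketched in Section 4.2 and is an illustration under those assumptions, not the SMT-RAT code:

```python
# A hedged sketch of the Min-Fanout heuristic over the sets returned by
# branch_choices; choices of the form {(x_j, None)} (i.e. bottom) always
# have size 1, so the tie-break only matters at that size.
def min_fanout(choices):
    def key(V):
        unbounded = any(i is None for (_, i) in V)
        # prefer smaller fan-out; on ties, prefer unbounded eliminations
        return (len(V), 0 if unbounded else 1)
    return min(choices, key=key)
```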
We consider the following solver variants: FMplex-a-MFO and FMplex-a-MCL implement Algorithm 1a with the Min-Fanout and the Min-Column-Length heuristic, respectively. FMplex-a-Rand-1/2 denotes two variants of Algorithm 1a where all choices are taken pseudo-randomly with different seeds. FMplex-b-MFO implements Algorithm 1b and FMplex-c-MFO implements Algorithm 1c, both using the Min-Fanout heuristic. Our approach is also compared to non-incremental implementations FM and Simplex. The FMplex variants and FM always first employ Gaussian elimination to handle equalities.

All solvers were tested on the SMT-LIB [2] benchmark set for QF_LRA containing 1753 formulas. As all evaluated solvers are non-incremental, we also generated conjunctions of constraints by solving each of these QF_LRA problems using a DPLL(T) SMT solver with an FMplex-c-MFO theory solver backend, and extracting all conjunctions passed to it. If the solver terminated within the time and memory limits, we sampled 10 satisfiable and 10 unsatisfiable conjunctions (or gathered all produced conjunctions if there were fewer than 10). This amounted to 3084 (777 sat, 2307 unsat) additional benchmarks. The experiments were conducted on identical machines with two Intel Xeon Platinum 8160 CPUs (2.1 GHz, 24 cores). For each formula, the time and memory were limited to 10 minutes and 5 GB.
| Solver | SMT-LIB solved | sat | unsat | TO | MO | Conjunctions solved | sat | unsat | TO | MO |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Simplex | 958 | 527 | 431 | 714 | 81 | 3084 | 777 | 2307 | 0 | 0 |
| FM | 860 | 461 | 399 | 577 | 316 | 2934 | 747 | 2187 | 107 | 43 |
| FMplex-a-MFO | 814 | 432 | 382 | 840 | 99 | 2962 | 743 | 2219 | 122 | 0 |
| FMplex-a-MCL | 820 | 435 | 385 | 830 | 103 | 2965 | 742 | 2223 | 119 | 0 |
| FMplex-a-Rand-1 | 742 | 383 | 359 | 906 | 105 | 2806 | 668 | 2138 | 278 | 0 |
| FMplex-a-Rand-2 | 743 | 383 | 360 | 905 | 105 | 2823 | 671 | 2152 | 261 | 0 |
| FMplex-b-MFO | 822 | 434 | 388 | 830 | 101 | 2988 | 744 | 2244 | 96 | 0 |
| FMplex-c-MFO | 920 | 499 | 421 | 733 | 100 | 3084 | 777 | 2307 | 0 | 0 |
| Virtual-Best | 982 | 532 | 450 | 651 | 120 | 3084 | 777 | 2307 | 0 | 0 |

Table 1: Number of solved instances, timeouts (TO) and memory-outs (MO).
The results in Table 1 show that Simplex solved the most SMT-LIB instances, followed by our FMplex-c-MFO and then FM. Interestingly, FM solves fewer conjunctive instances than the base version of FMplex due to higher memory consumption (43 memory-outs for FM, while the others have none). We see that a reasonable variable heuristic makes a difference as FMplex-a-Rand-* perform much worse than FMplex-a-MFO and FMplex-a-MCL. However, between the latter two, there is no significant difference. While our first optimization used in FMplex-b-MFO has no big impact, the backtracking implemented in FMplex-c-MFO allows for solving more instances within the given resource limits.
The running times for each individual SMT-LIB instance depicted in Figures 2(a) and 2(b) reveal that FM and FMplex-c-MFO often behave similarly, but FM fails on a number of larger instances. We suspect that the smaller intermediate systems of FMplex are a main factor here. While Simplex is often faster than FMplex-c-MFO and solves 61 SMT-LIB instances not solved by FMplex-c-MFO, it fails to solve 23 instances on which FMplex-c-MFO succeeds (of these instances, FM solves 3 and 14, respectively). Accordingly, the Virtual-Best of the tested solvers performs significantly better than Simplex alone, indicating potential for a combination of Simplex and FMplex-c-MFO.
Figure 2(c) compares the number of constraints generated by FM and FMplex-c-MFO on the conjunctive inputs. Especially on larger instances, FMplex seems to be at an advantage. Motivated by Section 4.1, Figure 2(d) compares the number of Simplex pivots to the number of systems in FMplex-c-MFO. We see that neither is consistently lower than the other, though Simplex seems to be slightly superior. Due to the log-log scale, 1305 instances in which either measurement is 0 are not shown (920 instances for Simplex, 981 for FMplex-c-MFO).
The implementation and collected data are available at [https://doi.org/10.5281/zenodo.7755862](https://doi.org/10.5281/zenodo.7755862).
## 7 Conclusion
We introduced a novel method _FMplex_ for quantifier elimination and satisfiability checking for conjunctions of linear real arithmetic constraints. Structural observations based on Farkas' Lemma and the Fundamental Theorem of Linear Programming allowed us to prune the elimination or the search tree. Although the new method is rooted in the FM method, it has strong similarities with both the virtual substitution method and the simplex method.
The experimental results in the context of SMT solving show that FMplex is faster than Fourier-Motzkin and, although simplex is able to solve more instances than FMplex, there is a considerable number of instances which can be solved by FMplex but not by simplex.
In future work, we aim to combine the structural savings of FMplex with the efficient heuristics of simplex, i.e. we transfer ideas from FMplex to simplex and vice versa. Furthermore, we will investigate further tweaks and heuristics. For instance, we plan to adapt the perfect elimination ordering from [20] and work on an incremental adaptation for SMT solving. Last but not least, we plan to increase the applicability of FMplex as a quantifier elimination procedure, including a different handling of strict inequalities, which is more similar to FM.
|
2302.10997 | Robust Auto-landing Control of an agile Regional Jet Using Fuzzy
Q-learning | A robust auto-landing problem of a Truss-braced Wing (TBW) regional jet
aircraft with poor stability characteristics is presented in this study
employing a Fuzzy Reinforcement Learning scheme. Reinforcement Learning (RL)
has seen a recent surge in practical uses in control systems. In contrast to
many studies implementing Deep Learning in RL algorithms to generate continuous
actions, the methodology of this study is straightforward and avoids complex
neural network architectures by applying Fuzzy rules. An innovative, agile
civil aircraft is selected not only to meet future aviation community
expectations but also to demonstrate the robustness of the suggested method. In
order to create a multi-objective RL environment, a Six-degree-of-freedom
(6-DoF) simulation is first developed. By transforming the auto-landing problem
of the aircraft into a Markov Decision Process (MDP) formulation, the problem
is solved by designing a low-level Fuzzy Q-learning (FQL) controller. More
specifically, the well-known Q-learning method, which is a discrete RL
algorithm, is supplemented by Fuzzy rules to provide continuous actions with no
need to complex learning structures. The performance of the proposed system is
then evaluated by extensive flight simulations in different flight conditions
considering severe wind gusts, measurement noises, actuator faults, and model
uncertainties. Besides, the controller effectiveness would be compared with
existing competing techniques such as Dynamic Inversion (DI) and Q-learning.
The simulation results indicate the superior performance of the proposed
control system as a reliable and robust control method to be employed in real
applications. | Mohsen Zahmatkesh, Seyyed Ali Emami, Afshin Banazadeh, Paolo Castaldi | 2023-02-21T21:04:00Z | http://arxiv.org/abs/2302.10997v1 | # Robust Auto-landing Control of an agile Regional Jet Using Fuzzy Q-learning
###### Abstract
A robust auto-landing problem of a Truss-braced Wing (TBW) regional jet aircraft with poor stability characteristics is presented in this study employing a Fuzzy Reinforcement Learning scheme. Reinforcement Learning (RL) has seen a recent surge in practical uses in control systems. In contrast to many studies implementing Deep Learning in RL algorithms to generate continuous actions, the methodology of this study is straightforward and avoids complex neural network architectures by applying Fuzzy rules. An innovative, agile civil aircraft is selected not only to meet future aviation community expectations but also to demonstrate the robustness of the suggested method. In order to create a multi-objective RL environment, a Six-degree-of-freedom (6-DoF) simulation is first developed. By transforming the auto-landing problem of the aircraft into a Markov Decision Process (MDP) formulation, the problem is solved by designing a low-level Fuzzy Q-learning (FQL) controller. More specifically, the well-known Q-learning method, which is a discrete RL algorithm, is supplemented by Fuzzy rules to provide continuous actions with no need for complex learning structures. The performance of the proposed system is then evaluated by extensive flight simulations in different flight conditions considering severe wind gusts, measurement noises, actuator faults, and model uncertainties. Besides, the controller's effectiveness is compared with existing competing techniques such as Dynamic Inversion (DI) and Q-learning. The simulation results indicate the superior performance of the proposed control system as a reliable and robust control method to be employed in real applications.
keywords: Fuzzy Q-learning, Q-learning, Reinforcement Learning, Auto-landing +
## 1 Introduction
Recently, the aviation industry has faced a number of challenges, including increased emissions and congested airspace. As a result, future requirements can be characterized by safety and efficiency. On the efficiency side, the main goals are reducing emissions through lower fuel consumption and making flights both economical and faster. On the safety side, the challenge is managing high-speed flight in increasingly congested airspace. These requirements strongly encourage the development of novel aircraft configurations with advantageous characteristics. The Scope Clause, on the other hand, is an agreement that places a cap on the number of aircraft seats in order to prevent outsourcing and guarantee the jobs of union pilots. Therefore, growth of the Modern Regional Jet (MRJ) fleet is inevitable, which demonstrates the importance of reliable flight control systems. There is renewed interest in high-performance aerodynamic configurations such as TBW aircraft [1; 2; 3]. Apart from that, the aviation industry has expressed interest in TBWs because of their fuel-burn efficiency [4]. Despite considerable dynamic modeling research [5], it appears that no reliable auto-landing controller has been developed for these configurations.
Studies show that the landing procedure is the riskiest part of flying and calls for expert pilotage abilities. The International Civil Aviation Organization (ICAO) Annex 6 document introduces three alternate aerodromes (Take-off, En-route, and Destination) where an MRJ shall land in the event of particular failure scenarios. In this regard, [6] examined the most frequent causes of flying incidents and accidents during the approach phase. The most hazardous elements included pilot erroneous decisions (74%), skipping or completing activities incorrectly (72%), and ineffective crew communication, mutual cooperation, and mutual control (63%). These elements underscore the need for modern, highly dependable autonomous controllers to bolster safety. Several studies have applied advanced control methods to the landing procedure. For instance, [7] developed a double-loop nonlinear dynamic inversion controller based on a deep failure estimator made up of layers of Long-short Term Memory (LSTM) and Convolution Neural Networks (CNN). These neural networks were trained for severe stuck failures, with time-series data on landing trajectory patterns in actuator-faulty and healthy landing condition simulations. In order to control the trajectory of a tailless and blended-wing UAV confronting air turbulence and sensor measurement errors, a unique auto-landing framework was presented in [8]. In this research, a Backstepping-based controller is used to control the attitude angles, a Dynamic Inversion-based controller creates the throttle signal to keep a constant velocity, and an adaptive disturbance observer estimates the atmospheric turbulence to track the proposed landing trajectory. In [9], a generalized Anti-windup scheme based on traditional PID controllers, together with a phase compensation system, was used to augment a neural-aided Sliding Mode controller in order to tackle stuck actuator failures during an auto-landing scenario. It is demonstrated that the fault-tolerance capacity of the neural-aided Sliding Mode controller is greatly improved by adding Anti-windup and phase compensation. An aircraft heading angle is guided by Deep Q-learning (DQL) in [10], allowing it to land in the desired 2-dimensional field; in this study, the dynamic modeling of the aircraft is not included. In [11], a Deep Deterministic Policy Gradient (DDPG) was used to control an Unmanned Aerial Vehicle's (UAV) desired path for the landing flare phase in the longitudinal channel under wind disturbances. The structure of the proposed method includes two deep ANNs in an Actor-critic architecture. Similarly, the DDPG approach is employed in the outer loop of a landing procedure in [12], whereas the inner loop is controlled by a Proportional-integral-derivative (PID) controller. It is important to note that several classic techniques for aircraft attitude control have been developed to improve landing quality. The main shortcoming of these traditional theories is their lack of adaptation over extended operating points. To overcome this weakness, some existing approaches have increased their robustness by utilizing ANNs.
In this regard, publications like [13] and [14] can be considered. In the first paper, an \(H_{2}\) controller is designed to handle actuator faults and wind disturbances. The focus of the second paper is on designing an online neural-aided controller to increase the robustness of existing controllers for fault tolerance. Although not specific to the landing phase, a different group of publications also deals with controller design. For instance, [15] presented a layered Model Predictive Control system based on simple sparse rapidly-exploring random trees for path planning, path control, velocity control, and angular velocity control. This method improved the estimation of a real-time, nonlinear, onboard convex flight-envelope calculation. For hydraulic actuator failures, [16] created an Integral Sliding Mode controller featuring Control Allocation; this controller eliminates the need to redesign another controller by distributing control signals among redundant actuators. An ANN-based adaptive controller using Feedback Linearization was developed in [17] to address the dramatic roll and unsteady longitudinal behavior in the condition of partial wing damage. In another study, to achieve hydraulic fault tolerance in the longitudinal axis, [18] evaluated the performance of three controllers, including Adaptive Back-stepping, Robust Sliding Mode, and PID; in this instance, the first controller outperformed the other methods. A Q-learning horizontal trajectory tracking controller with an ANN foundation was developed in [19] using the MDP model of an airship with fine stability characteristics. In this study, the action selection method was optimized using a Cerebellar Model Articulation Controller (CMAC) neural network. A Soft Actor-critic (SAC) technique was used in [20] to solve a path planning problem for a long-endurance, solar-powered UAV that took energy consumption into account. Another study, [21], focused on Skywalker X8 inner-loop control employing SAC and comparing it with a PID controller.
Proximal Policy Optimization (PPO) was used in [22] for orientation control of a typical, highly dynamic, coupled fixed-wing aircraft in the stall condition. After 100,000 episodes, the PPO successfully converged. The efforts mentioned thus far use ANNs to improve convergence and robustness. To the best of our knowledge, only a few discrete RL-based attitude control studies avoid ANNs. A Q-learning method was used in [23] to control longitudinal and lateral angles of a general aviation airplane (Cessna 172); in that research, the desired angles are regulated to zero, and the airplane benefits from good stability characteristics.
There are some Fuzzy adaptations of the work in [24], such as [25], where the Q-functions and the action selection strategy are inferred from Fuzzy rules. Also, in order to reduce the number of states needed to shape an MDP model for obstacle-avoiding mobile robots, [26] suggests a Fuzzy technique: since the mobile robot may encounter an infinite number of different conditions, the Fuzzy method is used to generalize the conditions and to minimize processor requirements alongside reducing states. Also, [27] proposed a dynamic Fuzzy Q-learning for online and continuous tasks in mobile robots. In [28], the Fuzzy Q-learning (FQL) method and the Strictly Negative Imaginary (SNI) property are used to provide a novel robust adaptive control for quadrotor attitude and altitude stabilization; the objective is to develop a control strategy that dynamically adapts the SNI controller using FQL. Another study, [29], used Q-learning as an attitude controller for a unique, highly maneuverable regional jet aircraft in MDP and POMDP scenarios. The simulation results in variable pitch angle tracking were satisfactory.
Motivated by the preceding discussions, the following are the main contributions of the current study:
1. A novel continuous action generator is developed as a general connector between every (discrete/continuous) optimal policy and the RL environment.
2. In response to worldwide aviation community expectations, a TBW aircraft (figure 1) with specific stability characteristics is selected for the auto-landing problem, where the high maneuverability of the aircraft brings significant challenges into the design process.
3. In contrast to many studies, the complexity of ANN architectures and the limited adaptability of classical methods are both avoided by using Fuzzy Q-learning.
4. The robustness and reliability of the proposed FQL are examined under different flight conditions consisting of sensor measurement noises, atmospheric disturbances, actuator faults, and model uncertainties.
## 2 Six-DoF Aircraft Dynamic Modeling
In order to develop a nonlinear RL environment, the 6-DoF nonlinear equations of motion cited in [30; 31] are utilized in this section. Many environments based on Gym and FlightGear are open-source, such as GymFG [32], but this plant must be modeled and simulated from scratch due to the unique characteristics of the innovative configuration. In this approach, the earth is presumed to be flat. Consequently, the body-frame translational and rotational equations are as follows:
\[mD^{B}\mathbf{v}^{B}+m\mathbf{\Omega}^{B}\mathbf{v}^{B}=\mathbf{f}_{a}^{B}+\mathbf{f}_{p}^{B}+m \mathbf{g}^{B}, \tag{1}\]
\[D^{B}(\mathbf{I}^{B}\mathbf{\omega}^{B})+\mathbf{\Omega}^{B}\mathbf{I}^{B}\mathbf{\omega}^{B}=\mathbf{m}^{ B}_{a}+\mathbf{m}^{B}_{p}. \tag{2}\]
Where \(u,v,w\) are the velocity components and \(D^{B}\) is the rotational time derivative in the body frame, so \(D^{B}\mathbf{v}^{B}\) equals \([\frac{d\mathbf{v}}{dt}]^{B}=[\dot{u}\ \ \dot{v}\ \ \dot{w}]^{T}\); \([\omega]^{B}=[p\ \ q\ \ r]^{T}\) is the vector of roll, pitch, and yaw angular rates in the body frame. Also, \(\mathbf{\Omega}^{B}\) is the skew-symmetric form of the angular rate vector:
\[[\Omega]^{B}=\begin{bmatrix}0&-r&q\\ r&0&-p\\ -q&p&0\end{bmatrix}. \tag{3}\]
Furthermore, \(m\) is the mass of aircraft, and \(\mathbf{I}^{B}\) is the moment of inertia matrix in body frame;
\[[I]^{B}=\begin{bmatrix}I_{x}&0&I_{xz}\\ 0&I_{y}&0\\ I_{xz}&0&I_{z}\end{bmatrix}. \tag{4}\]
Figure 1: Chaka 50 and 76 Modern Regional Jet (MRJ) Family [4]

In equations (1) and (2), some variables on the right-hand side are considered to be zero, including the engine moment vector \(\mathbf{m}^{B}_{p}\). Also, \(\mathbf{f}^{B}_{a}\), \(\mathbf{f}^{B}_{p}\), and \(\mathbf{m}^{B}_{a}\) denote the aerodynamic force, engine thrust force, and aerodynamic moment vectors. Except for the engine thrust, the non-zero forces and moments are computed in the stability (aerodynamic) frame as follows:
\[\begin{bmatrix}L\\ D\\ m_{a}\end{bmatrix}^{S}=\bar{q}S\bar{c}\begin{bmatrix}c_{L_{0}}&c_{L_{\alpha}}&c_{L_{\dot{\alpha}}}&c_{L_{u}}&c_{L_{q}}&c_{L_{\delta_{E}}}\\ c_{D_{0}}&c_{D_{\alpha}}&c_{D_{\dot{\alpha}}}&c_{D_{u}}&c_{D_{q}}&c_{D_{\delta_{E}}}\\ c_{m_{0}}&c_{m_{\alpha}}&c_{m_{\dot{\alpha}}}&c_{m_{u}}&c_{m_{q}}&c_{m_{\delta_{E}}}\end{bmatrix}\begin{bmatrix}1\\ \alpha\\ \frac{\dot{\alpha}\bar{c}}{2V_{P_{1}}}\\ \frac{u}{V_{P_{1}}}\\ \frac{q\bar{c}}{2V_{P_{1}}}\\ \delta_{E}\end{bmatrix}. \tag{5}\]
Furthermore, the aerodynamic forces must be transformed from the stability frame to the body frame through a rotation by the angle of attack \(\alpha\) about the \(y_{S}\) axis:
\[\begin{bmatrix}f_{a_{x}}\\ f_{a_{y}}\\ f_{a_{z}}\end{bmatrix}^{B}=\begin{bmatrix}cos\alpha&0&-sin\alpha\\ 0&1&0\\ sin\alpha&0&cos\alpha\end{bmatrix}^{BS}\begin{bmatrix}-D\\ 0\\ -L\end{bmatrix}^{S}. \tag{6}\]
Also, the gravitational acceleration vector \(\mathbf{g}^{B}\) in the body frame is as follows:
\[\begin{bmatrix}g_{x}\\ g_{y}\\ g_{z}\end{bmatrix}^{B}=\begin{bmatrix}-g\sin\theta\\ g\cos\theta\sin\phi\\ g\cos\theta\cos\phi\end{bmatrix}. \tag{7}\]
Additionally, rotational kinematic equations are required for transfer from body to inertial frames.
\[\begin{bmatrix}\dot{\phi}\\ \dot{\theta}\\ \dot{\psi}\end{bmatrix}=\begin{bmatrix}1&\sin\phi\tan\theta&\cos\phi\tan\theta\\ 0&\cos\phi&-\sin\phi\\ 0&\sin\phi/\cos\theta&\cos\phi/\cos\theta\end{bmatrix}\begin{bmatrix}p\\ q\\ r\end{bmatrix}^{B}. \tag{8}\]
So, using (1) and (8), the translational kinematic equations in the inertial frame are obtained:
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{z}\end{bmatrix}^{E}=\begin{bmatrix}\cos\psi\cos\theta&\cos\psi\sin\theta\sin\phi-\sin\psi\cos\phi&\cos\psi\sin\theta\cos\phi+\sin\psi\sin\phi\\ \sin\psi\cos\theta&\sin\psi\sin\theta\sin\phi+\cos\psi\cos\phi&\sin\psi\sin\theta\cos\phi-\cos\psi\sin\phi\\ -\sin\theta&\cos\theta\sin\phi&\cos\theta\cos\phi\end{bmatrix}\begin{bmatrix}u\\ v\\ w\end{bmatrix}^{B}. \tag{9}\]
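As an illustration, the right-hand sides of Eqs. (1)-(3) and (7)-(8) can be assembled into a single state-derivative routine. The following Python sketch assumes the body-frame forces and moments are already supplied by the aerodynamic model of Eqs. (5)-(6), and omits the position kinematics of Eq. (9), which follow the same pattern:

```python
# A minimal sketch of the 6-DoF state derivative implied by Eqs. (1)-(3)
# and (7)-(8); forces_B and moments_B are assumed to come from the
# aerodynamic/propulsion model and are not reproduced here.
import numpy as np

def six_dof_derivative(state, forces_B, moments_B, mass, I_B, g=9.81):
    u, v, w, p, q, r, phi, theta, psi = state[:9]  # psi enters Eq. (9) only
    omega = np.array([p, q, r])
    Omega = np.array([[0.0, -r,  q],
                      [r,  0.0, -p],
                      [-q,  p, 0.0]])                       # Eq. (3)
    g_B = g * np.array([-np.sin(theta),
                        np.cos(theta) * np.sin(phi),
                        np.cos(theta) * np.cos(phi)])       # Eq. (7)
    v_B = np.array([u, v, w])
    v_dot = forces_B / mass + g_B - Omega @ v_B             # Eq. (1)
    om_dot = np.linalg.solve(I_B, moments_B - Omega @ (I_B @ omega))  # Eq. (2)
    eul_dot = np.array(
        [[1.0, np.sin(phi) * np.tan(theta), np.cos(phi) * np.tan(theta)],
         [0.0, np.cos(phi),                -np.sin(phi)],
         [0.0, np.sin(phi) / np.cos(theta), np.cos(phi) / np.cos(theta)]]
    ) @ omega                                               # Eq. (8)
    return np.concatenate([v_dot, om_dot, eul_dot])
```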
Based on Computational Fluid Dynamics (CFD), the stability and control derivatives for the Chaka 50 are presented in [4]. Table 1 summarizes these derivatives for two flying scenarios. Trim conditions in a wings-level flight are calculated for simulation verification utilizing the trim equations in [33]. The drag equation takes into account the absolute values of \(\delta_{E}\), \(i_{H_{1}}\), and \(\alpha_{1}\). In addition, the flight path angle \(\gamma_{1}\), the motor installation angle \(\phi_{T}\), and the horizontal tail incidence angle \(i_{H}\) are all equal to zero. By solving the trim equations, the elevator deflection \(\delta_{E}\) and the required thrust \(f_{a_{x}}\) for a trim flight are obtained and shown in Table 2. These numbers are crucial for the validation of the 6-DoF simulation.
### Six-DoF Aircraft Dynamic Model considering Atmospheric Disturbances
Aircraft landing quality is affected by atmospheric disturbance, i.e., air turbulence in small areas of the atmosphere that often occurs close to the ground.
| Longitudinal Derivative | Ideal | Random Uncertainty ([-10, 10]%) | Longitudinal Derivative | Ideal | Random Uncertainty ([-10, 10]%) |
| --- | --- | --- | --- | --- | --- |
| \(c_{D_{0}}\) | 0.0338 | 0.0358 | \(c_{L_{u}}\) | 0.081 | 0.076 |
| \(c_{L_{0}}\) | 0.3180 | 0.3363 | \(c_{m_{u}}\) | -0.039 | -0.041 |
| \(c_{m_{0}}\) | -0.06 | -0.061 | \(c_{L_{q}}\) | 12.53 | 12.56 |
| \(c_{D_{\alpha}}\) | 0.8930 | 0.893 | \(c_{m_{q}}\) | -40.69 | -37.27 |
| \(c_{L_{\alpha}}\) | 14.88 | 14.52 | \(c_{D_{\delta_{E}}}\) | 0.1570 | 0.1483 |
| \(c_{m_{\alpha}}\) | -11.84 | -11.84 | \(c_{L_{\delta_{E}}}\) | 0.78 | 0.74 |
| \(c_{D_{u}}\) | 0.041 | 0.373 | \(c_{m_{\delta_{E}}}\) | -5.98 | -5.93 |

Table 1: Stability and control derivatives of Chaka 50 MRJ (1/rad)
| Parameter | Value |
| --- | --- |
| Required Thrust (\(f_{a_{x}}\)) | 21433.02 (lbs) |
| Required Elevator (\(\delta_{E}\)) | 0.39 (deg) |
| Angle of Attack (\(\alpha\)) | -2.28 (deg) |

Table 2: Trim parameters of Chaka 50 MRJ
The components of disturbance that result in a loss of lift and altitude are the most hazardous when the aircraft is getting close to the final approach. In the literature, atmospheric disturbance is described as a stochastic process characterized by velocity spectra. Two models are widely used in flight dynamic simulations: (1) the Dryden Continuous Turbulence Model and (2) the Von Karman Continuous Turbulence Model. The Dryden model is used in this study for two reasons: it allows for easier mathematical modeling, and it covers both the linear and rotational components of the disturbance velocity.
\[\begin{array}{l}G_{u}(s)=\sigma_{u}\sqrt{\frac{2L_{u}}{\pi u_{1}}}\Bigg[\frac{1}{1+(\frac{L_{u}}{u_{1}}s)}\Bigg],\\ G_{v}(s)=\sigma_{v}\sqrt{\frac{L_{v}}{\pi u_{1}}}\Bigg[\frac{1+2\sqrt{3}\frac{L_{v}}{u_{1}}s}{(1+\frac{2L_{v}}{u_{1}}s)^{2}}\Bigg],\\ G_{w}(s)=\sigma_{w}\sqrt{\frac{2L_{w}}{\pi u_{1}}}\Bigg[\frac{1+2\sqrt{3}\frac{L_{w}}{u_{1}}s}{(1+\frac{2L_{w}}{u_{1}}s)^{2}}\Bigg],\end{array} \tag{10}\]
where, according to [34], \(L_{w}\), \(L_{v}\), and \(L_{u}\) are the turbulence scale lengths:
\[\begin{array}{l}L_{u}=L_{v}=\frac{z}{(0.177+0.000823z)^{1.2}},\\ L_{w}=z,\end{array} \tag{11}\]
and \(\sigma_{u}\), \(\sigma_{v}\), and \(\sigma_{w}\) are the turbulence intensities:
\[\begin{array}{l}\sigma_{u}=\sigma_{v}=\frac{\sigma_{w}}{(0.177+0.000823z)^{0. 4}},\\ \sigma_{w}=0.1u_{20},\end{array} \tag{12}\]
| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| Wing Area (\(m^{2}\)) | 43.42 | \(I_{xx}\) (\(kg.m^{2}\)) | 378056.535 |
| Mean Aerodynamic Chord (\(m\)) | 1.216 | \(I_{yy}\) (\(kg.m^{2}\)) | 4914073.496 |
| Span (\(m\)) | 28 | \(I_{zz}\) (\(kg.m^{2}\)) | 5670084.803 |
| Mass (\(kg\)) | 18418.27 | \(I_{xz}\) (\(kg.m^{2}\)) | 0 |
| Initial Speed \(V_{P_{1}}\) (\(\frac{m}{s}\)) | 160 | Initial Height \(h_{1}\) (\(m\)) | 100 |

Table 3: Chaka 50 MRJ specifics for simulation
where the wind speed at 20 feet is specified by \(u_{20}\). The motion equations are now modified to include the effects of the wind, based on [35]. In general, the wind and its derivatives are computed in the inertial frame, but their transformation into the body frame requires complex computations. As an alternative, one may work with the derivatives directly in the body frame, where \(\mathbf{W}^{B}\) is the wind velocity vector in the body frame, \([W]^{B}=[W_{x}\ \ W_{y}\ \ W_{z}]^{T}\):
\[\dot{W}_{x} =\bigg{[}\frac{\partial W_{x}}{\partial x}\bigg{]}^{B}(u+W_{x})+ \bigg{[}\frac{\partial W_{x}}{\partial y}\bigg{]}^{B}(v+W_{y})+\bigg{[}\frac{ \partial W_{x}}{\partial z}\bigg{]}^{B}(w+W_{z})+\bigg{[}\frac{\partial W_{x}} {\partial t}\bigg{]}^{B},\] \[\dot{W}_{y} =\bigg{[}\frac{\partial W_{y}}{\partial x}\bigg{]}^{B}(u+W_{x})+ \bigg{[}\frac{\partial W_{y}}{\partial y}\bigg{]}^{B}(v+W_{y})+\bigg{[}\frac{ \partial W_{y}}{\partial z}\bigg{]}^{B}(w+W_{z})+\bigg{[}\frac{\partial W_{y}} {\partial t}\bigg{]}^{B}, \tag{13}\] \[\dot{W}_{z} =\bigg{[}\frac{\partial W_{z}}{\partial x}\bigg{]}^{B}(u+W_{x})+ \bigg{[}\frac{\partial W_{z}}{\partial y}\bigg{]}^{B}(v+W_{y})+\bigg{[}\frac{ \partial W_{z}}{\partial z}\bigg{]}^{B}(w+W_{z})+\bigg{[}\frac{\partial W_{z}} {\partial t}\bigg{]}^{B}.\]
The spatial derivatives of the wind speed, which are often stated in the inertial frame, must be transferred to the body frames of reference in (13);
\[[\nabla W]^{B}=[T]^{BE}[\nabla W]^{E}[\bar{T}]^{BE}. \tag{14}\]
The effect of wind on the angular rates, \(\mathbf{\omega}_{w}^{E}\), can be modeled as a rigid rotation of the air mass caused by fluid stresses, and is expressed in the inertial frame as:

\[[\omega_{w}]^{E}=\frac{1}{2}\bigg[\frac{\partial W_{z}}{\partial y}-\frac{\partial W_{y}}{\partial z}\bigg]^{E}i+\frac{1}{2}\bigg[\frac{\partial W_{x}}{\partial z}-\frac{\partial W_{z}}{\partial x}\bigg]^{E}j+\frac{1}{2}\bigg[\frac{\partial W_{y}}{\partial x}-\frac{\partial W_{x}}{\partial y}\bigg]^{E}k. \tag{15}\]
The above relation must be transformed to the body axes for use in the 6-DoF equations; denoting the air-relative angular rates by the subscript \(a\), this gives:

\[\begin{bmatrix}p\\ q\\ r\end{bmatrix}_{a}^{B}=\begin{bmatrix}p\\ q\\ r\end{bmatrix}^{B}-[T]^{BE}[\omega_{w}]^{E}. \tag{16}\]
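A hedged sketch of how Eqs. (10)-(12) and (15)-(16) could be realized in simulation is given below. The filter construction follows the transfer functions as printed, scipy's `signal` module is assumed for the linear filtering, and `gradW_E[i, j]` stands for \(\partial W_{i}/\partial x_{j}\) in the inertial frame; all names are illustrative, not taken from the paper's code.

```python
# A sketch of the Dryden gust generation (Eqs. 10-12) and the wind-induced
# angular-rate correction (Eqs. 15-16); units are assumed consistent.
import numpy as np
from scipy import signal

def dryden_uw_gusts(u1, z, u20, dt, n_steps, seed=0):
    """u1: trim airspeed, z: altitude, u20: wind speed at 20 ft."""
    sigma_w = 0.1 * u20
    sigma_u = sigma_w / (0.177 + 0.000823 * z) ** 0.4          # Eq. (12)
    L_w = z
    L_u = z / (0.177 + 0.000823 * z) ** 1.2                    # Eq. (11)
    G_u = signal.TransferFunction(
        [sigma_u * np.sqrt(2 * L_u / (np.pi * u1))],
        [L_u / u1, 1.0])                                       # Eq. (10)
    k_w = sigma_w * np.sqrt(2 * L_w / (np.pi * u1))
    G_w = signal.TransferFunction(
        [k_w * 2 * np.sqrt(3) * L_w / u1, k_w],
        [(2 * L_w / u1) ** 2, 4 * L_w / u1, 1.0])              # Eq. (10)
    t = np.arange(n_steps) * dt
    rng = np.random.default_rng(seed)
    noise_u = rng.standard_normal(n_steps) / np.sqrt(dt)       # white noise
    noise_w = rng.standard_normal(n_steps) / np.sqrt(dt)
    _, gust_u, _ = signal.lsim(G_u, noise_u, t)
    _, gust_w, _ = signal.lsim(G_w, noise_w, t)
    return gust_u, gust_w

def air_relative_rates(pqr_B, gradW_E, T_BE):
    """Eqs. (15)-(16): subtract the wind-induced rotation from body rates."""
    omega_w_E = 0.5 * np.array([
        gradW_E[2, 1] - gradW_E[1, 2],
        gradW_E[0, 2] - gradW_E[2, 0],
        gradW_E[1, 0] - gradW_E[0, 1]])
    return np.asarray(pqr_B) - T_BE @ omega_w_E
```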
### Actuator Fault
A type of failure that affects the plant inputs is an actuator fault. Actuator faults in aircraft might result from improper maintenance procedures, material aging, or improper operation. In this study, the actuator fault is expressed in two terms: the first is a multiplicative term, representing the elevator's inability to achieve the required deflection; the second is an additive term, representing a bias on the output quantity.
\[\begin{split}\delta_{E_{t}}=&\ 0.3\delta_{E_{t}}-0.7^{ \circ},\qquad\text{If $t>12s$,}\\ &\ 0.4\delta_{E_{t}}+0.6^{\circ},\qquad\text{If $12s>t>8s$,}\\ &\ 0.5\delta_{E_{t}}-0.5^{\circ},\qquad\text{If $8s>t>4s$.}\end{split} \tag{17}\]
According to equation (17), after 12 seconds of flight the elevator operates with 30% effectiveness and simultaneously biases its output by \(-0.7^{\circ}\), since it is anticipated that the fault would worsen over time. Similar faults with different parameter values occur earlier in the flight: a 50% effectiveness with a \(-0.5^{\circ}\) additive bias after 4 seconds, and a 40% effectiveness with a \(+0.6^{\circ}\) additive bias after 8 seconds.
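For illustration, the fault model of Eq. (17) can be transcribed into a small injection routine; names are illustrative and the healthy behavior before 4 seconds is assumed to pass the command through unchanged:

```python
# A direct transcription of Eq. (17): multiplicative effectiveness loss
# plus an additive bias, growing as the flight proceeds. Angles in degrees.
def faulty_elevator(delta_cmd_deg, t):
    if t > 12.0:
        return 0.3 * delta_cmd_deg - 0.7
    if t > 8.0:
        return 0.4 * delta_cmd_deg + 0.6
    if t > 4.0:
        return 0.5 * delta_cmd_deg - 0.5
    return delta_cmd_deg          # healthy actuator before t = 4 s
```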
## 3 Landing Path Planning
Examining Figure 2, the desired approach angle is \(\theta_{a}\leqslant 3^{\circ}\) according to [36]. In this case, the aircraft speed at the touch-down point is assumed to be \(V_{TD}=1.15V_{stall}\), and the speed in the flare zone \(V_{f}=1.23V_{stall}\). The radius of the flare circular arc is then:
\[R=\frac{V_{f}^{2}}{0.2g}. \tag{18}\]
Now, in order to calculate \(\theta_{f}\) in each time-step, flare altitude \(h_{f}\) is required.
\[h_{f}=R-R\cos\theta_{a}. \tag{19}\]
Figure 2: Landing Path and Landing Distance Diagram
By considering the aforementioned formulas, the approach distance \(s_{a}\) and the flare distance \(s_{f}\) are as follows:
\[s_{a}=\frac{-(50-h_{f})}{\tan\theta_{a}}, \tag{20}\]
\[s_{f}=-R\sin\theta_{a}. \tag{21}\]
Also, using \(s_{td}=s_{a}+s_{f}\) to represent the touch-down distance, the desired \(\theta_{f}\) at each time step after covering the approach distance is given by:
\[\theta_{f}=\arcsin(\frac{x-x_{td}}{R}). \tag{22}\]
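The whole glide-slope/flare geometry of Eqs. (18)-(22) reduces to a few lines of code; the sketch below keeps the 50 ft screen height of Eq. (20) as written and otherwise assumes consistent units for the stall-speed inputs:

```python
# A sketch of the landing path geometry, Eqs. (18)-(22); angles in radians.
import numpy as np

def landing_geometry(V_stall, theta_a, g=9.81):
    V_f = 1.23 * V_stall
    R = V_f**2 / (0.2 * g)                    # Eq. (18), flare arc radius
    h_f = R - R * np.cos(theta_a)             # Eq. (19), flare altitude
    s_a = -(50.0 - h_f) / np.tan(theta_a)     # Eq. (20), approach distance
    s_f = -R * np.sin(theta_a)                # Eq. (21), flare distance
    return R, h_f, s_a, s_f

def flare_pitch(x, x_td, R):
    return np.arcsin((x - x_td) / R)          # Eq. (22), commanded angle
```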
## 4 Dynamic Inversion Auto Landing Structure
Consider the following formulation to represent the nonlinear dynamic system:
\[\begin{split}\dot{\mathbf{x}}&=\mathbf{f}(\mathbf{x})+\mathbf{g}( \mathbf{x})\mathbf{u},\ \mathbf{x}(0)=\mathbf{x}_{0},\\ \mathbf{y}&=\mathbf{h}(\mathbf{x}),\end{split} \tag{23}\]
where \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{u}\) are the state, measurement output, and control input vectors. The goal of the problem is to design a suitable \(\mathbf{u}\) such that \(\mathbf{y}\) tracks a desired \(\mathbf{y}_{des}\). This is accomplished by differentiating the output vector \(\mathbf{y}\) with respect to time:
\[\dot{\mathbf{y}}=\frac{\partial\mathbf{h}}{\partial\mathbf{x}}\frac{d\mathbf{x}}{dt}=\frac{ \partial\mathbf{h}}{\partial\mathbf{x}}\mathbf{f}(\mathbf{x})+\frac{\partial\mathbf{h}}{\partial \mathbf{x}}\mathbf{g}(\mathbf{x})\mathbf{u}. \tag{24}\]
The error between \(\mathbf{y}\) and \(\mathbf{y}_{des}\) is eliminated by imposing first-order error dynamics, with the error defined as \(\mathbf{e}_{y}=\mathbf{y}-\mathbf{y}_{des}\):
\[\begin{split}\dot{\mathbf{e}}_{y}+\mathbf{K}\mathbf{e}_{y}=0,\\ \dot{\mathbf{y}}-\dot{\mathbf{y}}_{des}+\mathbf{K}(\mathbf{y}-\mathbf{y}_{des})=0.\end{split} \tag{25}\]
The error can be exponentially reduced by satisfying the preceding equation, assuming that \(\mathbf{K}\) is a positive definite matrix. Let us define \(\mathbf{F}_{\mathbf{y}}(\mathbf{x})=\frac{\partial\mathbf{h}}{\partial\mathbf{x}}\mathbf{f}(\mathbf{x})\) and \(\mathbf{G}_{\mathbf{y}}(\mathbf{x})=\frac{\partial\mathbf{h}}{\partial\mathbf{x}}\mathbf{g}(\mathbf{x})\), and assume \(\mathbf{G}_{\mathbf{y}}(\mathbf{x})\) is an invertible matrix, to derive the control signal as follows:

\[\mathbf{u}=[\mathbf{G}_{\mathbf{y}}(\mathbf{x})]^{-1}[\dot{\mathbf{y}}-\mathbf{F}_{\mathbf{y}}(\mathbf{x})]. \tag{26}\]
By substituting \(\dot{\mathbf{y}}\) as a function of \(\mathbf{e}\):
\[\mathbf{u}=[\mathbf{G}_{\mathbf{y}}(\mathbf{x})]^{-1}[\dot{\mathbf{y}}_{des}-\mathbf{K}(\mathbf{y}-\mathbf{y}_ {des})-\mathbf{F}_{\mathbf{y}}(\mathbf{x})]. \tag{27}\]
The main challenge is assigning \(\mathbf{K}\) for error reduction. Extreme nonlinearity, couplings, modeling uncertainty, and aerodynamic uncertainty combined with atmospheric disturbances call for an adaptive designation of \(\mathbf{K}\) in this problem, which increases complexity. In this section, the Dynamic Inversion controller is developed based on [37]. According to the aforementioned theory, a first-order error is considered in the outer loop, \(\dot{e}_{h}+k_{h}e_{h}=0\), where \(e_{h}=h-h_{des}\). So the desired path angle is expressed as:
\[\theta_{des}=\arcsin\Biggl{(}\frac{\dot{h}_{des}-k_{h}(h-h_{des})}{\sqrt{a_{h} ^{2}+b_{h}^{2}}}\Biggr{)}-\arctan\biggl{(}\frac{b_{h}}{a_{h}}\biggr{)}, \tag{28}\]
where \(a_{h}=u\) and \(b_{h}=v\sin\phi+w\cos\phi\). Then a first-order error is imposed for the inner-loop control as well:
\[\dot{\theta}-\dot{\theta}_{des}+k_{\theta}(\theta-\theta_{des})=0. \tag{29}\]
Obviously, forces and moments are applied to the aircraft in the body axes; hence, the desired pitch rate is:
\[q_{des}=\sec\phi(\dot{\theta}_{des}-k_{\theta}(\theta-\theta_{des})+r\sin\phi). \tag{30}\]
Now \(q\) can be controlled to track \(q_{des}\), and the first-order error is applied once more:
\[\dot{q}-\dot{q}_{des}+k_{q}(q-q_{des})=0. \tag{31}\]
By deriving \(\dot{q}\) from equation (2) and substituting equation (31), the desired longitudinal control input \(\delta_{E}\) is obtained:

\[\delta_{E}=\frac{I_{y}\big(\dot{q}_{des}-k_{q}(q-q_{des})\big)+(I_{x}-I_{z})rp-M_{A}}{M_{\delta_{E}}}, \tag{32}\]

where \(M_{\delta_{E}}=\bar{q}S\bar{c}c_{m_{\delta_{E}}}\) and \(M_{A}\) denotes the aerodynamic pitching moment excluding the elevator contribution.
**Remark:** It is apparent that the elevator deflection computation requires both the desired pitch angle and the desired pitch rate. In the FQL method, by contrast, it will be seen later that the control signals are computed from the desired pitch angle alone during the trajectory tracking phase.
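For illustration, the cascade of Eqs. (28), (30) and (32) can be written as a single routine. The gains, the derivative terms \(\dot{\theta}_{des}\) and \(\dot{q}_{des}\) (which would in practice come from numerical differentiation), and the moment terms \(M_{A}\), \(M_{\delta_{E}}\) are assumed inputs here, so this is a sketch rather than the exact implementation:

```python
# A compact sketch of the cascaded Dynamic Inversion law, Eqs. (28)/(30)/(32).
import numpy as np

def di_elevator(state, refs, gains, model):
    u, v, w, p, q, r, phi, theta, h = state
    h_des, h_des_dot, theta_des_dot, q_des_dot = refs
    k_h, k_th, k_q = gains
    M_A, M_dE, I_x, I_y, I_z = model
    a_h = u
    b_h = v * np.sin(phi) + w * np.cos(phi)
    theta_des = (np.arcsin((h_des_dot - k_h * (h - h_des))
                           / np.hypot(a_h, b_h))
                 - np.arctan(b_h / a_h))                        # Eq. (28)
    q_des = (theta_des_dot - k_th * (theta - theta_des)
             + r * np.sin(phi)) / np.cos(phi)                   # Eq. (30)
    delta_E = (I_y * (q_des_dot - k_q * (q - q_des))
               + (I_x - I_z) * r * p - M_A) / M_dE              # Eq. (32)
    return theta_des, q_des, delta_E
```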
## 5 Fuzzy Q-learning Auto-landing Structure
Because of their narrow Mean Aerodynamic Chord (MAC), TBW aircraft typically exhibit insufficient longitudinal stability. More specifically, this finding may be confirmed by evaluating the Phugoid and Short-period modes of the Boeing N+3 TTBW [5] and the Chaka 50 against the Cessna 172 [38]. Table 4 summarizes the numerical data for the longitudinal modes of the aforementioned transport aircraft and supports the claim made above. To be clear, due to their superior maneuverability, the Chaka and the Boeing, which have similar designs, exhibit poor longitudinal stability qualities; the Cessna 172, on the other hand, benefits from stable dynamic behavior. For clarification, the damping ratio of the Chaka 50's Phugoid mode causes low-stability behavior, resulting in long-term oscillations as depicted in [29].
### MDP Definition in Auto-landing
A sequential decision-making process, such as an auto-landing problem, must be formalized as an MDP, since one action affects not only the next state and its immediate reward but also forthcoming states and their future rewards [39]. To be clear, at each time step \(t\), the controller receives a state observation from the 6-DoF simulation, including \(\theta_{t}\in\mathbf{S_{1}}\) and \(\dot{\theta_{t}}\in\mathbf{S_{2}}\). Based on it, the controller specifies an action \(\delta_{E_{t}}\in\mathbf{A}(\mathbf{s})\), which is the elevator deflection. The simulation runs, and at the following time step \(t+1\), the controller receives a reward \(R_{t+1}\in\mathbf{R}\) to assess its performance and finds itself in the next state \(\theta_{t+1},\dot{\theta}_{t+1}\), until reaching the terminal state \(\theta_{T},\dot{\theta}_{T}\).
\[\theta_{0},\ \dot{\theta}_{0},\ \delta_{E_{0}},\ R_{1},\ \theta_{1},\ \dot{\theta}_{1},\ \delta_{E_{1}},\ R_{2},\...\ \theta_{T},\ \dot{\theta}_{T}. \tag{33}\]
Here, \(\theta_{0}\) and \(\dot{\theta}_{0}\) are the initial states, \(\delta_{E_{0}}\) is the initial elevator deflection, and \(R_{t}\) denotes the instant reward at time-step \(t\). The random variables \(R_{t}\), \(\theta_{t}\), and \(\dot{\theta}_{t}\) have a well-defined discrete probability distribution that depends solely on the preceding state-action pair. Thus, by treating \(\theta_{t}\) and \(\dot{\theta}_{t}\) as states, the Markov property is satisfied because
\begin{table}
\begin{tabular}{c c c} \hline \hline Aircraft & Short-period Roots & Phugoid Roots \\ \hline Chaka 50 & \(-0.8\pm 0.61i\) & \(-0.0064\pm 0.05i\) \\ Cessna 172 & \(-3.23\pm 5.71i\) & \(-0.025\pm 0.19i\) \\ Boeing N+3 & \(-0.35\pm 0.35i\) & \(-0.0082\pm 0.07i\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Longitudinal Dynamics Characteristic
they contain complete information about all aspects of the controller-aircraft (agent-environment) interaction history that matter in the future. So, for all \(\theta_{c}\), \(\theta_{p}\in\mathbf{S}_{1}\), \(\dot{\theta}_{c}\), \(\dot{\theta}_{p}\in\mathbf{S}_{2}\), and \(\delta_{E}\in\mathbf{A}(\mathbf{s})\), the equation of the MDP problem is defined as \(P\);
\[P(\theta_{c},\dot{\theta}_{c}\ |\ \theta_{p},\dot{\theta}_{p},\delta_{E})=Pr\{ \theta_{t}=\theta_{c},\dot{\theta}_{t}=\dot{\theta}_{c}\ |\ \theta_{t-1}=\theta_{p},\dot{\theta}_{t-1}=\dot{\theta}_{p},\delta_{E_{t}}= \delta_{E}\}, \tag{34}\]
where \(c\) and \(p\) denote the current state and the previous state, respectively. In this problem, according to table (5), equation (34) always takes a deterministic value in \([0,1]\) for all states. All states are well-defined and each receives a distinct reward.
\[\sum\nolimits_{\theta_{c}\in\mathbf{S}_{1}}\sum\nolimits_{\dot{\theta}_{c}\in \mathbf{S}_{2}}P(\theta_{c},\dot{\theta}_{c}\ |\ \theta_{p},\dot{\theta}_{p},\delta_{E})=1. \tag{35}\]
The goal of finite MDP is to design a policy that maximizes reward over time. To find an optimal policy for taking \(\delta_{E}\) in state \(\theta\), \(\dot{\theta}\), the state-action value function \(\mathbf{Q}_{\pi}(\theta,\dot{\theta},\delta_{E})\) must be maximized, which is defined as the expected return as the sum of discounted instant rewards by starting from one specific state and pursuing policy \(\pi\) to terminal state \(\theta_{T}\), \(\dot{\theta}_{T}\):
\[\mathbf{Q}_{\pi}(\theta,\,\dot{\theta},\,\delta_{E})=\mathbb{E}_{\pi}\bigg{[}\sum \nolimits_{k=0}^{\infty}\gamma^{k}R_{t+k+1}\ \bigg{|}\ \theta_{t}=\theta,\,\dot{\theta}_{t}=\dot{\theta},\,\delta_{E_{t}}=\delta_{E} \bigg{]}, \tag{36}\]
where \(0<\gamma<1\) is the discount factor; \(\gamma\simeq 0\) makes the agent short-sighted, while \(\gamma\simeq 1\) makes it far-sighted.
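As a quick illustration of the return in eq. (36), the discounted sum of instant rewards over a finite episode can be accumulated backwards; `rewards` is a hypothetical list of the \(R_{t+k+1}\) values, and \(\gamma=0.99\) follows Table 5.

```python
def discounted_return(rewards, gamma=0.99):
    """Backward accumulation of sum_k gamma^k * R_{t+k+1}, as in eq. (36)."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```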
### Structure of Fuzzy Q-learning Controller
In the current study, Q-learning, an early breakthrough in Reinforcement Learning, is used to directly approximate the optimal elevator selection policy in each condition [24]. Q-learning is an off-policy, model-free control algorithm based on the Temporal Difference (TD) method. In general, the system state \((\theta_{t},\dot{\theta}_{t})\) is obtained at each time step, and the action selection policy is updated using the current elevator command \((\delta_{E_{t}})\). Because of the highly nonlinear and poorly stable characteristics of TBW aircraft, the Q-learning implementation results can be unsatisfactory without employing continuous state-actions [29]. Accordingly, two Q-tables are trained by the Fuzzy and basic Q-learning methodologies. They are then incorporated into different auto-landing scenarios through a novel technique, namely the Fuzzy Action Assignment (FAA), which can be introduced as a general connector between various trained Q-tables and continuous environments. Instead of computing a discrete greedy action in a particular state \(\theta\), \(\dot{\theta}\), the FAA technique assigns a relative weight (also called the validity function or membership function) to each cell of the grid of system states (see Figure 3). This assignment is performed based on the current value of the state-action value function. The membership function of each grid cell with centers \(\theta_{i}\) and \(\dot{\theta}_{j}\) is described as follows:
\[MF_{i,j}=\exp\left(-\frac{1}{2}\left(\frac{\theta_{t}-\theta_{i_{t}}}{\sigma_{ \theta}}\right)^{2}\right)\exp\left(-\frac{1}{2}\left(\frac{q_{t}-\dot{\theta }_{j_{t}}}{\sigma_{\dot{\theta}}}\right)^{2}\right), \tag{37}\]
where \(\sigma_{\theta}\) and \(\sigma_{\dot{\theta}}\) define the validity widths of the membership functions. In this study, the elevator commands of the TBW aircraft are discretized from \(-0.25\) to \(+0.25\) radians in \(0.025\) steps, corresponding to \(21\) elevator deflections. Also, the \(\epsilon\)-greedy action selection strategy with epsilon decay is utilized in this research so as to select greedy elevator commands in the final episodes (when the trained policy is near-optimal).
\[\delta_{E_{t}}=\begin{cases}\arg\max_{\delta_{E}}\,\mathbf{Q}(\theta_{t},\dot{ \theta}_{t},\delta_{E})&\text{with probability }1-\epsilon\\ \text{random action}&\text{with probability }\epsilon\end{cases} \tag{38}\]
Following that, \(\delta_{E}\) is determined at each time step as a weighted average over the neighboring membership functions:
\[\delta_{E_{t}}=\frac{\sum_{i}\sum_{j}MF_{i,j_{t}}\,\arg\max_{\delta_{E}}\mathbf{Q} (\theta_{i_{t}},\dot{\theta}_{j_{t}},\delta_{E})}{\sum_{i}\sum_{j}MF_{i,j_{t}}}. \tag{39}\]
The computed elevator deflection is applied to the aircraft 6-DoF simulation environment and receives a scalar reward signal as performance feedback which is defined in the Reward Function Definition section.
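Equations (37)-(39) together define the FAA action selection. A compact sketch is given below; `Q` is a hypothetical array of shape `(n_theta, n_rate, n_actions)`, the grid centers and validity widths are assumed inputs, and `elevator_grid = np.arange(-0.25, 0.25 + 1e-9, 0.025)` reproduces the paper's 21 deflections.

```python
import numpy as np

def membership(theta_t, thetadot_t, theta_centers, thetadot_centers,
               sigma_theta, sigma_thetadot):
    """Gaussian membership weights MF_{i,j} of eq. (37) over the state grid."""
    w_theta = np.exp(-0.5 * ((theta_t - theta_centers) / sigma_theta) ** 2)
    w_rate = np.exp(-0.5 * ((thetadot_t - thetadot_centers) / sigma_thetadot) ** 2)
    return np.outer(w_theta, w_rate)   # one weight per (theta_i, thetadot_j) cell

def faa_action(Q, MF, elevator_grid, epsilon, rng=None):
    """Epsilon-greedy choice of eq. (38), with the greedy branch replaced by
    the fuzzy weighted average of per-cell greedy deflections, eq. (39)."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return float(rng.choice(elevator_grid))     # exploration branch
    greedy = elevator_grid[np.argmax(Q, axis=2)]    # argmax_dE Q per grid cell
    return float(np.sum(MF * greedy) / np.sum(MF))  # eq. (39)
```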
#### 5.2.1 State-action Value Function Updating Rule
In this study, the state-action value function directly estimates the optimal Q-table controller over episodes using Fuzzy rules. The basic updating rule is unchanged, but the calculation of its individual terms is improved substantially. This part therefore formalizes each term of the updating rule. Here, \(Q_{f}\) is defined as the fuzzy state-action value at the previous time step, computed over the neighboring grid cells as:
\[Q_{f}(\theta_{t},\dot{\theta_{t}},\delta_{E_{t}})=\frac{\sum_{i}\sum_{j}MF_{i,j_{t }}\,Q(\theta_{i_{t}},\dot{\theta}_{j_{t}},\delta_{E})}{\sum_{i}\sum_{j}MF_{i,j_{t }}}. \tag{40}\]
Next, the optimal state-action value of the next time step is estimated with a Fuzzy scheme in order to approximate the sequence of best elevator deflection values that will be selected from that state until the terminal state. Note that the contribution of neighboring elevator cells is weighted by the membership function; the reason for this computation is that a specific \(\delta_{E}\) affects not only one pair \(\theta\), \(\dot{\theta}\) but has a nearly identical effect on their near neighbors. The Fuzzy optimal future value is therefore:
\[\max_{\delta_{E}}Q_{f}(\theta_{t+1},\dot{\theta}_{t+1},\delta_{E})=\frac{\sum_ {i}\sum_{j}MF_{i,j_{t+1}}\,\max_{\delta_{E}}Q(\theta_{i_{t+1}},\dot{\theta}_{ j_{t+1}},\delta_{E})}{\sum_{i}\sum_{j}MF_{i,j_{t+1}}}, \tag{41}\]
where the membership function is related to the next time-step. The next step is to calculate the Temporal Difference (TD) in Fuzzy form:
\[TD_{i,j_{t}}=\frac{\sum_{i}\sum_{j}MF_{i,j_{t}}\left[R_{t+1}+\gamma\,\max_{\delta_{E}}Q_{f}(\theta_{t+1},\dot{\theta}_{t+1},\delta_{E})-Q_{f}(\theta_{t},\dot{\theta}_{t},\delta_{E_{t}})\right]}{\sum_{i}\sum_{j}MF_{i,j_{t}}}, \tag{42}\]
Figure 3: Grid of state variables used for tabular Q-learning (The center of each cell, which is used to compute the membership function of the cell is shown by a circle point.)
where the membership function in this step is related to the previous time-step. The pseudocode of the proposed control strategy is summarized in Algorithm 1.
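One plausible reading of the update in eqs. (40)-(42), as used in Algorithm 1, is sketched below. Since the applied deflection of eq. (39) is continuous, `a_idx` is assumed here to be the index of the nearest discrete deflection.

```python
import numpy as np

def fuzzy_td_update(Q, MF_t, MF_tp1, a_idx, reward, alpha, gamma=0.99):
    """One fuzzy Q-update: eq. (40) for Q_f, eq. (41) for the optimal future
    value, eq. (42) for the TD error, spread over neighbors via MF weights."""
    w_t = MF_t / MF_t.sum()
    w_tp1 = MF_tp1 / MF_tp1.sum()
    q_f = np.sum(w_t * Q[:, :, a_idx])          # eq. (40)
    q_f_next = np.sum(w_tp1 * Q.max(axis=2))    # eq. (41)
    td = reward + gamma * q_f_next - q_f        # bracket of eq. (42)
    Q[:, :, a_idx] += alpha * w_t * td          # Algorithm 1 update step
    return Q
```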
#### 5.2.2 Fuzzy Action Assignment Structure
As discussed earlier, the FAA receives optimal policies produced by various continuous or discrete RL algorithms and then generates continuous actions for different control problems based on Fuzzy methodologies. Its implementation is straightforward, requiring only equations (37) and (39). The pseudocode of FAA is given in Algorithm 2.
#### 5.2.3 Reward Function Definition
The definition of an appropriate reward function is critical to the learning processes' convergence. Therefore, the reward function design and hyper-parameter adjustment have received significant attention in this study likewise. In this manner, the reward function is generated in three phases and comprises plant states such as \(\theta,q,\delta_{E}\).
To begin with, in order to limit high-frequency elevator deflections, a severe penalty for aggressive elevator selection is required. This penalty is applied whenever the elevator deflection changes by more than 0.1 radians between consecutive time steps.
\[R_{t}=-10000,\quad\text{If}\ \left|\delta_{E_{t}}-\delta_{E_{t-1}}\right|>5.73^{\circ}. \tag{43}\]
The reward function will then be calculated as follows if the aircraft is close to the desired angle and the elevator operation frequency is reasonable.
\[\begin{split} R_{t}&=(300,\qquad\text{If}\ |e_{\theta_{t}}|<0.05^{\circ}),\\ &+(300,\qquad\text{If}\ |e_{\theta_{t}}|<0.02^{\circ}),\\ &+(400,\qquad\text{If}\ |q_{t}|<0.04^{\circ}),\\ &+(600,\qquad\text{If}\ |q_{t}|<0.02^{\circ}),\\ &+(800,\qquad\text{If}\ |q_{t}|<0.005^{\circ}),\end{split} \tag{44}\]
where \(e_{\theta_{t}}=\theta_{t}-\theta_{des}\) is the proportional error. This definition first examines the pitch-tracking state. The controller then detects and prioritizes smaller pitch rates through higher reward allocations in the later episodes. The terms stated above were defined for learning convergence; in other words, they are activated when the pitch angle is near its desired value. However, it is also essential to guide the learning process in the early episodes with another term. Accordingly, if neither of the above two conditions is satisfied, the air vehicle should be urged toward the desired angle. This demand is achievable using the following reward function:
\[R_{t}=-(100\times|e_{\theta_{t}}|)^{2}-(40\times|q_{t}|)^{2}. \tag{45}\]
As a result, the further the system deviates from the desired state, the lower the reward. In addition, to avoid excessive pitch rates, a derivative term (the second term) has been added to the reward function. More specifically, the presence of the pitch rate (\(q_{t}\)) in equation (44), as well as its weight in equation (45), influences the convergence rate significantly.
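The three reward phases of eqs. (43)-(45) can be combined as in the sketch below. The gating of eq. (44) by the coarse pitch-error threshold, and the interpretation of the degree-valued thresholds as converted to radians, are assumptions made for illustration.

```python
import numpy as np

def reward(theta, q, delta_E, delta_E_prev, theta_des=np.deg2rad(1.0)):
    """Reward of eqs. (43)-(45) with the paper's thresholds."""
    # Eq. (43): heavy penalty for an elevator change above 0.1 rad (5.73 deg).
    if abs(delta_E - delta_E_prev) > 0.1:
        return -10000.0
    e_theta = theta - theta_des
    # Eq. (44): staged bonuses near the desired pitch angle / low pitch rate.
    if abs(e_theta) < np.deg2rad(0.05):
        r = 300.0
        r += 300.0 if abs(e_theta) < np.deg2rad(0.02) else 0.0
        r += 400.0 if abs(q) < np.deg2rad(0.04) else 0.0
        r += 600.0 if abs(q) < np.deg2rad(0.02) else 0.0
        r += 800.0 if abs(q) < np.deg2rad(0.005) else 0.0
        return r
    # Eq. (45): quadratic shaping that guides the early episodes.
    return -(100.0 * abs(e_theta)) ** 2 - (40.0 * abs(q)) ** 2
```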
## 6 Simulation Results and Discussion
The auto-landing control of an innovative TBW regional jet aircraft is developed utilizing Fuzzy Q-learning (FQL). The development process and the justification of the findings are therefore covered in this section. The procedure is divided into two segments, as detailed later: the first is the learning phase, and the second is the trajectory tracking phase, i.e., the execution of the optimal policy. The proposed approach is then compared with Dynamic Inversion, a well-known robust control method. A novel continuous action selection method is developed in this study to function as a link between any trained Q-table and the environment, as shown in figure 4.
In the learning phase, the desired pitch angle is set to 1 degree, and the initial pitch angle is selected as a random number between 0 and 2 degrees across episodes. The observation vector contains the pitch angle \(\theta\) and the pitch rate (\(q\)) at each time step. Furthermore, the pitch angle and pitch rate state intervals are defined as a 3D table together with the action vector (elevator deflection intervals), which are located in the policy block. This block selects an action based on the \(\epsilon\)-greedy strategy. The FQL core receives the reward, the observation vector, and the relevant pitch angle and pitch rate states from the policy block (Figure 4) and updates the policy based on the aforementioned Algorithm 1. Q-learning receives all the mentioned signals except the observation vector for policy updating.
```
Data: Learning Rate \(\alpha\), Discount Factor \(\gamma\), Desired Angle \(\theta_{des}=1\) deg, Validity Widths \(\sigma_{\theta},\sigma_{\dot{\theta}}\), Elevator Deflections \(\delta_{E}\), Pitch (Rate) Angle Intervals \(\theta\), \(\dot{\theta}\).
Result: \(\mathbf{Q}_{\pi^{*}}(\theta,\dot{\theta},\delta_{E})\)
\(\mathbf{Q}(\theta_{0},\dot{\theta}_{0},\delta_{E_{0}})\gets 0\) for all \(\theta\in\mathbf{S}_{1},\dot{\theta}\in\mathbf{S}_{2},\delta_{E}\in\mathbf{A}(\mathbf{s})\);
for Episode Number = 1 to 20000 do
    Initialize 6-DoF simulation with a random \(\theta_{0}\in[0\ 2]\) deg;
    for time-step (0.01) = 0 to 5 sec do
        if \(\epsilon<\) random number \(\in[0\ 1]\) then
            \(\delta_{E_{t}}\leftarrow\) random \(\delta_{E}\in\mathbf{A}(\mathbf{s})\);
        else
            for i, j = 1 to length I, J do
                \(MF_{i,j_{t}}\leftarrow\) compute membership function, eq. (37);
            end for
            \(\delta_{E_{t}}\leftarrow\) compute \(\delta_{E_{t}}\), eq. (39);
        end if
        Execute 6-DoF simulation using computed \(\delta_{E_{t}}\); observe \(R_{t+1}\), \(\theta_{t+1}\), \(\dot{\theta}_{t+1}\);
        for i, j = 1 to length I, J do
            \(MF_{i,j_{t+1}}\leftarrow\) compute membership function, eq. (37);
        end for
        \(Q_{f}(\theta_{t},\dot{\theta}_{t},\delta_{E_{t}})\leftarrow\) computed by eq. (40);
        \(\max_{\delta_{E}}Q_{f}(\theta_{t+1},\dot{\theta}_{t+1},\delta_{E})\leftarrow\) computed by eq. (41);
        \(Q(\theta_{i_{t}},\dot{\theta}_{j_{t}},\delta_{E_{t}})\gets Q(\theta_{i_{t}},\dot{\theta}_{j_{t}},\delta_{E_{t}})+\alpha\,TD_{i,j_{t}}\) based on eq. (42);
        Substitute simulation parameters at time-step \(t\) with \(t+1\);
    end for
end for
return \(\mathbf{Q}_{\pi^{*}}(\theta,\dot{\theta},\delta_{E})\)
```
**Algorithm 1** Fuzzy Q-learning Aircraft Attitude Controller
The second phase is dedicated to the auto-landing control. In this case, the path planning block generates the desired pitch angle at each time step. Then, one of the three controllers generates the elevator signals. The output of the Fuzzy Q-table and the Q-table is sent to the FAA block to produce a continuous action (elevator deflection) based on Algorithm 2. Several scenarios are defined in this stage, including actuator faults, model uncertainties, and atmospheric disturbance plus sensor measurement noise. The parameters required for the simulation are gathered in table 5. The positive parts of the pitch angle and pitch rate intervals are not listed in this table because symmetric intervals are used. It is useful to mention that all steps were developed in MATLAB R2022a, on a PC with an 8-core 2.30 GHz processor and 8 GiB of RAM.
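For reference, reading Table 5's colon notation as start:step:end, the decaying exploration and learning-rate schedules fit exactly 20000 decrements; a sketch, assuming one decrement per episode:

```python
import numpy as np

episodes = np.arange(20000)
# Table 5 schedules in start:step:end form; one decrement per episode is an
# assumption, but 0.1 - 3e-6 * 20000 = 0.04 and 0.02 - 9e-7 * 20000 = 0.002
# match the listed end values exactly.
epsilon_schedule = np.maximum(0.1 - 3e-6 * episodes, 0.04)
alpha_schedule = np.maximum(0.02 - 9e-7 * episodes, 0.002)
```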
The learning results of the two methods are shown in figure 5. The main difference between the MDP and POMDP models is that the pitch rate is observed in the MDP problem definition, while in the POMDP it is omitted. Clearly, FQL prevails over Q-learning, and although the difference appears insignificant here, the robustness of FQL will be demonstrated later. Furthermore, its fluctuations are smaller than those of Q-learning (MDP). The POMDP policy is not included in the trajectory tracking phase owing to its unsuccessful results [29]. The attitude tracking results are gathered in figure 6. The first row corresponds to ideal flight conditions. All three methods successfully tracked the desired angle, but they differ in attitude tracking error (\(TE_{\theta}\)), altitude tracking error (\(TE_{h}\)), and control effort (\(CE\)), which are defined as follows;
\[TE_{\theta}=\frac{\int_{0}^{t}|\theta_{t}-\theta_{des}|dt}{t}, \tag{46}\]
\[TE_{h}=\frac{\int_{0}^{t}|h_{t}-h_{des}|dt}{t}, \tag{47}\]
\[CE=\frac{\int_{0}^{t}|\delta_{E_{t}}|dt}{t}. \tag{48}\]
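Assuming uniformly sampled simulation logs, the time-averaged metrics of eqs. (46)-(48) reduce to sample means, as in this sketch:

```python
import numpy as np

def tracking_metrics(theta, theta_des, h, h_des, delta_E):
    """Discrete approximations of eqs. (46)-(48) under a uniform time step."""
    TE_theta = np.mean(np.abs(np.asarray(theta) - theta_des))
    TE_h = np.mean(np.abs(np.asarray(h) - h_des))
    CE = np.mean(np.abs(np.asarray(delta_E)))
    return TE_theta, TE_h, CE
```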
The second row contains the simulation results under atmospheric disturbance and sensor measurement noise. DI and FQL are able to track the demanding high-frequency responses. Although the path-planning block generated smaller desired path angles to guide the aircraft back to the desired trajectory, QL diverged. The third row includes the results under actuator faults and, finally, the last row gathers the results under model parameter
Figure 4: Block diagram of learning and trajectory tracking phases of all 3 methods
uncertainties, under which the performance of all controllers is reasonable.
According to figure 7, the elevator deflections of all three methods convey significant information. The first subplot indicates a larger initial deflection for DI in ideal flight conditions than for the others. Furthermore, the second subplot demonstrates the poor performance of DI in the face of sensor noise and atmospheric disturbances; such behaviour may succeed in trajectory tracking but is not practically applicable. The third subplot shows the faulty elevator deflections, with conspicuous jumps in all schemes. Finally, in contrast to the first subplot, the last subplot yields roughly analogous findings. It is worth noting that the DI overshoot before 10 seconds demonstrates its sensitivity to the start of the flare phase.
Figure 8 shows the altitude tracking of all three controllers. Overall, the first, third, and last subplots demonstrate the robustness of all three methodologies. The second subplot, however, is an exception: the QL controller is unable to accomplish its task properly, while FQL and DI achieve reasonable results, although the performance of DI is not usable for real applications. It is also noticeable that the altitude tracking error of FQL in the middle of the approach distance is larger than that of DI; during the more critical flare phase, however, the situation reverses and FQL surpasses DI.
Figure 5: Learning result of Q-learning and Fuzzy Q-learning during 20000 Episodes
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Parameter** & **Definition** & **Value** \\ \hline \hline \(\text{Epsilon}(\epsilon)\) & Exploration Probability & \([0.1:3e{-}6:0.04]\) \\ \(\text{Alpha}(\alpha)\) & Learning Rate & \([0.02:9e{-}7:0.002]\) \\ \(\text{Gamma}(\gamma)\) & Discount Factor & \(0.99\) \\ Episode number & - & \(20000\) \\ \(\theta\)(rad) & Pitch Angle Intervals & \([-10,-0.024:0.002:-0.002,-0.001,0]\) \\ \(\dot{\theta}\)(rad) & Pitch Rate Intervals & \([-10,-0.04,-0.02,-0.005]\) \\ I & Adjacent pitch grids & \([i-2:i+2]\) \\ J & Adjacent pitch rate grids & \([j-2:j+2]\) \\ \(k_{h}\) & Altitude Control Coefficient & \(1.3\) \\ \(k_{\theta}\) & Pitch Control Coefficient & \(5\) \\ \(k_{q}\) & Pitch Rate Control Coefficient & \(10\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Controller parameters in learning and tracking phases
Figure 6: Attitude tracking of all 3 methods in (1) Ideal flight conditions (2) Atmospheric disturbance and sensor measurement noise (3) Actuator faults (4) Model parameters uncertainties.
Figure 8: Altitude tracking of all 3 methods in various scenarios
Figure 7: Computed elevator deflections of all 3 methods in different flight conditions
The angle of attack (AoA) results of the 15-second flight simulations are shown in figure 9. The smaller magnitude of AoA changes in the first subplot, compared to the others, is clear. After the first subplot, the actuator faults have the least influence on AoA, although there are some small jumps. The second subplot demonstrates AoA changes between -1 and 1 degree, larger than in the other cases despite their noisy character. This can amplify the drag, whose consequences are clear in the next figure. Generally, the effect of actuator faults on AoA is smaller than that of model uncertainties.
The last figure depicts the speed of the airplane along its longitudinal body axis. According to the path planning section, the speed of the aircraft at the touch-down point should be \(161\frac{m}{sec}\). The simulations of all flight conditions yield satisfactory results; however, the second subplot reveals a problem with DI: this controller drives the aircraft speed toward the stall margin, which the authors attribute to the high drag produced by the elevator.
A summary of the results is gathered numerically in table 6 for better comprehension. At first glance, the superiority of FQL in attitude tracking is noticeable in all flight conditions except in the presence of noise and disturbance, where DI apparently achieves better results. The performance of DI in this circumstance is, however, called into question owing
Figure 9: Aircraft Angle of Attack results in different flight simulations
to its control effort. More precisely, regarding ideal conditions, the altitude tracking errors of QL and FQL are slightly better than those of DI, unlike their control effort; however, the differences are insignificant. As previously stated, in the second scenario, despite DI's superiority in pitch and altitude tracking errors, its elevator deflection saturates drastically. In the simulated flight affected by actuator faults, the FQL controller is the front-runner; in this scenario, QL performance is inferior to that of the other controllers, yet all controllers are successful. The final simulated flight includes model parameter uncertainties, and the FQL attitude tracking error is better than that of the other two; although the altitude tracking error and control effort of DI are superior, the differences are insignificant. A further discussion concerns the working area of the proposed FQL controller, in order to prove its robustness. Although the previous simulations covered different flight conditions, they were designed around just one operating point: they were simulated with an initial aircraft longitudinal speed of \(160\frac{m}{s}\) and constant longitudinal aerodynamic and control coefficients. In this part, the performance of both the FQL and DI controllers is examined under wider variations of these coefficients and of the aircraft speed. In this case, the coefficients are varied by \(\pm 30\%\), and the initial aircraft speed ranges from \(150\) to \(220\frac{m}{s}\). The FQL findings were satisfactory in comparison with one of the well
Figure 10: Simulated longitudinal speed results during various scenarios
known robust controllers, namely Dynamic Inversion. According to figure 11, the FQL pitch angle tracking error varies between 0.04 and 0.07 degrees over the 81 defined initial conditions. Correspondingly, the control effort of this controller varies between 0.64 and 2.6 degrees. On the other hand, the tracking error of DI lies between 0.05 and 0.07 degrees, with an elevator control effort varying from 0.7 to 2.6 degrees.
## 7 Conclusion
In this research, the auto-landing control of a regional jet aircraft with a novel configuration and particular longitudinal dynamic stability characteristics was addressed using Fuzzy Q-learning. The robustness of this method was evaluated in several probable scenarios, including atmospheric disturbances, sensor measurement noise, actuator faults, and model parameter uncertainties. The simulation results illustrated notable improvements compared to the Dynamic Inversion and classic Q-learning controllers. An innovative continuous action generator was proposed in this research as a connector between optimal Q-tables and RL environments. In order to depict the robustness and working area of the proposed method, the aircraft's longitudinal speed and coefficients were altered widely, and the pitch angle tracking error as well as the control effort are
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Flight Condition**} & \multirow{2}{*}{**Control Method**} & **Attitude Tracking** & **Altitude Tracking** & **Control Effort** \\ & & **Error (deg)** & **Error (m)** & **(deg)** \\ \hline \multirow{4}{*}{Ideal} & Dynamic Inv & 0.057 & 0.662 & **0.593** \\ & Q-learning & 0.047 & **0.643** & 0.649 \\ & Fuzzy QL & **0.040** & 0.644 & 0.669 \\ \hline \multirow{4}{*}{Noise + Disturbance} & Dynamic Inv & **2.541** & **1.477** & 13.772 \\ & Q-learning & 11.178 & 21.655 & **1.998** \\ & Fuzzy QL & 2.629 & 2.17 & 3.954 \\ \hline \multirow{4}{*}{Actuator Fault} & Dynamic Inv & 0.075 & 0.717 & **0.592** \\ & Q-learning & 0.091 & 0.755 & 0.624 \\ & Fuzzy QL & **0.064** & **0.708** & 0.654 \\ \hline \multirow{4}{*}{Model Uncertainty} & Dynamic Inv & 0.060 & **0.903** & **0.734** \\ & Q-learning & 0.056 & 0.936 & 0.793 \\ \cline{1-1} & Fuzzy QL & **0.042** & 0.928 & 0.812 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Attitude tracking error, altitude tracking error, and control effort of three methods in different flight conditions.
reported numerically. Summing up, the competency of the Fuzzy Q-learning method was proved in this problem across different flight conditions without resorting to complicated Artificial Neural Network architectures.
Figure 11: Fuzzy Q-learning and Dynamic Inversion tracking error and control effort examined by alteration of model parameters, and aircraft speeds |
2310.01453 | Enhancing Secrecy Capacity in PLS Communication with NORAN based on
Pilot Information Codebooks | In recent research, non-orthogonal artificial noise (NORAN) has been proposed
as an alternative to orthogonal artificial noise (AN). However, NORAN
introduces additional noise into the channel, which reduces the capacity of the
legitimate channel (LC). At the same time, selecting a NORAN design with ideal
security performance from a large number of design options is also a
challenging problem. To address these two issues, a novel NORAN based on a
pilot information codebook is proposed in this letter. The codebook associates
different suboptimal NORANs with pilot information as the key under different
channel state information (CSI). The receiver interrogates the codebook using
the pilot information to obtain the NORAN that the transmitter will transmit in
the next moment, in order to eliminate the NORAN when receiving information.
Therefore, NORAN based on pilot information codebooks can improve the secrecy
capacity (SC) of the communication system by directly using suboptimal NORAN
design schemes without increasing the noise in the LC. Numerical simulations
and analyses show that the introduction of NORAN with a novel design using
pilot information codebooks significantly enhances the security and improves
the SC of the communication system. | Yebo Gu, Tao Shen, Jian Song, Qingbo Wang | 2023-10-02T07:32:31Z | http://arxiv.org/abs/2310.01453v1 | # Enhancing Secrecy capacity with non-orthogonal artificial noise based on Pilot Information Codebook
###### Abstract
In recent research, non-orthogonal artificial noise (NORAN) has been proposed as an alternative to orthogonal artificial noise (AN). However, NORAN introduces additional noise into the channel, which reduces the capacity of the legitimate channel (LC). At the same time, selecting a NORAN design with ideal security performance from a large number of design options is also a challenging problem. To address these two issues, a novel NORAN based on a pilot information codebook is proposed in this letter. The codebook associates different suboptimal NORANs with pilot information as the key under different channel state information (CSI). The receiver interrogates the codebook using the pilot information to obtain the NORAN that the transmitter will transmit in the next moment, in order to eliminate the NORAN when receiving information. Therefore, NORAN based on pilot information codebooks can improve the secrecy capacity (SC) of the communication system by directly using suboptimal NORAN design schemes without increasing the noise in the LC. Numerical simulations and analyses show that the introduction of NORAN with a novel design using pilot information codebooks significantly enhances the security and improves the SC of the communication system.
physical layer security, non-orthogonal artificial noise, pilot information codebook, secrecy capacity.
## I Introduction
With the rapid development of communication technology, there has been an increasing demand for enhanced security in data transmission alongside efficient and fast transmission. Although traditional wireless communication security based on encryption algorithms can effectively protect wireless data transmission, it suffers from problems such as key management, information cracking, increased transmission delay, and power consumption. PLS technology protects and encrypts signals at the physical layer, which has many advantages such as low computational cost, strong network applicability, and resistance to eavesdropping and interference. It provides a new way to achieve secure wireless data transmission.
In [1], Wyner presented PLS technology and introduced a PLS communication model consisting of a transmitter, a receiver and an eavesdropper. Despite an initial stagnation in the development of the fundamental theory of PLS technology, researchers began investigations by considering different channel models. In particular, methods for calculating the SC under channel models such as the broadcast channel [2] and the Gaussian channel [3] were proposed, which effectively simplify the computation of PLS properties. The limited channel degrees of freedom in early models posed a challenge to the advancement of PLS theory, resulting in a long period of limited progress. However, the advent of multiple-input multiple-output (MIMO) communication systems provided a new perspective for PLS research. Researchers subsequently derived SC calculation techniques for MIMO communication systems [4]. These developments have created new opportunities to explore the theoretical foundations of PLS technology and related concepts.
The traditional limitation of PLS technology, which required the eavesdropping channel (EC) to be a weaker version of the LC, has been overcome with the introduction of AN technology. AN has greatly expanded the application range of PLS technology, as it can effectively weaken the EC without affecting the LC [9]. The emergence of AN has attracted considerable attention in the academic community and has found applications in various communication environments. The paper [8] investigates the maximisation of the average secrecy rate in a wireless communication system supported by unmanned aerial vehicles (UAVs). The study centers on a secure transmission protocol utilizing AN injection and the simultaneous optimization of UAV trajectory, network transmission power, and AN power allocation to enhance secrecy performance. In the paper [5], the impact of AN on the security of a fingerprint embedding authentication framework in the presence of imperfect CSI is investigated. The results show that AN can significantly improve security, but its benefits diminish as the quality of CSI degrades, potentially compromising key security. The paper [6] investigates the secrecy outage performance of a large-scale downlink system supported by full-duplex non-orthogonal multiple access (FD-NOMA) transmission and AN. A secure cooperative communication scheme is proposed, and simulations demonstrate its superiority over alternative approaches. The influence of secrecy diversity order and power allocation optimisation on system performance is also investigated. In the paper [7], a secure MIMO wireless communication system using AN and intelligent reflecting surfaces (IRS) is studied. The joint optimization of transmit precoding, the AN covariance matrix, and IRS phase shifts is pursued to maximize the secrecy rate. An efficient algorithm is proposed to solve the optimisation problem with closed-form solutions for all variables. However, previous researchers have overlooked the inherent limitations associated with AN. Under natural communication conditions, the effectiveness of AN is limited by the channel degrees of freedom. In scenarios where the channel degrees of freedom are low, the design options for AN are limited. Furthermore, when the channel degrees of freedom are zero, it becomes
2305.05483 | Existence of solutions for nonlinear Dirac equations in the
Bopp-Podolsky electrodynamics | In this paper, we study the following nonlinear Dirac-Bopp-Podolsky system
\begin{equation*} \left\lbrace \begin{array}{rll} \displaystyle{-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}u+[V(x)+q]\beta u+wu-\phi u}&=f(x,u), \ \ &\text{in}\ \mathbb{R}^3, \\ -\triangle\phi+a^2\triangle^2 \phi&=4\pi \vert u\vert^2,\ \ & \text{in}\ \mathbb{R}^3, \end{array} \right. \end{equation*} where $a,q>0,w\in
\mathbb{R}$, $V(x)$ is a potential function, and $f(x, u)$ is the interaction
term (nonlinearity). First, we give a physical motivation for this new kind of
system. Second, under suitable assumptions on $f$ and $V$, and by means of
minimax techniques involving Cerami sequences, we prove the existence of at
least one pair of solutions $(u,\phi_u)$. | Hlel Missaoui | 2023-05-09T14:32:46Z | http://arxiv.org/abs/2305.05483v2 | # Existence of solutions for nonlinear Dirac equations in the Bopp-Podolsky electrodynamics
###### Abstract
In this paper, we study the following nonlinear Dirac-Bopp-Podolsky system
\[\left\{\begin{array}{rl}-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}u+[V(x)+q] \beta u+wu-\phi u&=f(x,u),\quad\text{ in }\mathbb{R}^{3},\\ \\ -\triangle\phi+a^{2}\triangle^{2}\phi&=4\pi|u|^{2},\quad\text{ in }\mathbb{R}^{3}, \end{array}\right.\]
where \(a,q>0,w\in\mathbb{R}\), \(V(x)\) is a potential function, and \(f(x,u)\) is the interaction term (nonlinearity). First, we give a physical motivation for this new kind of system. Second, under suitable assumptions on \(f\) and \(V\), and by means of minimax techniques involving Cerami sequences, we prove the existence of at least one pair of solutions \((u,\phi_{u})\).
**Keywords:** Dirac-Bopp-Podolsky systems, Nonlinear Dirac equations, Existence of solutions.
**2010 Mathematics Subject Classification:** 35J50, 35J48, 35Q60
## 1 Introduction
The Dirac equation, proposed by British physicist Paul Dirac in 1928 (see [22]), is a relativistic wave equation that describes the behavior of particles with spin-\(\frac{1}{2}\), such as electrons, in a relativistic quantum mechanical framework. The equation can be written as:
\[i\hbar\frac{\partial\psi}{\partial t}=-ic\hbar\sum_{k=1}^{3}\alpha_{k}\partial_{k}\psi+mc^{2}\beta\psi,\]
where \(\hbar\) is the reduced Planck constant, \(\psi\) represents the wave function of the particle, \(c\) is the speed of light, \(m\) is the mass of the particle, and \(\alpha_{1},\alpha_{2},\alpha_{3}\) and \(\beta\) are the \(4\times 4\) Pauli-Dirac matrices
\[\beta=\begin{pmatrix}I&0\\ 0&-I\end{pmatrix},\ \alpha_{k}=\begin{pmatrix}0&\sigma_{k}\\ \sigma_{k}&0\end{pmatrix},\ k=1,2,3,\]
with
\[\sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\ \sigma_{2}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\ \text{and}\ \sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.\]
These matrices are constructed in a way that ensures that they are both Hermitian and satisfy the following anticommutation relations
\[\{\alpha_{k};\alpha_{\ell}\}=2\delta_{k\ell},\ \{\alpha_{k};\beta\}=\{\alpha_{k}; \alpha_{0}\}=\{\beta;\alpha_{0}\}=0,\ \text{and}\ \beta^{2}=1,\]
where \(\{\cdot;\cdot\}\) denotes the anticommutator operation, \(\delta_{k\ell}\) is the Kronecker delta function, \(\alpha_{0}\) represents a \(4\times 4\) matrix operator that acts on the wave function of a particle and corresponds to the energy of the particle, and the indices \(k,\ell\) run from \(1\) to \(3\).
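These relations (with the \(4\times 4\) identity implicit on the right-hand sides) are easy to verify numerically; the following sketch builds the matrices above and checks them.

```python
import numpy as np

Z = np.zeros((2, 2))
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
beta = np.block([[I2, Z], [Z, -I2]])
alpha = [np.block([[Z, s], [s, Z]]) for s in sigma]

def anticomm(A, B):
    return A @ B + B @ A

for k in range(3):
    for l in range(3):
        # {alpha_k; alpha_l} = 2 delta_kl I
        assert np.allclose(anticomm(alpha[k], alpha[l]), 2 * (k == l) * np.eye(4))
    # {alpha_k; beta} = 0
    assert np.allclose(anticomm(alpha[k], beta), 0)
# beta^2 = I
assert np.allclose(beta @ beta, np.eye(4))
```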
The Dirac equation combined special relativity and quantum mechanics and predicted the existence of antiparticles, which was later confirmed experimentally (see [9]). The equation was initially met with skepticism by many physicists, including Albert Einstein, who had doubts about its mathematical consistency. However, the Dirac equation was soon embraced as a major breakthrough in the development of quantum mechanics, and it paved the way for the development of quantum field theory. In addition to its theoretical significance, the Dirac equation has had a wide range of practical applications, particularly in condensed matter physics, where it has been used to describe the behavior of electrons in solids. It has also been used in high-energy particle physics, where it forms the basis of the standard model of particle physics, for more details see [9, 22, 34, 41, 47, 56]. These references should provide a good starting point for anyone interested in learning more about the Dirac equation and its applications in physics.
On the other hand, the Bopp-Podolsky ((BP) for short) theory (or electrodynamics) (see [45]), developed by Bopp [10], and independently by Podolsky [44] is a second-order gauge theory for the electromagnetic field. It was introduced to solve the so-called "infinity problem" that appears in the classical Maxwell theory. In fact, by the well-known Poisson equation (or Gauss law), the electrostatic potential \(\phi\) for a given charge distribution whose density is \(\rho\) satisfies the equation
\[-\triangle\phi=\rho,\ \ \text{on}\ \mathbb{R}^{3}. \tag{1.1}\]
If \(\rho=4\pi\delta_{x_{0}}\), (\(x_{0}\in\mathbb{R}^{3}\)), then \(\mathcal{G}(x-x_{0})\), with \(\mathcal{G}(x):=\dfrac{1}{|x|}\), is the fundamental solution of (1.1) and
\[\mathcal{E}_{M}(\mathcal{G}):=\dfrac{1}{2}\int_{\mathbb{R}^{3}}|\nabla \mathcal{G}|^{2}dx=+\infty\]
its electrostatic energy. Thus, in the Bopp-Podolsky theory, the equation (1.1) is replaced by
\[-\triangle\phi+a^{2}\triangle^{2}\phi=\rho,\ \ \text{on}\ \mathbb{R}^{3}. \tag{1.2}\]
Therefore, in this case, if \(\rho=4\pi\delta_{x_{0}}\) (\(x_{0}\in\mathbb{R}^{3}\)), the solution of equation (1.2) is known explicitly, and one can check whether its energy is finite. Indeed, in [5] P. d'Avenia and G. Siciliano proved that \(\mathcal{K}(x-x_{0})\), with \(\mathcal{K}(x):=\dfrac{1-e^{-\frac{|x|}{a}}}{|x|}\), is the fundamental solution of the equation
\[-\triangle\phi+a^{2}\triangle^{2}\phi=4\pi\delta_{x_{0}},\ \ \text{on}\ \mathbb{R}^{3}.\]
The solution of the previous equation has no singularity in \(x_{0}\) since it satisfies
\[\lim_{x\to x_{0}}\mathcal{K}(x-x_{0})=\dfrac{1}{a},\]
and its energy is
\[\mathcal{E}_{BP}(\mathcal{K}):=\dfrac{1}{2}\int_{\mathbb{R}^{3}}|\nabla \mathcal{K}|^{2}dx+\dfrac{a^{2}}{2}\int_{\mathbb{R}^{3}}|\triangle\mathcal{K}| ^{2}dx<+\infty.\]
For more details on this subject, see [5, Section 2]. Moreover, the (BP) theory may be interpreted as an effective theory for short distances (see [26]), while for large distances it is experimentally indistinguishable from the Maxwell one. Thus, the Bopp-Podolsky parameter \(a>0\), which has the dimension of the inverse of mass, can be interpreted as a cut-off distance or can be linked to an effective radius for the electron. For more physical details about the (BP) electrodynamics, see [8, 35, 40, 45, 51, 53].
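A short numerical check illustrates that \(\mathcal{K}\) stays bounded at the origin, in contrast to the Coulomb kernel \(1/|x|\); the value \(a=1\) is an arbitrary illustrative choice.

```python
import numpy as np

def K(r, a=1.0):
    """Profile of the Bopp-Podolsky kernel K(x) = (1 - e^{-|x|/a}) / |x|;
    expm1 avoids catastrophic cancellation for small r."""
    return -np.expm1(-r / a) / r

r = np.logspace(-8, 2, 11)
print(K(r)[:3])  # approaches 1/a near the origin
assert np.isclose(K(np.array([1e-8]))[0], 1.0, rtol=1e-6)
```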
From a physical point of view, the relationship between the Dirac equation and the Bopp-Podolsky electrodynamics has been studied by several authors (see [11] and their references). One approach is to modify the four-potential in the Dirac equation to include the (BP) potential term, which leads to a modified Dirac equation that includes the effects of the BP electrodynamics. This modified Dirac equation can then be used to study the behavior of particles in strong electromagnetic fields, such as those found in high-energy physics and astrophysics. The modified Dirac equation in the (BP) theory has been used to investigate the effects of spontaneous emission, as well as other new physical phenomena that arise from the (BP) potential term. However, it is important to note that the relationship between the Dirac equation and the (BP) electrodynamics is still an area of active research, and there is ongoing work to better understand the nature of this relationship and its implications for our understanding of quantum mechanics and electromagnetism, for more details see [4, 11, 21, 28, 43].
Mathematically, there are many papers focused on the existence of solutions for Dirac equations coupling to the electromagnetic field (Maxwell theory) under various hypotheses on the external field and nonlinearity, see [1, 12, 15, 19, 20, 23, 25, 27, 29, 42, 58] and their references. For example, in [58], Jian Zhang, Wen Zhang, and Xianhua Tang studied the following nonlinear Maxwell-Dirac system:
\[\left\{\begin{array}{rl}-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}u+[V(x)+q] \beta u+wu-\phi u&=f(x,u),\quad\mbox{ in }\mathbb{R}^{3},\\ \\ -\triangle\phi&=4\pi|u|^{2},\quad\mbox{ in }\mathbb{R}^{3},\end{array}\right.\] ( \[\mathcal{MD}\] )
where \(u:\mathbb{R}^{3}\rightarrow\mathbb{C}^{4}\), \(\phi:\mathbb{R}^{3}\rightarrow\mathbb{R}\), \(q=\frac{mc}{\hbar}\), and \(w=\frac{\theta}{c\hbar}\), \(\theta\in\mathbb{R}\). Precisely, under suitable assumptions on the potential function \(V(x)\) and the nonlinear term \(f(x,u)\), they proved the existence of infinitely many solutions of the system (\(\mathcal{MD}\)).
On the other side, to the best of our knowledge, mathematically, the Bopp-Podolsky theory has appeared recently in the paper of P. d'Avenia and G. Siciliano [5]. In this last reference, the authors coupled the Schrodinger equations with the (BP) electrodynamics. More precisely, they studied the following Schrodinger-Bopp-Podolsky system:
\[\left\{\begin{array}{rl}-\triangle u+wu+s^{2}\phi u&=|u|^{p-2}u,\quad\mbox{ in }\mathbb{R}^{3},\\ \\ -\triangle\phi+a^{2}\triangle^{2}\phi&=4\pi u^{2},\quad\quad\mbox{ in }\mathbb{R}^{3},\end{array}\right.\] ( \[\mathcal{SBP}\] )
where \(u,\phi:\mathbb{R}^{3}\rightarrow\mathbb{R}\), \(a>0\) is the Bopp-Podolsky parameter, and \(s\neq 0\). Moreover, in [5], the authors proved existence and nonexistence results depending on the parameters \(s,p\) and they showed that in the radial case, the solutions funded tend to solutions of the classical Schrodinger-Poisson system as \(a\to 0\). After the pioneering work by P. d'Avenia and G. Siciliano [5], system (\(\mathcal{SBP}\)) began to attract the attention of many mathematicians; see for
instance [3, 6, 13, 30, 31, 32, 37, 38, 39, 48, 49, 50, 55, 59, 60] for positive solutions and [7, 33, 52, 57] for sign-changing solutions.
Motivated by the physical and mathematical background of Dirac equations and the Bopp-Podolsky theory, in this paper we study Dirac equations coupled with the (BP) electrodynamics. Precisely, we study the existence of solutions for the following Dirac-Bopp-Podolsky system
\[\left\{\begin{array}{rl}-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}u+[V(x)+q] \beta u+wu-\phi u&=f(x,u),\quad\text{ in }\mathbb{R}^{3},\\ \\ -\triangle\phi+a^{2}\triangle^{2}\phi&=4\pi|u|^{2},\quad\quad\text{ in }\mathbb{R}^{3},\end{array}\right.\] ( \[\mathcal{DBP}\] )
where \(u:\mathbb{R}^{3}\rightarrow\mathbb{C}^{4}\), \(\phi:\mathbb{R}^{3}\rightarrow\mathbb{R}\), \(q=\frac{mc}{\hbar}\), \(w=\frac{\theta}{c\hbar}\), \(\theta\in\mathbb{R}\), \(a>0\) is the Bopp-Podolsky parameter, \(V(x)\) is a potential function, and \(f(x,u)\) is the interaction term (nonlinearity).
For what concerns the nonlinearity reaction term \(f:\mathbb{R}^{3}\times\mathbb{C}^{4}\rightarrow\mathbb{R}\), we assume that \(f\) is measurable in the first variable \(x\in\mathbb{R}^{3}\), continuous in the second variable \(u\in\mathbb{C}^{4}\), and satisfies the following assumptions:
* \(f(x,u)\in C(\mathbb{R}^{3}\times\mathbb{C}^{4},\mathbb{R}_{+})\), \(f(x,u)\) is \(1\)-periodic in \(x_{k}\), \(k=1,2,3\) and \(F(x,u)\geq 0\).
* \(\frac{F(x,u)}{|u|^{2}}\longrightarrow+\infty\), as \(|u|\longrightarrow+\infty\) uniformly in \(x\in\mathbb{R}^{3}\).
* \(f(x,u)=o(|u|)\) as \(|u|\longrightarrow 0\) uniformly in \(x\in\mathbb{R}^{3}\).
* \(\widetilde{F}(x,u)=\frac{1}{2}f(x,u)u-F(x,u)>0\), for \(|u|\) large and there exist constants \(\sigma>\frac{3}{2}\), \(\widetilde{C}>0\) and \(r_{0}>0\), such that \[|f(x,u)|^{\sigma}\leq\widetilde{C}|u|^{\sigma}\widetilde{F}(x,u),\ \ \forall\ (x,u)\in\mathbb{R}^{3}\times\mathbb{C}^{4},\ \ |u|\geq r_{0}.\]
Before stating our main result, we need the following hypotheses on the potential function \(V(x)\) and on the parameters \(w,q\).
* \(w\in(-q,q)\).
* \(V\in C^{1}(\mathbb{R}^{3},\mathbb{R}_{+})\), and \(V(x)\) is \(1\)-periodic in \(x_{k},k=1,2,3\).
Our main result is summarized in the following theorem:
**Theorem 1.1**.: _Suppose that the hypotheses \((A_{1})-(A_{2})\) and \((f_{1})-(f_{4})\) hold. Then, system \((\mathcal{DBP})\) admits at least one pair of solutions \((v,\phi_{v})\)._
From a mathematical point of view, as far as we know, this is the first work dealing with the existence of solutions for Dirac-Bopp-Podolsky systems. The main feature of our problem is that the Dirac operator has both unbounded positive and unbounded negative continuous spectrum, so the corresponding energy functional is strongly indefinite. On the other hand, the main difficulty when dealing with this problem is the lack of compactness of the Sobolev embedding; hence our problem poses additional challenges in the calculus of variations. In order to overcome these difficulties, we will turn to linking and concentration-compactness arguments [54].
The paper is organized as follows. In Sect. 2, we give the variational setting to study the system \((\mathcal{DBP})\). The Sect. 3 is devoted to proving the linking structure of the associated energy functional with system \((\mathcal{DBP})\). In Sect. 4, we discuss the boundedness of the Cerami sequence. Finally, we prove our main result Theorem 1.1.
## 2 Variational setting
In this Section, we give the variational setting. Below by \(\|\cdot\|_{r}\) we denote the usual \(L^{r}\)-norm. For the reader's convenience, let
\[A:=-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}+[V+q]\beta\]
be the Dirac operator. It is worth mentioning that \(A\) is a self-adjoint operator acting on \(L^{2}(\mathbb{R}^{3},\mathbb{C}^{4})\) with \(\mathcal{D}(A)=H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\) [17, Lemma 7.2 a]. Let \(|A|\) and \(|A|^{\frac{1}{2}}\) denote respectively the absolute value of \(A\) and the square root of \(|A|\), and let \(\{\mathcal{F}_{\lambda}:\ -\infty\leq\lambda\leq+\infty\}\) be the spectral family of \(A\). Set \(U=id-\mathcal{F}_{0}-\mathcal{F}_{0-}\). Then \(U\) commutes with \(A\), \(|A|\) and \(|A|^{\frac{1}{2}}\), and \(A=U|A|\) is the polar decomposition of \(A\). Let \(\sigma(A)\) and \(\sigma_{c}(A)\) be, respectively, the spectrum and the continuous spectrum of \(A\). In order to construct a suitable variational setting for the system \((\mathcal{DBP})\), we need some notions and results.
**Lemma 2.1**.: _Assume that \((A_{2})\) holds. Then_
\[\sigma(A)=\sigma_{c}(A)\subset(-\infty,-q]\cup[q,+\infty),\]
_and_
\[\inf\sigma(|A|)\leq q+\sup_{x\in\mathbb{R}^{3}}V(x)\]
It follows that the space \(L^{2}(\mathbb{R}^{3},\mathbb{C}^{4})\) possesses the orthogonal decomposition:
\[L^{2}(\mathbb{R}^{3},\mathbb{C}^{4})=L^{+}\oplus L^{-},\ u=u^{+}+u^{-}\]
such that \(A\) is negative definite on \(L^{-}\) and positive definite on \(L^{+}\). Let \(\mathbb{E}:=\mathcal{D}(|A|^{\frac{1}{2}})\) be the domain of \(|A|^{\frac{1}{2}}\). We define on \(\mathbb{E}\) the following inner product
\[\left\langle u,v\right\rangle=\left\langle|A|^{\frac{1}{2}}u,|A|^{\frac{1}{2} }v\right\rangle_{2}+w\left\langle u,v^{+}-v^{-}\right\rangle_{2}\]
where \(\left\langle\cdot,\cdot\right\rangle_{2}\) denote the usual \(L^{2}\) inner product. Therefore, the induced norm on \(\mathbb{E}\) is
\[\|u\|:=\left(\||A|^{\frac{1}{2}}u\|_{2}^{2}+w\left(\|u^{+}\|_{2}^{2}-\|u^{-} \|_{2}^{2}\right)\right)^{\frac{1}{2}}.\]
Anyone can check that \(\mathbb{E}\) possesses the following decomposition
\[\mathbb{E}=\mathbb{E}^{+}\oplus\mathbb{E}^{-}\ \ \text{and}\ \ \mathbb{E}^{\pm}= \mathbb{E}\cap L^{\pm}.\]
Thus,
\[\left\{\begin{array}{l}Au=-|A|u,\ \text{for all}\ u\in\mathbb{E}^{-},\\ \\ Au=|A|u,\ \text{for all}\ u\in\mathbb{E}^{+},\\ \\ \text{and}\\ \\ u=u^{+}+u^{-},\ \text{for all}\ u\in\mathbb{E}.\end{array}\right.\]
Hence \(\mathbb{E}^{-}\) and \(\mathbb{E}^{+}\) are orthogonal with respect to both \(\langle\cdot,\cdot\rangle_{2}\) and \(\langle\cdot,\cdot\rangle\) inner products.
In what follows, we give a crucial result which is the compact and continuous embedding of the new space \(\mathbb{E}\) into Lebesgue spaces.
**Lemma 2.2** ([58]).:
_Assume that \((A_{1})-(A_{2})\) are satisfied. Then, \(\mathbb{E}=H^{\frac{1}{2}}(\mathbb{R}^{3},\mathbb{C}^{4})\), with equivalent norms. Moreover, we have that_
1. _the embedding_ \(\mathbb{E}\hookrightarrow L^{p}\) _is continuous for all_ \(p\in[2,3]\)_;_
2. _the embedding_ \(\mathbb{E}\hookrightarrow L^{p}_{\text{loc}}\) _is compact for all_ \(p\in[1,3)\)_;_
3. \((q-|w|)\|u\|_{2}^{2}\leq\|u\|^{2}\)_, for all_ \(u\in\mathbb{E}\)_._
An important fact involving system \((\mathcal{DBP})\) is that this class of systems can be transformed into a Dirac equation with a nonlocal term (see [5]), which allows us to use variational methods. Indeed, as we mentioned in Section 1, in [5] P. d'Avenia and G. Siciliano proved, by the Lax-Milgram Theorem, that for a given \(u\in H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\), there exists a unique \(\phi_{u}\in\mathcal{D}\) such that
\[-\triangle\phi_{u}+a^{2}\triangle^{2}\phi_{u}=4\pi|u|^{2},\ \text{in}\ \mathbb{R}^{3},\]
where \(\mathcal{D}\) is the completion of \(C_{0}^{\infty}(\mathbb{R}^{3})\) with respect to the norm \(\|\cdot\|_{\mathcal{D}}\) induced by the inner product
\[\langle\psi,\varphi\rangle_{\mathcal{D}}:=\int_{\mathbb{R}^{3}}\nabla\psi \nabla\varphi+a^{2}\triangle\psi\triangle\varphi dx.\]
Otherwise,
\[\mathcal{D}:=\left\{\phi\in\mathcal{D}^{1,2}(\mathbb{R}^{3}):\ \triangle\phi \in L^{2}(\mathbb{R}^{3})\right\},\]
with
\[\mathcal{D}^{1,2}(\mathbb{R}^{3}):=\left\{\phi\in L^{6}(\mathbb{R}^{3}):\ \nabla \phi\in L^{2}(\mathbb{R}^{3})\right\}.\]
The space \(\mathcal{D}\) is an Hilbert space continuously embedded into \(\mathcal{D}^{1,2}(\mathbb{R}^{3})\), \(L^{6}(\mathbb{R}^{3})\) and \(L^{\infty}(\mathbb{R}^{3})\).
Arguing as in [5], we prove that the unique solution has the following expression \(\phi_{u}:=\mathcal{K}*|u|^{2}=\dfrac{1-e^{-\frac{|x|}{a}}}{|x|}*|u|^{2}\), for all \(u\in H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\), and it verifies the following properties:
**Lemma 2.3**.:
_For any \(u\in H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\), we have:_
1. _for every_ \(y\in\mathbb{R}^{3}\)_,_ \(\phi_{u(\cdot+y)}=\phi_{u}(\cdot+y)\)_;_
2. \(\phi_{u}\geq 0\)_;_
3. _for every_ \(r\in(3,+\infty]\)_,_ \(\phi_{u}\in L^{r}(\mathbb{R}^{3})\cap C_{0}(\mathbb{R}^{3})\)_;_
4. _for every_ \(r\in(\frac{3}{2},+\infty]\)_,_ \(\nabla\phi_{u}=\nabla\left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}\right)*|u|^{2}\in L ^{r}(\mathbb{R}^{3})\cap C_{0}(\mathbb{R}^{3})\)_;_
5. \(\phi_{u}\in\mathcal{D}\)_;_
6. \(\|\phi_{u}\|_{6}\leq C\|u\|^{2}\)_;_
* \(\phi_{u}\) _is the unique minimizer of the functional_ \[E(\phi):=\frac{1}{2}\|\nabla\phi\|_{2}^{2}+\frac{a^{2}}{2}\|\triangle\phi\|_{2}^ {2}-\int_{\mathbb{R}^{3}}\phi|u|^{2}dx,\ \ \text{for all }\phi\in\mathcal{D};\]
* _if_ \(v_{n}\rightharpoonup v\) _in_ \(H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\)_, then_ \(\phi_{v_{n}}\rightharpoonup\phi_{v}\) _in_ \(\mathcal{D}\) _and_ \(\int_{\mathbb{R}^{3}}\phi_{v_{n}}|v_{n}|^{2}dx\to\int_{\mathbb{R}^{3}}\phi_{v}|v|^{2}dx\);
* \(\phi_{tu}=t^{2}\phi_{u}\)_, for all_ \(t\in\mathbb{R}_{+}\)_;_
* \(\int_{\mathbb{R}^{3}}\phi_{u}|u|^{2}dx=\int_{\mathbb{R}^{3}}\int _{\mathbb{R}^{3}}\frac{1-e^{-\frac{|x-y|}{a}}}{|x-y|}|u(x)|^{2}|u(y)|^{2}dxdy \leq\frac{1}{a}\|u\|_{2}^{4}\)_._
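Property (10), in particular, follows from the elementary inequality \(1-e^{-s}\leq s\) for all \(s\geq 0\): taking \(s=\frac{|x-y|}{a}\) gives

\[\frac{1-e^{-\frac{|x-y|}{a}}}{|x-y|}\leq\frac{1}{a},\qquad\text{hence}\qquad\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{1-e^{-\frac{|x-y|}{a}}}{|x-y|}|u(x)|^{2}|u(y)|^{2}dxdy\leq\frac{1}{a}\|u\|_{2}^{4}.\]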
Therefore, the pair \((u,\phi)\in H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\times\mathcal{D}\) is a solution of \((\mathcal{DBP})\) if, and only if, for all \(u\in H^{1}(\mathbb{R}^{3},\mathbb{C}^{4})\), we have that \(\phi=\phi_{u}\) is a weak solution of the following nonlocal problem
\[-i\sum_{k=1}^{3}\alpha_{k}\partial_{k}u+[V(x)+q]\beta u+wu-\phi_{u}u=f(x,u),\ \ \text{in } \mathbb{R}^{3}.\] ( \[\mathcal{P}\] )
Next, we would like to mention that, from assumptions \((f_{1})-(f_{2})\) and Lemma 2.3, the existence of solutions for problem \((\mathcal{P})\) can be made via variational methods. In particular, the corresponding energy functional to problem \((\mathcal{P})\) is \(J:\mathbb{E}\longrightarrow\mathbb{R}\), which is defined by
\[J(u):=\frac{1}{2}\left(\|u^{+}\|^{2}-\|u^{-}\|^{2}\right)-\Gamma(u)-\int_{ \mathbb{R}^{3}}F(x,u)dx,\ \ \text{for all }u\in\mathbb{E}, \tag{2.1}\]
where
\[\Gamma(u):=\frac{1}{4}\int_{\mathbb{R}^{3}}\phi_{u}|u|^{2}dx=\frac{1}{4}\int_ {\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{1-e^{-\frac{|x-y|}{a}}}{|x-y|}|u(x )|^{2}|u(y)|^{2}dxdy.\]
The functional \(J\) belongs to \(C^{1}\left(\mathbb{E},\mathbb{R}\right)\) and a standard argument shows that critical points of \(J\) are solutions of problem \((\mathcal{P})\) (see [17, 54]).
Once we apply variational methods to the problem \((\mathcal{P})\), we encounter the nonlocal term \(\int_{\mathbb{R}^{3}}\phi_{u}|u|^{2}dx\), which is homogeneous of degree \(4\). Thus, the natural corresponding Ambrosetti-Rabinowitz (AR) condition on \(f(x,u)\) is the following:
* There exists \(\Theta>4\) such that \(0<\Theta F(x,u)\leq f(x,u)|u|\), for a.a. \(x\in\mathbb{R}^{3}\) and all \(|u|>0\).
Therefore, our assumption \((f_{4})\) is weaker than the (AR) condition. Indeed, the following examples satisfy assumptions \((f_{1})-(f_{4})\) but not the (AR) condition:
* \(f(x,u)=b(x)|u|\ln(1+|u|)\), where \(b\in C(\mathbb{R}^{3},\mathbb{R})\) and is \(1\)-periodic in \(x_{k},\ k=1,2,3\),
* \(F(x,u)=b(x)\left(|u|^{\nu}+(\nu-2)|u|^{\nu-\varepsilon}\mathrm{sin}^{2}\left( \frac{|u|^{\varepsilon}}{\varepsilon}\right)\right)\), where \(\nu>2\) and \(0<\varepsilon<\frac{6-\nu}{2}\).
## 3 The linking structure
In this section, we discuss the linking structure of the functional \(J\). First, let \(r>0\), set \(B_{r}:=\{u\in\mathbb{E}:\ \|u\|\leq r\}\) and \(S_{r}:=\{u\in\mathbb{E}:\ \|u\|=r\}\). Let us observe that from assumptions \((f_{1})-(f_{4})\), for any \(\varepsilon>0\), there exist positive constants \(r_{\varepsilon}>0\) and \(C_{\varepsilon}\) such that
\[\left\{\begin{array}{l}|f(x,u)|\leq\varepsilon|u|,\ \mbox{for all}\ 0\leq|u|\leq r _{\varepsilon}\ \mbox{and all}\ x\in\mathbb{R}^{3},\\ \\ |f(x,u)|\leq\varepsilon|u|+C_{\varepsilon}|u|^{p-1},\ \mbox{for all}\ (x,u)\in \mathbb{R}^{3}\times\mathbb{C}^{4},\\ \\ \mbox{and}\\ \\ |F(x,u)|\leq\varepsilon|u|^{2}+C_{\varepsilon}|u|^{p},\ \mbox{for all}\ (x,u)\in \mathbb{R}^{3}\times\mathbb{C}^{4},\end{array}\right. \tag{3.1}\]
where \(p\in(2,3)\).
By a standard argument of [54] and arguing as in [58, Lemma 3.1], we get the following result.
**Lemma 3.1**.: _Under the assumption of Theorem 1.1, we have_
1. \(\Gamma\) _and_ \(\int_{\mathbb{R}^{3}}F(x,u)dx\) _are non-negative, weakly sequentially lower semi-continuous;_
2. \(\Gamma^{{}^{\prime}}\) _and_ \(\int_{\mathbb{R}^{3}}f(x,u)dx\) _are weakly sequentially continuous;_
3. _there exists_ \(\xi>0\) _such that for any_ \(c>0\)__ \[\|u\|\leq\xi\|u^{+}\|,\ \mbox{ for all}\ u\in J_{c},\] _where_ \(J_{c}:=\{u\in\mathbb{E}:\ \ J(u)\geq c\}\)_._
**Proposition 3.2**.: _Assume that assumptions of Theorem 1.1 are satisfied. Then, there exists \(r>0\) such that_
\[\rho:=\inf J(S_{r}\cap\mathbb{E}^{+})>0.\]
Proof.: From Lemma 2.2, we have
\[\|u\|_{p}^{p}\leq c_{p}\|u\|^{p},\ \mbox{for all}\ u\in\mathbb{E}\ \mbox{and all}\ p\in(2,3).\]
It follows, by (3.1) and Lemma 2.3-(10), that
\[J(u) =\frac{1}{2}\left(\|u^{+}\|^{2}-\|u^{-}\|^{2}\right)-\Gamma(u)- \int_{\mathbb{R}^{3}}F(x,u)dx\] \[\geq\frac{1}{2}\|u\|^{2}-C_{1}\|u\|^{4}-\varepsilon C_{2}\|u\|^{ 2}-C_{\varepsilon}c_{p}\|u\|^{p}\] \[=\left(\frac{1}{2}-\varepsilon C_{2}\right)\|u\|^{2}-C_{1}\|u\|^{ 4}-C_{\varepsilon}c_{p}\|u\|^{p}.\]
Choosing \(\varepsilon=\frac{1}{4C_{2}}\) in the previous inequality and using the fact that \(p\in(2,3)\), we can find \(r>0\) sufficiently small such that
\[J(u)>0,\ \mbox{for all}\ u\in S_{r}.\]
Thus, the desired result holds.
As a consequence of Lemma 2.1, we have
\[q\leq\overline{\Lambda}\leq q+\sup_{\mathbb{R}^{3}}V,\]
where \(\overline{\Lambda}:=\inf\sigma(A)\cap[0,+\infty)\).
Next, we set \(\overline{\mu}:=2\overline{\Lambda}\) and we take a number \(\mu\) satisfying
\[\overline{\Lambda}\leq\mu\leq\overline{\mu}. \tag{3.2}\]
Since the operator \(A\) is invariant under the action of \(\mathbb{Z}^{3}\) (by \((A_{2})\)), the subspace \(Y_{0}:=\left(\mathcal{F}_{\mu}-\mathcal{F}_{0}\right)L^{2}\) is infinite-dimensional, and
\[\overline{\Lambda}\|u\|_{2}^{2}\leq\|u\|^{2}\leq\mu\|u\|_{2}^{2},\ \ \text{for all}\ u\in Y_{0}. \tag{3.3}\]
For any finite-dimensional subspace \(Y\) of \(Y_{0}\), we set \(\mathbb{E}_{Y}:=\mathbb{E}^{-}\oplus Y\).
**Proposition 3.3**.: _Assume that the assumptions of Theorem 1.1 are satisfied. Then, for any finite-dimensional subspace \(Y\) of \(Y_{0}\), \(\sup J(\mathbb{E}_{Y})<+\infty\), and there is \(R_{Y}>0\) such that_
\[J(u)<\inf J(B_{\delta}),\text{ for all }\ u\in\mathbb{E}_{Y}\ \text{ with }\|u\|\geq R_{Y}.\]
Proof.: It is sufficient to show that \(J(u)\longrightarrow-\infty\) as \(u\in\mathbb{E}_{Y}\), \(\|u\|\longrightarrow+\infty\). Arguing indirectly, assume that for some sequence \(\left\{u_{n}\right\}_{n\in\mathbb{N}}\) with \(\|u_{n}\|\longrightarrow+\infty\), there is \(M>0\) such that \(J(u_{n})\geq-M\) for all \(n\in\mathbb{N}\). Then, setting \(v_{n}:=\frac{u_{n}}{\|u_{n}\|}\), we have \(\|v_{n}\|=1,v_{n}\rightharpoonup v,v_{n}^{-}\rightharpoonup v^{-}\), and \(v_{n}^{+}\to v^{+}\in Y\).
Using Lemma 3.1-(1), we find that
\[\frac{1}{2}\left(\|v_{n}^{+}\|^{2}-\|v_{n}^{-}\|^{2}\right)\geq\frac{J(u_{n}) }{\|u_{n}\|^{2}}\geq\frac{-M}{\|u_{n}\|^{2}}, \tag{3.4}\]
which gives that
\[\frac{1}{2}\|v_{n}^{-}\|^{2}\leq\frac{1}{2}\|v_{n}^{+}\|^{2}+\frac{M}{\|u_{n} \|^{2}}.\]
Since \(\|v_{n}^{+}\|^{2}+\|v_{n}^{-}\|^{2}=\|v_{n}\|^{2}=1\), this gives \(\|v_{n}^{+}\|^{2}\geq\frac{1}{2}+o(1)\); as \(v_{n}^{+}\to v^{+}\) strongly in the finite-dimensional space \(Y\), we conclude that \(v^{+}\not\equiv 0\).
From assumption \((f_{2})\), there is \(r>0\) such that
\[F(x,u)\geq\overline{\mu}|u|^{2},\ \text{if}\ |u|\geq r. \tag{3.5}\]
It follows, by (3.2) and (3.3), that
\[\|v^{+}\|^{2}-\|v^{-}\|^{2}-\overline{\mu}\int_{\mathbb{R}^{3}}|v |^{2}dx =\|v^{+}\|^{2}-\|v^{-}\|^{2}-\overline{\mu}\|v\|_{2}^{2}\] \[\leq\mu\|v^{+}\|_{2}^{2}-\|v^{-}\|^{2}-\overline{\mu}\|v^{-}\|_{ 2}^{2}-\overline{\mu}\|v^{+}\|_{2}^{2}\] \[\leq-\left((\overline{\mu}-\mu)\|v^{+}\|_{2}^{2}+\|v^{-}\|^{2}\right)\] \[<0. \tag{3.6}\]
Therefore, there is a bounded domain \(\Omega\subset\mathbb{R}^{3}\) such that
\[\|v^{+}\|^{2}-\|v^{-}\|^{2}-\overline{\mu}\int_{\Omega}|v|^{2}dx<0. \tag{3.7}\]
Using (3.5) and Lemma 3.1-(1), we see that
\[\frac{J(u_{n})}{\|u_{n}\|^{2}} \leq\frac{1}{2}\left(\|v_{n}^{+}\|^{2}-\|v_{n}^{-}\|^{2}\right)- \int_{\Omega}\frac{F(x,u_{n})}{\|u_{n}\|^{2}}dx\] \[=\frac{1}{2}\left(\|v_{n}^{+}\|^{2}-\|v_{n}^{-}\|^{2}-\overline{\mu}\int_{\Omega}|v_{n}|^{2}dx\right)-\int_{\Omega}\frac{F(x,u_{n})- \frac{\overline{\mu}}{2}|u_{n}|^{2}}{\|u_{n}\|^{2}}dx\] \[\leq\frac{1}{2}\left(\|v_{n}^{+}\|^{2}-\|v_{n}^{-}\|^{2}- \overline{\mu}\int_{\Omega}|v_{n}|^{2}dx\right)+\frac{\overline{\mu}r ^{2}|\Omega|}{2\|u_{n}\|^{2}},\]
where \(|\Omega|\) denotes the Lebesgue measure of \(\Omega\). Thus, (3.4) and (3.7) imply that
\[0 \leq\lim_{n\to+\infty}\left(\frac{1}{2}\|v_{n}^{+}\|^{2}-\frac{1 }{2}\|v_{n}^{-}\|^{2}-\int_{\Omega}\frac{F(x,u_{n})}{\|u_{n}\|^{2}}dx\right)\] \[\leq\frac{1}{2}\left(\|v^{+}\|^{2}-\|v^{-}\|^{2}-\overline{\mu} \int_{\Omega}|v|^{2}dx\right)\] \[<0,\]
which is a contradiction.
As a consequence, we have the following result.
**Proposition 3.4**.: _Under the assumptions of Theorem 1.1, letting \(e\in Y_{0}\) such that \(\|e\|=1\), there exists \(R_{0}>r>0\) such that_
\[\sup J(\partial Q)\leq\rho,\]
_where \(Q:=\{u=u^{-}+te:\ \ u^{-}\in\mathbb{E}^{-},t\geq 0,\ \|u\|\leq R_{0}\}\) and \(\rho\) given in Proposition 3.2._
## 4 The Cerami sequence
In this section, we consider the boundedness of the Cerami sequence. Firstly, recall that a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) is a Cerami sequence at the level \(c\) (\((C)_{c}\)-sequence for short) for the functional \(J\) if
\[J(u_{n})\longrightarrow c\ \ \text{and}\ \ (1+\|u_{n}\|)J^{{}^{\prime}}(u_{n}) \longrightarrow 0,\ \ \text{as}\ \ n\longrightarrow+\infty.\]
Before turning to the main goal of this section, we give a crucial result on the nonlocal term \(\Gamma\).
**Lemma 4.1**.: _For any \(u\in\mathbb{E}\setminus\{0\}\), there exists \(C>0\) such that_
\[\Gamma^{{}^{\prime}}(u)u>0\ \ \text{and}\ \ \|\Gamma^{{}^{\prime}}(u)\|_{ \mathbb{E}^{*}}\leq C\left(\sqrt{\Gamma^{{}^{\prime}}(u)u}+\Gamma^{{}^{\prime} }(u)u\right).\]
Proof.: It is clear that
\[\Gamma^{{}^{\prime}}(u)u=4\Gamma(u)>0,\ \ \text{for all}\ u\in\mathbb{E} \setminus\{0\}.\]
This shows the first inequality, so it remains to prove the second one. Since \(\Gamma\) is the unique nonlocal term in the functional \(J\), the argument in Ackermann [2] yields that
\[\int_{\mathbb{R}^{3}}\phi_{u}|v|^{2}dx =\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}*|u|^{ 2}\right)|v|^{2}dx\] \[\leq C_{1}\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|} {a}}}{|x|}*|u|^{2}\right)|u|^{2}dx\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac {|x|}{a}}}{|x|}*|v|^{2}\right)|v|^{2}dx\right)^{\frac{1}{2}}, \tag{4.1}\]
for all \(u,v\in\mathbb{E}\) and some constant \(C_{1}>0\).
It follows, by Lemma 2.3-(10), the Hölder inequality and the Sobolev embedding theorem, that
\[\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}*|u|^{2 }\right)|uv|dx\] \[\leq\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|}{a}}}{ |x|}*|u|^{2}\right)|u|^{2}dx\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^{3}} \left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}*|u|^{2}\right)|v|^{2}dx\right)^{\frac{1 }{2}}\] \[\leq C_{2}\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|} {a}}}{|x|}*|u|^{2}\right)|u|^{2}dx\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^ {3}}\left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}*|u|^{2}\right)|u|^{2}dx\right)^{ \frac{1}{4}}\] \[\quad\times\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x| }{a}}}{|x|}*|v|^{2}\right)|v|^{2}dx\right)^{\frac{1}{4}}\] \[\leq C_{2}\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|} {a}}}{|x|}*|u|^{2}\right)|u|^{2}dx\right)^{\frac{3}{4}}\left(\frac{1}{a}\|v\|_{2}^{4}\right)^{\frac{1}{4}}\] \[\leq C_{3}\left(\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|} {a}}}{|x|}*|u|^{2}\right)|u|^{2}dx\right)^{\frac{3}{4}}\|v\|. \tag{4.2}\]
Thus,
\[|\Gamma^{{}^{\prime}}(u)v|\leq C_{3}\left(\Gamma^{{}^{\prime}}(u)u\right)^{ \frac{3}{4}}\|v\|\leq C\left(\sqrt{\Gamma^{{}^{\prime}}(u)u}+\Gamma^{{}^{ \prime}}(u)u\right)\|v\|.\]
This implies the second inequality.
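For completeness, we record the elementary kernel bound used in the last two lines of (4.2) (a one-line verification added for the reader): since \(1-e^{-s}\leq s\) for all \(s\geq 0\),
\[\frac{1-e^{-\frac{|x|}{a}}}{|x|}\leq\frac{1}{a}\quad\text{for all }x\neq 0,\qquad\text{so}\qquad\int_{\mathbb{R}^{3}}\left(\frac{1-e^{-\frac{|x|}{a}}}{|x|}*|v|^{2}\right)|v|^{2}dx\leq\frac{1}{a}\|v\|_{2}^{4},\]
which, combined with the embedding \(\|v\|_{2}\leq C\|v\|\), yields the factor \(\left(\frac{1}{a}\|v\|_{2}^{4}\right)^{\frac{1}{4}}\leq a^{-\frac{1}{4}}C\|v\|\).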
**Proposition 4.2**.: _Assume that the assumptions of Theorem 1.1 hold. If \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) is a \((C)_{c}\)-sequence for \(J\), that is_
\[J(u_{n})\longrightarrow c\ \ \text{and}\ \ (1+\|u_{n}\|)J^{{}^{\prime}}(u_{n}) \longrightarrow 0,\ \ \text{as}\ \ n\longrightarrow+\infty.\]
_Then, \(\{u_{n}\}_{n\in\mathbb{N}}\) is bounded in \(\mathbb{E}\)._
Proof.: Let \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) be a \((C)_{c}\)-sequence for \(J\), that is
\[J(u_{n})\longrightarrow c\ \ \text{and}\ \ (1+\|u_{n}\|)J^{{}^{\prime}}(u_{n}) \longrightarrow 0,\ \ \text{as}\ \ n\longrightarrow+\infty.\]
Then,
\[J(u_{n})\longrightarrow c\ \ \text{and}\ \ \ J^{{}^{\prime}}(u_{n})u_{n} \longrightarrow 0,\ \ \text{as}\ \ n\longrightarrow+\infty. \tag{4.3}\]
By (4.3) and assumption \((f_{4})\), for \(n\) sufficiently large, there exists \(C_{0}>0\) such that
\[C_{0} \geq J(u_{n})-\frac{1}{2}J^{{}^{\prime}}(u_{n})u_{n}=\Gamma(u_{n} )+\int_{\mathbb{R}^{3}}\widetilde{F}(x,u_{n})dx\] \[\geq\int_{\mathbb{R}^{3}}\widetilde{F}(x,u_{n})dx. \tag{4.4}\]
Arguing by contradiction, we assume that \(\|u_{n}\|\longrightarrow+\infty\); then \(\|u_{n}\|\geq 1\) for \(n\) large enough. Setting \(v_{n}:=\frac{u_{n}}{\|u_{n}\|}\in\mathbb{E}\), we have \(\|v_{n}\|=1\) and
\[\|v_{n}\|_{r}\leq C_{r}\|v_{n}\|=C_{r},\ \ \text{for all}\ r\in[2,2^{*}). \tag{4.5}\]
Moreover, up to a subsequence, we can assume that
\[v_{n}\rightharpoonup v\text{ in }\mathbb{E}\text{ and }\ v_{n}(x)\to v(x)\text{ a.e. }x\in\mathbb{R}^{3}.\]
On the other hand, for \(n\) large enough, we observe that
\[J^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-}) =\|u_{n}\|^{2}-\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-})- \int_{\mathbb{R}^{3}}f(x,u_{n})(u_{n}^{+}-u_{n}^{-})dx\] \[=\|u_{n}\|^{2}\left(1-\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+} -u_{n}^{-})}{\|u_{n}\|^{2}}-\int_{\mathbb{R}^{3}}\frac{f(x,u_{n})(u_{n}^{+}-u_ {n}^{-})}{\|u_{n}\|^{2}}dx\right)\] \[=\|u_{n}\|^{2}\left(1-\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+} -u_{n}^{-})}{\|u_{n}\|^{2}}-\int_{\mathbb{R}^{3}}\frac{f(x,u_{n})}{|u_{n}|}|v _{n}|(v_{n}^{+}-v_{n}^{-})dx\right), \tag{4.6}\]
which is equivalent to
\[\frac{J^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-})}{\|u_{n}\|^{2}}=1-\frac{ \Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-})}{\|u_{n}\|^{2}}-\int_{ \mathbb{R}^{3}}\frac{f(x,u_{n})}{|u_{n}|}|v_{n}|(v_{n}^{+}-v_{n}^{-})dx. \tag{4.7}\]
It follows, by (4.3), that
\[\lim_{n\to+\infty}\left(\int_{\mathbb{R}^{3}}\frac{f(x,u_{n})}{|u_{n}|}|v_{n} |(v_{n}^{+}-v_{n}^{-})dx+\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{- })}{\|u_{n}\|^{2}}\right)=1. \tag{4.8}\]
Now, we set for \(r\geq 0\)
\[\mathfrak{F}(r):=\inf\left\{\widetilde{F}(x,s):\ x\in\mathbb{R}^{3}\text{ and }s\in\mathbb{C}^{4}\text{ with }|s|\geq r\right\}.\]
By \((f_{4})\), we have
\[\mathfrak{F}(r)>0,\text{ for all }r\text{ large, and }\mathfrak{F}(r)\to+\infty\text{ as }r\to+\infty.\]
For \(0\leq a<b\leq+\infty\) we define
\[A_{n}(a,b):=\left\{x\in\mathbb{R}^{3}:\ a\leq|u_{n}(x)|<b\right\}\]
and
\[c_{a}^{b}:=\inf\left\{\frac{\widetilde{F}(x,s)}{|s|^{2}}:\ x\in\mathbb{R}^{3} \text{ and }s\in\mathbb{C}^{4}\setminus\{0\}\text{ with }a\leq|s|<b\right\}.\]
Note that
\[\widetilde{F}(x,u_{n})\geq c_{a}^{b}|u_{n}|^{2},\ \text{ for all }x\in A_{n}(a,b). \tag{4.9}\]
It follows, by (4.4), that
\[C_{0} \geq\int_{\mathbb{R}^{3}}\widetilde{F}(x,u_{n})dx\] \[=\int_{A_{n}(0,a)}\widetilde{F}(x,u_{n})dx+\int_{A_{n}(a,b)} \widetilde{F}(x,u_{n})dx+\int_{A_{n}(b,+\infty)}\widetilde{F}(x,u_{n})dx\] \[\geq\int_{A_{n}(0,a)}\widetilde{F}(x,u_{n})dx+c_{a}^{b}\int_{A_{ n}(a,b)}|u_{n}|^{2}dx+\mathfrak{F}(b)|A_{n}(b,+\infty)| \tag{4.10}\]
for \(n\) large enough.
Let \(0<\varepsilon<\frac{1}{3}\). By assumption \((f_{3})\), there exists \(a_{\varepsilon}>0\) such that
\[|f(x,s)|\leq\frac{\varepsilon}{3C_{2}^{2}}|s|,\ \ \text{for all}\ \ |s|\leq a_{\varepsilon}, \tag{4.11}\]
where \(C_{2}\) is the constant from (4.5) with \(r=2\). From (4.11), (4.5) and the Cauchy-Schwarz inequality, we obtain
\[\left|\int_{A_{n}(0,a_{\varepsilon})}\frac{f(x,u_{n})}{|u_{n}|} |v_{n}|(v_{n}^{+}-v_{n}^{-})dx\right| \leq\int_{A_{n}(0,a_{\varepsilon})}\frac{|f(x,u_{n})|}{|u_{n}|}|v _{n}||v_{n}^{+}-v_{n}^{-}|dx\] \[\leq\frac{\varepsilon}{3C_{2}^{2}}\int_{\mathbb{R}^{3}}|v_{n}||v_{n}^{+}-v_{n}^{-}|dx\] \[\leq\frac{\varepsilon}{3C_{2}^{2}}\|v_{n}\|_{2}\|v_{n}^{+}-v_{n}^{-}\|_{2}\] \[\leq\frac{\varepsilon}{3C_{2}^{2}}C_{2}^{2}\] \[=\frac{\varepsilon}{3},\ \ \text{for all}\ n\in\mathbb{N}. \tag{4.12}\]
Now, exploiting (4.10) and assumption \((f_{4})\), we see that
\[C^{{}^{\prime}}\geq\int_{A_{n}(b,+\infty)}\widetilde{F}(x,u_{n})dx\geq \mathfrak{F}(b)\left|A_{n}(b,+\infty)\right|,\]
where \(C^{{}^{\prime}}>0\). It follows, using the fact that \(\mathfrak{F}(b)\to+\infty\) as \(b\to+\infty\), that
\[|A_{n}(b,+\infty)|\to 0,\ \ \text{as}\ b\to+\infty,\ \ \text{uniformly in}\ n. \tag{4.13}\]
Set \(\sigma^{{}^{\prime}}:=\frac{\sigma}{\sigma-1}\), where \(\sigma\) is defined in \((f_{4})\). Since \(\sigma>\frac{3}{2}\), one can check that \(2\sigma^{{}^{\prime}}\in(2,2^{*})\). Now, let \(\tau\in(2\sigma^{{}^{\prime}},2^{*})\). Using (4.5), the Hölder inequality and (4.13), for \(b\) large, we find that
\[\left(\int_{A_{n}(b,+\infty)}|v_{n}|^{2\sigma^{{}^{\prime}}}dx \right)^{\frac{1}{\sigma^{{}^{\prime}}}} \leq|A_{n}(b,+\infty)|^{\frac{\tau-2\sigma^{{}^{\prime}}}{\tau \sigma^{{}^{\prime}}}}\left(\int_{A_{n}(b,+\infty)}|v_{n}|^{2\sigma^{{}^{ \prime}}\frac{\tau}{2\sigma^{{}^{\prime}}}}dx\right)^{\frac{2}{\tau}}\] \[\leq|A_{n}(b,+\infty)|^{\frac{\tau-2\sigma^{{}^{\prime}}}{\tau \sigma^{{}^{\prime}}}}\left(\int_{A_{n}(b,+\infty)}|v_{n}|^{\tau}dx\right)^{ \frac{2}{\tau}}\] \[\leq|A_{n}(b,+\infty)|^{\frac{\tau-2\sigma^{{}^{\prime}}}{\tau \sigma^{{}^{\prime}}}}C_{\tau}\|v_{n}\|^{2}\] \[=|A_{n}(b,+\infty)|^{\frac{\tau-2\sigma^{{}^{\prime}}}{\tau \sigma^{{}^{\prime}}}}C_{\tau}\] \[\leq\frac{\varepsilon}{3},\ \ \text{uniformly in}\ n. \tag{4.14}\]
By \((f_{4})\), the Hölder inequality, (4.4) and (4.14), we can choose \(b_{\varepsilon}\geq r_{0}\) large enough so that
\[\left|\int_{A_{n}(b_{\varepsilon},+\infty)}\frac{f(x,u_{n})}{|u_{n} |}|v_{n}|(v_{n}^{+}-v_{n}^{-})dx\right|\] \[\leq\int_{A_{n}(b_{\varepsilon},+\infty)}\frac{|f(x,u_{n})|}{|u_{ n}|}|v_{n}||v_{n}^{+}-v_{n}^{-}|dx\] \[\leq\left(\int_{A_{n}(b_{\varepsilon},+\infty)}\left|\frac{f(x,u_ {n})}{|u_{n}|}\right|^{\sigma}dx\right)^{\frac{1}{\sigma}}\left(\int_{A_{n}(b _{\varepsilon},+\infty)}(|v_{n}||v_{n}^{+}-v_{n}^{-}|)^{2\sigma^{\prime}}dx \right)^{\frac{1}{\sigma^{\prime}}}\] \[\leq\left(\widetilde{C}\int_{A_{n}(b_{\varepsilon},+\infty)} \widetilde{F}(x,u_{n})dx\right)^{\frac{1}{\sigma}}\left(\int_{A_{n}(b_{ \varepsilon},+\infty)}|v_{n}|^{2\sigma^{\prime}}dx\right)^{\frac{1}{\sigma^{ \prime}}}\] \[\leq\frac{\varepsilon}{3},\ \ \text{uniformly in $n$}. \tag{4.15}\]
Next, from (4.10), we have
\[\int_{A_{n}(a_{\varepsilon},b_{\varepsilon})}|v_{n}|^{2}dx =\frac{1}{\|u_{n}\|^{2}}\int_{A_{n}(a_{\varepsilon},b_{ \varepsilon})}|u_{n}|^{2}dx\] \[\leq\frac{C^{{}^{\prime\prime}}}{c_{a_{\varepsilon}}^{b_{ \varepsilon}}\|u_{n}\|^{2}}\longrightarrow 0\ \text{ as $n\longrightarrow+\infty$}, \tag{4.16}\]
where \(C^{{}^{\prime\prime}}\) is a positive constant independent from \(n\).
Since \(\frac{f(x,s)}{|s|}\) is a continuous function on \(a_{\varepsilon}\leq|s|\leq b_{\varepsilon}\), there exists \(C>0\), depending on \(a_{\varepsilon}\) and \(b_{\varepsilon}\) but independent of \(n\), such that
\[|f(x,u_{n})|\leq C|u_{n}|,\ \text{ for all $x\in A_{n}(a_{\varepsilon},b_{ \varepsilon})$}. \tag{4.17}\]
Using (4.16) and (4.17), we can find \(n_{0}>0\) such that
\[\left|\int_{A_{n}(a_{\varepsilon},b_{\varepsilon})}\frac{f(x,u_{ n})}{|u_{n}|}|v_{n}|(v_{n}^{+}-v_{n}^{-})dx\right| \leq\int_{A_{n}(a_{\varepsilon},b_{\varepsilon})}\frac{f(x,u_{ n})}{|u_{n}|}|v_{n}||v_{n}^{+}-v_{n}^{-}|\] \[\leq C\int_{A_{n}(a_{\varepsilon},b_{\varepsilon})}|v_{n}|^{2}dx\] \[\leq C\frac{C^{{}^{\prime\prime}}}{c_{a_{\varepsilon}}^{b_{ \varepsilon}}\|u_{n}\|^{2}}\] \[\leq\frac{\varepsilon}{3},\ \ \text{for all $n\geq n_{0}$}. \tag{4.18}\]
Putting together (4.12), (4.15) and (4.18), we obtain
\[\left|\int_{\mathbb{R}^{3}}\frac{f(x,u_{n})}{|u_{n}|}|v_{n}|(v_{n}^{+}-v_{n}^{-})dx\right|\leq\varepsilon,\ \text{ for all }n\geq n_{0}.\]
It follows, by (4.8), that
\[\lim_{n\rightarrow+\infty}\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{ -})}{\|u_{n}\|^{2}}=1. \tag{4.19}\]
On the other hand, since (4.4) also bounds the nonlocal term, \(\Gamma(u_{n})\leq C_{0}\), we easily obtain
\[\lim_{n\to+\infty}\frac{\Gamma(u_{n})}{\|u_{n}\|^{2}}=0. \tag{4.20}\]
Moreover, by Lemma 4.1, we have
\[\left|\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-})}{\|u_{ n}\|^{2}}\right| \leq\frac{\|\Gamma^{{}^{\prime}}(u_{n})\|_{\mathbb{E}^{*}}\|u_{n} ^{+}-u_{n}^{-}\|}{\|u_{n}\|^{2}}\] \[\leq C_{3}\left|\frac{\left(\sqrt{\Gamma^{{}^{\prime}}(u_{n})u_{ n}}+\Gamma^{{}^{\prime}}(u_{n})u_{n}\right)\|u_{n}^{+}-u_{n}^{-}\|}{\|u_{n}\|^{2}}\right|\] \[\leq C_{4}\left|\frac{\sqrt{\Gamma^{{}^{\prime}}(u_{n})u_{n}}+ \Gamma^{{}^{\prime}}(u_{n})u_{n}}{\|u_{n}\|}\right|\] \[=C_{4}\left(\frac{1}{\sqrt{\|u_{n}\|}}\sqrt{\frac{4\Gamma(u_{n})} {\|u_{n}\|}}+\frac{4\Gamma(u_{n})}{\|u_{n}\|}\right)\longrightarrow 0,\ \ \text{as}\ n \longrightarrow+\infty. \tag{4.21}\]
Thus,
\[\lim_{n\to+\infty}\frac{\Gamma^{{}^{\prime}}(u_{n})(u_{n}^{+}-u_{n}^{-})}{\|u _{n}\|^{2}}=0,\]
which contradicts (4.19). Therefore, \(\{u_{n}\}_{n\in\mathbb{N}}\) is bounded in \(\mathbb{E}\). This ends the proof.
Let \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) be a \((C)_{c}\)-sequence of the functional \(J\), by the previous proposition, the sequence \(\{u_{n}\}_{n\in\mathbb{N}}\) is bounded in \(\mathbb{E}\). Since \(\mathbb{E}\) is a reflexive space, up to a subsequence, we may find \(u\in\mathbb{E}\) such that
\[u_{n}\rightharpoonup u,\ \text{in}\ \mathbb{E},\]
\[u_{n}\to u,\ \text{in}\ L^{r}_{\text{loc}}\ \text{for all}\ r\in(1,3),\]
and
\[u_{n}(x)\to u(x),\ \text{a.a.}\ x\in\mathbb{R}^{3}.\]
Moreover, it is clear that this \(u\) is a critical point of \(J\). Setting \(w_{n}:=u_{n}-u\), then
\[w_{n}\rightharpoonup 0\ \text{in}\ \mathbb{E}.\]
Arguing as in [58, Lemma 3.7] and [18], we can prove the following results on the sequence \(w_{n}\).
**Lemma 4.3**.: _Under the assumptions of Theorem 1.1, we have_
1. \(\lim_{n\to+\infty}\int_{\mathbb{R}^{3}}\left(F(x,u_{n})-F(x,u)-F(x,w_{n}) \right)dx=0;\)__
2. \(\lim_{n\to+\infty}\int_{\mathbb{R}^{3}}\left(f(x,u_{n})v-f(x,u)v-f(x,w_{n})v \right)dx=0\)_, for all_ \(v\in\mathbb{E}\)_;_
3. \(\lim_{n\to+\infty}\left(\Gamma(u_{n})-\Gamma(u)-\Gamma(w_{n})\right)=0\)_;_
4. \(\lim_{n\to+\infty}\left(\Gamma^{{}^{\prime}}(u_{n})v-\Gamma^{{}^{\prime}}(u)v -\Gamma^{{}^{\prime}}(w_{n})v\right)=0\)_, for all_ \(v\in\mathbb{E}\)_._
As a consequence,
**Lemma 4.4**.: _Under the assumptions of Theorem 1.1, one has, as \(n\longrightarrow+\infty\),_
1. \(J(w_{n})\longrightarrow c-J(u)\)_;_
2. \(J^{{}^{\prime}}(w_{n})\longrightarrow 0\)_._
Next, we denote the set of nontrivial critical points of the functional \(J\) by
\[\mathcal{J}:=\left\{u\in\mathbb{E}\setminus\{0\}:\ J^{{}^{\prime}}(u)=0\right\}.\]
**Proposition 4.5**.: _Under the assumptions of Theorem 1.1, the following assertions hold_
1. \(\vartheta:=\inf\{\|u\|:\ u\in\mathcal{J}\}>0;\)__
2. \(\theta:=\inf\{J(u):\ u\in\mathcal{J}\}>0.\)__
Proof.: We first prove (1). Let \(u\in\mathcal{J}\); then
\[J^{{}^{\prime}}(u)(u^{+}-u^{-})=\|u\|^{2}-\Gamma^{{}^{\prime}}(u)(u^{+}-u^{-}) -\int_{\mathbb{R}^{3}}f(x,u)(u^{+}-u^{-})dx=0. \tag{4.22}\]
It follows, by (3.1) and Lemma 2.3-(10), that
\[\|u\|^{2}\leq C\|u\|^{4}+\varepsilon\|u\|^{2}+C_{\varepsilon}\|u\|^{p}\]
for \(p\in(2,2^{*})\). Choosing \(\varepsilon=\frac{1}{2}\) in the previous inequality, we obtain
\[0<\frac{1}{2}\|u\|^{2}\leq C\|u\|^{4}+C_{\varepsilon}\|u\|^{p}.\]
Dividing by \(\|u\|^{2}\) gives \(\frac{1}{2}\leq C\|u\|^{2}+C_{\varepsilon}\|u\|^{p-2}\); since \(p>2\), this forces \(\|u\|\geq\vartheta\) for some \(\vartheta>0\) independent of \(u\in\mathcal{J}\).
(2) Arguing indirectly, suppose that there is a sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathcal{J}\) such that \(J(u_{n})\longrightarrow 0\) as \(n\longrightarrow+\infty.\) By the first assertion, \(\|u_{n}\|\geq\vartheta\). Clearly, \(u_{n}\) is a \((C)_{0}\)-sequence of \(J\), and hence is bounded by Proposition 4.2. Moreover, \(u_{n}\) is nonvanishing. By the translation invariance of \(J\), we can assume, up to a translation, that
\[u_{n}\rightharpoonup u\in\mathcal{J}.\]
Using Fatou's lemma and Lemma 3.1-(1), we infer that
\[0 =\lim_{n\to+\infty}J(u_{n})=\lim_{n\to+\infty}\left(J(u_{n})- \frac{1}{2}J^{{}^{\prime}}(u_{n})u_{n}\right)\] \[=\lim_{n\to+\infty}\left(\Gamma(u_{n})+\int_{\mathbb{R}^{3}} \widetilde{F}(x,u_{n})dx\right)\] \[\geq\Gamma(u)+\int_{\mathbb{R}^{3}}\widetilde{F}(x,u)dx\] \[>0,\]
which is a contradiction. This ends the proof.
Let \([r]\) denote the integer part of \(r\in\mathbb{R}\). As a consequence of Lemma 4.4 and Proposition 4.5, we have the following result (see [16, 36]).
**Proposition 4.6**.: _Assume that the assumptions of Theorem 1.1 are fulfilled, and let \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) be a \((C)_{c}\)-sequence of \(J\). Then, one of the following assertions holds_
1. \(u_{n}\longrightarrow 0\)_, and hence_ \(c=0\)_._
2. \(c\geq\theta\) _and there exist a positive integer_ \(\ell\leq[\frac{c}{\theta}]\)_, points_ \(\overline{u}_{1},\cdots,\overline{u}_{\ell}\in\mathcal{J}\)_, a subsequence denoted again by_ \(u_{n}\)_, and sequences_ \(\{a_{n}^{k}\}_{n\in\mathbb{N}}\subset\mathbb{Z}^{3}\)_,_ \(k=1,\cdots,\ell\) _such that_ \[\left\{\begin{array}{l}\left\|u_{n}-\sum_{k=1}^{\ell}a_{n}^{k}*\overline{u}_ {k}\right\|\longrightarrow 0\text{ as }n\longrightarrow+\infty,\\ \\ |a_{n}^{k}-a_{n}^{m}|\longrightarrow+\infty,\text{ for all }k\neq m\text{ as }n \longrightarrow+\infty,\\ \\ \text{and}\\ \\ \sum_{k=1}^{\ell}J(\overline{u}_{k})=c.\end{array}\right.\]
## 5 Proof of our main result
As usual, in variational problems, to prove the existence of weak solutions it is enough to find critical points of the energy functional associated with the problem. To this end, we shall use the following abstract theorem which is taken from [18].
First of all, we state some definitions and notations. Let \(\mathbb{W}\) be a Banach space with direct sum decomposition \(\mathbb{W}=\mathbb{X}\oplus\mathbb{Y}\) and corresponding projections \(P_{\mathbb{X}},P_{\mathbb{Y}}\) onto \(\mathbb{X},\mathbb{Y}\). Let \(\mathcal{S}\subset\mathbb{X}^{*}\) be a dense subset; for each \(s\in\mathcal{S}\) there is a semi-norm on \(\mathbb{W}\) defined by
\[p_{s}:\mathbb{W}\longrightarrow\mathbb{R},\ p_{s}(u):=|s(x)|+\|y\|,\ \text{ for all }u=x+y\in\mathbb{W}.\]
We denote by \(\mathcal{T}_{\mathcal{S}}\) the topology induced by the semi-norm family \(\{p_{s}\}_{s\in\mathcal{S}}\), and by \(w^{*}\) the weak\({}^{*}\)-topology on \(\mathbb{W}^{*}\). For a functional \(\Phi\in C^{1}(\mathbb{W},\mathbb{R})\) we write
\[\Phi_{c}:=\{u\in\mathbb{W}:\ \ \Phi(u)\geq c\}.\]
We may suppose that
1. for any \(c\in\mathbb{R}\), the superlevel set \(\Phi_{c}\) is \(\mathcal{T}_{\mathcal{S}}\)-closed and \(\Phi^{{}^{\prime}}:(\Phi_{c},\mathcal{T}_{\mathcal{S}})\longrightarrow( \mathbb{W}^{*},w^{*})\) is continuous;
2. for any \(c>0\), there exists \(\xi>0\) such that \(\|u\|<\xi\|P_{\mathbb{Y}}u\|\), for all \(u\in\Phi_{c}\);
3. there exists \(r>0\) such that \(\rho:=\inf\Phi(S_{r}\cap\mathbb{Y})>0\), where \(S_{r}:=\{u\in\mathbb{W}:\ \ \|u\|=r\}\).
**Theorem 5.1**.: _Let the assumptions \((\Phi_{1})-(\Phi_{3})\) be satisfied and suppose that there exist \(R>r>0\) and \(e\in\mathbb{Y}\) with \(\|e\|=1\) such that_
\[\sup\Phi(\partial Q)\leq\rho,\]
_where \(Q:=\{u=x+te:\ \ x\in\mathbb{X},t\geq 0,\ \|u\|\leq R\}\). Then, \(\Phi\) has a \((C)_{c}\)-sequence with_
\[\rho\leq c\leq\sup\Phi(Q).\]
Proof of Theorem 1.1.: Take \(\mathbb{W}=\mathbb{E}\), \(\mathbb{X}=\mathbb{E}^{-}\), \(\mathbb{Y}=\mathbb{E}^{+}\), and \(\Phi=J\) in the previous theorem. By Lemma 3.1 and Proposition 3.2, we see that \((\Phi_{1})-(\Phi_{3})\) are satisfied. Proposition 3.4 shows that \(J\) possesses the linking structure of Theorem 5.1. Therefore, there exists a \((C)_{c}\)-sequence \(\{u_{n}\}_{n\in\mathbb{N}}\subset\mathbb{E}\) of \(J\) at level \(c\). By Proposition 4.2, \(u_{n}\) is bounded in \(\mathbb{E}\). Let
\[\delta:=\limsup_{n\to+\infty}\sup_{y\in\mathbb{R}^{3}}\int_{B(y,1)}|u_{n}|^{2 }dx.\]
Here, we distinguish two possibilities for \(\delta\): \(\delta=0\) or \(\delta>0\).
If \(\delta=0\), by Lions' concentration compactness principle in [54, Lemma 1.21], we have that
\[u_{n}\longrightarrow 0\text{ in }L^{r},\text{ for all }r\in(2,3).\]
It follows, from (3.1) and Lemma 2.3-(10), that
\[\left.\begin{aligned} \int_{\mathbb{R}^{3}}f(x,u_{n})u_{n}dx& \longrightarrow 0,\\ \int_{\mathbb{R}^{3}}F(x,u_{n})dx& \longrightarrow 0,\\ \text{and}\quad\Gamma(u_{n})&\longrightarrow 0,\end{aligned}\right\}\text{ as }n\longrightarrow+\infty.\]
Consequently,
\[c =\lim_{n\to+\infty}J(u_{n})=\lim_{n\to+\infty}\left(J(u_{n})- \frac{1}{2}J^{{}^{\prime}}(u_{n})u_{n}\right)\] \[=\lim_{n\to+\infty}\left(\Gamma(u_{n})+\int_{\mathbb{R}^{3}} \widetilde{F}(x,u_{n})dx\right)\] \[=0,\]
which contradicts \(c\geq\rho>0\). Thus, \(\delta>0\).
Going if necessary to a subsequence, we may assume the existence of \(\{k_{n}\}_{n\in\mathbb{N}}\subset\mathbb{Z}^{3}\) such that
\[\int_{B(k_{n},1+\sqrt{3})}|u_{n}|^{2}dx>\frac{\delta}{2}.\]
Let us define \(v_{n}(x):=u_{n}(x+k_{n})\). Then,
\[\int_{B(0,1+\sqrt{3})}|v_{n}|^{2}dx>\frac{\delta}{2}. \tag{5.1}\]
Since \(J\) and \(J^{{}^{\prime}}\) are \(\mathbb{Z}^{3}\)-translation invariant, we obtain \(\|v_{n}\|=\|u_{n}\|\) and
\[J(v_{n})\longrightarrow c\ \text{ and }\ (1+\|v_{n}\|)J^{{}^{\prime}}(v_{n}) \longrightarrow 0,\ \text{ as }\ n\longrightarrow+\infty. \tag{5.2}\]
Passing to a subsequence, we have \(v_{n}\rightharpoonup v\) in \(\mathbb{E}\), \(v_{n}\to v\) in \(L^{r}_{\text{loc}}\) for all \(r\in[1,3)\) and \(v_{n}(x)\to v(x)\) a.e. on \(\mathbb{R}^{3}\). Hence, it follows, from (5.1) and (5.2), that \(J^{{}^{\prime}}(v)=0\) and \(v\not\equiv 0\). This shows that \(v\in\mathcal{J}\). Therefore, \(v\in\mathbb{E}\) is a nontrivial weak solution of problem \((\mathcal{P})\). Thus, \((v,\phi_{v})\) is a pair of solutions for system \((\mathcal{DBP})\). |
2303.02062 | Unusual coercivity and zero field stabilization of fully saturated magnetization in single crystals of LaCrGe$_3$ | LaCrGe$_3$ is an itinerant, metallic ferromagnet with a Curie temperature ($T_C$) of $\sim$ 86 K. Whereas LaCrGe$_3$ has been studied extensively as a function of pressure as an example of avoided ferromagnetic quantum criticality, questions about its ambient pressure ordered state remain; specifically, whether there is a change in the nature of the ferromagnetically ordered state below $T_C$ $\sim$ 86 K. We present anisotropic $M$($H$) isotherms, coupled with anisotropic AC susceptibility data, and demonstrate that LaCrGe$_3$ has a remarkable, low temperature coercivity associated with exceptionally sharp, complete magnetization reversals to and from fully polarized states. This coercivity is temperature dependent: it drops to zero in the 40 - 55 K region and reappears in the 70 - 85 K region. At low temperatures LaCrGe$_3$ has magnetization loops and behavior that have previously been associated with micromagnetic/nanocrystalline materials, not bulk, macroscopic samples. | M. Xu, S. L. Bud'ko, R. Prozorov, P. C. Canfield | 2023-03-03T16:30:58Z | http://arxiv.org/abs/2303.02062v1 | # Unusual coercivity and zero field stabilization of fully saturated magnetization in single crystals of LaCrGe\({}_{3}\)
###### Abstract
LaCrGe\({}_{3}\) is an itinerant, metallic ferromagnet with a Curie temperature (\(T_{C}\)) of \(\sim\) 86 K. Whereas LaCrGe\({}_{3}\) has been studied extensively as a function of pressure as an example of avoided ferromagnetic quantum criticality, questions about its ambient pressure ordered state remain; specifically, whether there is a change in the nature of the ferromagnetically ordered state below \(T_{C}\sim\) 86 K. We present anisotropic \(M(H)\) isotherms, coupled with anisotropic AC susceptibility data, and demonstrate that LaCrGe\({}_{3}\) has a remarkable, low temperature coercivity associated with exceptionally sharp, complete magnetization reversals to and from fully polarized states. This coercivity is temperature dependent: it drops to zero in the 40 - 55 K region and reappears in the 70 - 85 K region. At low temperatures LaCrGe\({}_{3}\) has magnetization loops and behavior that have previously been associated with micromagnetic/nanocrystalline materials, not bulk, macroscopic samples.
## I Introduction
Recently, LaCrGe\({}_{3}\) has become an intensively studied metallic, itinerant ferromagnet, in particular, due to its complex but comprehensible behavior under pressure. [1; 2; 3; 4; 5; 6] In addition to intriguing high pressure properties, the ambient pressure characteristics of LaCrGe\({}_{3}\) potentially make it a fertile ground to study different aspects of magnetic behavior.
LaCrGe\({}_{3}\) crystallizes in a hexagonal structure (space group \(P6_{3}/mmc\)). [7; 8] Samples of LaCrGe\({}_{3}\) can be grown in single crystal form [9] with a rod-like morphology (\(c\)-axis along the rod's axis) and masses up to several hundred mg. At ambient pressure LaCrGe\({}_{3}\) is a ferromagnet with some degree of itinerancy, with a Curie temperature of \(T_{C}\sim\) 86 K, magnetic moments ordered along the \(c\)-axis with a saturated moment of \(\mu_{S}\approx\) 1.2 \(\mu_{B}\)/Cr [7; 8; 9; 10] and an anisotropy field of 40 - 50 kOe. [3; 9] Although the ground state of LaCrGe\({}_{3}\) appears to be that of a simple ferromagnet, there are several experimental observations that require further consideration and understanding. The low field, temperature dependent magnetization, measured using a field-cooled protocol, has an unusual shape, exhibiting a small peak around 68 K. [9] Electrical transport measurements suggested two ferromagnetic phase transitions with \(T_{C}\sim\) 86 K and \(T_{x}\sim\) 71 K,[3] although no corresponding features were observed in thermodynamic measurements. [1; 9] Electron spin resonance [11] and AC susceptibility [10] measurements on polycrystalline samples reveal, in addition to \(T_{C}\), some anomalies in the 40 - 50 K temperature range. None of these features is understood, and together they strongly suggest the need for more detailed studies of the temperature and field dependent magnetization of LaCrGe\({}_{3}\).
In order to better identify possible changes in the ferromagnetically ordered state, we performed systematic, anisotropic magnetization measurements. As a result we have found that, for field applied along the \(c\)-axis, LaCrGe\({}_{3}\) manifests exceptionally sharp, square, hysteretic magnetization loops that imply that there is a discontinuous reversal of the fully saturated magnetization of a bulk sample at a well defined field. Of the many possible phenomena associated with ferromagnetism, the coherent magnetization reversal of a uniformly magnetized (single domain) sample is interesting for the apparent simplicity of the model that describes it [12] as well as for potential applications. In all cases known so far, the monodomain regime is found in small particles, typically below 100 nm in size, consistent with the theoretical consideration of the energy balance between the energy of stray fields (proportional to the particle's volume) around such a particle and the energy of domain wall formation (proportional to the cross-sectional area) [13; 14]. In macroscopic, bulk samples magnetic domains always form. In this paper we will show that bulk, single crystalline LaCrGe\({}_{3}\) appears to be a macroscopic manifestation of such a zero field, fully polarized state that undergoes a discontinuous and full reversal of magnetization at a well defined, finite, coercive field.
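This energy balance can be made quantitative in the standard continuum notation, with exchange stiffness \(A\), uniaxial anisotropy constant \(K\), and saturation magnetization \(M_{s}\); the expressions below are textbook estimates (SI units) quoted for orientation, not parameters derived for LaCrGe\({}_{3}\):
\[\delta_{w}\simeq\pi\sqrt{A/K},\qquad\sigma_{w}\simeq 4\sqrt{AK},\qquad d_{c}\simeq\frac{72\sqrt{AK}}{\mu_{0}M_{s}^{2}},\]
where \(\delta_{w}\) is the domain wall width, \(\sigma_{w}\) is the wall energy per unit area, and \(d_{c}\) is the critical diameter of a spherical particle below which a uniformly magnetized (single domain) state is energetically favored; for typical ferromagnets \(d_{c}\) indeed falls in the 10 - 100 nm range.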
## II Crystal growth and experimental method
Single crystals of LaCrGe\({}_{3}\) were grown in two steps from melts of La\({}_{18}\)Cr\({}_{12}\)Ge\({}_{70}\)[15; 16] using fritted Canfield Crucible Sets (CCS).[17; 18] We first heated the La\({}_{18}\)Cr\({}_{12}\)Ge\({}_{70}\) to 1150 \({}^{\circ}\)C and cooled to 950 \({}^{\circ}\)C over 50 - 100 hours. At 950 \({}^{\circ}\)C, a mixture of LaGe\({}_{2-x}\) plates and LaCrGe\({}_{3}\) rods was separated from the remaining liquid phase. This decanted liquid was subsequently resealed, heated to 1000 \({}^{\circ}\)C (so as to fully remelt it) and then slowly cooled from 950 \({}^{\circ}\)C to 825 \({}^{\circ}\)C over roughly 100 hours. At 825 \({}^{\circ}\)C the growth was decanted and the resulting single phase LaCrGe\({}_{3}\) crystals were separated from the excess liquid. These crystals were used in our study. All of the LaCrGe\({}_{3}\) data presented in this paper, with the
exception of figures 8 and 11 in the appendix, were taken on a specific single crystal of LaCrGe\({}_{3}\) shown in the inset of figure 1. Similar results are found for other single crystalline samples as discussed below and shown in figure 11, in the appendix.
LaCrGe\({}_{3}\) forms as metallic, rod-like, brittle single crystals. Powder x-ray diffraction measurements were made by grinding single crystals in an agate mortar and pestle and placing the powder onto a single crystalline silicon, zero background sample holder with a small amount of vacuum grease. Powder X-ray diffraction measurements were carried out using a Rigaku MiniFlex II powder diffractometer in Bragg-Brentano geometry with Cu K\(\alpha\) radiation (\(\lambda\) = 1.5406 A). The result of the powder X-ray diffraction measurement agrees well with the literature [7] and is shown in figure 8, in the appendix.
Temperature- and magnetic-field-dependent DC and VSM magnetization data were collected using Quantum Design (QD) Magnetic Property Measurement Systems (MPMS and MPMS3). Temperature- and magnetic-field-dependent DC magnetization measurements were taken for \(H\) parallel and perpendicular to the crystallographic _c_-axis by placing the rod-like sample between two collapsed plastic straws with a third, uncollapsed, straw providing support as a sheath on the outside, or by using a quartz sample holder. Samples were fixed on the straw or quartz sample holder by GE-7031-varnish. In VSM magnetization measurements a peak amplitude of 4 mm and an averaging time of 2 sec were used. The sample for VSM was glued to a quartz sample holder by GE-7031-varnish. AC susceptibility measurements were performed using the AC-option of the MPMS3 magnetometer for two orientations of the LaCrGe\({}_{3}\) crystal and for two frequencies, 7.57 Hz and 75.7 Hz, in a 5 Oe ac field and zero dc bias field (after a demagnetization procedure was applied to the superconducting magnet). These measurements were performed on cooling. For some of the magnetization measurements we needed to set a finite applied magnetic field to zero at base temperature without demagnetization; the remnant field of the superconducting coil for these measurements was measured by lead (285.2 mg, 6-9's grade Cominco American) and palladium (QD standard, 258.2 mg, Serial number: 18060702-058) in both the MPMS and MPMS3 units used. The remnant field value in the MPMS3 70 kOe magnet after setting the field to zero at base temperature from + 20 kOe was \(\sim\) -19 Oe. The remnant field for the MPMS 55 kOe magnet after setting the field to zero at base temperature from + 20 kOe was \(\sim\) -3 Oe. These remnant field values will be important when we discuss figures 7 and 12 below.
## III Magnetization measurements
Figure 1a shows the low temperature (1.8 K - 120 K), zero-field-cooled-warming (ZFCW) and field-cooled (FC) magnetization of the rod-like LaCrGe\({}_{3}\) single crystal (see inset) with a 100 Oe magnetic field applied parallel to the crystallographic _c_-axis (a) and perpendicular to the _c_-axis (i.e. \(H\ ||\ ab\)) (b). (Figure 9 in the appendix shows similar data for applied fields of 25 Oe, 50 Oe, and 100 Oe for comparison.) For \(H\ ||\ c\), the ferromagnetic transition, \(T_{C}\sim\) 86 K, and the peak around 70 K are similar to what was found at 50 Oe in reference [9]. Whereas the kink like feature in the ZFCW data is not unusual, often associated with domain wall motion upon warming, the feature near 70 K in the FC data that leads to a
Figure 1: Zero-field-cooled-warming (ZFCW) and field-cooled (FC) low temperature magnetization as a function of temperature for the LaCrGe\({}_{3}\) single crystal with a field of 100 Oe applied parallel (a) or perpendicular (b) to the crystallographic _c_-axis. Left inset shows 4 quadrants of magnetization at 5 K as a function of magnetic field applied parallel to the crystallographic _c_-axis; the hysteresis is very small and the coercive field is \(\sim\) 3 Oe (not resolvable on this scale). Right inset shows a picture of the measured LaCrGe\({}_{3}\) single crystal with the blue arrow denoting the crystallographic _c_-axis.
decrease of \(M(T)\) on further cooling is somewhat unusual. The inset of figure 1 shows a 5 K, 4-quadrant \(M(H)\) loop for fields applied parallel to the crystallographic _c_-axis up to 0.2 kOe. (Field increases from 0 Oe to 0.2 kOe, then decreases to -0.2 kOe and then returns to 0 kOe.) There is a very small hysteresis and the coercive field is \(\sim\) 3 Oe. Such a small hysteresis is not unusual for single crystalline ferromagnetic samples which often have very small bulk pinning. For \(H\)\(||\)\(ab\) there is a much smaller response, given that the \(c\)-axis is the axis of strong uniaxial anisotropy and \(H\)\(||\)\(c\) is the easy direction of the ferromagnetically ordered state.
To our surprise, we found that much larger and sharper (essentially vertical, discontinuous jumps) hysteresis exists for LaCrGe\({}_{3}\) for larger, H \(\geq\) 5 kOe, applied fields. In figure 2 we show 5 K \(M(H)\) loops for systematically increasing maximum applied field. For the data in each plot we demagnetized the system at 120 K (much higher than T\({}_{C}\) = 86 K), zero field cooled to 5 K, applied a given maximum field along the \(c\)-axis, and then collected a 4-quadrant (\(H_{max}\) to -\(H_{max}\) to \(H_{max}\)) \(M(H)\) curve. For maximum fields of 0.2 kOe and 1.0 kOe there are essentially reversible \(M(H)\) plots. For maximum applied fields of 1.5 and 2.0 kOe there are discontinuously sharp and increasingly hysteretic \(M(H)\) plots, and for maximum applied fields of 5.0 kOe and greater the coercive field saturates near \(\sim\)0.5 kOe. For all of these \(M(H)\) curves, the magnetization saturates near 1.1 \(\mu_{B}\)/Cr. This type of \(M(H)\) curve is very unusual for a bulk, macroscopic, single crystal. The fact that the system has its full magnetization preserved for \(\sim\pm\)0.5 kOe and then discontinuously switches to the same full magnetization with the opposite sign is more commonly seen for nano-crystals rather than bulk samples with dimensions measured in mm.
Such rectangular-looking magnetization loops are found in the phenomenological Stoner-Wohlfarth model for the magnetic field parallel to the anisotropy easy axis [13; 14]. Similar loops are also obtained in direct micromagnetic simulations [19] and analytical theories [20]. Experimentally, such \(M(H)\) loops are observed in elongated (shape anisotropy) and/or magneto-crystalline-anisotropic nanoparticles [21], but only at low temperatures, below the so-called blocking temperature [22], because above it a particle's magnetic moment can be easily flipped by thermal fluctuations, \(k_{B}T\)[21]. (Indeed, in our case of large crystals thermal activation is not relevant over a large temperature interval.) Experimentally, coherent magnetization reversal was directly observed in 25 nm nanoparticles using magnetic force microscopy (MFM) imaging, but it was shown that larger particles favor different modes (e.g. so-called vortex core switching) of the reversal [23]. In addition to a classical rotation of the Stoner-Wohlfarth type, nanoparticle magnetization reversal can also proceed via quantum tunneling of virtual domain walls; both were studied using \(\mu\)-SQUID measurements [24]. Of course, in macroscopic, bulk single crystals such as the LaCrGe\({}_{3}\) samples studied here, quantum switching is not feasible. In another work also using \(\mu\)-SQUID, not only was agreement with the Stoner-Wohlfarth model found, but it was also shown that surface pinning plays a crucial role in magnetization reversal [25]. In the absence of significant bulk pinning such surface pinning contributions may also be relevant for bulk crystals.
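For reference, the Stoner-Wohlfarth picture invoked here reduces to a single-particle energy density (quoted in CGS units as standard textbook background, not as a fit to our data),
\[E(\theta)=K\sin^{2}(\theta-\psi)-M_{s}H\cos\theta,\]
where \(\theta\) is the angle between the magnetization and the field and \(\psi\) the angle between the field and the easy axis. For \(H\) along the easy axis (\(\psi=0\)), the metastable energy minimum disappears at the anisotropy field \(H_{sw}=2K/M_{s}\), at which point the moment reverses coherently; this is what produces the square, fully saturated loops with sharp switching at \(\pm H_{sw}\).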
In order to study the temperature evolution of LaCrGe\({}_{3}\)'s coercivity, we measured \(M(H)\) isotherms from 5 K to above T\({}_{C}\). Figure 3 shows the magnetization of a single crystal of LaCrGe\({}_{3}\) as a function of magnetic field applied parallel to the crystallographic _c_-axis at different temperatures. The maximum applied field was 5 kOe. In figure 3a, \(M(H)\) behaves similarly to the 5 K data in figure 2 and the coercive fields decrease monotonically as temperature increases. "Softer" ferromagnetic behavior is seen at \(T\) = 42 K. Figure 3b shows \(M(H)\) in the temperature range of 42 K \(\leq\)\(T\)\(\leq\) 75 K. Coercivity is low and almost constant up to \(\sim\) 55 K and then starts to increase as temperature increases further. Figure 3c shows the \(M(H)\) loops for 75 K \(\leq\)\(T\)\(\leq\) 100 K. In this temperature range, the coercive field decreases, and at 100 K > \(T_{C}\) the response is only weakly paramagnetic. Although there is clear hysteresis visible in the \(M(H)\) data in the higher temperature region, it is not as sharp (\(M(H)\) loops not as square) as it is for \(T\) < 40 K.
The coercive fields inferred from figure 3 are plotted as a function of temperature in figure 4: the 5 K coercivity drops to zero just above 40 K, stays near zero up to 60 K, and then rises through a local maximum at \(\sim\) 75 K before dropping to zero at \(T_{C}\)\(\sim\) 86 K.
Figure 2: Magnetization of a LaCrGe\({}_{3}\) single crystal at 5 K as a function of magnetic field applied parallel to the crystallographic _c_-axis. We demagnetized the system at 120 K, zero field cooled to 5 K, and applied a given maximum field, denoted by the different colors, along the \(c\)-axis. Inset shows the \(H_{max}\) = 70 kOe \(M(H)\).
Other single crystalline samples of LaCrGe\({}_{3}\) have qualitatively similar coercive field as a function of temperature plots, as shown in Appendix figure 11. The primary difference between samples is the size of the 5 K coercive field and the precise temperature at which it drops to zero near 40-50 K; the higher temperature coercivity seems less variable, sample to sample.
Figure 5 shows magnetization at different temperatures as a function of magnetic field applied parallel to the crystallographic _ab_-plane. At 5 K, the magnetization saturates for \(H\) > 40 kOe; for \(H\) < 40 kOe the \(M(H)\)
Figure 4: Coercive field of a LaCrGe\({}_{3}\) single crystal as a function of temperature with magnetic field applied parallel to the crystallographic _c_-axis.
Figure 5: Magnetization of a LaCrGe\({}_{3}\) single crystal at different temperatures as a function of magnetic field applied parallel to the crystallographic _ab_-plane.
Figure 3: Magnetization of a single crystal of LaCrGe\({}_{3}\) at different temperatures as a function of magnetic field applied parallel to the crystallographic _c_-axis. Each isothermal loop is a 4-quadrant loop with field being swept from +5 kOe to -5 kOe and back to +5 kOe. Between loops the system is taken to 120 K, and then cooled in zero field to the next temperature. a) shows \(M(H)\) at \(T\leq 42\) K. b) shows \(M(H)\) at 42 K \(\leq T\leq\) 75 K. c) shows \(M(H)\) at 75 K \(\leq T\leq 100\) K.
plot appears to be linear, at least on the scale shown in figure 5. As temperature increases, the deviation from the low field, linear behavior decreases.
Figure 6 shows \(\chi^{\prime}\) and \(\chi^{\prime\prime}\) of the AC susceptibility as a function of temperature for a LaCrGe\({}_{3}\) single crystal with zero applied DC field and an AC field of 5 Oe at frequencies of 7.57 Hz and 75.7 Hz. The AC field is applied parallel (figure 6 a) or perpendicular (figure 6 b) to the crystallographic _c_-axis. The data manifest very large anisotropy, with the scales for the \(H\)\(||\)\(ab\) data sets being two and three orders of magnitude smaller for the \(\chi^{\prime}\) and \(\chi^{\prime\prime}\) data respectively. The ferromagnetic transition around 86 K is clearly seen. Whereas details of the \(H\)\(||\)\(c\) data will be discussed further below, it is worth pointing out now that, for \(H\)\(||\)\(ab\), the slope of the low field linear \(M(H)\) seen below \(T_{C}\) in figure 5 is the same as the \(\sim\) 0.20 emu/mole-Oe value of \(\chi^{\prime}\) seen below \(T_{C}\) in figure 6b.
## IV Discussion and Summary
The \(H\)\(||\)\(c\) coercivity of single crystalline LaCrGe\({}_{3}\), especially for \(T\) < 40 K, is remarkable. The \(M(H)\) loops jump from being fully saturated in the negative-\(c\)-direction to being fully saturated in the positive-\(c\)-direction (and vice-versa) in a discontinuous manner. In magnetic nanoparticles with effective magnetic anisotropy constant K (that comes from a variety of sources - shape, magneto-crystalline anisotropy, surface spin canting, etc.) and exchange energy per pair of ions in the nanoparticle lattice, \(JS^{2}\), the width of the domain wall is proportional to \(S\sqrt{J/K}\), so it becomes thinner in more anisotropic material, but the total surface energy of such a wall is proportional to \(S\sqrt{JK}\), so both a larger exchange constant and a larger anisotropy make domain walls energetically less desirable (see the estimate below). It seems that in LaCrGe\({}_{3}\) we have found a macroscopic bulk system where domain walls are absent and magnetic irreversibility (coercivity) is determined by physics similar to Stoner-Wohlfarth magnetization reversal when the external magnetic field is applied along the easy axis. This is further supported by the fact that the magnetic response in the perpendicular orientation is much smaller.
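In lattice terms (a rough correspondence added for orientation, with lattice constant \(a\), exchange \(J\), spin \(S\), and per-site anisotropy \(K\)), the continuum exchange stiffness and anisotropy density are \(A\simeq JS^{2}/a\) and \(K_{u}\simeq K/a^{3}\), so
\[\delta_{w}\simeq\pi\sqrt{A/K_{u}}\sim aS\sqrt{J/K},\qquad\sigma_{w}\simeq 4\sqrt{AK_{u}}\sim\frac{S}{a^{2}}\sqrt{JK},\]
reproducing the proportionalities quoted above: larger \(K\) thins the wall, while larger \(J\) or \(K\) raises the wall energy per unit area.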
The \(H\)\(||\)\(c\) coercivity shown in figure 4 can be further tested by a variant of what is referred to as a "trapped flux" measurement for superconductors. [26] Figure 7 plots the temperature dependence of the magnetization of LaCrGe\({}_{3}\) that has either been cooled from 120 K to 5 K in a 20 kOe field (CHF) and then had the field set to zero and warmed or has been cooled from 120 K in zero
Figure 6: \(\chi^{\prime}\) and \(\chi^{\prime\prime}\) of the AC susceptibility as a function of temperature for a LaCrGe\({}_{3}\) single crystal with zero DC field, an AC field of 5 Oe, and frequencies of 7.57 Hz and 75.7 Hz. Field is applied parallel (a) or perpendicular (b) to the crystallographic _c_-axis.
Figure 7: Cooled in zero field (CZF) and cooled in high field (CHF) magnetization as a function of temperature (5 K - 120 K) for a LaCrGe\({}_{3}\) single crystal after a 20 kOe field applied parallel to the crystallographic _c_-axis was reduced to the residual field at 5 K, measured in the DC or VSM mode of the MPMS3.
applied field (CZF) and then at 5 K a field of 20 kOe was applied and subsequently removed. The field was swept at 500 Oe/s without overshoot, with a 60 s interval before each field change. In both cases data were taken in the remnant field, which we have established to be \(\sim\) -19 Oe. Out of an abundance of skepticism we measured the magnetization of the same sample in both a DC mode as well as in a VSM mode. All four data sets are shown in figure 7 and all four are essentially identical. As the sample is warmed, in a remnant, near zero field, from 5 K, its magnetization has a very slight temperature dependence, decreasing from \(\sim\)1.10 \(\mu_{B}\)/Cr to \(\sim\)1.05 \(\mu_{B}\)/Cr as the sample warms through 35 K. Just below 40 K the sample's magnetization jumps discontinuously to \(\sim\) -0.8 \(\mu_{B}\)/Cr. Subsequent warming has the remnant field magnetization decrease monotonically to zero as \(T\) increases through \(T_{C}\). The jump in remnant field magnetization can be understood readily in terms of the coercivity data shown in figure 4. As the coercive field decreases from its relatively large value at 5 K toward zero with increasing temperature, it passes through the remnant field value and, at that point, there is the discontinuous change in the sample's magnetization. Figure 12, in the appendix, shows that we can manipulate the sign and size of the \(T\)\(\sim\) 40 K jump by adjusting the size and sign of the small (or remnant) field experienced by the sample.
The \(M(H)\) isotherms demonstrate that there is some change in the details of the magnetic order of LaCrGe\({}_{3}\) that results in rather dramatic changes in its coercivity. The AC susceptibility data also indicate that below \(T_{C}\) there is at least one other significant temperature. For \(H\)\(||\)\(c\), there is a sharp feature at \(T_{C}\) and then near 55 K a second feature. The lower temperature, 55 K, features are most likely associated with the onset of the lower temperature, strongly hysteretic \(M(H)\) behavior. Although this occurs at a significantly lower temperature (\(\sim\)38 K) for the data shown in figures 3, 4 and 7, figure 11, in the appendix, shows that for a second sample, with a larger 5 K coercive field, the lower temperature hysteretic region extends to 50 - 55 K. Given that the jump in magnetization seen in figure 7 occurs at the temperature when the remnant field is equal to the coercivity, the larger the 5 K coercivity, the higher the slope of the low temperature part of the coercivity curve (figure 11) and the more accurately this jump determines the temperature of the change in LaCrGe\({}_{3}\)'s magnetic nature. Both the coercive field data (figures 4 and 11) as well as the AC susceptibility data (figure 6) support the hypothesis that there is some change in the nature of the ferromagnetically ordered state near 50 - 55 K. The challenge for future work will be to determine the precise nature of this change.
In summary, LaCrGe\({}_{3}\) continues to be a fascinating and phenomenal compound. In addition to its avoided quantum criticality, [1; 3] we have now demonstrated that LaCrGe\({}_{3}\) has anomalous \(M(H)\) curves exhibiting full saturation with sharp transitions and substantial coercivity. The origin of this behavior needs further study. It seems likely that very low bulk pinning, strong uniaxial anisotropy and, possibly, strong surface pinning are needed to explain the \(M(H)\) and \(M(T)\) curves. Surface pinning is invoked to explain the observation that the sample needs to be driven rather deeply into the saturated state for the full coercivity to manifest (i.e. the need to drive all domain walls out of the bulk sample). What determines the size of the base temperature coercivity, and how sensitive this coercivity is to geometry (i.e. length, width, demagnetization factor, etc.), surface roughness, or other details, all need to be studied further. An even bigger question is whether other ferromagnetic materials can be found that demonstrate zero field pinning of fully saturated magnetization over wide temperature ranges and how large a saturated moment can be stabilized in this manner. As such, LaCrGe\({}_{3}\) simply highlights the fact that basic and applied research on intermetallic ferromagnets still has many surprises yet to uncover.
###### Acknowledgements.
M. Xu thanks J. Schmidt and B. Kuthanazhi for valuable discussions. Work at Ames National Laboratory was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under contract No. DE-AC02-07CH11358. |
2307.10191 | A Lightweight Approach for Network Intrusion Detection based on Self-Knowledge Distillation | Network Intrusion Detection (NID) works as a kernel technology for a secure network environment, obtaining extensive research and application. Despite enormous efforts by researchers, NID still faces challenges in deployment on resource-constrained devices. To improve detection accuracy while simultaneously reducing computational costs and model storage, we propose a lightweight intrusion detection approach based on self-knowledge distillation, namely LNet-SKD, which achieves a trade-off between accuracy and efficiency. Specifically, we carefully design the DeepMax block to extract compact representations efficiently and construct the LNet by stacking DeepMax blocks. Furthermore, to compensate for the performance degradation caused by the lightweight network, we adopt batch-wise self-knowledge distillation to provide the regularization of training consistency. Experiments on benchmark datasets demonstrate the effectiveness of our proposed LNet-SKD, which outperforms existing state-of-the-art techniques with fewer parameters and lower computation loads. | Shuo Yang, Xinran Zheng, Zhengzhuo Xu, Xingjun Wang | 2023-07-09T11:09:23Z | http://arxiv.org/abs/2307.10191v1 | # A Lightweight Approach for Network Intrusion Detection based on Self-Knowledge Distillation
###### Abstract
Network Intrusion Detection (NID) works as a kernel technology for a secure network environment, obtaining extensive research and application. Despite enormous efforts by researchers, NID still faces challenges in deployment on resource-constrained devices. To improve detection accuracy while simultaneously reducing computational costs and model storage, we propose a lightweight intrusion detection approach based on self-knowledge distillation, namely LNet-SKD, which achieves a trade-off between accuracy and efficiency. Specifically, we carefully design the DeepMax block to extract compact representations efficiently and construct the LNet by stacking DeepMax blocks. Furthermore, to compensate for the performance degradation caused by the lightweight network, we adopt batch-wise self-knowledge distillation to provide the regularization of training consistency. Experiments on benchmark datasets demonstrate the effectiveness of our proposed LNet-SKD, which outperforms existing state-of-the-art techniques with fewer parameters and lower computation loads.
Intrusion detection, deep learning, lightweight network, self-knowledge distillation.
## I Introduction
Accompanied by the rapid development of network technology, increasingly serious and varied network attacks have emerged [1]. As a response, Network Intrusion Detection (NID) plays an essential role in providing the desired security by constantly monitoring malicious and suspicious activities in network traffic. Nowadays, Intrusion Detection Systems (IDSs) have been widely used in military, medical, transportation, IoT, industrial control systems, and other fields [2].
IDSs apply two types of detection manners, _signature-based_ and _anomaly-based_[3]. Signature-based NID establishes a knowledge base by state modeling or string matching in advance and detects abnormal behavior by matching the data flow with existing signatures. Signature-based NID performs quite well on known attacks while failing to deal with attacks that are not in the knowledge base. Compared with it, anomaly-based NID has the ability to recognize unknown attacks by measuring the deviation between the detected activity and normal ones, and is developing vigorously.
With the success of Deep Learning (DL), numerous DL-based intrusion detection models [4, 5, 6] have been proposed, promoting the accuracy and robustness of intrusion detection by a large margin. Despite the satisfactory accuracy these models achieve, most of them are difficult to implement on resource-constrained devices due to their high computational overhead and large model size. For example, DBN-based methods and RNN-based methods need more parameters, resulting in a storage burden (See Fig. 1).
Several schemes have been proposed to lighten the model size for NID [7, 8, 9], where [7] designed a lightweight network using depthwise convolution instead of standard convolution. However, the reduction of network complexity comes at the expense of ignoring the correlation between channels, which leads to a decline in detection accuracy. In contrast, Depthwise Separable Convolution (DSConv) [10] is more widely used, as it introduces point-wise convolution to combine features between channels. Furthermore, motivated by the observation of the feature combination and selection ability of the Max-Feature-Map (MFM) [11], we present a simple yet novel network structure named the DeepMax block, composed of DSConv and MFM, which allows a NID model to reinforce representation learning with lower computational cost.
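As a concrete illustration of this design direction, the following PyTorch sketch combines a depthwise separable convolution with an MFM activation. The use of one-dimensional convolutions over flattened flow features, the max-pooling stage, and all layer sizes are our illustrative assumptions, not the exact configuration from this paper.

```python
import torch
import torch.nn as nn


class MFM(nn.Module):
    """Max-Feature-Map activation: split the channels into two halves
    and keep the element-wise maximum, acting as a competitive
    feature-selection step that halves the channel count."""

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)


class DeepMaxBlock(nn.Module):
    """Illustrative DeepMax block: DSConv (depthwise + pointwise) + MFM.
    A standard Conv1d uses k * c_in * c_out weights, while DSConv uses
    k * c_in + c_in * c_out, roughly a (1/c_out + 1/k) reduction."""

    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv1d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv1d(c_in, 2 * c_out, kernel_size=1)  # 2x for MFM
        self.mfm = MFM()
        self.pool = nn.MaxPool1d(2)  # compact the representation

    def forward(self, x):
        return self.pool(self.mfm(self.pointwise(self.depthwise(x))))


x = torch.randn(8, 1, 41)            # e.g. a batch of 41-dim flow features
print(DeepMaxBlock(1, 16)(x).shape)  # torch.Size([8, 16, 20])
```

Stacking such blocks shrinks both the channel count (via MFM) and the sequence length (via pooling), which is one plausible reading of how a compact representation is obtained cheaply.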
To compensate for the performance degradation caused by the lightweight design, an intuitive solution uses Knowledge Distillation (KD) [12, 13, 14] to optimize shallow student models by learning knowledge extracted from large and deep teacher models. However, we notice that student performance is highly dependent on the teacher models, which requires extra network design and results in additional training burdens for NID. To ameliorate it, we apply Self-Knowledge Distillation (SKD) [15] to obtain the instantaneous knowledge generated during
Fig. 1: F1 score _vs._ the number of model parameters on the NSL-KDD dataset. Each data point is visualized as a circle whose radius is proportional to \(\log(p)\), where \(p\) is the model's FLOPs. Notice that LNet-SKD achieves the best performance with satisfying model parameters and FLOPs.
the training phase and improve the performance of lightweight models succinctly and effectively without heavy networks.
Based on the above analysis, we propose LNet-SKD for NID, which is a lightweight approach composed of LNet and SKD. LNet is a succinct but effective model, stacked from several lightweight DeepMax blocks, for feature processing. Furthermore, batch-wise SKD is employed to provide the regularization of training consistency, which improves the detection capability significantly. Compared with existing methods, LNet-SKD achieves superior performance with a trade-off between efficiency and accuracy (See Fig. 1). Our contributions are summarized as follows:
1. We propose the meticulously designed DeepMax block, composed of DSConv and MFM, to reduce model complexity while achieving efficient feature extraction and compact representation.
2. We propose the LNet by stacking DeepMax blocks for NID, which realizes satisfactory performance with lower model storage and computational cost.
3. We utilize SKD to guide the LNet to obtain more instantaneous and coherent knowledge. To the best of our knowledge, we are the first to use SKD in NID to cover the performance drop incurred by the lightweight design.
4. Extensive experiments show that our LNet-SKD has significant advantages in accuracy and efficiency on the challenging NSL-KDD and CICIDS-2017 datasets compared to state-of-the-art methods.
## II Related work
Network intrusion detection is one of the most effective approaches for network security defense. Traditional IDSs are based on fixed or dynamic rules to detect network attacks; such rules are only suitable for relatively simple scenarios and struggle to deal with unknown security risks [16]. With the development of Machine Learning (ML), almost all mainstream ML algorithms have been applied to NID, such as KNN [17], SVM [18], and LightGBM [19]. However, they can no longer resist increasingly complex and diverse network threats due to their limited learning ability [20].
In recent years, DL has shown promising potential in learning the inherent rules and representations of samples and has been used to extract features from abnormal traffic in an end-to-end manner. [4] proposed a deep learning approach for intrusion detection using a Convolutional Neural Network (CNN). The proposed method effectively recognized abnormal traffic with higher accuracy than ML-based IDSs. However, it performed poorly against the minority classes in multi-class classification. Imrana _et al._[5] proposed a bidirectional Long-Short-Term-Memory (BiDLSTM) based intrusion detection system to improve the detection rate of minority classes. D'Angelo _et al._[21] embedded autoencoders into convolutional and recurrent neural networks to elicit relevant knowledge about the relations existing among spatial and temporal features, which helps improve the performance of network traffic classification. Belarbi _et al._[6] developed a multi-class classification IDS based on a Deep Belief Network (DBN) by stacking multiple Restricted Boltzmann Machines (RBMs), and its performance has been verified on the CICIDS2017 dataset.
Complex networks provide high detection accuracy for IDSs but bring challenges to deployment on resource-constrained devices. To address this problem, [7] designed a lighter model by modifying the existing paradigm, which results in a loss of detection rate. Another popular solution is knowledge distillation. Wang _et al._[14] proposed a knowledge distillation model to reduce model complexity. Although the teacher model does improve the performance of the student model, designing appropriate teacher and student models is challenging. Instead, our approach does not require a teacher model: distillation is conducted batch-wise, with soft knowledge propagated batch by batch.
## III The proposed approach
In this section, we present a detailed discussion of the proposed lightweight self-knowledge distillation approach for network intrusion detection. Fig. 2(a) displays the framework of our approach, which comprises two essential parts. First, we propose the LNet by stacking lightweight DeepMax blocks which are carefully designed for feature extraction and selection without redundant parameters. LNet is able to extract the robust feature representation of intrusion behavior with lower computation overhead. Second, batch-wise self-knowledge distillation is introduced to instruct the LNet to acquire instantaneous and effective knowledge.
### _Preliminary_
Consider an \(M\)-class labeled dataset containing \(N\) training instances, \(\mathcal{D}=\left\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left(x_{N},y_{N}\right)\right\}\). Let \(\mathcal{B}_{t}=\left\{\left(x_{1}^{t},y_{1}^{t}\right),\left(x_{2}^{t},y_{2}^{t}\right),\ldots,\left(x_{n}^{t},y_{n}^{t}\right)\right\}\) be the batch at the \(t^{th}\) iteration of the training process, where \(n\ll N\). Furthermore, we define a base NID model as \(\mathcal{M}_{\theta}\), parameterized by \(\theta\). For each input sample \(\left(x,y\right)\), the encoder extracts the feature representation \(\mathcal{F}\in\mathbb{R}^{H\times W}\), where \(W\) and \(H\) denote the spatial width and height of the corresponding feature map. We denote the output logits as \(\mathbf{z}=\mathcal{M}(x|\theta)\in\mathbb{R}^{M}\). In this paper, both teacher and student adopt the same architecture \(\mathcal{M}_{\theta}\) for self-knowledge distillation.
### _LNet_
In NID, CNNs have been widely used due to their excellent feature extraction ability. Our motivation comes from the observation that CNN-based models proposed for NID have constantly been growing larger, which hinders deployment on edge devices. Hence, we propose an efficient feature-processing block, DeepMax, and further build the lightweight LNet model by stacking it, as shown in Fig. 2. Under this structure, the computational overhead is significantly reduced while features are extracted effectively and with good generalization.
The core component of LNet is the DeepMax block, which is based on a _Depthwise Separable Convolution (DSConv) layer_ and a _Max-Feature-Map (MFM) layer_. As illustrated in Fig. 2(b), DSConv factorizes the standard convolution
into depth-wise convolution and point-wise convolution. The depth-wise convolution operation extracts each channel's feature with separate kernels, which reduces the computation drastically. Then, point-wise convolution is used to match the output feature channel, which is implemented as \(1\times 1\) kernel convolution. The whole process for \(C_{i}\) input and \(C_{o}\) output channels can be written as:
\[\mathcal{F}_{j}^{new}=\frac{1}{C_{i}}\cdot\mathcal{K}_{j}^{p}*\sum_{i=0}^{C_{i }-1}\mathcal{K}_{i}^{d}*\mathcal{F}_{i}^{old},j\in[0,...,C_{o}-1], \tag{1}\]
where \(*\) indicates the convolution operation, \(\mathcal{K}\) is the corresponding kernel, and \(\mathcal{F}\) is the feature map.
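To make this concrete, the sketch below implements a DSConv layer in PyTorch (the framework used in Sec. IV). The kernel size and the absence of normalization or activation here are our assumptions, not the paper's exact configuration; note also that a standard pointwise convolution learns separate \(1\times 1\) weights per input channel, a slight generalization of the averaged form written in Eq. 1.

```python
import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise separable convolution (cf. Eq. 1): per-channel depthwise
    filtering followed by a 1x1 pointwise convolution that mixes channels."""
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        # groups=c_in assigns one separate kernel K_i^d to each input channel.
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        # 1x1 kernels K_j^p linearly combine features across channels.
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))
```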
To further reduce the network size, we concatenate the MFM layer after the aforementioned DSConv layer to halve the number of channels. As shown in Fig. 2(c), MFM applies Eq. 2 to combine two feature maps by outputting the element-wise maximum, thus focusing the network on prominent elements.
\[\mathcal{F}_{i}^{new}(x,y)=\max\left[\mathcal{F}_{i}^{old}(x,y),\mathcal{F}_{i +C/2}^{old}(x,y)\right], \tag{2}\]
where \(i\in[0,C/2-1]\) and \(C\) is the current number of channels.
Finally, we add a pooling layer to reduce the feature map size and construct the complete DeepMax block. Through effective feature extraction and selection, a compact representation can be obtained at a satisfactory computational cost. To implement a lightweight network for intrusion detection, it is enough to stack only two DeepMax blocks with a linear layer in LNet to achieve state-of-the-art results.
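A minimal sketch of the MFM operation (Eq. 2) and the resulting DeepMax block, reusing the `DSConv` sketch above; the channel widths, the \(2\times 2\) max pooling, and the five NSL-KDD classes in the classifier are illustrative assumptions rather than the paper's exact settings.

```python
class MFM(nn.Module):
    """Max-Feature-Map (Eq. 2): split the channels into two halves and keep
    the element-wise maximum, halving the channel count."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)

class DeepMax(nn.Module):
    """DSConv -> MFM -> 2x2 max pooling: MFM halves the channels and the
    pooling layer halves each spatial dimension."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.conv = DSConv(c_in, c_out)   # c_out must be even for MFM
        self.mfm = MFM()                  # -> c_out // 2 channels
        self.pool = nn.MaxPool2d(2)       # -> half spatial resolution

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.mfm(self.conv(x)))

# LNet as described: two stacked DeepMax blocks plus a linear classifier
# (widths 32/64 and the 5-class output are illustrative assumptions).
lnet = nn.Sequential(
    DeepMax(1, 32),    # 32 channels after DSConv, 16 after MFM
    DeepMax(16, 64),   # 64 channels after DSConv, 32 after MFM
    nn.Flatten(),
    nn.LazyLinear(5),  # infers the flattened size on the first forward pass
)
```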
### _Complexity analysis_
Here, we give an in-depth analysis of how LNet reduces the model size and saves FLOPs. Consider an input feature map \(\mathcal{F}\in\mathbb{R}^{H\times W}\) with input / output channels \(C_{i}\) / \(C_{o}\). In a standard convolution block, the filtering and combination steps are performed over all input channels, and the number of parameters is \(C_{i}\times K\times K\times C_{o}\), where \(K\) denotes the kernel size. In contrast, our DeepMax block conducts a separable convolution for each input channel and an additional \(1\times 1\) convolution to create a linear combination matching the output channels. Since no extra parameters are introduced by the MFM or pooling operations, the total number of parameters in the whole block is \(C_{i}\times K\times K+C_{i}\times 1\times 1\times C_{o}\). Hence, DeepMax only needs \(C_{i}\times(K^{2}+C_{o})\) trainable parameters, far fewer than the \(C_{i}\times K^{2}\times C_{o}\) of its classical counterpart. Regarding the computational cost, the DeepMax block halves the channel number through the MFM module and halves the output feature size through the pooling layer when the pooling size is \(2\times 2\).
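As a quick sanity check of this analysis, the snippet below compares the two parameter counts for an illustrative layer configuration (the concrete channel numbers are examples, not values from the paper).

```python
def std_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Standard convolution: C_i x K x K x C_o weights (bias omitted)."""
    return c_in * k * k * c_out

def deepmax_params(c_in: int, c_out: int, k: int) -> int:
    """DeepMax block: depthwise C_i x K x K plus pointwise C_i x C_o weights;
    MFM and pooling add no parameters."""
    return c_in * k * k + c_in * c_out

# Example: 32 -> 64 channels with 3x3 kernels.
print(std_conv_params(32, 64, 3))  # 18432
print(deepmax_params(32, 64, 3))   # 2336, roughly 8x fewer
```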
### _Self-Knowledge Distillation_
To compensate for the performance drop incurred by model compression, we adopt a batch-wise self-knowledge distillation strategy inspired by [15], which improves the generalization ability by learning sample-level soft labels of the kind that skillful teacher models would provide. Specifically, the soft prediction of the last iteration is used as the smooth label for self-distillation, providing instantaneous distillation for each batch of training samples and leading to a regularization of training consistency. As shown in Fig. 2(a), for each input sample \((x,y)\), the LNet produces the predicted distribution \(\mathbf{p}=\{p_{1},\cdots,p_{M}\}\in\mathbb{R}^{M}\) via the temperature-scaled softmax function [12]:
\[p_{i}(x;\tau)=\frac{\exp\left(z_{i}(\mathbf{x})/\tau\right)}{\sum_{j}\exp \left(z_{j}(\mathbf{x})/\tau\right)}, \tag{3}\]
where \(\tau\) denotes the temperature scale to soften the probability distribution for better distillation. Considering the imbalance of intrusion detection datasets, we use a class-balanced cross
Fig. 2: The overview of proposed LNet-SKD for network intrusion detection. (a) The soft prediction in the last iteration will guide current iteration training. (b) Each filter kernel only calculates with the corresponding feature map. (c) Channels and feature sizes will be half with the DeepMax block.
entropy loss function to improve the attention to tail-category samples, which is written as follows:
\[\mathcal{L}_{\text{CB}}(\mathbf{z},y)=-\frac{1-\beta}{1-\beta^{n_{y}}}\log\left( \frac{\exp\left(z_{y}/\tau\right)}{\sum_{j=1}^{M}\exp\left(z_{j}/\tau\right)} \right). \tag{4}\]
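A sketch of Eq. 4 in PyTorch follows; `samples_per_class` holds the per-class training counts \(n_{y}\), and the value of \(\beta\) is an assumption (the paper does not state it in this section).

```python
import torch
import torch.nn.functional as F

def class_balanced_ce(logits: torch.Tensor, targets: torch.Tensor,
                      samples_per_class: torch.Tensor,
                      beta: float = 0.9999, tau: float = 1.0) -> torch.Tensor:
    """Class-balanced cross entropy (Eq. 4): each sample is weighted by
    (1 - beta) / (1 - beta^{n_y}), emphasizing tail-category samples."""
    weights = (1.0 - beta) / (1.0 - beta ** samples_per_class.to(logits.dtype))
    log_probs = F.log_softmax(logits / tau, dim=1)               # shape (n, M)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # shape (n,)
    return (weights[targets] * nll).mean()
```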
Batch-wise distillation transfers knowledge by optimizing the Kullback-Leibler (KL) divergence between the softened predictions of two consecutive iterations; the loss of self-knowledge distillation is as follows:

\[\mathcal{L}_{SKD} =\frac{1}{n}\sum_{i=1}^{n}\tau^{2}\cdot D_{KL}\left(\mathbf{p}_{i}^{\tau,t-1}\|\mathbf{p}_{i}^{\tau,t}\right) \tag{5}\] \[=\frac{1}{n}\sum_{i=1}^{n}\tau^{2}\left[\mathbf{p}_{i}^{\tau,t-1}\log\left(\mathbf{p}_{i}^{\tau,t-1}\right)-\mathbf{p}_{i}^{\tau,t-1}\log\left(\mathbf{p}_{i}^{\tau,t}\right)\right],\]
where \(\mathbf{p}_{i}^{\tau,t-1}\) denotes the softened labels generated by LNet at the \((t-1)^{th}\) iteration and \(\mathbf{p}_{i}^{\tau,t}\) the softened predictions at the \(t^{th}\) iteration. The degree of softening is controlled by the temperature parameter \(\tau\): higher temperatures lead to a more uniform distribution, resulting in a smoother batch-wise regularization effect. Compared to vanilla KD [12], SKD plays the double role of student and teacher in the training phase: it keeps soft targets and extracts such smooth labels from the previous iteration for regularization. Using the soft predictions of the previous iteration as dynamic sample-level smoothing labels provides the most instantaneous distillation possible for each training sample.
Combining the class-balanced loss and the SKD loss with a trade-off factor \(\lambda\), we derive the overall loss function:
\[\mathcal{L}=\mathcal{L}_{CB}+\lambda\cdot\mathcal{L}_{SKD}. \tag{6}\]
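Putting Eqs. 3-6 together, below is a simplified training-step sketch that reuses the `lnet` and `class_balanced_ce` sketches above. One caveat: for brevity, we assume the sampler presents corresponding samples in consecutive iterations so that the previous iteration's soft predictions \(\mathbf{p}^{\tau,t-1}\) align sample-wise with the current batch; arranging the batches to guarantee this correspondence is up to the data pipeline. The stand-in `loader` and class counts are illustrative only.

```python
def skd_loss(logits: torch.Tensor, prev_soft: torch.Tensor,
             tau: float) -> torch.Tensor:
    """Eq. 5: tau^2-scaled KL divergence between the softened predictions of
    the previous iteration (teacher role) and the current one (student role)."""
    log_p_now = F.log_softmax(logits / tau, dim=1)
    return tau ** 2 * F.kl_div(log_p_now, prev_soft, reduction="batchmean")

# Stand-ins so the sketch is self-contained (shapes and counts are illustrative):
loader = [(torch.randn(8, 1, 12, 12), torch.randint(0, 5, (8,))) for _ in range(4)]
samples_per_class = torch.tensor([6000., 50., 200., 900., 8000.])
lnet(torch.randn(1, 1, 12, 12))          # dummy forward to materialize LazyLinear
optimizer = torch.optim.SGD(lnet.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

tau, lam = 3.0, 1.0                      # best values from the ablation in Sec. IV
prev_soft = None
for x, y in loader:
    logits = lnet(x)
    loss = class_balanced_ce(logits, y, samples_per_class, tau=tau)
    if prev_soft is not None:            # no teacher available at the first step
        loss = loss + lam * skd_loss(logits, prev_soft, tau)          # Eq. 6
    prev_soft = F.softmax(logits / tau, dim=1).detach()  # teacher for step t+1
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```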
## IV Experimental Results
In this section, we first introduce the datasets and evaluation metrics used in the experiments. All experiments are based on Python 3.7 and PyTorch 1.12.0, using a 2.4GHz Intel Core i9 processor and 16GB RAM. We adopt stochastic gradient descent as the optimizer with a momentum of 0.9 and a weight decay of \(10^{-4}\). The initial learning rate is 0.1 and is adjusted with cosine annealing.
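The stated optimization setup maps directly onto PyTorch; the schedule length `epochs` is not specified here and is a placeholder.

```python
import torch

epochs = 100  # placeholder; the number of training epochs is not stated here
optimizer = torch.optim.SGD(lnet.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Cosine annealing decays the learning rate from 0.1 over the training run.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... one pass over the training loader (see the SKD sketch above) ...
    scheduler.step()
```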
### _Dataset_
We use two benchmark datasets in the experiments: the classic NSL-KDD dataset [22] and the CICIDS2017 dataset [23], which includes up-to-date typical attacks observed in the real world.
The NSL-KDD dataset consists of 39 different types of attacks, which are divided into four main classes, i.e., Denial of Service (DoS), User-to-Root (U2R), Remote-to-Local (R2L), and Probe. Each intrusion record is composed of 9-_dim_ basic TCP connection features, 13-_dim_ TCP connection content features, 9-_dim_ time-based network traffic statistics features, 10-_dim_ host-based network traffic statistics features, and the category label. The NSL-KDD dataset is described in Table I.
CICIDS2017 dataset contains five days of network traffic data collected from Monday to Friday, including normal traffic and abnormal traffic caused by common attacks. The benign traffic corresponds to the human interaction of 25 users based on standard network protocols such as HTTP(S), FTP, SSH, IMAP, and POP3. Each record contains 6 basic features and more than 70 functional features. We follow previous work [6] to utilize the dataset, and a detailed description of the CICIDS2017 dataset is shown in Table II.
### _Evaluation Metrics_
Accuracy, Precision, Recall, and F1 score are used to evaluate the detection ability and stability of LNet-SKD. Notably, we apply macro metrics instead of micro metrics because the former are more suitable for multi-class tasks. Besides, we evaluate the computational cost via Floating Point Operations (FLOPs) and the number of parameters to compare the required implementation resources.
Fig. 3: The impact of hyper-parameters.
### _Ablation w.r.t. Hyper-parameters_
In LNet-SKD, the performance of self-knowledge distillation is governed by the temperature parameter \(\tau\) and the balancing coefficient \(\lambda\). Thus, effective hyper-parameter values must be found to obtain the best classification results. Fig. 3 shows the impact of different values of \(\tau\) and \(\lambda\) used in LNet-SKD.
We fix \(\lambda\) to 1 and vary \(\tau\) from 1 to 10. As shown in Fig. 3(a), LNet-SKD achieves the best accuracy on both datasets with a temperature of \(\tau=3\), which means that the most knowledge is transferred when \(\tau=3\). The effect of the balance factor \(\lambda\) is plotted in Fig. 3(b), with \(\tau\) fixed to 3. As noted in Eq. 6, the balance coefficient \(\lambda\) weights the contribution of the gap between two adjacent batches to the final loss. For the NSL-KDD dataset, LNet-SKD performs best with \(\lambda=2\), and for CICIDS2017, the best value is \(\lambda=1\).
### _Ablation w.r.t. LNet-SKD_
To evaluate our proposed method comprehensively, we perform multi-class classification with CNN, LNet, and LNet-SKD. The CNN baseline applies a network structure similar to LNet but uses standard convolutions. Moreover, a specially designed LNet\({}^{-}\) (LNet without MFM) is considered for comparison. We apply the hyper-parameters suggested by the previous experiment to train the LNet-SKD model. The results are shown in Table III.
Compared with the standard CNN model, LNet uses only 37.7% of its parameters, resulting in a succinct model with 194.58K FLOPs (\(\downarrow\) 63.1%). Meanwhile, our self-distillation approach does not introduce additional parameters while improving the accuracy of LNet by 2.1% / 1.6% on NSL-KDD and CICIDS2017, respectively, so that LNet-SKD achieves a classification accuracy on par with the CNN and outperforms it on F1-score. From Table III, we observe that the DeepMax block reduces accuracy by only 1.89% and 1.57% on the two datasets compared with standard convolution, whereas the corresponding drop for LNet\({}^{-}\) is 5.11% / 1.94%. This means LNet benefits considerably from the more representative features extracted by the DeepMax block that integrates MFM.
### _Visualization_
The visualized confusion matrices in Fig. 4 further verify the effectiveness of LNet-SKD in detecting different attack types. This holds especially for U2R, which is totally neglected by LNet\({}^{-}\), whereas LNet detects a few instances thanks to its more valuable features. Further enabled by SKD, LNet-SKD achieves a highly competitive detection rate even compared to the standard CNN model. It can be concluded that our approach achieves satisfactory intrusion detection performance with low model storage and computational cost.
### _Compared with other methods_
This section provides a comparison of LNet-SKD with baseline DL models and SOTA methods [6, 14] applied to the NSL-KDD and CICIDS2017 datasets. We present our experiments of LNet-SKD against the aforementioned methods in Table IV, where '-' means the value is not reported. As shown in the table, our LNet-SKD obtains the best performance in terms of accuracy and F1 score on both datasets with only \(\sim\!4.94\)K and \(\sim\!5\)K parameters, respectively. Compared with KD-TCNN designed for resource-constrained IoT devices, the LNet-SKD model has a better detection performance and lower
Fig. 4: Confusion matrix for each model on NSL-KDD. |
2301.03505 | Advances in Medical Image Analysis with Vision Transformers: A
Comprehensive Review | The remarkable performance of the Transformer architecture in natural
language processing has recently also triggered broad interest in Computer
Vision. Among other merits, Transformers are witnessed as capable of learning
long-range dependencies and spatial correlations, which is a clear advantage
over convolutional neural networks (CNNs), which have been the de facto
standard in Computer Vision problems so far. Thus, Transformers have become an
integral part of modern medical image analysis. In this review, we provide an
encyclopedic review of the applications of Transformers in medical imaging.
Specifically, we present a systematic and thorough review of relevant recent
Transformer literature for different medical image analysis tasks, including
classification, segmentation, detection, registration, synthesis, and clinical
report generation. For each of these applications, we investigate the novelty,
strengths and weaknesses of the different proposed strategies and develop
taxonomies highlighting key properties and contributions. Further, if
applicable, we outline current benchmarks on different datasets. Finally, we
summarize key challenges and discuss different future research directions. In
addition, we have provided cited papers with their corresponding
implementations in https://github.com/mindflow-institue/Awesome-Transformer. | Reza Azad, Amirhossein Kazerouni, Moein Heidari, Ehsan Khodapanah Aghdam, Amirali Molaei, Yiwei Jia, Abin Jose, Rijo Roy, Dorit Merhof | 2023-01-09T16:56:23Z | http://arxiv.org/abs/2301.03505v3 | # Advances in Medical Image Analysis with Vision Transformers: A Comprehensive Review
###### Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers are witnessed as capable of learning long-range dependencies and spatial correlations, which is a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision problems so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic review of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, if applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided cited papers with their corresponding implementations in [https://github.com/mindflow-institue/Awesome-Transformer](https://github.com/mindflow-institue/Awesome-Transformer).
keywords: Transformers, Medical Image Analysis, Vision Transformers, Deep Neural Networks +
Footnote †: journal:
## 1 Introduction
Convolutional neural networks (CNNs) have been an integral part of research in the field of medical image analysis for many years. By virtue of convolutional filters, whose primary function is to learn and extract necessary features from medical images, a wealth of research has been dedicated to CNNs, ranging from tumor detection and classification [1], detection of skin lesions [2; 3; 4] to segmentation of intervertebral discs [5; 6] and brain tumor segmentation [7; 8], to name only a few. CNNs have also contributed significantly to the analysis of different imaging modalities in clinical medicine, including X-ray radiography, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and digital pathology. Despite their outstanding performance, CNNs suffer from conceptual limitations and are innately unable to model explicit long-distance dependencies due to the limited receptive field of convolution kernels. Moreover, the convolutional operator suffers from the fact that, at inference time, it applies fixed weights regardless of any changes to the visual input. To mitigate the aforementioned problems, there have been great research efforts to integrate attention mechanisms, which can be regarded as a dynamic weight adjustment process based on input features, into seminal CNN-based structures to improve their non-local modeling capability [9; 10; 11]. To this end, Wang et al. [12] designed a flexible non-local building block, which can be plugged into multiple intermediate convolution layers. SENet [13] suggested a channel-attention squeeze-and-excitation (SE) block, which collects global information in order to recalibrate each channel accordingly and thus create a more robust representation [14]. Inspired by this line of research, there has been an overwhelming influx of models with attention variants proposed in the medical imaging field [15; 16; 17; 18; 19]. Although these attention mechanisms allow the modeling of full image contextual information, their computational complexity typically grows quadratically with spatial size, implying an intensive computational burden and thus making them inefficient for medical images, which are dense in pixel resolution [20]. Moreover, despite the fact that the combination of the attention mechanism with the convolutional operation leads to systematic performance gains, these models inevitably suffer from constraints in learning long-range interactions. Transformers [21] have demonstrated exemplary performance on a broad range of natural language processing (NLP) tasks, e.g., machine translation, text
classification, and question answering. Inspired by the eminent success of Transformer architectures in the field of NLP, they have become a widely applied technique in modern Computer Vision (CV) models. Since the establishment of Vision Transformers (ViTs) [22], Transformers have proved to be valid alternatives to CNNs in diverse tasks ranging from image recognition [22], object detection [23], and image segmentation [24] to video understanding [25] and image super-resolution [26]. As a central piece of the Transformer, the self-attention mechanism comes with the ability to model relationships between elements of a sequence, thereby learning long-range interactions. Moreover, Transformers allow for large-scale pre-training for specific downstream tasks and applications and are capable of dealing with variable-length inputs. The immense interest in Transformers has also spurred research into medical imaging applications (see Figure 1). As Transformers have become dominant in reputable top-tier medical imaging conferences and journals, it is extremely challenging for researchers and practitioners to keep up with the rate of innovation. The rapid adoption of Transformers in the medical imaging field necessitates a comprehensive summary and outlook, which is the main scope of this review. Specifically, this review provides a holistic overview of the Transformer models developed for medical imaging and image analysis applications. We provide a taxonomy of the network designs, highlight the major strengths and deficiencies of the existing approaches, and introduce the current benchmarks in each task. We inspect several key technologies that arise from the various medical imaging applications, including medical image segmentation, medical image registration, medical image reconstruction, and medical image classification. So far, review papers related to Transformers do not concentrate on applications of Transformers in the medical imaging and image analysis domain [27]. The few literature reviews that do focus on the medical domain [28; 29], despite being very comprehensive, do not necessarily discuss the drawbacks and merits of each method. In our work, we explicitly cover this aspect and also provide a taxonomy that comprises the imaging modality, organ of interest, and type of training procedure each paper has selected. More specifically, in Section 3 (Medical Image Classification), we comprehensively elaborate on the most promising networks along with their key ideas, limitations, number of parameters, and the specific classification task they are addressing. In Section 4 (Medical Image Segmentation), we analyze network architectures in terms of their design choices and propose a detailed taxonomy to categorize each network, providing insight for the reader to understand the current limitations and progress in segmentation networks based on the Transformer architecture. In Section 5 (Medical Image Reconstruction), we take a different perspective and categorize networks based on their network structure and the imaging modality they are built upon. We categorize the synthesis methods in Section 6 based on their objective (intra-modality or inter-modality) and then provide detailed information regarding the network architecture, parameters, motivations, and highlights.
In the sections related to detection (Section 7), registration (Section 8), and report generation (Section 9), we briefly summarize the state-of-the-art (SOTA) networks and provide detailed information regarding the network architectures, advantages, and drawbacks. Moreover, due to the swift development of the field, we believe that the community requires a more recent overview of the literature.
We hope this work will point out new research options and provide a guideline for researchers and initiate further interest in the vision community to leverage the potential of Transformer models in the medical domain. Our major contributions are as follows:
* We systematically and comprehensively review the applications of Transformers in the medical imaging domain and provide a comparison and analysis of SOTA approaches for each task. Specifically, more than 200 papers are covered in a hierarchical and structured manner.
* Our work provides a taxonomy (Figure 1) and an in-depth analysis (e.g., task-specific research progress and limitations), as well as a discussion of various aspects.
* Finally, we discuss challenges and open issues, identify new trends, raise open questions, and highlight future directions.

Figure 1: Overview of the applications covered in this review.
_Paper Organization._ The remaining sections of the paper are organized as follows. In Section 2, we provide an overview of the key components of the well-established Transformer architecture. Moreover, this section clarifies the categorization of neural network variants in terms of the position where the Transformer is located. Sections 3 to 9 comprehensively review the applications of Transformers in diverse medical imaging tasks, as depicted in Figure 1. For each task, we propose a taxonomy to characterize technical innovations and major use cases. Section 10 presents open challenges and future perspectives of the field as a whole, while Section 11 concludes this work.
## 2 Background
In this section, we first provide an overview of the overall architecture of the Transformer module and the key ideas behind its design. Then, we outline a general taxonomy of Transformer-based models, characterized by how they employ Transformers, i.e., whether they are purely Transformer-based or whether the Transformer module is used in the encoder, decoder, bottleneck, or skip connections.
### Transformers
The original Transformer [21] was first applied to the task of machine translation as a new attention-driven building block. The vanilla Transformer consists of an encoder and a decoder, each of which is a stack of \(L\) consecutive identical blocks. The Transformer module is convolution-free and solely based on the self-attention mechanism, or attention mechanism in short. Specifically, these attention blocks are neural network layers that relate different positions of a single sequence to compute the sequence's representation. Since their establishment, Transformer models have attained remarkable performance in diverse natural language processing tasks [30]. Inspired by this, Dosovitskiy et al. proposed the Vision Transformer (ViT) [22] model, as illustrated in Figure 2. When trained on large datasets, for instance JFT-300M, ViT outperforms the then state-of-the-art, namely ResNet-based models like BiT [31]. In their approach, an image is turned into fixed-sized patches before being flattened into vectors. These vectors are then passed through a trainable linear projection layer that maps them into \(N\) vectors of dimensionality \(D\), where \(N\) is the number of patches. The outputs of this stage are referred to as patch embeddings. To preserve the positional information present within each patch, positional embeddings are added to the patch embeddings. In addition to this, a trainable class embedding is appended to the patch embeddings before going through the Transformer encoder. The Transformer encoder is comprised of multiple Transformer encoder blocks, each containing one multi-head self-attention (MSA) block and one MLP block. The activations are first normalized using LayerNorm (LN) before going into these blocks. Furthermore, there are skip connections that add a copy of the pre-LN activations to the outputs of the corresponding MSA or MLP block. In the end, an MLP block is used as a classification head that maps the output to class predictions. The self-attention mechanism is a key defining characteristic of Transformer models. Hence, we start by introducing the core principle of the attention mechanism.

Figure 2: Architecture of the Vision Transformer as proposed in [22] and the detailed structure of the Vision Transformer encoder block. In the Vision Transformer, sequential image patches are used as the input and processed using a Transformer encoder to produce the final classification output.
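The patch-embedding pipeline described above can be sketched as follows; a strided convolution is a common, mathematically equivalent way to implement the patch splitting plus linear projection, and the sizes here follow the ViT-Base defaults as an assumption.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """ViT input pipeline: split the image into fixed-size patches, project
    them linearly, prepend a class token, and add positional embeddings."""
    def __init__(self, img_size: int = 224, patch: int = 16,
                 in_ch: int = 3, dim: int = 768):
        super().__init__()
        n = (img_size // patch) ** 2                  # number of patches N
        # A patch x patch strided convolution = flatten + linear projection.
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))       # class token
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))   # positions

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # (B, C, H, W)
        x = self.proj(x).flatten(2).transpose(1, 2)           # (B, N, D)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos          # (B, N+1, D)
```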
#### 2.1.1 Self-Attention
In a self-attention layer (Figure 3(a)), each input vector is first transformed into three separate vectors of fixed dimension, i.e., the query vector \(q\), the key vector \(k\), and the value vector \(v\). Computed for all inputs, these vectors are packed into three matrices obtained through the weight matrices \(W^{Q}\), \(W^{K}\), and \(W^{V}\). A common form of \(Q\), \(K\), and \(V\) can be formulated as Equation 1 for an input \(X\):
\[K=W^{K}X,Q=W^{Q}X,V=W^{V}X, \tag{1}\]
where \(W^{K}\), \(W^{Q}\), and \(W^{V}\) refer to the learnable parameters. The scaled dot-product attention mechanism is then formulated as
\[\text{Attention}(Q,K,V)=\text{Softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V, \tag{2}\]
where \(\sqrt{d_{k}}\) is a scaling factor, and a softmax operation is applied to the generated attention weights to translate them into a normalized distribution.
#### 2.1.2 Multi-Head Self-Attention
The multi-head self-attention (MHSA) mechanism (Figure 3(b)) has been proposed [21] to model the complex relationships of token entities from different aspects. Specifically, the MHSA block helps the model to jointly attend to information from multiple representation sub-spaces, as the modeling capability of a single-head attention block is quite coarse. The process of MHSA can be formulated as
\[\text{MultiHead}(Q,K,V)=[Concat\,(\text{head}_{1},\dots,\text{head}_{h})]W^{ O}, \tag{3}\]
where \(\text{head}_{i}=\text{Attention}\left(QW^{Q}_{i},KW^{K}_{i},VW^{V}_{i}\right)\), and \(W^{O}\) indicates a linear mapping function that combines the multi-head representations. Note that \(h\) is a hyper-parameter, set to \(h=8\) in the original paper.
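Equations 1-3 translate into a compact module as follows; fusing \(W^{Q}\), \(W^{K}\), and \(W^{V}\) into one linear layer is an implementation convenience on our part, not something the original formulation prescribes.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Scaled dot-product attention (Eq. 2) computed over h heads (Eq. 3)."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        assert dim % heads == 0
        self.h, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)   # W^Q, W^K, W^V fused
        self.out = nn.Linear(dim, dim)       # W^O

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, D)
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (B, h, N, d_k) so each head attends independently
        q, k, v = [t.view(B, N, self.h, self.dk).transpose(1, 2)
                   for t in (q, k, v)]
        attn = (q @ k.transpose(-2, -1)) / self.dk ** 0.5  # QK^T / sqrt(d_k)
        attn = attn.softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, N, D)    # concat the heads
        return self.out(y)
```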
### Transformer modes
While the Transformer was originally introduced with an encoder-decoder pipeline, many modern architectures generally exploit the Transformer architecture in different fashions, which generally depend on the target application. The usage of Transformers in vision tasks can broadly be classified into pure and hybrid designs.
#### 2.2.1 Pure Transformers
Due to the deficiency of CNN-based architectures in learning global and long-range semantic interactions, which stems from the locality of the convolution operation, a cohort of studies has investigated purely Transformer-based models without any convolution layer. These models usually consist of an encoder, bottleneck, decoder, and skip connections directly built upon the ViT or its variants. In this category, there are usually multiple multi-head self-attention modules in both the encoding and decoding sections, allowing the decoder to utilize information from the encoder. Examples of such methods are the Swin-Unet [32] and TransDeepLab [33] networks, which, as their names suggest, model the seminal U-Net [34] and DeepLab [35] architectures.
#### 2.2.2 Transformer: Hybrid
The hybrid Transformer models usually modify the base CNN structure by replacing the encoder or decoder modules.
**Encoder**: Encoder-only models such as the seminal BERT [36] are designed to make a single prediction per input or a single prediction for an entire input sequence. In computer vision, such models are applicable to classification tasks. Moreover, as utilizing a pure Transformer can result in limited localization capacity stemming from inadequate low-level features, many studies combine CNNs and Transformers in the encoding section [24]. Such a design can enhance finer details by recovering localized spatial information.
**Decoder:** Transformers can also be used in a decoding fashion. Such a causal model is typically used for generation tasks such as language modeling. Besides that, modifications can target the skip connections toward the decoder module. The skip connection is a widely-used technique to improve the performance and convergence of deep neural networks, and it can also serve as a modulating mechanism between the encoder and the decoder. To effectively provide low-level spatial information to the decoding path, the idea of exploiting Transformers in designing skip connections has emerged. This notable idea can lead to finer feature fusion and recalibration while preserving the aggregation of both high-level and low-level features [24; 37].
Figure 3: (a) The process of self-attention. (b) Multi-head attention. The MSA consists of multiple SA blocks (heads) concatenated together channel-wise as proposed in [21].
## 3 Medical Image Classification
Image classification is still one of the challenging problems in computer vision, which aids in segregating extensive quantities of data into meaningful categories. Vision Transformers (ViTs) have recently demonstrated outstanding results in various image classification tasks and offer significant advantages over conventional CNNs [57; 58; 59; 60; 61; 62]. These advantages include long-range relationships, adaptive modeling, and attention maps that yield intuition on what the model deems more important inside an image [42]. Due to these alluring advantages, there is rising interest in building Transformer-based models for medical image classification, where highly precise classification is becoming increasingly vital for facilitating clinical care.
In this section, we exhaustively examine ViTs in medical image classification. As illustrated in Figure 4, we have broadly classified these methods based on the role ViT plays in their architecture. These categories include pure Transformers and Hybrid Models. Generally, a vision Transformer-based classification architecture consists of three modules: (1) a backbone for capturing input features, (2) an encoder for modeling the information, and (3) a classification head for generating output based on the specified task. Therefore, the Transformer can be adopted in each module. However, some works, including Lesion Aware Transformer (LAT) [52] and Deformable Transformer for Multi-Instance Learning (DT-MIL) [53], take a different approach and utilize encoder-decoder structures. LAT proposes a unified encoder-decoder system for Diabetic Retinopathy (DR) grading, and DT-MIL introduces a Transformer-based encoder-decoder architecture for classifying histopathological images, where the deformable Transformer was embraced for the encoder part. In the following, we will go into great depth on both hybrid and pure models.
### Pure Transformers
Since the emergence of Transformers, there has been a growing debate regarding whether it is time to entirely switch from CNNs to Transformers. Matsoukas et al. [42] conduct a series of experiments to answer this critical question. They take ResNet50 [63] and the DeiT-S [64] models to represent CNN and ViT models, respectively. They train each of these two models in 3 different fashions: a) randomly-initialized weights, b) pre-trained on ImageNet (transfer learning), and c) pre-training on the target dataset in a self-supervised scheme using DINO [65]. Their findings show that when utilizing random initialization, ViTs are inferior to CNNs. In the case of transfer learning, the results are similar for both models, with ViT being superior for two out of three datasets. Additionally, ViT performs better when self-supervision on the target data is applied. They conclude that Vision Transformers, indeed, are suitable replacements for CNNs.
Figure 4: Taxonomy of ViT-based approaches in medical image classification. Methods are categorized based on their proposed architecture into pure and hybrid methods, in which they adopt the vanilla ViT or present a new type of vision Transformer for medical image classification. Notably, we utilize the prefix numbers in the papers’ names in ascending order and denote the reference for each study as follows: 1. [38], 2. [39], 3. [40], 4. [41], 5. [42], 6. [43], 7. [44], 8. [45], 9. [46], 10. [47], 11. [48], 12. [49], 13. [50], 14. [51], 15. [52], 16. [53], 17. [54], 18. [55], 19. [56].

Transformers have had a profound effect on medical applications. Researchers have thoroughly investigated adopting the ViT in medical image classification tasks since its introduction. However, the limited number of medical images has hindered Transformers from replicating their success in medical image classification. **ViT-BUS**[43] studies the use of ViTs in medical ultrasound (US) image classification for the first time. The authors propose to transfer pre-trained ViT models to the breast US domain to compensate for the data hunger of ViTs. Evaluated results on the B [66], BUSI [67], and B+BUSI datasets indicate the predominance of attention-based ViT models over CNNs on US datasets. Likewise, **COVID-Transformer**[44] utilizes ViT-L/16 to distinguish COVID from non-COVID cases based on CXR images. Due to the lack of sufficient data, they introduce a balanced dataset containing 30K chest X-ray images for multi-class classification and 20K images for binary classification. The published dataset is created by merging the datasets [68], [69], and [70]. They fine-tune the model on this dataset with a custom MLP block on top of ViT to classify chest X-ray (CXR) images. Moreover, COVID-Transformer exploits the GradCAM map [71] to visualize affected lung areas that are significant for disease prediction and progression, thereby displaying the model's interpretability. Similarly, Mondal et al. [40] present **xViTCOS** for detecting COVID-19 in CTs and CXRs. xViTCOS employs a model that has been pre-trained on ImageNet-21k [72]. Nevertheless, insufficient training data might prevent the pre-trained ViT from generalizing when transferring knowledge from the learned domain to the target domain. By training the model on the COVIDx-CT-2A dataset [73], a moderately-sized dataset, xViTCOS overcomes this problem. However, due to the insufficient number of CXR images, the pre-trained ViT model is fine-tuned using the CheXpert dataset [74]. In addition, xViTCOS leverages the Gradient Attention Rollout algorithm [75] to visually explain the model's prediction on the input image with clinically interpretable visualizations. In experiments on COVIDx-CT-2A and their custom-collected chest X-ray dataset, xViTCOS significantly outperforms conventional COVID-19 detection approaches. **MIL-VT**[39] similarly suggests pre-training the Transformer on a large fundus image dataset, initialized with the pre-trained weights of ImageNet, and then fine-tuning it on the downstream retinal disease classification task in order to encourage the model to learn global information and achieve generalization. Unlike previous approaches, they apply some modifications to the vanilla ViT structure. In the classic ViT, the embedded features are neglected for classification; instead, only the class token, which retains a summarization of the embedded features' information, is used. Yu et al. [39] propose a novel multiple-instance learning (MIL) head module to exploit those embedded features to complement the class token. This head comprises three submodules that attach to the ViT in a plug-and-play manner: 1) the MIL embedding submodule, which maps the feature embeddings to a low-dimensional embedding vector; 2) the attention aggregation submodule, which outputs a spatial weight matrix for the low-dimensional patch embeddings; this weight matrix is then applied to the low-dimensional embeddings to ascertain each instance's importance; 3) the MIL classifier submodule, which determines the probability of each class from the aggregated features. In the downstream task, both the MLP and MIL heads use the weighted cross-entropy loss function for training, and the outputs of both heads are weight-averaged at inference time. The results indicate the effectiveness of the proposed training strategy and the MIL head module, dramatically boosting performance on the APTOS2019 [76] and RFMiD2020 [77] datasets when compared to CNN-based baselines. In contrast to the previous 2D-based methods that employ transfer learning, **COVID-ViT**[38] proposes training ViT to classify COVID and non-COVID cases using 3D CT lung images. Given that a COVID volume may contain non-COVID 2D slices, COVID-ViT applies a slice voting mechanism after the ViT classification, in which a subject is categorized as having COVID if more than a certain percentage of slices (e.g., 25%) are predicted to be COVID. The findings reported for the MIA-COVID19 competition [78] confirm that ViT outperforms CNN-based approaches such as DenseNet [79] in identifying COVID from CT images.
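Returning to MIL-VT's MIL head, a minimal sketch of the three submodules described above could look as follows; the layer sizes and the tanh-gated attention form follow common attention-MIL practice and are our assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MILHead(nn.Module):
    """Sketch of the MIL head of MIL-VT [39] from its textual description."""
    def __init__(self, dim: int = 768, low: int = 128,
                 hidden: int = 64, classes: int = 5):
        super().__init__()
        self.embed = nn.Linear(dim, low)      # 1) MIL embedding submodule
        self.attn = nn.Sequential(            # 2) attention aggregation
            nn.Linear(low, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.cls = nn.Linear(low, classes)    # 3) MIL classifier submodule

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) patch embeddings from the ViT encoder
        e = self.embed(tokens)                       # (B, N, low)
        w = torch.softmax(self.attn(e), dim=1)       # instance importance
        return self.cls((w * e).sum(dim=1))          # weighted aggregation
```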
Besides the remarkable accuracy of Transformers compared to CNNs, one of their major drawbacks is their high computational cost, thereby making them less effective for real-world applications, such as detecting COVID-19 in real-time. In light of the prevalence of COVID-19, the rapid diagnosis will be beneficial for starting the proper course of medical treatment. CXR and lung CT scans are the most common imaging techniques employed. However, CT imaging is a time-consuming process, and using CXR images is unreliable in identifying COVID-19 in the early stage. In addition, vision Transformers are computationally expensive to deploy on mobile devices for real-time COVID-19 classification. Therefore, Perera et al. [45] present a lightweight **Point-of-Care Transformer (POCFormer)**. The compactness of POCFormer allows for real-time diagnosis of COVID-19 utilizing commercially accessible POC ultrasound devices. POCFormer reduces the complexity of the vanilla ViT self-attention mechanism from quadratic to linear using Linformer [80]. The results display the superiority of POCFormer in the real-time detection of COVID-19 over the CNN-based SOTAs on the POCUS dataset [81].
In addition, despite the great potential shown by ViTs in ImageNet classification, their performance is still lower than the latest SOTA CNNs without additional data. These Transformers mainly focus on a coarse level by adopting a self-attention mechanism to establish global dependency between input tokens. However, relying only on a coarse level restricts the Transformer's ability to achieve higher performance. Thus, Liu et al. [47] leverage a pre-trained version of **VOLO** for an X-ray COVID-19 classification. VOLO [82] first encodes fine-level information into the token representations through proposed outlook attention, alleviating the limitations of Transformers that require a large amount of data for training, and second aggregates the global features via self-attention at the coarse level. Through the outlook attention mechanism, VOLO dynamically combines fine-level features by treating each spatial coordinate (\(i,j\)) as the center of a \(K\times K\) local window and calculating its similarity with all its neighbors. The findings indicate that fine-tuning VOLO on Dataset-1 [83] leads to 99.67% top1 accuracy on Dataset-1 test cases and 98.98% top1 accuracy on unseen Dataset-2 [84], which demonstrates the generality of the approach.
Furthermore, accessible labeled images have considerably influenced research on the use of Transformers to diagnose COVID-19. Considering the shortage of labeled data, data sharing between hospitals is needed so as to create a viable centralized dataset. However, such collaboration is challenging due to privacy concerns and patient permission. Motivated by Federated Learning (FL) and Split Learning (SL), Park et al. [41] present a **Federated Split Task-Agnostic (FESTA)** framework that uses ViT for multi-task learning of classification, detection, and segmentation of COVID-19 CXR images. FESTA benefits from the decomposable modular design of ViT to train heads and tails via clients and share the server-side Transformer body across clients to aggregate extracted features and process each task. The embedded features from the body Transformer are
then passed to their task-specific tails on the client side to produce the final prediction (Figure 5(a)). Figure 5(b) illustrates the single-task learning scheme and (c) the multi-task learning scheme. In multi-task learning, the heads, tails, and the task-agnostic Transformer body are first jointly trained for 6000 rounds (see Figure 5(c)). Then, the heads and tails are fine-tuned for the desired specific task while freezing the weights of the Transformer body. FESTA benefits from 220,000 decentralized CXR images and attains competitive results compared to data-centralized training approaches. The experimental results also demonstrate the stable generalization performance of FESTA, where multi-task learning enhances the performance of the individual tasks through their mutual effect during training.
Most attention-based networks utilized for detection and classification rely on the neural network to learn the necessary regions of interest. Bhattacharya et al. [46] in **Radio-Transformer** argue that in certain applications, utilizing experts' opinions can prove beneficial. Specifically, they apply this notion to leverage radiologists' gaze patterns while diagnosing different diseases on medical images; then, using a teacher-student architecture, they teach a model to pay attention to regions of an image that a specialist is most likely to examine. The teacher and the student networks consist of two main components: global and focal. The global component learns coarse representation while the focal module works on low-level features, and both these segments are comprised of Transformer blocks with shifting windows. In addition, the global and focal components are interconnected using two-way lateral connections to form the global-focal module; this is to address the inherent attention gap between the two. The teacher network is first directly pre-trained on human visual attention maps. Then, the entire model is trained for different downstream tasks, e.g., object detection and classification. Furthermore, the authors propose a self-supervised Visual Attention Loss (VAL) that incorporates both GIoU and MSE loss. The student network is trained to predict probability values for different classes and attention regions. These attention regions are then compared to those obtained from the teacher model, and the weights are optimized using VAL.
### Hybrid Models
In spite of the vision Transformers' ability to model global contextual representations, the self-attention mechanism undermines the representation of low-level details. CNN-Transformer hybrid approaches have been proposed to ameliorate the problem above by encoding both global and local features using the locality of CNNs and the long-range dependency of Transformers.
**TransMed**[48] proposes a hybrid CNN-Transformer network that leverages the locality of CNNs and the long-range dependency characteristic of Transformers for parotid gland tumor and knee injury classification. Multimodal medical images primarily have long-range interdependencies, and improving performance requires an effective fusion strategy. TransMed proposes a novel image fusion strategy. Firstly, three neighboring 2D slices of a multimodal image are overlaid to create three-channel images. Then, each image is partitioned into \(K\times K\) patches. This fusion approach allows the subsequent network to learn mutual information from images of different modalities. Patch tokens are fed into a CNN network to capture their low-level features and generate patch embeddings. The classic ViT is then used to determine the relationship between patch sequences. TransMed's final results verify the effectiveness of hybrid models in classifying multimodal medical images by outperforming all its counterparts by a large margin. TransMed-S enhances average accuracy on the PGT dataset by about 10.1% over its nearest counterpart, BoTNet [86], while requiring fewer parameters and a lower FLOP count. Comparably, Tanzi et al. [50] develop a new CAD system (**Femur-ViT**) based on Vision Transformers for diagnosing femoral fractures. First, YOLOv3 [87] is utilized to detect and crop the left and right femur regions. Afterward, a CNN (InceptionV3 [88]) and a hierarchical CNN (different InceptionV3 networks in cascade) [89] are applied to the dataset, and the results serve as baselines for the classification. Then, they use a modified ViT to classify seven different fracture types. Finally, a clustering approach is proposed as an evaluation technique for the ViT encoder. This study highlights the power of using ViT models for medical image classification and the ability of the proposed CAD system to significantly increase clinicians' diagnostic accuracy. **3DMeT**[49] proposes applying a 3D medical image Transformer for assessing knee cartilage defects in three grades: grade 0 (no defect), grade 1 (mild defect), and grade 2 (severe defect). Primarily, using medical 3D volumes as an input to the Transformer is computationally expensive, thereby making it impractical. 3DMeT resolves the high computational cost problem by replacing the conventional linear embedding with 3D convolutional layers. The weights of the convolutional layers are adopted using a teacher-student training strategy: 3DMeT takes an exponential moving average of the first one/few layer(s) of the CNN teacher's weights and uses it as the convolutional layers' weights. This method enables Transformers to be compatible with small medical datasets and to benefit from CNNs' spatial inductive biases. Lastly, the Transformer's and the CNN teacher's outputs are combined in order to derive the classification results.

Figure 5: Overview of the FESTA framework [41], which utilizes ViT for multi-task learning of COVID-19 CXR classification, detection, and segmentation. (a) FESTA leverages ViT’s decomposable modular design to train heads (\(\mathcal{H}\)) and tails (\(\mathcal{T}\)) via clients while sharing the server-side Transformer body (\(\mathcal{B}\)) between clients to integrate retrieved features. Final predictions are then derived by feeding embedded features to their task-specific tails on the client side. (b) illustrates the single-task learning scheme, and (c) the two-step multi-task learning scheme. The former is trained for 12000 rounds, while the latter undergoes two training steps. First, all parts are trained jointly for 6000 rounds. Then, by freezing the weights of the Transformer body, the heads and tails are fine-tuned for 6000 steps based on the desired specific task.

Figure 6: Overview of RadioTransformer [46]. The Human Visual Attention Training (HVAT) block first uses radiologists’ visual observations of chest radiographs to train a global-focal teacher network. The pre-trained teacher network is then utilized to distill the teacher’s knowledge to a global-focal student network through visual attention loss, enabling the student to learn visual information. Following the teacher-student strategy and incorporating radiologists’ visual examinations leads to an improvement in the classification of disease on chest radiographs.
Operating Transformers over Whole Slide Images (WSIs) is computationally challenging since WSI is a gigapixel image that retains the original structure of the tissue. MIL and CNN backbones have demonstrated practical tools for acting on WSI. MIL is a weakly supervised learning approach that enables deep learning methods to train high-resolution images like WSI. Since annotating such images at the pixel level is impractical, MIL proposes to divide an input WSI into a bag of instances and assign a single label to the bag of each image based on pathology diagnosis. The bag has a positive label if it contains at least one positive instance, and it is considered negative if all the instances in the bag are negative. Then CNN backbones are employed to down-sample and extract the features of each instance and allow Transformers to operate according to the generated feature maps and currently available hardware. Therefore, **DT-MIL**[53] proposes to compress WSIs into compact feature images by embedding each patch of the original WSI into a super-pixel at its corresponding position using EfficientNet-B0 [90]. The resulting thumbnail image feed into a \(1\times 1\) Conv for feature reduction, followed by a deformable Transformer encoder that aggregates instance representations globally. A similar approach is adopted by **H**olistic **AT**tention **Net**work **(HATNet)**[55], where they first divide an input image into \(n\) non-overlapping bags, each broken down into \(m\) non-overlapping words (or patches). \(n\times m\) words are fed into the CNN encoder to obtain word-level representations for each bag. HATNet aims to develop a computer-aided diagnosis system to help pathologists in reducing breast cancer detection errors. According to the World Health Organization (WHO), breast cancer is the most frequent non-skin cancer in women, accounting for one out of every four new female cancers annually [91]. As illustrated in Figure 7, HATNet follows a bottom-up decoding strategy such that it first performs multi-head attention to words in a _word-to-word attention_ block, then considers the relationship between words and bags in _word-to-bag attention_, followed by _bag-to-bag attention_ to attain inter-bag representations. The acquired bag features are then aggregated in _bag-to-image attention_ to build image-level representations. A linear classifier is ultimately applied to achieve the final results. Furthermore, unlike most MIL methods that take all the instances in each bag independent and identically distributed [92; 93; 94], **TransMIL**[54] suggests that it is essential to consider the correlation between different instances and explore both morphological and spatial information. Two Transformer layers address the morphological information, and a conditional position encoding layer named Pyramid Position Encoding Generator (PPEG) addresses the spatial information. The proposed PPEG module has two merits: 1) It handles positional encoding of sequences with a variant number of instances by using group convolution over the 2D reshaped patch tokens,
Figure 7: The overall architecture of [85]. HATNet hierarchically divides an input image into \(n\times m\) words, which are then fed into the CNN encoder to provide word-level representations for each bag. Then by performing a bottom-up decoding strategy and applying a linear classifier, breast biopsy classification results are obtained. Notably, bag-to-image attention has the same procedure as word-to-bag attention, shown in (c).
and 2) It enriches the features of tokens by capturing more contextual information through convolutions. In contrast to conventional iid-based MIL methods requiring many epochs to converge, TransMIL converges two to three times faster by using morphological and spatial information. TransMIL also outperforms the latest MIL methods [95, 96, 97, 98, 92] in terms of accuracy and AUC by a significant margin in binary and multiple classification tasks, demonstrating the benefit of modeling the correlation between instances and of considering both morphological and spatial information.
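A minimal PyTorch sketch of such a conditional positional encoding is shown below; the depth-wise kernel sizes (7, 5, 3) and the identity shortcut are assumptions for illustration rather than TransMIL's exact configuration:

```python
import torch.nn as nn

class PPEG(nn.Module):
    # Pyramid Position Encoding Generator sketch: depth-wise (group)
    # convolutions over the 2D-reshaped patch tokens act as a conditional
    # positional encoding that adapts to any number of instances.
    def __init__(self, dim, kernel_sizes=(7, 5, 3)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in kernel_sizes
        )

    def forward(self, tokens, h, w):
        # tokens: (B, N, C) with N == h * w (any class token handled separately)
        b, n, c = tokens.shape
        assert n == h * w
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = x + sum(conv(x) for conv in self.convs)  # identity + multi-scale convs
        return x.flatten(2).transpose(1, 2)          # back to (B, N, C)
```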
Previous methods mainly rely on weakly supervised learning or on dividing WSIs into image patches and using supervised learning to assess the overall disease grade. Nevertheless, these approaches overlook WSI contextual information. Thus, Zheng et al. [56] propose a **Graph-based Vision Transformer (GTP)** framework for predicting disease grade using both morphological and spatial information at the WSI level. The graph component allows for the representation of the entire WSI, while the Transformer component allows for computationally efficient WSI-level analysis. The input WSI is first divided into patches, and those that contain more than 50% background are eliminated from further processing. Selected patches are fed through a contrastive learning-based patch embedding module for feature extraction. A graph is then built via a graph construction module utilizing the patch embeddings as nodes. In the graph Transformer section, a graph convolution layer is applied first to learn and enrich node embeddings, and a mincut pooling layer [99] then reduces the number of Transformer input tokens. Since the graph adjacency matrix contains the spatial information of the nodes, adding the adjacency matrix to the node features obviates the need for extra learnable positional embeddings. The final Transformer layer predicts the WSI-level class label for three lung tumor classes: Normal, LUAD, and LSCC. GTP also introduces a graph-based class activation mapping (GraphCAM) technique that highlights class-specific regions. GraphCAM exploits attention maps from the multi-head self-attention (MHSA) blocks in the Transformer layer and maps them to the graph space to create a heatmap for the predicted class. The experiments show that GTP is an interpretable and efficient framework for classifying WSIs while considering morphological and spatial information.
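As a rough illustration of the graph construction step, the sketch below builds an adjacency matrix from patch grid coordinates; the 8-neighbourhood connectivity and the `radius` parameter are assumptions for illustration, not GTP's exact rule:

```python
import numpy as np

def build_wsi_graph(grid_coords, embeddings, radius=1):
    # Nodes are patch embeddings; edges connect patches whose grid
    # coordinates lie within `radius` of each other, so the adjacency
    # matrix encodes the spatial layout of the slide.
    n = len(grid_coords)
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            dy = abs(grid_coords[i][0] - grid_coords[j][0])
            dx = abs(grid_coords[i][1] - grid_coords[j][1])
            if max(dy, dx) <= radius:
                adj[i, j] = adj[j, i] = 1.0
    return embeddings, adj
```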
Diabetic Retinopathy (DR) is an eye disorder that can cause impaired vision and blindness by damaging blood vessels in the retina. Most deep-learning approaches treat lesion discovery and DR grading as independent tasks, which may produce suboptimal results. In contrast to conventional methods, LAT [52] proposes a unified encoder-decoder structure that comprises a pixel relation-based encoder to capture image context information and a lesion filter-based decoder to discover lesion locations; the two parts are jointly optimized and complement each other during training. The encoder is in charge of modeling pixel correlations, and the Transformer-based decoder is formulated as a weakly supervised localization problem to detect lesion regions and categories using only DR severity labels. In addition, LAT proposes two novel mechanisms to improve the effectiveness of lesion-aware filters: 1) a lesion region importance mechanism, \(g(\cdot|\Phi)\), to determine the contribution of each lesion-aware feature, and 2) a lesion region diversity mechanism to diversify and compact lesion-aware features. The former is a linear layer followed by a sigmoid activation function that generates importance weights for lesion-aware features, and the latter adopts a triplet loss [100] to encourage lesion filters to find diverse lesion regions. In the DR grading branch, LAT presents a DR grading classification module that calculates a global consistency loss based on the lesion-aware features, indicated as \(h(\cdot|\sigma)\). Eventually, the final DR grading prediction is achieved by calculating the cross-entropy loss between the ground truth and the predicted labels obtained from the fusion of \(g(\cdot|\Phi)\) and \(h(\cdot|\sigma)\). The total loss is the aggregation of the cross-entropy loss, global consistency loss, and triplet loss. Visual results of LAT regarding lesion discovery are depicted in Figure 8.
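Since \(g(\cdot|\Phi)\) is described simply as a linear layer followed by a sigmoid, a minimal sketch could look as follows (class and argument names are hypothetical):

```python
import torch.nn as nn

class LesionImportance(nn.Module):
    # Lesion region importance mechanism g(.|Phi): a linear layer followed
    # by a sigmoid scores the contribution of each lesion-aware feature.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, lesion_features):
        # lesion_features: (B, M, C) -> importance weights in (0, 1): (B, M, 1)
        return self.score(lesion_features)
```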
### Discussion and Conclusion
Section 3 thoroughly outlines 19 distinctive Transformer-based models for medical image classification. We have categorized the introduced models based on their architectures into hybrid and pure approaches, according to whether they adhere to the original structure of the vanilla ViT or provide a new variant of the vision Transformer applicable to medical tasks. In addition, we have presented details of the studied classification methods regarding their architecture type, modality, organ, pre-training strategy, datasets, metrics, and year of publication in Table 1. Additional descriptions of the methods, including their model size, contributions, and highlights, are given in Table 2.
Throughout this section, we have organized the discussion around the underlying problems in medical image classification and explained how each method addresses them. However, further research on these problems is crucial to making such approaches widely applicable.
Figure 8: LAT [52] vs. CAM [101] visual comparison. The ground truth consists of microaneurysms, hemorrhages, soft exudates, and hard exudates, which are colored as green, yellow, green, and blue dots, respectively.
Data availability in the medical domain is one of the most challenging aspects of developing Transformer-based models, since Transformers are known to be data-hungry. Data scarcity in the medical field can be attributed to patients' privacy concerns, the time-consuming and costly annotation process, and the need for expert staff. To this end, the use of generative models [130, 131, 132] and their integration with Transformer models can become prominent, since they are capable of creating synthetic data comparable to genuine data. Another way to tackle this problem is to utilize federated learning, as in [41]. Nevertheless, there is still room for improvement regarding privacy concerns, since federated learning requires communication between clients and the server.
Despite their SOTA performance, Transformer-based networks still face challenges in real-world deployment due to computational limitations. As shown in Table 2, most approaches have a high number of parameters, which poses a serious obstacle. Several novel approaches have been introduced to reduce the quadratic complexity of self-attention and can be leveraged in the medical domain.
Table 1: Overview of the reviewed Transformer-based medical image classification methods, detailing their imaging modality, target organ, input type (2D/3D), pre-trained module and training strategy, datasets, evaluation metrics, and year of publication.
Furthermore, though ViTs have shown impressive capabilities in ImageNet classification, their performance is still lower than the latest SOTA CNNs without additional data [47]. Hence, existing methods mostly follow pre-training strategies on the ImageNet dataset to build pre-trained weights for subsequent downstream tasks. However, the domain of natural images differs significantly from medical data, which may restrict further improvement. Therefore, we believe efficient Transformers will considerably influence future research on Transformer-based models.
## 4 Medical Image Segmentation
Medical image segmentation is a significant sub-field of image segmentation in digital image processing [133]. It aims to extract features from a set of regions partitioned from the entire image and to segment the key organs simultaneously, which can assist physicians in making accurate diagnoses in practice. X-ray, positron emission tomography (PET), computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound are common imaging modalities used to collect data. The CNN-based U-Net [34, 133] has been the main choice in this field due to its effective performance and high accuracy. Nevertheless, it cannot extract long-range dependencies in high-dimensional and high-resolution medical images [134]. Therefore, the flexible combination of the U-Net structure with Transformers has become a prevalent solution to the segmentation problem. Take the multi-organ segmentation task as an example: some networks achieve state-of-the-art multi-organ segmentation performance on the Synapse dataset of abdominal images (as shown in Figure 9).
In this section, we present the application of ViTs in segmentation tasks. First, we divide the approaches into two categories: _pure Transformers_ and _hybrid Transformers_, where the
Table 2: Additional details of the reviewed classification methods, including model size, contributions, and highlights.
_pure Transformer_ denotes the use of multiple multi-head self-attention modules in both the encoder and decoder. Hybrid architecture-based approaches fuse ViTs with convolution modules as the encoder, bottleneck, decoder, or skip connection part to leverage information about the global context and local details. Furthermore, we review some methods with other architectures that propose novel manners of self-supervised learning. Figure 10 demonstrates the different directions of the methods employing Transformers in the U-Net architecture.
### Pure Transformers
In this section, we review several networks referred to as _pure Transformers_, which employ Transformer blocks in both the encoding and decoding paths. Despite the great success of CNN-based approaches in medical segmentation tasks, these models still have limitations in learning long-range semantic information of medical images. To improve segmentation accuracy and achieve robust generalization, the authors of **Swin-Unet** [32] proposed a symmetric encoder-decoder architecture motivated by the hierarchical Swin Transformer [57]. In contrast to the closest approaches [142; 140; 141; 149], which integrate CNNs with Transformers, Swin-Unet explores the possibility of a pure Transformer applied to medical image segmentation.
As shown in Figure 11, Swin-Unet consists of encoder, bottleneck, decoder, and skip connections utilizing the Swin Transformer block with shifted windows as the basic unit. For the encoder, the sequence of embeddings transformed from image patches is fed into multiple Swin Transformer blocks and patch merging layers, with Swin Transformer blocks performing feature learning, and patch merging layers downsampling the feature resolution and unifying the feature dimension. The designed bottleneck comprises two consecutive Swin Transformer blocks to learn the hierarchical representation from the encoder with feature resolution and dimension unchanged.
Swin Transformer blocks and patch-expanding layers construct the symmetric Transformer-based decoder. In contrast to the patch merging layers in the encoder, each patch expanding layer is responsible for upsampling the feature maps into double resolutions and halving the corresponding feature dimension. The final reshaped feature maps pass through a linear projection to produce the pixel-wise segmentation outputs. Inspired by the U-Net, the framework also employs skip connections to combine multi-scale features with the upsampled features at various resolution levels to reduce the loss of fine-grained contextual information caused by down-sampling.
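A minimal sketch of such a patch expanding layer is given below, assuming features are kept as flattened tokens with a known spatial grid; the linear-expansion-and-rearrange scheme follows the description above but is not Swin-Unet's exact code:

```python
import torch.nn as nn

class PatchExpanding(nn.Module):
    # Doubles the spatial resolution and halves the channel dimension,
    # mirroring the patch merging layers of the encoder.
    def __init__(self, dim):
        super().__init__()
        self.expand = nn.Linear(dim, 2 * dim, bias=False)  # 2*dim == 4 * (dim // 2)
        self.norm = nn.LayerNorm(dim // 2)

    def forward(self, x, h, w):
        # x: (B, h*w, C) -> (B, (2h)*(2w), C // 2)
        b, _, c = x.shape
        x = self.expand(x).reshape(b, h, w, 2, 2, c // 2)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(b, 4 * h * w, c // 2)
        return self.norm(x)
```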
In contrast to CNN-based methods showing over-segmentation issues, the proposed U-shaped pure Transformer presents better segmentation performance as a result of learning both local and long-range dependencies. Compared to previous methods [150; 24], the HD evaluation metric of Swin-Unet shows improved accuracy, indicating better edge prediction. The experiments on the Synapse multi-organ CT dataset and the ACDC dataset from MRI scanners also demonstrate the robustness and generalization ability of the method.
Compared to Swin-Unet and DS-TransUNet, **nnFormer**[137] proposed by Zhou et al. preserves the superior performance of convolution layers for local detail extraction and employs a hierarchical structure to model multi-scale features. It utilizes the volume-based multi-head self-attention (V-MSA) and the shifted version (SV-MSA) in the Transformer blocks instead of processing 2D slices of the volume. The overall architecture of nnFormer is composed of an encoder and a decoder. Each stage in the encoder and decoder consists of a Transformer block applying V-MSA and SV-MSA and a successive upsampling or downsampling block built upon convolution layers, which is referred to as the interleaved architecture. V-MSA conducts self-attention within 3D local volumes instead of 2D local windows to reduce the computational complexity by approximately 98% and 99.5% on the Synapse and ACDC datasets, respectively.
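The core difference from 2D window attention can be sketched as a volume partitioning step; the function below is a simplified illustration assuming the volume dimensions are divisible by the window size `ws`:

```python
import torch

def volume_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    # x: (B, D, H, W, C) feature volume. Self-attention in V-MSA is computed
    # within local 3D volumes of ws**3 voxels rather than 2D windows.
    b, d, h, w, c = x.shape
    x = x.reshape(b, d // ws, ws, h // ws, ws, w // ws, ws, c)
    windows = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, ws ** 3, c)
    return windows  # one row of ws**3 tokens per local volume
```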
nnFormer is first pre-trained on the ImageNet dataset and utilizes symmetrical initialization to reuse the pre-trained weights of the encoder in the decoder. Experiments comparing nnFormer with prior Transformer-based [24; 32] and CNN-based [151] arts illustrate that nnFormer makes significant progress on the segmentation task.
Although recent Transformer-based methods remedy CNNs' inability to capture long-range dependencies, they are limited in their capability to model local details. Some methods directly embed convolution layers between the fully-connected layers of the feed-forward network; such a structure supplements low-level information but limits the discrimination of features. Huang et al. propose **MISSFormer** [138], a hierarchical encoder-decoder network, which employs a Transformer block named the Enhanced Transformer Block and is equipped with an Enhanced Transformer Context Bridge.
The Enhanced Transformer Block utilizes a novel efficient self-attention module that illustrates the effectiveness of spatial reduction for better usage of the high-resolution map. The
Figure 9: Transformer-based models can perform image segmentation on medical image datasets. Figure **a** and **c** illustrate two 2D slices of raw images with the labels from Synapse dataset [135]. Figure **b** and **d** show the 3D visualization of the labeled organs from different angles. These images were generated with MITK Workbench [136].
original multi-head self-attention can be formulated as follows:
\[Attention(Q,K,V)=Softmax(\frac{QK^{T}}{\sqrt{d_{head}}})V, \tag{4}\]
where \(Q\), \(K\), and \(V\) refer to the query, key, and value, respectively, all with the same shape \(N\times C\), and \(d_{head}\) denotes the dimension of each attention head. The computational complexity is \(\mathcal{O}(N^{2})\). In efficient self-attention, \(K\) and \(V\) are reshaped by a spatial reduction ratio \(R\). Take \(K\) for example:
\[new\_K=Reshape\left(\frac{N}{R},C\cdot R\right)(K)\,W(C\cdot R,C), \tag{5}\]
\(K\) is first resized from \(N\times C\) to \(\frac{N}{R}\times(C\cdot R)\) and then projected linearly by \(W\) to restore the channel depth from \(C\cdot R\) to \(C\). The computational cost accordingly reduces to \(\mathcal{O}(\frac{N^{2}}{R})\). Furthermore, the Enhanced Mix Feed-forward Network (Mix-FFN), extended from [152], introduces recursive skip connections to make the model more expressive and consistent across recursive steps.
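Returning to the efficient self-attention above, a compact sketch is shown below, using `torch.nn.MultiheadAttention` as a stand-in for the paper's attention and assuming the sequence length \(N\) is divisible by \(R\):

```python
import torch.nn as nn

class EfficientSelfAttention(nn.Module):
    # K and V are spatially reduced by a ratio R before attention,
    # cutting the cost from O(N^2) to O(N^2 / R) as in Eq. (5).
    def __init__(self, dim, heads=8, R=4):
        super().__init__()
        self.R = R
        self.reduce = nn.Linear(dim * R, dim)  # plays the role of W(C*R, C)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, N, C); keys/values become (B, N/R, C)
        b, n, c = x.shape
        kv = self.reduce(x.reshape(b, n // self.R, c * self.R))
        out, _ = self.attn(x, kv, kv)
        return out
```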
The U-shaped architecture of the MISSFormer contains the encoder and decoder built on the Enhanced Transformer blocks connected with an enhanced Transformer context bridge. Multi-scale features produced from the encoder are flattened and concatenated together and passed through the Enhanced Transformer Context Bridge. The pipeline of the Enhanced Transformer Context Bridge is based on the Enhanced Transformer Block to fuse the hierarchical features. The output of the bridge is split and recovered to each original spatial dimension to pass through the corresponding stage in the decoder. The results of experiments show a robust capacity of the method to capture more discriminative details in medical image segmentation. It is worth mentioning that MISSFormer trained from scratch even outperforms state-of-the-art methods pre-trained on ImageNet.
The results in Figure 12 show that the performance of MISSFormer for prediction and segmentation of edges in pure Transformer network structures is more accurate compared to TransUNet and Swin-Unet. Comparing MISSFormer and MISSFormer-S (MISSFormer without bridge), MISSFormer has fewer segmentation errors because the bridge is effective for integrating multi-scale information.
Inspired by the notable DeepLabv3 [153], which utilizes Atrous Spatial Pyramid Pooling (ASPP) to learn multi-scale feature representations and depth-wise separable convolution to reduce the computational burden, the authors propose **TransDeepLab** [33], which combines the DeepLab network with Swin-Transformer blocks. Applying the Swin-Transformer module with windows of multiple sizes enables the fusion of multi-scale information with a lightweight model.
TransDeepLab is a pure Transformer-based DeepLabv3+ architecture, as shown in Figure 13. The model builds a hierarchical architecture based on Swin-Transformer modules.
Figure 11: The architecture of the Swin-Unet [32] which follows the U-Shape structure. It contains the encoder, the bottleneck and the decoder part which are built based on the Swin Transformer block. The encoder and the decoder are connected with skip connections.
Figure 10: An overview of ViTs in medical image segmentation. Methods are classified into the pure Transformer, hybrid Transformer, and other architectures according to the positions of the Transformers in the entire architecture. The prefix numbers of the methods denote 1. [32], 2. [137], 3. [138], 4. [33], 5. [139], 6. [24], 7. [140], 8. [141], 9. [142], 10. [143], 11. [144], 12. [145], 13. [37], 14. [146], 15. [147], 16. [148].
TransDeepLab first employs \(N\) stacked Swin-Transformer blocks to map the input embedded images into a deep-level feature space. 2D medical images are first split into non-overlapping patches of size \(4\times 4\) with embedding dimension \(C\). The ensuing Swin-Transformer blocks learn local semantic information and global contextual dependencies of the patch sequence. Then, the authors introduce windows of different sizes to process the output of the Swin-Transformer and fuse the resulting multi-scale feature layers, which are then passed through cross-contextual attention. This design, referred to as the Swin Spatial Pyramid Pooling (SSPP) block, replaces the original Atrous Spatial Pyramid Pooling (ASPP) module exploited in DeepLabv3. A cross-contextual attention mechanism is utilized to explore the multi-scale representation after fusion; this attention module applies channel attention and spatial attention to the output of each window size (i.e., each layer of the spatial pyramid). Finally, in the decoder part, the low-level features from the encoder are concatenated with the multi-scale features extracted by cross-contextual attention after bilinear upsampling. The last two Swin-Transformer blocks and the patch expanding module generate the final prediction masks.
### Hybrid Models
Hybrid Transformers concatenate Transformer blocks with convolution layers to extract both local details and long-range dependencies. We further classify this category into _Transformer: Encoder_, _Transformer: Decoder_, and _Transformer: Skip Connection_ according to the position of the combined module in the U-Net architecture.
#### 4.2.1 Transformer: Encoder
Starting with TransUNet [24], multiple methods in the medical image segmentation field adopt the self-attention mechanism in the encoder.
Transformers have developed as an alternative architecture for modeling global context that relies exclusively on attention mechanisms instead of convolution operators. However, the global self-attention mechanism misses low-level details, and direct upsampling cannot retrieve this local information, resulting in inaccurate segmentation. The authors propose the **TransUNet** architecture, a hybrid approach that integrates a CNN-Transformer hybrid as the encoder and a cascaded upsampler as the decoder, combining the advantages of Transformers and U-Net to boost segmentation performance by recovering localized spatial information.
The framework of TransUNet is illustrated in Figure 14. Rather than having the Transformer directly project raw tokenized image patches into the latent embedding space, the proposed encoder first employs a CNN as a feature extractor to build a feature map for the Transformer input layer. In this way, the intermediate CNN feature maps of different resolutions can be saved and utilized in the following process.
For the decoder, the Cascaded Upsampler (CUP) is proposed to replace naive bilinear upsampling, applying several upsampling blocks to decode the hidden feature and output the final segmentation result. Finally, the hybrid encoder and the CUP constitute the overall architecture with skip connections to facilitate feature aggregation at different resolution levels. This strategy can compensate for the loss of local fine-grained details caused by the Transformer encoder and merge the encoded global information with the local information contained in intermediate CNN feature maps.
The experiments show that TransUNet significantly outperforms a model consisting of a pure Transformer encoder with naive upsampling, as well as the ViT-hybrid model without skip connections [24]. Comparisons with prior work [34; 150] also demonstrate the superiority of TransUNet over competing CNN-based approaches in terms of both qualitative visualization and quantitative evaluation criteria (i.e., average DSC and HD). TransUNet integrates the benefits of both high-level global contextual information and low-level details, offering an alternative approach for medical image segmentation.
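The overall encoder design can be sketched as follows; `backbone` is a hypothetical CNN that returns its final feature map plus intermediate skip features, and the token and depth settings are illustrative:

```python
import torch.nn as nn

class HybridEncoder(nn.Module):
    # CNN-Transformer hybrid encoder in the spirit of TransUNet: a CNN
    # produces a downsampled feature map whose 1x1 "patches" are tokenized
    # and processed by a Transformer, while intermediate CNN maps are kept
    # for the skip connections of the cascaded upsampler.
    def __init__(self, backbone, cnn_channels, dim=768, depth=12, heads=12):
        super().__init__()
        self.backbone = backbone                      # returns (feat, skips)
        self.proj = nn.Conv2d(cnn_channels, dim, kernel_size=1)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        feat, skips = self.backbone(x)                # feat: (B, C, H/16, W/16)
        tokens = self.proj(feat).flatten(2).transpose(1, 2)
        return self.transformer(tokens), skips
```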
Wang et al. [140] propose the encoder-decoder architecture **TransBTS**, which leverages the Transformer for learning global contextual information and the 3D CNN for modeling local details. In contrast to the concurrent Transformer-based
Figure 12: A visual comparison with the state-of-the-art approaches on the Synapse dataset. Rows above the red line show successful segmentation cases, and rows below the red line show failed cases with relatively large errors [138].
Figure 13: The overall architecture of TransDeepLab, which comprises an encoder and decoder built on Swin-Transformer blocks. It is a pure Transformer-based extension of DeepLabv3+ [33].
model [24], which analyzes 3D medical volumetric data in a slice-by-slice manner, TransBTS also explores the local features along the depth dimension by processing all the image slices at once.
The network encoder initially employs a 3D CNN to capture volumetric spatial features while simultaneously downsampling the input 3D images, yielding compact volumetric feature maps. Each feature map is projected into a token and fed into the Transformer encoder to investigate the global relationships. The full-resolution segmentation maps are generated by the 3D CNN decoder after progressive upsampling while using the feature embedding from the Transformer. For the encoder part, TransBTS first utilizes \(3\times 3\times 3\) convolution blocks with downsampling to process the 3D input medical image data, which boosts the effective embedding of rich local 3D features across spatial and depth dimensions into the low-resolution feature representation \(F\). A linear projection is applied to the feature representation \(F\) to obtain the sequence \(f\), which is then combined with position embeddings and used as the input to the Transformer encoder. The Transformer encoder consists of multiple Transformer layers, each comprising a Multi-Head Attention (MHA) block and a Feed-Forward Network (FFN). The output sequence of the Transformer encoder passes through a feature mapping module and is reshaped to a 4D feature map \(Z\) of the same dimension as \(F\). The approach employs cascaded upsampling and convolution blocks to progressively restore the segmentation predictions at the original resolution. Furthermore, skip connections combine the fine-grained details of local information with the decoder modules, resulting in more accurate segmentation masks.
The authors conduct comparisons between the proposed TransBTS and the closest method TransUNet [24]. TransUNet essentially processes 3D medical images slice by slice, while TransBTS is a 3D model that explores the continuous interaction through the depth dimension by processing a 3D medical image in a single pass. In contrast to TransUNet, which adopts pre-trained ViT models on other large-scale datasets, TransBTS is trained on the dataset for the specified task without relying on pre-trained weights.
The framework is evaluated on the Brain Tumor Segmentation (BraTS) 2019 challenge and 2020 challenge. Compared to the 3D U-Net baseline, TransBTS achieves a significant enhancement in segmentation. The prediction results indicate the improved accuracy and the superiority of modeling long-range dependencies.
Previous approaches [141] primarily focus on replacing convolution operations with Transformer layers or consecutively stacking the two together to address the inherent inability of pure Transformer-based models to learn local information. In this study, the authors propose a new strategy, **TransFuse**, which consists of a CNN-based encoder branch and a Transformer-based branch in parallel, fused with the proposed BiFusion module, thereby further exploiting the benefits of CNNs and Transformers. The Transformer branch is designed in the typical encoder-decoder manner. The input images are first split into non-overlapping patches. A linear embedding layer then projects the flattened patches into the raw embedding sequence, to which a learnable position embedding of the same dimension is added. The obtained embeddings are fed into the Transformer encoder, which comprises \(L\) layers of MSA and MLP. The output of the last layer of the Transformer encoder is passed through layer normalization to obtain the encoded sequence.
The decoder part utilizes the same progressive upsampling (PUP) approach as SETR [154]. The encoded sequence is first reshaped back into a sequence of 2D feature maps. Two stacked upsampling-convolution layers are then employed to restore the feature scales. The feature maps with different spatial resolutions generated by each upsampling-convolution layer are retained for the subsequent fusion operation. For the CNN branch, the approach discards the last layer of a traditional CNN architecture and combines the information extracted from the CNN with the global contextual features obtained from the Transformer branch. This design yields a shallower model, avoiding the need for extremely deep networks that exhaust resources to capture long-range dependencies. For instance, in a typical ResNet-based network with five blocks, only the outputs of the 4th, 3rd, and 2nd layers are saved for the subsequent fusion with the feature maps from the Transformer branch.
The BiFusion module is proposed to fuse the features extracted from the two branches mentioned above to predict the segmentation results. The global features from the Transformer branch are boosted by the channel attention proposed in SE-Block [13]. Meanwhile, the feature maps from the CNN branch are filtered by the spatial attention adopted from the CBAM [155] block to suppress irrelevant and noisy parts and highlight local interactions. The Hadamard product is then applied to the features from the two branches to model the interaction between them. The interaction feature \(b^{i}\) is concatenated with the attended features \(\tilde{t}^{i}\) and \(\tilde{g}^{i}\), and the result is fed through a residual block to produce the feature \(f^{i}\), which models both the global and local features at the original resolution. Finally, the segmentation prediction is generated by integrating the \(f^{i}\) from different BiFusion modules via the attention-gated (AG) [156] skip connection.
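A simplified sketch of a BiFusion-style block follows; the exact attention sub-modules and the residual block are replaced here by lightweight stand-ins (SE-style channel attention, CBAM-style spatial attention, and a \(1\times 1\) fusion convolution):

```python
import torch
import torch.nn as nn

class BiFusion(nn.Module):
    # Channel attention refines the Transformer features t, spatial attention
    # refines the CNN features g, and their Hadamard product models the
    # interaction before the three tensors are fused.
    def __init__(self, dim):
        super().__init__()
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(dim, 1, kernel_size=7, padding=3), nn.Sigmoid())
        self.fuse = nn.Conv2d(3 * dim, dim, kernel_size=1)

    def forward(self, t, g):
        # t, g: (B, C, H, W) features from the Transformer and CNN branches
        t_hat = t * self.channel_attn(t)
        g_hat = g * self.spatial_attn(g)
        b = t_hat * g_hat                     # interaction feature b^i
        return self.fuse(torch.cat([b, t_hat, g_hat], dim=1))
```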
They evaluate the performance of three variants of TransFuse on four segmentation tasks with different imaging modalities and target sizes. TransFuse-S is constructed with
Figure 14: The overview architecture of the TransUNet [24]. The Transformer layers are employed in the encoder part. The schematic of the Transformer is shown on the left.
ResNet-34 (R34) and 8-layer DeiT-Small (DeiT-S) [64]. Besides, TransFuse-L is composed of Res2Net-50 and 10-layer DeiT-Base (DeiT-B), and TransFuse-L* is implemented based on ResNetV2-50 and ViT-B [22]. For polyp segmentation, TransFuse-S/L significantly outperforms the CNN baseline models with fewer parameters and faster running time. TransFuse-L* also achieves the best performance among previous SOTA Transformer-based methods with faster inference: it runs at 45.3 FPS, about 12% faster than TransUNet. Experiments on other segmentation tasks also show the superiority of its segmentation performance.
Despite the powerful results of applying Transformers to segmentation tasks [157; 154], the dilemma is that properly training existing Transformer-based models requires large-scale datasets, whereas the number of images and labels available for medical image segmentation is relatively limited. To overcome this difficulty, **MedT** [142] proposes a gated position-sensitive axial attention mechanism in which the introduced gates are learnable parameters that enable the model to be applied to datasets of arbitrary size. Furthermore, the authors suggest a Local-Global (LoGo) training strategy to improve segmentation performance by operating on both the original image and local patches.
The main architecture of MedT, as shown in Figure 15 (a), is composed of two branches: a shallow global branch that works on the original resolution of the entire image, and a deep local branch that acts on image patches. Two encoder blocks and two decoder blocks comprise the global branch, which is sufficient to model long-range dependencies. In the local branch, the original image is partitioned into 16 patches and each patch is fed forward through the network. The output feature maps are re-sampled based on their locations to obtain the output feature maps of the branch. The results generated from both branches are then added and fed into a \(1\times 1\) convolution layer to produce the output segmentation mask. The LoGo training strategy enables the global branch to concentrate on high-level information and allows the local branch to learn finer interactions between pixels within each patch, resulting in improved segmentation performance.
Figure 15 (b) and (c) illustrate the gated axial Transformer layer, which is used as the main building block in MedT, and its feed-forward structure. The authors introduce four learnable gates \(G_{V1},G_{V2},G_{Q},G_{K}\in\mathbb{R}\) that control the amount of information the positional embeddings supply to the key, query, and value. Depending on whether a relative positional encoding is learned accurately, the gate parameters converge to 1 or to some lower value. The gating mechanism controls the impact of relative positional encodings on the encoding of non-local context and allows the model to work well on any dataset regardless of size.
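A simplified single-axis sketch of the gating idea is given below; for readability it stores a full \(L\times L\) table of relative embeddings instead of the usual relative-distance parameterization, so it illustrates the gates rather than reproducing MedT's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAxialAttention1D(nn.Module):
    # Gates G_Q, G_K, G_V1, G_V2 scale the relative positional terms so the
    # model can down-weight poorly learned position encodings on small data.
    def __init__(self, dim, length):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.r_q = nn.Parameter(torch.randn(length, length, dim) * 0.02)
        self.r_k = nn.Parameter(torch.randn(length, length, dim) * 0.02)
        self.r_v = nn.Parameter(torch.randn(length, length, dim) * 0.02)
        self.g_q, self.g_k, self.g_v1, self.g_v2 = (
            nn.Parameter(torch.ones(1)) for _ in range(4))

    def forward(self, x):
        # x: (B, L, C), the tokens along one axis (height or width)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = q @ k.transpose(-2, -1)
        logits = logits + self.g_q * torch.einsum('blc,lmc->blm', q, self.r_q)
        logits = logits + self.g_k * torch.einsum('bmc,lmc->blm', k, self.r_k)
        attn = F.softmax(logits / q.size(-1) ** 0.5, dim=-1)
        return self.g_v1 * (attn @ v) + \
               self.g_v2 * torch.einsum('blm,lmc->blc', attn, self.r_v)
```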
Unlike the fully-attended baseline [157], MedT trained on even smaller datasets outperforms the convolutional baseline and other Transformer-based methods. In addition, improvements in medical segmentation are also observed since the proposed method takes into account pixel-level dependencies.
In contrast to multiple proposed methods [154; 142; 141; 24] that investigate the task of 2D medical image segmentation, **UNETR**[143] proposes a novel Transformer-based architecture for 3D segmentation which employs the Transformer as the encoder to learn global contextual information from the volumetric data. In addition, unlike the previous frameworks proposed for 3D medical image segmentation [145; 140], the encoded feature from the Transformer of this proposed model is directly connected to a CNN-based decoder via skip connections at different resolution levels. The U-shaped UNETR comprises a stack of Transformers as the encoder and a decoder coupling with it by skip connections. They begin by generating the 1D sequence of patches by splitting the 3D input volume in a non-overlapping manner. The flattened input patches are then passed through a linear projection layer to yield \(K\) dimensional patch embeddings. They attach a 1D learnable positional embedding to each patch embedding taking into account the spatial information of the extracted patches. After the embedding layer, the global multi-scale representation is captured using Transformer blocks composed of multi-head self-attention modules and multilayer perceptron layers. They resize and project the sequence representation extracted from the Transformer at different resolutions for use in the decoder in order to retrieve spatial information of the low-level details.
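The patch-embedding step just described is simple to express in code. The following is a minimal PyTorch sketch (class and parameter names are illustrative) of UNETR-style volumetric tokenization; a strided 3D convolution is mathematically equivalent to flattening each non-overlapping patch and applying a shared linear projection.

```python
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Sketch of UNETR-style volumetric patch embedding: a 3D volume
    (B, C, H, W, D) is split into non-overlapping P^3 patches, each patch is
    linearly projected to a K-dimensional token, and a 1D learnable
    positional embedding is added."""
    def __init__(self, in_ch=1, embed_dim=768, patch=16, img_size=96):
        super().__init__()
        n_patches = (img_size // patch) ** 3
        # strided Conv3d == "flatten each patch + shared Linear projection"
        self.proj = nn.Conv3d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, embed_dim))

    def forward(self, x):                      # x: (B, C, H, W, D)
        x = self.proj(x)                       # (B, K, H/P, W/P, D/P)
        x = x.flatten(2).transpose(1, 2)       # (B, N, K) token sequence
        return x + self.pos                    # add positional embedding

tokens = PatchEmbed3D()(torch.randn(1, 1, 96, 96, 96))
print(tokens.shape)                            # torch.Size([1, 216, 768])
```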
In the expansive path of the framework, the proposed CNN-based decoder combines the output features of different resolutions from the Transformer with upsampled feature maps to properly predict the voxel-wise segmentation mask at the original input resolution.
The paper claims that UNETR achieves new state-of-the-art performance on all organs compared against CNN-based [158; 159; 160; 35] and competing Transformer-based [145; 24; 154] baselines on the BTCV dataset, with particularly significant improvements on small organs. In addition, it outperforms the closest methodologies on the brain tumor and spleen segmentation tasks of the MSD dataset. UNETR shows the superiority of
Figure 15: Overview of the MedT [142] architecture. The network uses the LoGo strategy for training. The upper global branch utilizes the first few blocks of the Transformer layers to encode the long-range dependencies of the original image. In the local branch, the images are converted into small patches and then fed into the network to model the local details within each patch. The output of the local branch is re-sampled relying on the location information. Finally, a \(1\times 1\) convolution layer fuses the output feature maps from the two branches to generate the final segmentation mask.
learning both global dependencies and fine-grained local relationships in medical images.
Figure 16 presents qualitative segmentation comparisons for brain tumor segmentation on the MSD dataset between UNETR [143], TransBTS [140], CoTr [145] and U-Net [34]. It can be seen that the details of the brain tumor are captured well by UNETR [143].
As opposed to other methods that utilize the Transformer module as an additional block beside the CNN-based components of the architecture, UNETR [143] leverages the Transformer as the encoder instead of a CNN-based encoder. The Swin Transformer [57] is a hierarchical vision Transformer featuring an efficient shifted-window partitioning scheme for computing self-attention. Inspired by these two approaches, a novel model termed **Swin UNEt TRansformers (Swin UNETR)** [144] is proposed for brain tumor segmentation in this work.
The proposed framework applies a U-shape architecture with the Swin Transformers as the encoder and a CNN-based module as the decoder connected to the encoder via skip connections at different resolution levels. The model initially converts 3D MRI images with four channels to non-overlapping patches and creates windows of a specific size with a patch partition layer.
The Swin UNETR encoder is composed of 4 stages. Each stage comprises 2 Transformer blocks and a patch merging layer. In the Transformer blocks, self-attention is computed with a shifted windowing mechanism. Swin UNETR employs windows of size \(M\times M\times M\) to partition the 3D tokens with resolution \(H^{\prime}\times W^{\prime}\times D^{\prime}\) into regions of \(\lceil\frac{H^{\prime}}{M}\rceil\times\lceil\frac{W^{\prime}}{M}\rceil\times\lceil\frac{D^{\prime}}{M}\rceil\) at layer \(l\). The partitioned window regions are then shifted by \((\lfloor\frac{M}{2}\rfloor,\lfloor\frac{M}{2}\rfloor,\lfloor\frac{M}{2}\rfloor)\) voxels at the following layer \(l+1\). The patch merging layer after the Transformer components reduces the resolution of the feature maps by a factor of two and concatenates them to form a feature embedding with twice the embedding dimensionality.
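The window partitioning and cyclic shift described above can be sketched as follows; this is a simplified illustration assuming the token resolution is divisible by the window size \(M\), not the exact Swin UNETR code (which additionally builds attention masks for the shifted windows).

```python
import torch

def window_partition_3d(x, M):
    """Split a (B, H, W, D, C) token volume into (num_windows*B, M, M, M, C)."""
    B, H, W, D, C = x.shape
    x = x.view(B, H // M, M, W // M, M, D // M, M, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return x.view(-1, M, M, M, C)

def shifted_windows_3d(x, M):
    """Cyclically shift tokens by floor(M/2) along each axis before
    partitioning, so the next attention layer mixes information across the
    window borders of the previous (un-shifted) layer."""
    s = M // 2
    shifted = torch.roll(x, shifts=(-s, -s, -s), dims=(1, 2, 3))
    return window_partition_3d(shifted, M)

x = torch.randn(1, 8, 8, 8, 48)              # (B, H', W', D', C) with M = 4
print(window_partition_3d(x, 4).shape)       # torch.Size([8, 4, 4, 4, 48])
print(shifted_windows_3d(x, 4).shape)        # torch.Size([8, 4, 4, 4, 48])
```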
For the decoder of the architecture, the output feature representations of the bottleneck are reshaped and passed through the residual block containing two convolutional layers. The subsequent deconvolutional layer increases the resolution of feature maps by a factor of 2. The outputs are then concatenated with the outputs of the previous stage and fed into another residual block. After the resolutions of the feature maps are restored to the original \(H^{\prime}\times W^{\prime}\times D^{\prime}\), a head is utilized to generate the final segmentation predictions.
The authors conduct experiments comparing Swin UNETR against the previous methodologies SegResNet [161], nn-UNet [151] and TransBTS [140]. The results demonstrate that the proposed model ranks among the top approaches in the BraTS 2021 challenge, owing to the better capability of Swin Transformers, compared to regular Transformers with a fixed window resolution, to learn multi-scale contextual information and model long-range dependencies.
#### 4.2.2 Transformer: Decoder
Another direction is to modify the decoder of the U-shape structure to aggregate the Transformer-CNN-based modules.
In the **Segtran** framework [139], a Squeeze-and-Expansion Transformer is proposed to "squeeze" the attention matrix and aggregate multiple sets of contextualized features from the output. A novel Learnable Sinusoidal Positional Encoding is also employed to impose a continuity inductive bias for images. Segtran consists of five components: 1) a CNN backbone to extract image features, 2) input/output feature pyramids to do upsampling, 3) the Learnable Sinusoidal Positional Encoding, 4) Squeeze-and-Expansion Transformer layers to contextualize features, and 5) a segmentation head. The pretrained CNN backbone is first utilized to learn feature maps from the input medical images. Since the input features to the Transformer have a low spatial resolution, the authors increase their spatial resolution with an input Feature Pyramid Network (FPN) [162], which upsamples the feature maps by bilinear interpolation. Then the proposed Learnable Sinusoidal Positional Encoding is added to the visual features to inject spatial information. In contrast to the two previous mainstream PE schemes [163; 22], the new positional embedding vector, a combination of sine and cosine functions of linear transformations of \((x,y)\), brings in the continuity bias with adaptability: the encoding varies gradually with pixel coordinates, so close units receive similar positional encodings, pushing the attention weights between them towards higher values. The encoding vectors produced by adding positional encodings to visual features are then fed into the Transformer.
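A minimal sketch of such a learnable sinusoidal encoding is given below (PyTorch, illustrative names): pixel coordinates are passed through a learnable linear map and converted into sine/cosine features, so nearby pixels receive similar, continuously varying codes while the frequencies themselves are learned.

```python
import torch
import torch.nn as nn

class LearnableSinusoidalPE(nn.Module):
    """Sketch of a Segtran-style learnable sinusoidal positional encoding:
    (x, y) coordinates go through a learnable linear transform, then sin/cos
    nonlinearities, yielding a continuous code per pixel."""
    def __init__(self, dim=256):
        super().__init__()
        assert dim % 2 == 0
        self.linear = nn.Linear(2, dim // 2)   # learnable transform of (x, y)

    def forward(self, H, W):
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).float().view(-1, 2)  # (H*W, 2)
        proj = self.linear(coords)                                  # (H*W, dim/2)
        # interleave sin and cos features of the projected coordinates
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

pe = LearnableSinusoidalPE(dim=256)(16, 16)
print(pe.shape)   # torch.Size([256, 256]) -- added to flattened visual features
```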
The novel Transformer architecture combines a Squeezed Attention Block (SAB) [164] with an Expanded Attention Block (EAB). The method employs the Induced Set Attention Block (ISAB) proposed in [164] as the squeezed attention block. The Squeezed Attention Block computes attention between the input and a set of inducing points and compresses the attention matrices to lower-rank matrices, reducing noise and overfitting. The Expanded Attention Block, a mixture-of-experts model, outputs \(N_{m}\) sets of complete contextualized features from \(N_{m}\) modes. Each mode is an individual single-head Transformer, and all modes share the same feature space. This is in contrast to multi-head attention, where each head outputs an
Figure 16: Comparison of visualization of brain tumor segmentation on the MSD dataset. The whole tumor (WT) includes a combination of red, blue, and green regions. The union of red and blue regions demonstrates the tumor core (TC). The green regions indicate the enhanced tumor core (ET) [143].
exclusive feature subset. All features are then aggregated into one set using dynamic mode attention, which is obtained by applying a linear transformation to each mode's features and taking a softmax over all modes.
Compared with representative existing methods in the experiments, Segtran consistently achieved the highest segmentation accuracy and exhibited good cross-domain generalization capabilities.
#### 4.2.3 Transformer: Skip Connection
In this section, Transformer blocks are incorporated into the skip connections to facilitate the transmission of detailed information from the encoder to the decoder.
Although Transformer-based methods overcome the limitation in capturing long-range dependencies, they present extreme computational and spatial complexity when analyzing high-resolution volumetric image data. Some studies [163; 24] employ hybrid structures fusing CNNs with Transformers in an attempt to reduce the training requirement of huge datasets. A recent approach, TransUNet [24], shows good performance; however, the model is difficult to optimize due to the self-attention mechanism of the vanilla Transformer. First, training the attention takes a long time, because attention is initially distributed uniformly over each pixel within the salient regions [21]. Second, a vanilla Transformer struggles to handle multi-scale and high-resolution feature maps due to its high computational cost.
Motivated by this, [145] proposes a novel encoder-decoder framework, **CoTr**, which bridges CNN and Transformer. The architecture exploits CNN to learn feature representations. An efficient deformable self-attention mechanism in the Transformer is designed to model the global context from the extracted feature maps, which reduces the computational complexity and enables the model to process high-resolution features. And the final segmentation results are generated by the decoder.
As shown in Figure 18, the DeTrans-encoder consists of an input-to-sequence layer and multiple DeTrans layers. The input-to-sequence layer first flattens the feature maps at different resolutions extracted from the CNN-encoder into 1D sequences \(\{f_{l}\}_{l=1}^{L}\). The corresponding 3D positional encoding sequence \(p_{l}\) is then added to each flattened sequence \(f_{l}\) to complement the spatial information. The combined sequence is fed into the DeTrans layers, each a composition of an MS-DMSA layer and a Feed-Forward Network (FFN). In contrast to the self-attention mechanism, which casts attention over all possible locations, the proposed MS-DMSA layer attends only to a small set of key sampling locations around a reference location. As a result, it achieves faster convergence and lower computational complexity. A skip connection is utilized after each DeTrans layer to preserve the low-level details of local information. The output of the DeTrans-encoder is successively upsampled by the pure CNN decoder to restore the original resolution. Besides, they apply skip connections and a deep supervision strategy to add fine-grained details and auxiliary losses to the prediction outputs.
The experimental results indicate that CoTr, with its hybrid architecture, outperforms models with a pure CNN encoder or a pure Transformer encoder. It also outperforms other hybrid methods such as TransUNet [24] in processing multi-scale 3D medical images, with fewer parameters and lower complexity.
**HiFormer** [37] is proposed to aggregate a fusion module in the skip connections to learn richer representations. Figure 19 shows the end-to-end network structure of the strategy, which incorporates the global dependencies learned by the Swin Transformer with the detailed local features extracted by the CNN modules. The encoder is composed of hierarchical CNN and Swin Transformer modules together with the novel Double-Level Fusion (DLF) module. First, medical images are fed into a CNN module to obtain a local fine-grained semantic representation. After the CNN layers capture the shallow features, HiFormer introduces Swin Transformer modules to complement the global feature information. The Swin Transformer module employs windows of different sizes
Figure 17: The Segtran network extracts image features using a CNN backbone and combines the features with the positional encoding of pixels, flattened into a series of local feature vectors. Multiple Squeeze-and-Expansion Transformer layers are stacked to process the local feature vectors. Finally, an output FPN after the Transformer upsamples the features to generate the final prediction [139].
Figure 18: Overview of the CoTr [145] architecture. It is composed of a CNN-encoder, a DeTrans-encoder and a decoder. The CNN-encoder models the local information of the input images and provides the outputs at each stage. The outputs of different resolutions are flattened, fused and passed through the Deformable Transformer Layers along with positional encoding. The decoder reshapes the processed sequences from the DeTrans-encoder and produces the final predictions.
to learn dependencies between multiple scales. To reuse the shallow and deep multi-scale feature information of the encoder, HiFormer designs a novel skip-connection module, the DLF module. The deep-level semantic and shallow-level localization information are fed into the DLF module and fused via a cross-attention mechanism. Finally, both generated feature maps are passed to the decoder to produce the final segmentation prediction. Experiments conducted on the Synapse [135], SegPC [165], and ISIC 2017 [166] datasets demonstrate the superior learning ability of HiFormer. Moreover, the lightweight model with fewer parameters also surpasses CNN-based methods and previous Transformer-based approaches at lower computational complexity.
### Other Architectures
Most ViT-based models rely on pre-training on large natural-image datasets to obtain pre-trained weights and then solve downstream tasks via transfer learning. Several works explore training in a self-supervised or semi-supervised manner to efficiently utilize medical image datasets of limited size or without manual labels. Furthermore, some approaches apply Transformers to the design of architectures for medical image segmentation, instead of using the Transformers to act directly on the input image.
Unlike the previously proposed methods that employ Transformers to act directly on the medical image for feature extraction, this method [146] adopts AutoML to automatically design the network architecture without much human heuristics or assumptions; here the Transformer is applied to encode the embedding vector describing the architecture configuration. The approach reduces the workload of algorithm design by automatically estimating "almost" all components of the framework, rather than manually designing the network and training strategies, while simultaneously improving segmentation performance.
The proposed Transformer-based **T-AutoML**, inspired by SpineNet [167], leverages neural architecture search (NAS) with a larger search space to optimize the selection of network connections. This framework can arbitrarily connect feature maps at different spatial levels of the network with one another, in contrast to previous methods that only search over encoder-decoder U-shape networks [168; 169; 170]. The candidate blocks of the network consist of 3D residual blocks, 3D bottleneck blocks, and 3D axial-attention blocks. The residual and bottleneck blocks are effective in alleviating vanishing gradients, while the axial-attention blocks model long-range dependencies. An upsampling layer (linear interpolation) is utilized at the end of the architecture to produce feature maps at the original volume size.
To search for the optimal architecture and training configuration, the authors first encode the necessary components in the search space to form a one-dimensional vector \(v\). The search space contains candidates of different configurations with regard to data augmentation, learning rates, learning rate schedulers, loss function, the optimizer, the number and spatial resolution of blocks, and block types.
After obtaining the encoding vector \(v\), the proposed predictor estimates the binary relation between the validation accuracy values of \(v_{i}\) and \(v_{j}\). The predictor employs a Transformer encoder to encode vectors \(v\) of varying lengths into feature maps of a fixed resolution. The feature maps are then passed through multiple FC layers to generate the binary relation predictions, denoted \(GT_{v_{i},v_{j}}\). Since the predictor is designed to rank the vectors with respect to their accuracy values and to estimate these relations, the actual values of the predicted accuracy need not be calculated for each vector. Thus, the new predictor requires less overall training time.
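The following PyTorch sketch illustrates the idea of such a relation predictor under simplifying assumptions (scalar tokens, mean pooling, illustrative layer sizes); it mirrors the encode-then-compare structure rather than the exact T-AutoML implementation.

```python
import torch
import torch.nn as nn

class RelationPredictor(nn.Module):
    """Sketch of a T-AutoML-style predictor: two variable-length architecture
    encodings v_i, v_j are summarised by a Transformer encoder into
    fixed-size features, and an MLP predicts whether v_i would reach a
    higher validation accuracy than v_j (a binary relation, not the value)."""
    def __init__(self, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)     # lift scalar tokens to d_model
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model),
                                  nn.ReLU(),
                                  nn.Linear(d_model, 1))

    def summarise(self, v):                    # v: (B, L), L may vary per call
        tokens = self.embed(v.unsqueeze(-1))   # (B, L, d_model)
        return self.encoder(tokens).mean(dim=1)   # fixed-size summary

    def forward(self, v_i, v_j):
        logit = self.head(torch.cat([self.summarise(v_i),
                                     self.summarise(v_j)], dim=-1))
        return torch.sigmoid(logit)            # P(acc(v_i) > acc(v_j))

p = RelationPredictor()(torch.rand(2, 37), torch.rand(2, 41))
print(p.shape)   # torch.Size([2, 1])
```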
The experiments indicate that the proposed method achieves the state of the art (SOTA) in lesion segmentation tasks and shows superiority in transferring to different datasets.
Despite the promising results achieved by CNN- and Transformer-based methods with large-scale images, these approaches require expert labeling at the pixel/voxel level. Expensive time costs and manual annotation limit the size of medical image datasets. Facing this dilemma, the proposed semi-supervised segmentation [147] provides a low-cost, practical scheme, called **Cross Teaching** between CNN and Transformer, to train effective models using a small amount of correctly labeled data and a large amount of unlabeled or coarsely labeled data.
Inspired by existing works [171; 172; 173] on semi-supervised learning that introduce perturbations at different levels and encourage predictions to be consistent during training, the designed cross teaching introduces perturbation at both the learning-paradigm level and the output level. As shown in Figure 20, each image in the training set, which contains labeled and unlabeled images, is fed into two different learning paradigms: a CNN and a Transformer, respectively. For the unlabeled raw images, the cross-teaching scheme allows cross supervision between a CNN (\(f_{\theta}^{c}(\cdot)\)) and a Transformer (\(f_{\theta}^{t}(\cdot)\)), which aims to combine, at the output level, the Transformer's ability to model long-range dependencies with the CNN's ability to learn local information.
The unlabeled data initially passes through a CNN and a
Figure 19: HiFormer comprises the CNN-Transformer encoder, the CNN-based decoder, and the Double-Level Fusion Module (DLF). The feature layers of the shallowest level \(p^{i}\) and of the deepest level \(p^{s}\) are fed into the DLF module for the fusion of hierarchical information. Blue blocks and orange blocks refer to Swin Transformer and CNN modules, respectively [37].
Transformer respectively to generate predictions \(p_{i}^{c}\) and \(p_{i}^{t}\).
\[p_{i}^{c}=f_{\theta}^{c}(x_{i});\quad p_{i}^{t}=f_{\theta}^{t}(x_{i}) \tag{6}\]
Then the pseudo labels \(pl_{i}^{c}\) and \(pl_{i}^{t}\) are produced in this manner:
\[pl_{i}^{c}=\arg\max(p_{i}^{t});\quad pl_{i}^{t}=\arg\max(p_{i}^{c}) \tag{7}\]
The pseudo label \(pl_{i}^{c}\) used for the CNN training is generated by the Transformer. Similarly, the CNN model provides pseudo labels for Transformer training. The cross-teaching loss for the unlabeled data is defined as follows:
\[L_{cl}=\underbrace{L_{Dice}(p_{i}^{c},pl_{i}^{c})}_{supervision\ for\ CNNs}+\underbrace{L_{Dice}(p_{i}^{t},pl_{i}^{t})}_{supervision\ for\ Transformers} \tag{8}\]
which is a bidirectional loss function. One direction of the data stream is from the CNN to the Transformer, and the other direction is from the Transformer to the CNN. For the labeled data, the CNN and Transformer are supervised by the ground truth. The commonly-used supervised loss functions, i.e. the cross-entropy loss and Dice loss, are employed to update model parameters.
\[L_{sup}=L_{ce}(p_{i},y_{i})+L_{Dice}(p_{i},y_{i}) \tag{9}\]
where \(p_{i}\) and \(y_{i}\) represent the prediction and the label of image \(x_{i}\). The overall objective, combining the cross-teaching branch and the supervised branch, is defined as:
\[L_{total}=L_{sup}+\lambda L_{cl} \tag{10}\]
where \(\lambda\) is a weight factor defined by a time-dependent Gaussian warm-up function [174; 175]:

\[\lambda(t)=0.1\cdot e^{-5\left(1-\frac{t}{t_{total}}\right)^{2}} \tag{11}\]
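A compact sketch of the unlabeled-branch objective, assuming softmax outputs of shape (batch, classes, H, W) and a simplified soft Dice loss, is shown below; the exact loss implementation of the paper may differ.

```python
import math
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between softmax predictions and (pseudo-)label maps."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_teaching_loss(p_c, p_t):
    """Each model is supervised by the *other* model's hard pseudo labels;
    the argmax/one-hot step blocks gradients through the pseudo labels."""
    n_cls = p_c.shape[1]
    pl_c = F.one_hot(p_t.argmax(1), n_cls).permute(0, 3, 1, 2).float()  # from Transformer
    pl_t = F.one_hot(p_c.argmax(1), n_cls).permute(0, 3, 1, 2).float()  # from CNN
    return dice_loss(p_c, pl_c) + dice_loss(p_t, pl_t)

def ramp_up(t, t_total):
    """Gaussian warm-up: lambda(t) = 0.1 * exp(-5 * (1 - t / t_total)^2)."""
    return 0.1 * math.exp(-5.0 * (1.0 - t / t_total) ** 2)

# toy check with (B, classes, H, W) softmax outputs of the two branches
p_c = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)
p_t = torch.softmax(torch.randn(2, 4, 32, 32), dim=1)
print(ramp_up(30, 100) * cross_teaching_loss(p_c, p_t))
```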
The results of the ablation study indicate that combining CNN and Transformer in a cross-teaching way outperforms existing semi-supervised methods. Furthermore, the novel method has the potential to reduce labeling cost by learning from limited labeled data and large-scale unlabeled data. However, achieving SOTA performance via semi-supervised approaches remains a significant challenge.
Zhou et al. [148] hypothesize that the ability to aggregate contextual information is imperative to improve the performance of medical image analysis. Nonetheless, there is no ImageNet-scale medical image dataset for pre-training. Therefore, they investigate a novel self pre-training paradigm based on the Masked Autoencoder (MAE), **MAE self pre-training**, for medical image analysis; MAE is one of the masked image modeling (MIM) frameworks [194; 195; 196; 197]. MIM encourages the framework to restore a masked target by integrating information from the context: the main idea is masking and reconstructing, i.e., masking a set of image patches before they are input to the Transformer and reconstructing the masked patches at the output.
The pipeline for segmentation with MAE self pre-training contains two stages. In the first stage (as shown on the left of Figure 21), ViT is pre-trained with MAE as the encoder. The input patches are randomly divided into visible ones and masked ones. The ViT encoder only acts on the visible patches. Compared to other MIM methods, MAE does not employ mask tokens in the encoder, which saves time and allows for faster pre-training. A lightweight Transformer decoder is appended to reconstruct the full image. The decoder is only an auxiliary part used for pre-training and will not be applied in downstream tasks.
In the second stage (as shown on the right of Figure 21), the pre-trained ViT weights are transferred to initialize the segmentation encoder. Then, the task-specific heads are appended to perform downstream tasks. The whole segmentation network, e.g., UNETR, is finetuned to perform the segmentation task.
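The core of the first stage is the random masking of patch tokens, which can be sketched as follows (PyTorch, with an illustrative mask ratio of 75%); the encoder then runs only on the returned visible tokens.

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """MAE-style random masking sketch: keep a random subset of patch tokens
    for the encoder and record which positions were dropped so a lightweight
    decoder can reconstruct the full image during pre-training."""
    B, N, D = tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    shuffle = torch.rand(B, N).argsort(dim=1)      # a random permutation per sample
    keep_idx = shuffle[:, :n_keep]                 # indices of visible tokens
    visible = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, keep_idx, 0.0)                # 1 = masked, 0 = visible
    return visible, mask

visible, mask = random_masking(torch.randn(2, 196, 768))
print(visible.shape, int(mask[0].sum()))           # torch.Size([2, 49, 768]) 147
```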
The experiments, including the MAE pre-training and the downstream task, are conducted to evaluate the performance of the proposed method. The results show that MAE can recover the lost information in the masked input patches. MAE pre-training enables the model to improve its classification and segmentation performance on medical image analysis tasks, surpassing the ImageNet pre-trained model to SOTA.
Figure 21: Illustration of MAE self pre-training. First, ViT is pre-trained with MAE as the encoder: the ViT encoder is fed a random subset of patches, and the Transformer decoder reconstructs the complete image, as shown on the left. Then the pre-trained ViT weights are transferred to initialize the segmentation encoder, as shown on the right. Finally, the whole segmentation network, such as UNETR, is fine-tuned to perform the segmentation task.
Figure 20: The model performs the semi-supervised medical image segmentation task. The regularization scheme between CNN and Transformer is referred to as Cross Teaching. L denotes the labeled data and U denotes the unlabeled data. The cross-teaching employs a bidirectional loss function: one path is from the CNN branch to the Transformer branch, and the other is from the Transformer to the CNN. A Transformer is applied for complementary training instead of prediction generation [147].
### Discussion and Conclusion
This section comprehensively investigated around 16 Transformer-based models for medical image segmentation, presented in Sections 4.1 to 4.3. Table 3 summarizes the reviewed segmentation approaches, and Table 4 lists the methods along with their number of parameters, contributions, and highlights. ViT-based works offer solutions to a broad range of 2D and 3D multimodal tasks. Most of the approaches demonstrate superior results over CNN-based segmentation models on benchmark medical datasets.
Despite the state-of-the-art performance that Transformer-based networks have achieved, several challenges currently hinder their deployment. The first is the high computational burden due to the relatively large number of parameters of Transformer-based models [198], since the time and space complexity of the attention mechanism is quadratic in the sequence length. For example, a CNN-based model such as U-Net [34] needs 3.7M parameters [142] to reach a Dice score of 74.68 [24], whereas TransUNet needs 96.07M parameters [143] to achieve a Dice score of 77.48. Researchers therefore face a high demand for GPU resources. Several novel approaches, such as the Swin Transformer employed in Swin-Unet [32], the volume-based Transformer utilized in nnFormer [137], and the efficient self-attention module in MISSFormer [138], have been proposed to simplify the computation of Transformer models, and improving model efficiency will play a crucial role in future research. We also note that most existing methods require pre-training on the ImageNet dataset to obtain weights for the downstream tasks. However, natural image datasets and medical datasets differ dramatically from one another, which may impact the quality of the extracted medical features. Meanwhile, pre-training leads to high computational costs, which hinders the training of models in practice. Segmentation networks that can be trained from scratch on medical datasets, such as MISSFormer [138], have been suggested as solutions, and we expect more approaches to explore efficient pre-training strategies or to avoid pre-training altogether. Furthermore, considering the limited size of some medical datasets, some approaches propose semi-supervised technologies or self pre-training paradigms
\begin{table}
\begin{tabular}{l l c c} \hline \hline
**Method** & **Modality** & **Type** & **Year** \\ \hline
Swin-Unet [32] & CT & 2D & 2021 \\
nnFormer [137] & CT & 3D & 2021 \\
MISSFormer [138] & CT & 2D & 2021 \\
TransDeepLab [33] & CT & 2D & 2022 \\
TransUNet [24] & CT, MRI & 2D & 2021 \\
TransBTS [140] & MRI & 3D & 2021 \\
TransFuse [141] & Colonoscopy, Ultrasound, MRI & 2D \& 3D & 2021 \\
MedT [142] & Ultrasound, Microscopic & 2D & 2021 \\
UNETR [143] & CT, MRI & 3D & 2021 \\
Swin UNETR [144] & MRI & 3D & 2022 \\
Segtran [139] & Fundus, Colonoscopy, MRI & 2D \& 3D & 2021 \\
HiFormer [37] & CT, Dermoscopy, Microscopic & 2D & 2022 \\
CoTr [145] & CT & 3D & 2021 \\
T-AutoML [146] & CT & 3D & 2021 \\
Cross Teaching [147] & MRI & 2D & 2021 \\
Self pre-training with MAE [148] & CT, MRI, X-ray & 2D \& 3D & 2022 \\ \hline \hline \end{tabular}
\end{table}
Table 3: An overview of the reviewed Transformer-based medical image segmentation models.
to reduce the dataset burden of training or pre-training. Nevertheless, the performance is still not comparable to that of fully-supervised models. Designing semi-supervised models with improved accuracy in this direction requires more attention.
## 5 Medical Image Reconstruction
3D medical imaging is a clinical breakthrough and very popular in medical diagnosis and follow-up after treatment. In Computed Tomography (CT), Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET), the imaging process relies on ionizing radiation [214; 215], which implies a potential risk for the patient [216]. A non-invasive 3D imaging technique is Magnetic Resonance Imaging (MRI), which does not rely on ionizing radiation; however, image acquisition may take longer and confines the patient in a discomforting narrow tube [217]. Medical image reconstruction, which generates 3D volumetric datasets from the acquired measurements, is one of the essential components of 3D medical imaging. Its primary objective is to produce high-quality volumetric images for clinical usage at minimal cost and radiation exposure, whilst also addressing potential artifacts inherent to the physical acquisition process. Image reconstruction solves an inverse problem that is generally challenging due to its large-scale and ill-posed nature [218].
In medical imaging, there are ongoing research efforts to reduce the acquisition time (i.e. to reduce cost and potential movement artifacts) as well as radiation dose. However, lowering the radiation dose results in higher noise levels and reduced contrast, which poses a challenge for 3D image reconstruction.
Vision Transformers (ViTs) have effectively demonstrated possible solutions to these challenges. We categorize the literature in this domain into _low dose enhancement_, _sparse-view reconstruction_, _undersampled reconstruction_, and _super-resolution reconstruction_. This section overviews some of the SOTA Transformer-based studies that fit this taxonomy. Figures 22(a) and 22(b) illustrate the proposed taxonomy for this field of study: Figure 22(a) organizes the studied approaches by medical imaging modality, while Figure 22(b) categorizes them by where the Transformer is used within the overviewed studies' pipelines.
### Low Dose Enhancement
Zhang et al. [199] build **TransCT** on a very general intuition about image denoising: a noisy image is composed of high-frequency and low-frequency counterparts, \(X=X_{H}+X_{L}\). Zhang et al. [199] claim that the noisy image's low-frequency counterpart contains two sub-components of main
Table 4: A brief description of the reviewed Transformer-based medical image segmentation models. The unreported number of parameters indicates that the value was not mentioned in the paper, and the code was unavailable.
image content and weakened image textures, which are entirely noise-free. They apply a Gaussian filter to the input image to decompose it into a high-frequency sub-band and a low-frequency sub-band. They then extract content features (\(X_{L_{c}}\)) and latent texture features (\(X_{L_{t}}\)) by applying two shallow CNNs to the low-frequency counterpart of the input image. Simultaneously, they apply a sub-pixel layer to the high-frequency counterpart to transform it into a low-resolution image and extract embedding features (\(X_{H_{f}}\)) with a shallow CNN. The resulting latent texture features (\(X_{L_{t}}\)) and the corresponding high-frequency representation are then fed to the Transformer to remove noise from the high-frequency representation. Ultimately, they reconstruct the high-quality image piecewise. They show that the latent texture features are beneficial in screening noise out of the high-frequency domain.
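The frequency decomposition \(X=X_{H}+X_{L}\) itself is straightforward; the sketch below (with an assumed kernel size and \(\sigma\)) low-passes the image with a Gaussian filter and takes the residual as the high-frequency counterpart.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel2d(ksize=9, sigma=1.5):
    """Build a 2D Gaussian kernel used to isolate the low-frequency band."""
    ax = torch.arange(ksize).float() - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)[None, None]            # (1, 1, k, k)

def split_frequency_bands(x, ksize=9, sigma=1.5):
    """X = X_H + X_L: Gaussian low-pass gives X_L; the residual is the
    (noise-dominated) high-frequency counterpart X_H."""
    kernel = gaussian_kernel2d(ksize, sigma).to(x)
    x_low = F.conv2d(x, kernel, padding=ksize // 2)
    x_high = x - x_low
    return x_low, x_high

ct = torch.randn(1, 1, 128, 128)                    # a toy noisy LDCT slice
x_low, x_high = split_frequency_bands(ct)
assert torch.allclose(x_low + x_high, ct, atol=1e-5)
```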
Unlike TransCT [199], Wang et al. propose a convolution-free Token-to-Token vision Transformer-based Encoder-decoder Dilation network (**TED-net**) for CT image denoising [200]. Their approach is based on a U-Net encoder-decoder scheme enriched with several modules, i.e., a basic vision Transformer, Token-to-Token Dilation (T2TD), and (Inverse) Cyclic Shift blocks. Consider \(y\in\mathbb{R}^{N\times N}\) a clean normal-dose CT image, \(x\in\mathbb{R}^{N\times N}\) a noisy low-dose CT image, and \(T:\mathbb{R}^{N\times N}\rightarrow\mathbb{R}^{N\times N}\) a Transformer-based denoising model. As shown in Figure 23, after tokenization of \(x\) and passing through the vision Transformer block to capture long-range dependencies and alleviate the absence of local inductive bias in Transformers, they employ Token-to-Token serialization [219]. They also utilize feature re-assembling with a Cyclic Shift block (CSB) to integrate more information. As is apparent from Figure 23, all of these blocks are replicated in a symmetric decoder path, but instead of the CSB, the Inverse Cyclic Shift block (ICSB) is implemented to avoid pixel shifts in the final denoising result (\(y=x+T(x)\)). They reached SOTA results compared to CNN-based methods and competitive results with respect to TransCT [199].
Luthra et al. [201] proposed a Transformer-based network, **Eformer**, for low-dose CT denoising, concurrently using an edge enhancement paradigm to deliver more accurate and realistic denoised representations. The architecture builds upon the LeWin (Locally-enhanced Window) Transformer block [220], accompanied by an edge enhancement module. The success of the Swin Transformer [57] in capturing long-range dependencies with window-based self-attention, at linear computational complexity, makes it a cornerstone for designing new Transformer blocks. The LeWin Transformer is one such block: it captures global contextual information and, thanks to a depth-wise convolution block in its structure, also captures local context. Eformer's first step passes the input through a Sobel edge-enhancement filter. In every encoder-decoder stage, convolutional features
Figure 23: An overview of TED-net [200]. The Tokenize and DeToken blocks are inverse operations that apply patch embedding and convert patches back to an image, respectively. TB represents a standard Transformer block. (I)CSB denotes the (Inverse) Cyclic Shift block that modifies the feature map; the inverse operation avoids pixel shifts in the final result. The T2T block represents the Token-to-Token process [219], which improves the spatial inductive bias of Transformers by merging neighboring tokens. The Dilated T2T (T2TD) block is used to further refine contextual information.
Figure 22: An overview of medical image reconstruction taxonomies, categorized either by task or by the location of the Transformer within the architecture.
pass through the LeWin Transformer block, and downsampling and upsampling are done by convolution and deconvolution layers. Eformer follows a residual learning scheme: it learns the noise representation rather than the denoised image, since predicting a residual mapping is easier to optimize.
Akin to low-dose CT (LDCT), low-dose PET (LDPET) is preferable for avoiding radiation risk, especially for cancer patients with weakened immune systems who require multiple PET scans during their treatment, at the cost of sacrificing the diagnostic accuracy of standard-dose PET (SDPET). Luo et al. [202] proposed an end-to-end Generative Adversarial Network (GAN) based method integrated with a Transformer block, namely **3D Transformer-GAN**, to reconstruct SDPET images from the corresponding LDPET images. To alleviate the inter-slice discontinuity problem of existing 2D methods, they designed their network to work with 3D PET data. As in any GAN, they used a generator network with an encoder-decoder structure, placing a Transformer in the bottleneck of the generator to capture contextual information. Due to the computational overhead of Transformers, they did not build their method solely on them; instead, the Transformer counterpart sits between the CNN layers of the generator to guarantee both low-level spatial feature extraction and the modeling of global semantic dependencies. They also introduced an adversarial loss term alongside the voxel-wise estimation error to produce more realistic images.
In contrast with other works, Zhang et al. [203] proposed leveraging PET/MRI data simultaneously for denoising low-count PET images, a crucial assessment for cancer treatment. PET is an emission computed tomography modality operating by positron annihilation radiation. Due to the foundation and requirements of PET scans, radiotracers carry a non-negligible risk of inducing secondary cancer. To mitigate the side effects of this imaging process, there are two potential methods: reducing the radiotracer dose and shortening the patient's scan time. Both approaches, without a doubt, affect the resulting image quality, with a decreased contrast-to-noise ratio and texture bias. Traditional low-count PET denoising approaches are based on Non-Local Means (NLM) [221], Block Matching 3D (BM3D) [222], iterative methods [223], etc., which depend heavily on hyperparameter tuning for new data or result in unnatural smoothing of the denoised images. Zhang et al. [203] argue that simultaneous PET/MRI can let one modality support the other in terms of attenuation correction, motion, and partial volume effects; moreover, due to the high contrast among soft tissues in MRI, the denoising of PET images becomes notably more straightforward. **STFNet** [203] is a U-Net based structure with several modifications. They proposed a new Siamese encoder comprising a dual input flow for each modality in the encoding path. To obtain sufficient features from the different modalities, they used the Spatial Adaptive (SA) block, a dual path in each block with a residual design consisting of consecutive convolutional blocks and deformable convolution with fusion modulation. This module aims to learn more contextual features from each modality. To leverage global attention, they used a Transformer to model pixel-to-pixel interactions between the PET and MRI modalities. After this integration, the fused features are fed into two branches based on residual convolution blocks for PET denoising.
Wang et al. [204] proposed an enhancement of their previous work, the convolution-free, purely Transformer-based TED-net [200], namely the **CTformer**. As is apparent from Figure 24, their network is a residual-learning, U-Net-like encoder-decoder structure rather than a direct mapping from LDCT to normal-dose CT (NDCT). The CTformer compensates for the Transformer's deficiency in capturing inter-patch boundary information and its weak spatial inductive bias with token rearrangement, T2T [219]. To do so, analogously to TED-net, they used dilation and cyclic shift blocks in the Token-to-Token block to broaden the receptive field and capture more contextual information without increasing the computational cost.
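The residual learning scheme shared by TED-net and CTformer, \(y=x+T(x)\), reduces to a standard supervised training step; in the sketch below a toy CNN stands in for the Transformer denoiser.

```python
import torch
import torch.nn as nn

# Any image-to-image model T can be trained in the residual scheme: it
# predicts the noise map, and the denoised estimate is y_hat = x + T(x).
# A toy CNN stands in here for the Transformer denoiser of TED-net/CTformer.
denoiser = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
mse = nn.MSELoss()

def train_step(x_ldct, y_ndct):
    """One supervised step: fit the residual so that x + T(x) matches NDCT."""
    optimizer.zero_grad()
    y_hat = x_ldct + denoiser(x_ldct)   # residual learning: estimate the noise
    loss = mse(y_hat, y_ndct)
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)))
```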
Yang et al. [205], inspired by how the sinogram works, proposed the Sinogram Inner-Structure Transformer (**SIST**) (Figure 25). The inner structure of the sinogram contains characteristics unique to the sinogram domain. To exploit it, they encode the global and local characteristics of the sinogram in a loss function based on the sinogram inner structure, namely the Sinogram Inner-Structure Loss (SISL). The global inner-structure
Figure 24: An overview of CTformer [204]. This structure is analogous to the TED-net [200] structure, the previous study by the same authors.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Method** & **Task** & **Dataset** & **SSIM \(\uparrow\)** & **RMSE \(\downarrow\)** \\ \hline
Eformer [201] & LDE & NIH-AAPM-Mayo [224] & 0.9861 & 0.0067 \\
TransCT [199] & LDE & NIH-AAPM-Mayo [224] & 0.923 & 22.123 \\
CTformer [204] & LDE & NIH-AAPM-Mayo [224] & 0.9121 & 9.0233 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison results on the NIH-AAPM-Mayo [224] dataset for the low dose enhancement task. _LDE_ indicates the Low Dose Enhancement task.
loss utilizes conjugate sampling pairs in CT, while the local inner-structure loss considers the second-order sparsity of sinograms. The amalgamation of these two terms is beneficial for reconstructing NDCT images while suppressing the noise. Due to the CT imaging mechanism, each row of the sinogram representation denotes the projection at a certain view. This procedure is naturally suited to a Transformer block that models the interaction between projections from diverse angles to capture contextual information. Therefore, the SIST module is applied to the raw sinogram input and captures structural information. Afterward, the unified network reconstructs the high-quality image with the image reconstruction module following a residual policy.
Table 5 presents the benchmark results for the LDCT task on the NIH-AAPM-Mayo [224] dataset, in terms of the SSIM and RMSE metrics, for the methods overviewed in this study. For clarity, TED-net [200] achieved better results than CTformer [204], but since the two studies originate from the same authors and the architectures closely resemble each other, we include CTformer in the comparison table. These results endorse the capability of the purely Transformer-based Eformer [201] in reconstructing normal-dose CT images.
### Sparse-View Reconstruction
Due to the customary usage of CT images in medical diagnosis, another policy to lessen the side effects of X-ray radiation is acquiring fewer projections, known as sparse-view CT, which is a very feasible and effective alternative to manipulating the standard radiation dose [225; 226]. However, the resulting images suffer from severe artifacts, and decreasing the number of projections demands powerful techniques to reconstruct high-quality images. Wang et al. [206] presented the first paper, namely **DuDoTrans**, to investigate the usage of Transformers in this field, quite successfully. Their intuition was to exploit the global nature of the sinogram sampling process, which previous CNN architectures neglected. Unlike conventional iterative methods in this literature, DuDoTrans does not produce blocky effects in the reconstructed images. The method simultaneously benefits from enhanced and raw sinogram streams to restore informative sinograms via long-range dependency modeling in a supervised policy. As shown in Figure 26, DuDoTrans is built on three main modules, namely the Sinogram Restoration Transformer (SRT), the DuDo Consistency Layer, and the Residual Image Reconstruction Module (RIRM). The SRT block consists of successive hybrid Swin Transformer modules and convolutional layers that model local semantic features and the inherent global contextual information in the sinogram to produce the enhanced sinogram.
Buchholz et al. [207] presented the Fourier Image Transformer (**FIT**) that operates on the image frequency representation, specifically the Fourier description of the image, which in their study is called the Fourier Domain Encoding (FDE) and encodes the entire image at a lower resolution. The intuition behind their idea lies in the physics of the CT acquisition process. CT utilizes a 1D detector array rotating around the patient's body to compute the Radon transform [227] of a 2D object, which yields a sequence of density measurements at different projection angles, namely a sinogram: a 2D image in which each column corresponds to one 1D measurement. Filtered Back Projection (FBP) [228; 227] is a reconstruction method that maps sinograms to tangible CT images. FBP is based on the Fourier slice theorem; hence, computing the 1D Fourier transform of each 1D projection, rearranging them by projection angle in Fourier space, and applying an inverse Fourier transform yields a reconstructed 2D CT image slice. Limiting the number of projections leads to missing Fourier measurements, which ultimately causes reconstruction artifacts. FIT is the first study to use a Transformer to query arbitrary Fourier coefficients and fill in the unobserved ones to conceal or avoid such artifacts in sparse-view CT reconstruction. As shown in Figure 27, this procedure starts by calculating the FDE of the raw sinogram. To do so, first, the discrete Fourier transform (DFT) of the sinogram is computed. Second, after dropping half of the coefficients on the Fourier rings of the resulting representation, the lower-frequency counterparts are preserved to recover a lower-resolution version of the raw sinogram. Afterward, the complex coefficients are converted into 1D
Figure 26: DuDoTrans [206] framework for sparse-view CT image reconstruction. First, the sparse-view sinogram \(\mathbf{Y}\) maps to a low-quality image \(\mathbf{\widetilde{X}}_{1}\) and other estimation \(\mathbf{\widetilde{X}}_{2}\) generated by SRT module’s enhanced sinogram output \(\mathbf{\widetilde{Y}}\) followed by DuDo Consistency Layer. Lastly, the predicted estimations are concatenated and fed to the RIRM module that outputs the CT image of \(\mathbf{\widetilde{X}}\) in a supervised manner.
Figure 25: The overall architecture of the SIST [205] pipeline. \(S_{ld}\) and \(I_{ld}\) are the LDCT sinogram and image, \(\hat{S}\) and \(\hat{I}\) denote the output sinogram and image, and \(S_{noise}\) and \(I_{noise}\) the sinogram noise and image noise. First, the LDCT sinogram is fed to the Transformer for sinogram-domain denoising; the denoised sinogram \(\hat{S}\) is then input to the image reconstruction module for image-domain denoising. Within the image reconstruction module, a residual CNN block converts the sinogram noise \(S_{noise}\) to the image-domain noise \(I_{noise}\). The NDCT output \(\hat{I}\) is obtained by applying refinement steps to \(I_{ld}\) minus \(I_{noise}\).
sequences by unrolling the Fourier rings. These complex values are converted into normalized amplitudes and phases, so each complex coefficient has its own polar representation: a normalized real-valued matrix with \(N\times 2\) entries (\(N\) equals half the number of DFT coefficients). A linear layer is applied to this tensor to upsample the feature dimensionality to \(\frac{F}{2}\). Finally, a 2D positional encoding is concatenated to this tensor, producing a 2D FDE image of size \(N\times F\).
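The following sketch captures the spirit of the FDE under simplifying assumptions: it keeps the lowest-frequency DFT coefficients by sorted frequency magnitude rather than by explicit ring unrolling, omits the 2D positional encoding, and uses an untrained linear projection (in the actual model this projection is learned).

```python
import torch
import torch.nn as nn

def fourier_domain_encoding(img, keep_ratio=0.5, feat_dim=128):
    """FDE-style token sequence sketch: 2D DFT of the input, keep the
    lowest-frequency coefficients, and turn each complex coefficient into a
    normalised (amplitude, phase) pair lifted to the feature dimension."""
    spec = torch.fft.rfft2(img)                         # (H, W//2+1), complex
    fy = torch.fft.fftfreq(img.shape[0])[:, None]       # vertical frequencies
    fx = torch.fft.rfftfreq(img.shape[1])[None, :]      # horizontal frequencies
    order = (fy ** 2 + fx ** 2).flatten().argsort()     # low -> high frequency
    kept = spec.flatten()[order[: int(keep_ratio * spec.numel())]]
    amp = kept.abs() / (kept.abs().max() + 1e-8)        # normalised amplitude
    phase = kept.angle() / torch.pi                     # normalised phase
    tokens = torch.stack([amp, phase], dim=-1)          # (N, 2) real-valued
    return nn.Linear(2, feat_dim)(tokens)               # (N, feat_dim) sequence

seq = fourier_domain_encoding(torch.randn(64, 64))
print(seq.shape)   # torch.Size([1056, 128])
```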
Shi et al. [208] presented a CT reconstruction network with Transformers (**CTTR**) for sparse-view CT reconstruction. In contrast to DuDoTrans [206], CTTR enhances low-quality reconstructions directly from raw sinograms and focuses on global features with a simple end-to-end architecture. CTTR contains four parts: two CNN-based residual blocks that extract local features from FBP [228] image reconstructions and from sinograms, an encoder-decoder Transformer that models long-range dependencies and contextual information between features, and a CNN block that maps features to a high-quality reconstruction.
Cone-Beam Computed Tomography (CBCT) is a conventional method for dental and maxillofacial imaging; thanks to its fast 3D imaging capability, its popularity has extended to lung imaging. However, studies confirm that its radiation dose is higher than that of plain radiographs [230]; hence sparse-view CBCT is a suitable way to lower the radiation dose. Wu et al. [209] proposed a novel untrained 3D Transformer-based architecture, namely **ARMLUT**, with a multi-level loss function for CBCT reconstruction, where the Transformer module, specifically UNETR [143] in this study, captures long-range contextual information and enhances the resulting image. The intuition behind this strategy follows the success of the Deep Image Prior (DIP) [231] in the reconstruction field. As shown in Figure 28a, ARMLUT is an iterative optimization between an Image Reconstructor module and an Image Generator module that fits a CBCT inverse solver without a large amount of data or ground-truth images. The multi-level loss function comprises a Mean Squared Error (MSE) and a Perceptual Loss (PL) [232] to reconstruct smooth, streak-artifact-free outputs. The entire framework (Figure 28) has three main counterparts: the Image Reconstructor, the Image Generator, and the Feature Extractor. The Image Reconstructor uses the Feldkamp-Davis-Kress (FDK) algorithm [229] to produce a coarse reconstruction from \(M\)-view measurements, and the Image Generator maps noisy voxel inputs to a regularized image. The Feature Extractor applies a pre-trained VGG-11 network to the two representations to form the perceptual loss. To minimize the distance between the two reconstructions, ARMLUT utilizes an adaptively re-weighted multi-loss technique to stabilize the convergence of the Transformer during optimization.
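The deep-image-prior-style optimization at the heart of ARMLUT can be sketched as below; a toy CNN stands in for the UNETR generator, a small frozen convolution stands in for the pre-trained VGG-11 feature extractor, and the fixed loss weight is illustrative rather than the adaptively re-weighted scheme of the paper.

```python
import torch
import torch.nn as nn

# DIP-style loop: an *untrained* generator G is fitted to a single low-quality
# FDK reconstruction with an MSE term plus a feature-space (perceptual) term;
# the network architecture itself acts as the regularizer.
G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))
feat = nn.Conv2d(1, 8, 3, padding=1).requires_grad_(False)  # frozen features
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

z = torch.randn(1, 1, 64, 64)        # fixed noisy voxel input
fdk = torch.randn(1, 1, 64, 64)      # low-quality FDK reconstruction (toy)

for step in range(200):
    opt.zero_grad()
    out = G(z)
    loss = nn.functional.mse_loss(out, fdk) \
         + 0.1 * nn.functional.mse_loss(feat(out), feat(fdk))  # perceptual term
    loss.backward()
    opt.step()

print(float(loss))   # the regularized image is G(z) after optimization
```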
### Undersampled Reconstruction
Magnetic Resonance Imaging (MRI) is a dominant technique for assistive diagnosis. However, due to the physics behind its operation, scanning can take a long time and be very tedious, affecting the patient experience and leading to inevitable artifacts in the images [241]. Hence, reducing the number of MRI measurements results in faster scan times and fewer motion artifacts, at the cost of aliasing artifacts in the image [217].
Lin et al. [210] proposed a comprehensive analytical study investigating the usage of ViT in its purest (CNN-free) and most straightforward design. This study is evidence of the prominent effect of ViTs on medical image reconstruction. They adopted the original ViT [22] for image reconstruction by discarding the classification token and replacing the classification head with a reconstruction head comprising successive Norm and Linear layers that map the Transformer output to a visual image. They performed a complete ablation study over different ViT settings, from the number of stacked Transformer blocks to the embedding dimension and the number of heads in Multi-Head Self-Attention (MHSA). Their results were quite compelling and showed that a ViT trained on sufficient data, whether natural images like ImageNet or medical data, can perform better than, or on par with, CNN baselines such as U-Net [34] in reconstruction accuracy. The distinguishing power of the proposed design, measured with the mean attention distance metric [242], shows that it effectively mimics convolutional receptive fields and can concurrently capture local and global dependencies. In addition, they showed that the ViT benefits from two times faster inference and lower memory requirements compared to the U-Net.
Feng et al. [211] address a particular issue in this domain by designing an end-to-end multi-task learning paradigm that boosts feature learning between two highly overlapping sub-tasks, MRI reconstruction and super-resolution; the resulting Task Transformer Network (**T\({}^{2}\)Net**) is shown in Figure 29. Their network consists of two branches, one per task. T\({}^{2}\)Net utilizes a Transformer between the two branches to share feature representation and transmission. In each task branch, T\({}^{2}\)Net applies a convolution layer and an EDSR [243] backbone to extract task-specific features. To share information between the two branches and benefit from the interaction of the task-specific features, exploiting the global nature of the Transformer, T\({}^{2}\)Net uses a unique Transformer design to learn a generalized representation. Since the reconstruction branch has more capacity for artifact removal than the super-resolution branch, the task Transformer
Figure 27: The FIT [207] framework for sparse-view CT reconstruction. The FDE representation of the sinogram is calculated and serves as input to the Transformer encoder. The decoder predicts Fourier coefficients from the encoder's latent space. The Fourier coefficients obtained by applying the FBP [228] algorithm to the sinogram are fed into the Transformer's decoder to enrich the Fourier query representation. A shallow CNN block is applied after the inverse FFT to suppress frequency oscillations.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Method** & **Task(s)** & **Modality** & **Type** & **Pre-trained Module: Type** & **Dataset(s)** & **Metrics** & **Year** \\ \hline \hline \multicolumn{8}{c}{**Pure**} \\ \hline
TED-net [200] & LDE & CT & 2D & ✗ & NIH-AAPM-Mayo Clinical LDCT [224] & SSIM, RMSE & 2021 \\ \hline
Eformer [201] & LDE & CT & 2D & ✗\({}^{\dagger}\) & NIH-AAPM-Mayo Clinical LDCT [224] & PSNR, SSIM, RMSE & 2021 \\ \hline
CTformer [204] & LDE & CT & 2D & ✗ & NIH-AAPM-Mayo Clinical LDCT [224] & SSIM, RMSE & 2022 \\ \hline
FIT [207] & SVR & CT & 2D & ✗ & LoDoPaB [233] & PSNR & 2021 \\ \hline
ViT-Rec [210] & USR & MRI & 2D & Supervised & fastMRI [234] & SSIM & 2021 \\ \hline \hline \multicolumn{8}{c}{**Encoder**} \\ \hline
TransCT [199] & LDE & CT & 2D & ✗ & NIH-AAPM-Mayo Clinical LDCT [224] & RMSE, SSIM & 2021 \\ \hline
STFNet [203] & LDE & PET, MRI & 2D & ✗ & Private Dataset & RMSE, PSNR, SSIM, PCC & 2022 \\ \hline
SIST [205] & LDE & CT & 2D & ✗ & LDCT Dataset [235] & PSNR, SSIM, RMSE & 2022 \\ \hline
DuDoTrans [206] & SVR & CT & 2D & ✗ & NIH-AAPM-Mayo Clinical LDCT [224] & PSNR, SSIM, RMSE & 2021 \\ \hline
CTTR [208] & SVR & CT & 2D & ✗ & LIDC-IDRI [236] & RMSE, PSNR, SSIM & 2022 \\ \hline \hline \multicolumn{8}{c}{**Skip Connection**} \\ \hline
Cohf-T [213] & SRR & MRI & 2D & ✗ & \({}^{1}\) BraTS2018 [181], \({}^{2}\) IXI [237] & & 2022 \\ \hline \hline \multicolumn{8}{c}{**Other Architectures**} \\ \hline
3D T-GAN [202] & LDE & PET & 3D & ✗ & Private Dataset & PSNR, SSIM, RMSE & 2021 \\ \hline
ARMLUT [209] & SVR & CT & 3D & ViT: Supervised\({}^{\dagger}\) & \({}^{1}\) SPARE Challenge Dataset [238], \({}^{2}\) Walnut dataset [239] & PSNR, SSIM & 2022 \\ \hline
T\({}^{2}\)Net [211] & USR, SRR & MRI & 2D & ✗ & \({}^{1}\) IXI [237], \({}^{2}\) Private Dataset & PSNR, SSIM, RMSE & 2021 \\ \hline
DisCNN-ViT [212] & SRR & MRI & 3D & ViT: Self-Supervised & \({}^{1}\) fastMRI [234], \({}^{2}\) IXI [237] & PSNR, SSIM, RMSE & 2021 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Medical Image Reconstruction. _LDE_, _SVR_, _USR_, and _SRR_ stand for Low Dose Enhancement, Sparse-View Reconstruction, Undersampled Reconstruction, and Super Resolution Reconstruction, respectively. \(\dagger\) indicates that this network uses a pre-trained perceptual loss (loss network).
Figure 28: (a) The multi-loss untrained network for sparse-view CBCT reconstruction. (b) The architecture of UNETR [143], used as the Transformer module in ARMLUT.
module guides the super-resolution branch toward a high-quality representation from the reconstruction branch. The Transformer module takes its query (\(Q\)) from the super-resolution branch and its key (\(K\)) and value (\(V\)) from the reconstruction branch at each scale. It comprises three main components that differ from the original Transformer blocks: relevance embedding, transfer attention, and soft attention. Relevance embedding relates correlated features from the reconstruction branch to the super-resolution branch; transfer attention transmits anatomical and global features between the two branches; and soft attention amalgamates the features from the previous two steps. Together, this module lets the whole network transfer and synthesize representative, anatomical features and produce a high-quality, artifact-free image from highly undersampled measurements. Experimental results on two datasets showed the advantage of this approach over conventional algorithms.
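To illustrate the cross-branch attention idea (queries from the super-resolution branch, keys and values from the reconstruction branch), the following minimal PyTorch sketch uses a standard multi-head attention layer. It omits the relevance-embedding, transfer-attention, and soft-attention specifics of the actual T\({}^{2}\) module, and all names and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class TaskCrossAttention(nn.Module):
    """Minimal sketch of cross-branch attention, not the authors' exact module:
    queries come from the super-resolution branch, keys/values from the
    reconstruction branch."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, sr_feat, rec_feat):
        # sr_feat, rec_feat: (B, C, H, W) feature maps from the two branches
        B, C, H, W = sr_feat.shape
        q = sr_feat.flatten(2).transpose(1, 2)    # (B, HW, C) queries
        kv = rec_feat.flatten(2).transpose(1, 2)  # (B, HW, C) keys/values
        out, _ = self.attn(query=q, key=kv, value=kv)
        out = self.norm(out + q)                  # residual: guide, don't replace
        return out.transpose(1, 2).view(B, C, H, W)

m = TaskCrossAttention()
fused = m(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```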
### Super-Resolution Reconstruction
Improving image resolution enables a more detailed delineation of objects, and increasing medical image resolution plays a crucial role in computer-aided diagnosis due to its rich anatomical and textural representation. However, given the physics of the MRI acquisition pipeline, obtaining high-resolution images requires the patient to lie in the MRI tube for a long time, so a lower signal-to-noise ratio and smaller spatial coverage are inevitable drawbacks [241]. In this section, we therefore investigate Transformer-based algorithms that try to alleviate this problem. Of note, due to the affinity between MRI reconstruction and super-resolution, some studies investigate the two tasks in conjunction with each other.
Mahapatra et al. [212] proposed a GAN-based model that preserves structural and textural information through multiple loss terms. Their pipeline includes two pre-trained modules: a feature disentanglement module (a conventional autoencoder) and a Transformer-based feature encoder, UNETR [143]. UNETR captures the global and local
Table 7: The main contributions and highlights of the reviewed Transformer-based medical image reconstruction methods.
context of the original low-resolution image and induces the high-resolution output to preserve this context as well. The two modules are fine-tuned on a different medical dataset, after which the low-resolution input and the generator's intermediate image are fed to them. The disentanglement network contains two autoencoders that learn two latent counterparts, a structural and a textural space, from the input medical images. In an end-to-end setting, these two pre-trained assistive modules help generate more realistic high-resolution images that preserve structure and texture, by imposing module-related loss terms such as an adversarial loss that constrains the output to be realistic and a cosine similarity loss for each module. Results on the IXI dataset showed that Mahapatra et al.'s [212] network outperformed several CNN-based attention networks as well as T\({}^{2}\)Net [211].
Maintaining structural information while acquiring high-resolution images plays a crucial role, and structural information is embedded in an image's high-frequency counterpart, such as its gradients. In addition, since MR T1WI (T1-weighted imaging) is less time-consuming to obtain, it is sensible to use it as an inter-modality prior for producing a high-resolution image. Accordingly, Fang et al. [213] devised a network that leverages these two observations in their super-resolution pipeline: the Cross-Modality High-Frequency Transformer (**Cohf-T**). The network is divided into two streams: the first operates on low-resolution T2WI, and the second manipulates the T2WI's gradient and the high-resolution T1WI. The Cohf-T module interacts between the two streams to embed the prior knowledge into the super-resolution stream's features. It consists of three different attention modules: short-distance window attention, long-distance window attention, and inter-modality attention. The first two model intra-modality dependency; specifically, the short-distance window helps recover local discontinuities at boundaries using the surrounding structural information, while the long-distance window captures textural and structural patterns for enhanced results. Due to the discrepancy in intensity levels between T1WI and T2WI, it is vital to align the two domains, so Fang et al. [213] introduced a Feature Alignment (FA) module to reduce the cross-modality representation gap. They compared their results with T\({}^{2}\)Net [211] and MTrans [245], outperforming both approaches by \(\sim\) 1% in terms of PSNR.
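As a rough illustration of how such a high-frequency input can be formed, the sketch below computes a gradient-magnitude map from a low-resolution T2WI with Sobel filters. The paper's exact gradient operator may differ; all names here are illustrative:

```python
import torch
import torch.nn.functional as F

def gradient_map(img):
    """Sketch of a high-frequency (gradient) input for the gradient-domain
    branch; the authors' exact operator may differ."""
    # Sobel kernels for horizontal/vertical intensity changes
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # gradient magnitude

t2_lr = torch.rand(1, 1, 64, 64)  # low-resolution T2WI (single channel)
r_in = gradient_map(t2_lr)        # high-frequency counterpart fed to the network
```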
### Discussion and Conclusion
In this section, we outlined different Transformer-based approaches for medical image reconstruction and presented a detailed taxonomy of reconstruction approaches. We overviewed 15 studies that leverage the Transformer design to compensate for the deficiency of CNNs' limited receptive field. We examined each study in depth and provide Table 6 with detailed information about the datasets, metrics, modality, and objective tasks. In Table 7, we summarize the main contribution and prominent highlights of each method.
Most of the studies in this domain use the original Transformer as a plug-and-play module in their design, and only a limited number utilize hierarchical and efficient Transformers. However, multi-scale and hierarchical architectures are generally important for dense prediction tasks such as image reconstruction and should be explored further. Another direction for future research is to investigate the influence of pre-trained weights on Transformers: Transformers need a large amount of data to converge well, which conflicts with the scarcity of annotated data in the medical domain.
In addition, we noticed that most studies focus on MRI and CT reconstruction, so the applicability of these methods to other modalities still needs to be evaluated.
Figure 30: The pipeline of Cohf-T [213] consists of three main branches with the corresponding input modalities as follows: \(\textbf{I}_{in}\), \(\textbf{R}_{in}\), and \(\textbf{R}\) denote the low-resolution T2WI, the gradient of the low-resolution T2WI, and the high-resolution T1WI, respectively. A fully-convolutional branch performs density-domain super-resolution, a Transformer-based branch restores high-frequency signals in the gradient domain, and a guidance branch extracts priors from the T1 modality. _Conv_, _RRDB_, and _MLP_ represent a 3 \(\times\) 3 convolution, a residual-in-residual dense block, and a multi-layer perceptron, respectively.
Figure 29: An overview of T\({}^{2}\)Net [211]. (a) The multi-task T\({}^{2}\)Net pipeline and (b) the Task Transformer Module (T\({}^{2}\) Module).
## 6 Medical Image Synthesis
In this section, we will overview several instances of Transformers in the medical image synthesis task. The scarcity of medical data and the high cost of acquisition processes make this task very valuable in the medical field. Some studies aim to synthesize missing slices from MRI and CT sequences. In addition, some methods target capturing the structural information in diverse modalities, e.g., CT to MRI image-to-image translation and vice versa. Figure 31 shows our taxonomy for the image-synthesized methods.
### Intra-Modality
The main objective of the intra-modality methods is to synthesize high-quality images using low-quality samples from the same modality. In this respect, several Transformer-based approaches are presented to formulate the synthesis task as a sequence-to-sequence matching problem to generate fine-grained features. In this section, we will briefly present some recent samples [246].
Brain development monitoring is a de facto standard in predicting later risks; hence it is critical to screen brain biomarkers with the available imaging tools from the earliest life stages. Because of the nature of MRI acquisition and infants' restlessness, it is not straightforward to acquire all MR modalities during a single scan. Zhang et al. [246] therefore proposed the **Pyramid Transformer Net (PTNet)** as a tool to reconstruct realistic T1WI images from T2WI. The pipeline is an end-to-end, Transformer-based, U-Net-like, multi-resolution network that employs an efficient Transformer, the Performer [251], in its encoder (PE) and decoder (PD). Analogously to the original U-Net [34], skip connection paths preserve fine-grained features and accurate localization for reconstruction. Moreover, the paradigm's two-level pyramidal design helps the network capture local and global information in a multi-resolution fashion. They achieved SOTA results on the dHCP [252] dataset compared with the flagship GAN-based image generation method pix2pix(HD) [253; 254].
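To give an intuition for the linear-complexity attention that makes such full-resolution pipelines tractable, the sketch below implements a simplified kernelized attention with cost linear in sequence length. It uses an elu+1 feature map rather than Performer's FAVOR+ random features, so it illustrates the idea rather than reproducing [251]:

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    """Simplified kernelized attention with O(N) complexity in sequence
    length, in the spirit of efficient Transformers such as Performer.
    This sketch substitutes an elu+1 feature map for random features."""
    phi = lambda x: torch.nn.functional.elu(x) + 1.0    # positive feature map
    q, k = phi(q), phi(k)                               # (B, N, D)
    kv = torch.einsum('bnd,bne->bde', k, v)             # (B, D, E): key/value summary
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)    # (B, N, E)

out = linear_attention(torch.randn(2, 1024, 64),
                       torch.randn(2, 1024, 64),
                       torch.randn(2, 1024, 64))
```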
Dalmaz et al. [247] introduced a conditional generative adversarial network based on the cooperation of Transformer and CNN operators, namely **ResViT**. This paradigm addresses the need to rebuild separate synthesis models for each source-target modality configuration, offering a unified framework as a single model to improve practicality. ResViT (Figure 32) refers primarily to the generator of the pipeline, which leverages a hybrid design of residual convolutional operations and Transformer blocks that enables effective aggregation of local and long-range dependencies. The discriminator is based on a conditional PatchGAN framework [253]. Using standalone Transformer architectures (e.g., PTNet [246]) in pixel-to-pixel tasks is challenging due to their quadratic complexity, which limits them to fixed-size patches and hampers effectiveness. As Figure 32 shows, residual Transformer blocks, known as aggregated residual Transformer (ART) blocks, are stacked successively in the bottleneck of the generator's encoder-decoder to extract the hidden contextual information of the input features. The primary motivation of ART blocks is to learn an integrated representation that combines contextual, local, and hybrid local-contextual features from the input flow. A Channel Compression (CC) module recalibrates the features concatenated from the previous ART block and the Transformer module to select the most discriminative representations. Because of the cascade of Transformers in the design, ResViT applies a weight-sharing strategy among the projection tensors for query, key, value, and attention heads, as well as the multi-layer perceptron weight matrices, to decrease model complexity and computational burden. The superiority of this method has been demonstrated on several MRI datasets in multi-contrast MRI
Figure 31: An overview of ViTs in medical image synthesis. Methods are categorized by target and source modality. The prefix numbers in the paper’s name in ascending order denote the reference for each study as follows: 1. [246], 2. [247], 3. [248], 4. [249], 5. [250].
Figure 32: The ResViT [247] framework for multi-modal medical image synthesis. The bottleneck of this encoder-decoder comprises successive residual Transformer and residual convolution layers that synergistically capture fine-grained global and local context.
synthesis and MRI-to-CT experiments, achieving higher PSNR and SSIM than conventional SOTA methods such as pGAN [255], SAGAN [256], pix2pix [253], and PTNet [246].
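A minimal sketch of the ART-block idea (a Transformer path for context, a residual convolutional path for local detail, and a 1x1 channel-compression convolution over their concatenation) is given below. It is our illustration under assumed dimensions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ARTBlockSketch(nn.Module):
    """Illustrative ART-style block: a Transformer path for context, a
    residual conv path for local detail, and a 1x1 'channel compression'
    over their concatenation. A sketch, not the authors' exact design."""
    def __init__(self, channels=256, heads=8):
        super().__init__()
        self.transformer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.compress = nn.Conv2d(2 * channels, channels, 1)  # CC-style module

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, HW, C) token sequence
        ctx = self.transformer(tokens).transpose(1, 2).view(B, C, H, W)
        loc = x + self.conv(x)                 # residual convolutional path
        return self.compress(torch.cat([ctx, loc], dim=1))

block = ARTBlockSketch()
y = block(torch.randn(1, 256, 16, 16))
```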
Likewise, Liu et al. [248] addressed the issue of missing contrasts in MRI and proposed a multi-contrast multi-scale Transformer (**MMT**) framework that handles unavailable contrasts by synthesizing them from the existing ones. To achieve efficient contrast synthesis, the task is cast as a sequence-to-sequence problem in which the model learns to generate missing contrasts from the available ones as follows: a Swin multi-contrast Transformer encoder creates a hierarchical representation of the input MRI image, and a Swin Transformer-based decoder then decodes this representation at multiple scales to perform the synthesis. Both the encoder and decoder are composed of sequential Swin blocks that capture contrast dependencies effectively. Experiments conducted on the IXI [237] and BraTS [185] datasets demonstrated MMT's advantage over previous methods.
### Inter-Modality
Unlike intra-modality strategies, inter-modality methods are designed to learn a mapping function between two different modalities. This allows the network to convert samples from a source modality into a new modality and to leverage the generated samples during training for performance gains. In this section, we elaborate on two Transformer-based strategies [249; 250].
Several medical conditions may prevent patients from receiving intravenous contrast agents during CT screening, yet contrast agents are crucial in helping medical professionals identify certain lesions. **CyTran** [249] is therefore proposed as an unsupervised generative adversarial convolutional Transformer for translating between contrast and non-contrast CT scans and for aligning contrast-enhanced CT scans to non-enhanced ones; its unsupervised nature derives from its cycle-consistency loss. As illustrated in Figure 33, CyTran is composed of three main modules: **I)** a downsampling CNN-based module designed for handling high-resolution images, **II)** a convolutional Transformer module tailored to incorporate both local and global features, and **III)** an upsampling module that reverts the transformation of the downsampling block through transpose-convolutions. Additionally, the authors introduce a new dataset, Coltea-Lung-CT-100W, comprising 100 anonymized triphasic 3D lung CT scans of female patients.
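The three-stage layout can be sketched as follows in PyTorch; channel counts, kernel sizes, and the plain Transformer encoder standing in for CyTran's convolutional Transformer are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class CyTranStyleGenerator(nn.Module):
    """Rough three-stage sketch of the generator layout: downsampling convs,
    a Transformer over the reduced grid, and transpose-conv upsampling.
    Channels and the vanilla Transformer encoder are illustrative."""
    def __init__(self, dim=128):
        super().__init__()
        self.down = nn.Sequential(              # handle high-resolution input
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, dim, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)                       # local + global feature mixing
        self.up = nn.Sequential(                # revert the downsampling
            nn.ConvTranspose2d(dim, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1))

    def forward(self, x):
        f = self.down(x)                        # (B, dim, H/4, W/4)
        B, C, H, W = f.shape
        t = self.transformer(f.flatten(2).transpose(1, 2))
        return self.up(t.transpose(1, 2).view(B, C, H, W))

g = CyTranStyleGenerator()
ct = g(torch.randn(1, 1, 256, 256))             # contrast <-> non-contrast CT
```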
Furthermore, Kamran et al. [250] trained a ViT-based generative adversarial network (**VTGAN**) in a semi-supervised fashion on the Fundus Fluorescein Angiography (FFA) dataset provided by [258]. The model incorporates multiple modules: residual blocks acting as generators for coarse and fine image generation, and two ViT architectures consisting of identical Transformer encoder blocks for concurrent retinal abnormality classification and FA image synthesis.
### Discussion and Conclusion
This section covered the adoption of ViT architectures in medical image synthesis applications. We explored the proposed methods from two synthesis perspectives: **(I)** inter-modality, in which the target modality is synthesized so as to encapsulate crucial diagnostic features from different source images; and **(II)** intra-modality, with the objective of yielding higher-quality target images by integrating information from lower-resolution source images. To demonstrate their effectiveness, these approaches usually rely on SSIM, PSNR, and LPIPS as evaluation metrics, since these are designed to measure the similarity between images. We also reviewed a ViT-based synthesis model [250] that operates in a decoder fashion for the task of fundus-to-angiogram translation with different evaluation measurements, including Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). We additionally provide the architectural type, modality, input size, training setting, datasets, metrics, and year for every medical image synthesis technique analyzed in Table 8, and Table 9 lists the contributions and highlights of the proposed works. Given the scarcity of works with ViT implementations and the recent advancements of Transformer models in the medical synthesis field, we believe these systems deserve more research effort. For example, Transformers have much room for improvement in generating more realistic, higher-quality synthesized medical images; one way to achieve this is to incorporate more detailed anatomical and physiological features using more efficient and effective attention mechanisms. Additionally, while much of the current research has focused on 2D images and the CT and MRI modalities, there is potential to apply these techniques to other types of medical images, including 3D and microscopy images.
## 7 Medical Image Detection
Object detection remains one of the challenging problems in computer vision, and detection in the medical image domain poses its own additional challenges. Current state-of-the-art architectures for 2D natural images use Vision Transformers. The Vision Transformers used in the detection task can be
Figure 33: An overview illustration of CyTran [249]. An input image is fed through a downsampling block to extract its features and make it compatible with high-resolution images. The output then passes through a convolutional Transformer block to enrich features by capturing local and global information. In the final step, enriched features are upsampled to the image size using transpose-convolution.
classified into two groups, Transformer backbones and detection Transformers; in addition, the Transformer module can be used in a hybrid manner. Detection Transformers generally form an end-to-end detection pipeline with an encoder-decoder structure, while Transformer backbones use only the Transformer encoder for feature refinement. To increase detection performance, object detectors combine variants of Vision Transformers with classical convolutional neural networks (CNNs).
Carion et al. [163] introduced DETR, which forms the foundation of Detection Transformers. DETR uses a ResNet backbone to create a lower-resolution representation of the input images. Although this approach achieves very good 2D detection results, comparable to R-CNN-based detectors, its high computational complexity is a downside. Deformable DETR [23] improved DETR's detection performance while overcoming the high computational complexity. Many subsequent approaches have refined DETR's detection concept: Efficient DETR [259] eliminated DETR's requirement for iterative refinement, Conditional DETR [260] introduced a conditional cross-attention module, DN-DETR [261] introduced a denoising strategy, and DINO [262] improved many aspects such as denoising training. Recently, some studies performed experiments on 2D medical data, e.g., [263] and [264], but only very few attempts have adapted these models to 3D. The Spine Transformer was proposed by Tao et al. [265] for sphere-based vertebrae detection. Another 3D detection approach was proposed by Ma et al. [266], who introduced a novel Transformer that combines convolutional layers and Transformer encoders to automatically detect coronary artery stenosis in coronary CT angiography (CCTA). An approach to better extract complex tooth-decay features was proposed in [267]. For end-to-end polyp detection, Shen et al. [268] proposed an approach based on the DETR model, and Kong et al. proposed CT-CAD [269], a context-aware Transformer for end-to-end chest abnormality detection. Table 10 lists details on modalities, organs, datasets, metrics, etc., and the highlights of the different approaches are summarized in Table 11. Some of the aforementioned detection papers in the medical image domain are summarized in this section.
### Backbone
This section explains Transformer networks that use only Transformer encoder layers for object detection. The work proposed by Ma et al. [266] uses a Transformer network **(TR-Net)**
| **Method** | **Concept(s)** | **Modality** | **Type** | **Pre-trained Module: Type** | **Dataset(s)** | **Metrics** | **Year** |
|---|---|---|---|---|---|---|---|
| **Pure** | | | | | | | |
| PTNet [246] | Intra-Modality | MRI | 2D | ✗ | dHCP dataset [252] | SSIM | 2021 |
| MMT [248] | Intra-Modality, Inter-Modality | MRI | 2D | ✗ | ¹ IXI [237], ² BraTS [185] | SSIM | 2022 |
| **Bottleneck** | | | | | | | |
| ResViT [247] | Intra-Modality, Inter-Modality | CT, MRI | 2D | ViT: Supervised | ¹ IXI [237], ³ Multi-modal pelvic MRI-CT [257] | PSNR | 2021 |
| CyTran [249] | Inter-Modality | CT | 2D | ✗ | Coltea-Lung-CT-100W [249] | MAE, RMSE, SSIM | 2022 |
| **Decoder** | | | | | | | |
| VTGAN [250] | Inter-Modality | Angiography | 2D | ✗ | Fundus Fluorescein Angiography [258] | Fréchet Inception Distance, Kernel Inception Distance | 2021 |

Table 8: An overview of the reviewed Transformer-based medical image synthesis approaches.
| **Method** | **Contributions and Highlights** |
|---|---|
| PTNet [246] | Introduced a pure Transformer-based network with linear computational complexity for medical image synthesis. |
| MMT [248] | Pure Transformer-based architecture that incorporates Swin Transformer blocks in the encoder and decoder; the attention maps offer interpretable cues about the model's reasoning and decision-making; the framework can be applied to a variety of medical analysis tasks, including image segmentation and cross-modality synthesis. |
| ResViT [247] | First conditional adversarial model for medical image-to-image translation with a hybrid CNN-Transformer generator; uses a weight-sharing strategy among Transformers to limit computational overhead and model complexity; a single end-to-end model generalizes across source-target modality settings, e.g., one-to-one and many-to-one tasks. |
| CyTran [249] | Generative adversarial convolutional Transformer for image translation between contrast and non-contrast CT scans and for image alignment; introduces the Coltea-Lung-CT-100W dataset of triphasic lung CT scans. |
| VTGAN [250] | Synthesis model for fundus-to-angiogram translation that incorporates ViT architectures for concurrent synthesis and abnormality classification; evaluated for robustness under spatial and radial transformations. |

Table 9: A brief description of the reviewed Transformer-based medical image synthesizing models.
for identifying stenosis. Coronary Artery Disease (CAD) is a leading threat to the lives of cardiovascular patients globally. Hence, the automatic detection of CAD is highly significant and is considered a challenging task in clinical medicine. The complexity of the coronary artery plaques that cause CAD makes detecting coronary artery stenosis in coronary CT angiography (CCTA) challenging.
The architecture combines the feature extraction capability of convolutional layers with Transformer encoders. TR-Net can analyze the semantic information of feature sequences and model the relationships between image information at each position of a multiplanar reformatted (MPR) image, allowing it to detect stenosis based on both local and global features. The CNN extracts local semantic information from images, while the Transformer more easily captures global semantic details. A 3D-CNN is employed to capture local semantic features at each position of an MPR image, and Transformer encoders are then used to analyze the feature sequences. The main advantage is that this helps mine the dependency of local stenosis on each position along the coronary artery. The architecture of TR-Net is given in Figure 35. One part of the figure shows the 3D-CNN, which extracts local features; the other shows the Transformer encoder structure, which associates the local feature maps of each position and analyzes the dependencies between positions, in turn helping to classify significant stenosis at each position. The CNN part has two main advantages: it prevents overfitting of semantic information and improves the model's efficiency. The input to the network is a coronary artery MPR image.
The 3D-CNN module has four sequentially connected sub-structures, each consisting of a \(3\times 3\times 3\) convolutional kernel, a non-linear ReLU layer, and a \(2\times 2\times 2\) max-pooling layer. The first part has 16 filters, and each subsequent part doubles the number of filters of the previous one. Since Transformers take 1D vector sequences as input, the feature maps are flattened. The Transformer in the proposed architecture consists of 12 Transformer encoders, each composed of two sequentially connected sub-blocks: multi-head self-attention (MSA) and a feed-forward network (FFN). Layer normalization (LN) and residual connections are employed before and after the two sub-blocks. To keep the encoders consistent, the input size is made the same as the output size, and the output of each encoder is fed into the next. In the final layer, the embeddings are fed into softmax classifiers to detect significant stenosis.
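Following the stated specification (four 3D convolution blocks with 16, 32, 64, and 128 filters, flattened per-position features, and a stack of 12 Transformer encoders feeding a classifier), a minimal PyTorch sketch could look as follows. The handling of the MPR input as a sequence of small cubes is our simplification, not the authors' exact pipeline:

```python
import torch
import torch.nn as nn

def conv3d_block(c_in, c_out):
    # 3x3x3 conv + ReLU + 2x2x2 max-pooling, as in the described sub-structure
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=2))

class TRNetSketch(nn.Module):
    """Sketch following the stated TR-Net spec; input handling is assumed."""
    def __init__(self, num_classes=2, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(conv3d_block(1, 16), conv3d_block(16, 32),
                                 conv3d_block(32, 64), conv3d_block(64, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=12)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, cubes):              # (B, L, 1, 16, 16, 16): L positions
        B, L = cubes.shape[:2]
        f = self.cnn(cubes.flatten(0, 1))  # (B*L, dim, 1, 1, 1) local features
        seq = f.view(B, L, -1)             # one feature vector per position
        seq = self.encoder(seq)            # model inter-position dependencies
        return self.classifier(seq)        # per-position stenosis logits

logits = TRNetSketch()(torch.randn(2, 20, 1, 16, 16, 16))
```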
The **RDFNet** approach proposed by Jiang et al. [267] incorporates the Transformer mechanism to better extract complex tooth-decay features, which improves detection accuracy. The network's three main modules are the backbone, neck, and prediction modules. The backbone extracts features from caries images. In the backbone, the focus operation is a slicing operation that can replace convolution and reduce the loss of feature information. The C3Modified layer is a convolution module activated by the FReLU function, which extracts complex visual-spatial information from the caries images. The SPP [272] module has a spatial pyramid structure that expands the receptive field, which in turn fuses local and global features and enhances the feature maps. After the SPP structure, RDFNet
Figure 34: An overview of Transformers in medical image detection. Methods are classified into the backbone, neck, and head according to the positions of the Transformers in their architecture. The prefix numbers in the paper’s name in ascending order denote the reference for each study as follows: 1. [266], 2. [267], 3. [270], 4. [268], 5. [269], 6. [265], 7. [271].
Figure 35: Proposed architecture of TR-Net model [266].
appends an improved Transformer-encoder module to strengthen feature extraction. The neck module fuses feature maps of different sizes and extracts high-level semantic structures. It uses the feature pyramid network (FPN) proposed in [273] and the path aggregation network (PAN) proposed in [274]: FPN is applied top-down and PAN bottom-up to generate the feature pyramids, and performing feature fusion in both directions prevents information loss. An improved C3Modified convolutional module is adopted in the neck to better extract the semantic features of caries images. The high-level features generated by the neck are used by the prediction module to classify objects and regress their locations. To overcome the low detection accuracy typical of single-stage detection methods, it has three detection heads for large, medium, and small objects. As Transformers have proven to have strong feature extraction capability, the authors utilized the Transformer model to extract complex features, stacking three Transformer encoders together. To simplify the model, they removed the original normalization layer from the Transformer encoder. The feature map was fed into this structure to extract deep features; the attention values were calculated independently for each head and later concatenated.
Wagner et al. [270] proposed a novel hybrid cell detection approach (**CellCentroidFormer**) for microscopic images that combines the advantages of Vision Transformers (ViTs) and convolutional neural networks (CNNs). A CNN model pre-trained on the ImageNet dataset is used for feature extraction, reducing the amount of required training data via transfer learning. The authors show that the combined use of convolutional and Transformer layers is advantageous, as the convolutional layers can focus on local information (cell centroids) while the Transformer layers focus on global information (the overall shapes of cells). The proposed centroid-based approach represents cells as ellipses and is trainable end-to-end. Four different 2D microscopic datasets were used for the experimental evaluation, and the results outperformed fully convolutional architectures. Figure 36 shows the architecture. In the MobileViT block, the encoder output is folded back into a 3D tensor, which is then concatenated with the input tensor. The MobileViT block is a lightweight alternative to the full Transformer encoder-decoder approach [21]; because of its multi-head self-attention layers, it has a much higher computational complexity than convolutional layers, so MobileViT blocks are used only in the neck part of the proposed model to avoid increasing the computational complexity excessively. Layer normalization is added for regularization and to allow higher learning rates. The backbone of the model is the EfficientNetV2S [275] CNN, which consists of six high-level blocks, five of which are used to extract image features. To benefit from transfer learning, the backbone is initialized with ImageNet weights, which reduces the amount of required training data. EfficientNetV2S [275] models are optimized for a fixed input size, so the input images are resized accordingly. Cells are represented by centroid, width, and height parameters, which are predicted by two fully convolutional heads containing 2D convolution, batch normalization, and bilinear upsampling layers. Further MobileViT blocks are avoided as they would increase the computational complexity, and the later convolutional layers have a larger receptive field, which helps capture global information [276] effectively. The first convolutional head predicts a heatmap for detecting cell centroids, and the second predicts the cell dimensions; the output dimensions of the model are \(384\times 384\). The authors use one decoder of the Dual U-Net to predict the centroid heatmap, while the second branch predicts the dimensions of the detected cells. The shapes of the cells are attended to by the Transformer layers in the network.
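The two prediction heads can be sketched as below; layer widths and the neck feature size are assumptions, but the conv + batch-norm + bilinear-upsampling structure follows the description:

```python
import torch
import torch.nn as nn

def conv_head(c_in, c_out):
    """Sketch of one fully convolutional head (conv + BN + bilinear
    upsampling); channel widths are illustrative assumptions."""
    return nn.Sequential(
        nn.Conv2d(c_in, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(64, c_out, 3, padding=1))

neck_features = torch.randn(1, 128, 192, 192)   # hypothetical neck output
heatmap_head = conv_head(128, 1)                # cell centroid heatmap
dims_head = conv_head(128, 2)                   # cell width and height

centroids = torch.sigmoid(heatmap_head(neck_features))  # (1, 1, 384, 384)
dimensions = dims_head(neck_features)                    # (1, 2, 384, 384)
```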
### Head
Detection Transformers based on the Transformer encoder-decoder architecture require a large amount of training data to deliver top performance, which is not feasible in the medical domain, where access to labeled data is limited. To address this problem, Wittmann et al. [271] proposed a detection Transformer with a **focused decoder** for detecting 3D anatomical structures in the human body. This network exploits the relative positions of anatomical structures and thus requires less training data. The focused decoder uses an anatomical region atlas to deploy query anchors that focus on the relevant anatomical structures. The proposed network omits the Transformer encoder and consists only of Transformer decoder blocks; the authors show that, for 3D datasets, avoiding the encoder reduces the complexity of modeling relations with the self-attention module.
The model architecture contains a backbone network for feature extraction, a focused decoder network for providing well-defined detection results, a classification network to predict the classes, and a bounding box regression network to output the best possible bounding box. The feature extraction backbone
Figure 36: Proposed architecture of CellCentroidFormer model [270].
network is a feature pyramid network (FPN) inspired by RetinaNet [277]. Features from the second layer (P2) are flattened before being given as input to the focused decoder. A dataset-specific anatomical region atlas [278] containing regions of interest (RoIs) is determined for each dataset. Uniformly spaced query anchors are then placed in each RoI, and a dedicated object query is assigned to each; such object queries restrict the focused decoder to predict solely within their respective RoIs.
The focused decoder network contains a self-attention module, a focused cross-attention module, and a feed-forward network (FFN). The self-attention module encodes strong positional inter-dependencies among object queries. The focused cross-attention module matches the input sequence to object queries to regulate the individual feature maps for prediction via attention, and the FFN then enables richer feature representations. Residual skip connections and normalizations are used to improve gradient flow. The classification network consists of a single fully-connected layer, and the bounding box regression network consists of three layers. The bounding box predictions are combined with the query anchors to obtain bounding boxes together with class-specific confidence scores. The network is trained to predict 27 candidate predictions per class; dynamic labels based on the generalized intersection over union (GIoU) are created during training to supervise these 27 predictions. During inference, the prediction with the highest confidence score indicates the best candidate. The model is trained end-to-end with the above GIoU loss, a binary cross-entropy loss for the classification network, and an L1 loss for the bounding box predictions.
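Since GIoU drives both the dynamic labeling and the box loss here, a small sketch of its computation for axis-aligned 2D boxes is given below (the paper operates on 3D boxes, but the formula extends directly):

```python
import torch

def giou(box_a, box_b, eps=1e-7):
    """Generalized IoU for boxes given as (x1, y1, x2, y2).
    GIoU = IoU - |C minus (A union B)| / |C|, with C the smallest enclosing box."""
    inter_lt = torch.max(box_a[..., :2], box_b[..., :2])
    inter_rb = torch.min(box_a[..., 2:], box_b[..., 2:])
    inter = (inter_rb - inter_lt).clamp(min=0).prod(dim=-1)
    area_a = (box_a[..., 2:] - box_a[..., :2]).prod(dim=-1)
    area_b = (box_b[..., 2:] - box_b[..., :2]).prod(dim=-1)
    union = area_a + area_b - inter
    iou = inter / (union + eps)
    enc_lt = torch.min(box_a[..., :2], box_b[..., :2])   # enclosing box corners
    enc_rb = torch.max(box_a[..., 2:], box_b[..., 2:])
    enclose = (enc_rb - enc_lt).prod(dim=-1)
    return iou - (enclose - union) / (enclose + eps)

g = giou(torch.tensor([[0., 0., 2., 2.]]), torch.tensor([[1., 1., 3., 3.]]))
```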
### Neck
Detection methods using region-based approaches need to generate anchor boxes to encode their prior knowledge and use non-maximum suppression to filter the resulting bounding boxes after prediction; these pre- and post-processing steps noticeably reduce detection performance. To bypass these surrogate tasks, Carion et al. [163] proposed the Detection Transformer (DETR), which views object detection as a direct set prediction problem solved by a Transformer encoder-decoder architecture. The self-attention mechanism, which explicitly models all pairwise interactions between elements in a sequence, helps predict the set of detections with absolute box coordinates directly from the image rather than from anchors. For end-to-end detection of polyp lesions, Shen et al. [268] proposed a convolution-in-Transformer (**COTR**) network based on the DETR model. COTR consists of four main layers: 1) a CNN backbone for feature extraction, 2) Transformer encoder layers embedded with convolutional layers for feature encoding and reconstruction, 3) Transformer decoder layers for object querying, and 4) a feed-forward network for detection prediction. Embedding convolutional layers into the Transformer encoder accelerates convergence compared to the notoriously slow convergence of DETR.
The CNN backbone uses a pre-trained ResNet18 [63] for feature extraction, converting input medical images into a high-level feature map; a \(1\times 1\) convolution then reduces the channel dimensions. In the Transformer encoder layers, six convolution-in-Transformer encoders collapse this spatial structure into a sequence, and a convolution layer reconstructs the sequential features back into the spatial domain. Each encoder has the standard architecture with a multi-head self-attention module and a feed-forward network, and a positional embedding [21] is added to the input of each attention layer. The Transformer decoder consists of six decoders that follow the standard Transformer architecture, except that they also decode object queries in parallel; each object query corresponds to a particular object in the image. The decoders take these object queries with position embeddings, as well as the output embeddings from the encoder, and convert them into decoded embeddings. A feed-forward network with two fully connected layers then converts the decoded embeddings into object predictions: the first layer regresses object locations, and the second classifies object scores. The object queries are thus independently decoded into box coordinates and classes, resulting in final predictions that include both object and no-object (background) predictions. The model casts object detection as a direct set prediction problem and is trained end-to-end with a bipartite matching loss (computed with the Hungarian algorithm) between predictions and ground truth for each query. If the number of queries exceeds the number of objects in the image, the remaining boxes are annotated as the no-object class; the model is thus trained to predict either an object or no object for each query. For class prediction, they used a negative log-likelihood loss, and for bounding box localization, an L1 loss combined with a generalized intersection over union (GIoU) [294] loss. The experiments demonstrated that the proposed model achieves performance comparable to state-of-the-art methods.
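The bipartite matching step can be sketched as follows for a single image, using SciPy's Hungarian solver; the cost terms and weights are simplified relative to DETR-style losses (the GIoU term is omitted):

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_logits, pred_boxes, gt_labels, gt_boxes, l1_weight=5.0):
    """Minimal DETR-style bipartite matching for one image: build a cost
    matrix from class probability and L1 box distance, then solve it with
    the Hungarian algorithm. Weights are illustrative."""
    probs = pred_logits.softmax(-1)                    # (num_queries, num_classes)
    cost_class = -probs[:, gt_labels]                  # (num_queries, num_gt)
    cost_box = torch.cdist(pred_boxes, gt_boxes, p=1)  # pairwise L1 distance
    cost = cost_class + l1_weight * cost_box
    q_idx, gt_idx = linear_sum_assignment(cost.detach().numpy())
    return q_idx, gt_idx  # matched query / ground-truth index pairs

q, g = hungarian_match(torch.randn(10, 3), torch.rand(10, 4),
                       torch.tensor([0, 2]), torch.rand(2, 4))
```

Queries left unmatched by the assignment are supervised as the no-object class, which is how the set prediction formulation handles background.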
Many deep learning detection methods fail to use context-relevant information to improve accuracy, and they generally suffer from slow convergence and high computational cost. **CT-CAD** [269], a context-aware Transformer for end-to-end chest abnormality detection, addresses these problems. The model consists of two main modules: 1) a context-aware feature extractor that enhances the features, and 2) a deformable Transformer detector that predicts detections and accelerates convergence. The context-aware feature extractor uses a ResNet50 backbone, dilated context encoding (DCE) blocks, and a positional encoding structure. The deformable Transformer detector contains a Transformer encoder-decoder architecture and a feed-forward network. The design of the context-aware feature extractor is inspired by the feature fusion scheme of DetectoRS [295], which builds on Feature Pyramid Networks (FPN) [296] and iteratively enhances FPN features into powerful representations. Likewise, the DCE blocks enhance the features extracted from the ResNet50 backbone by expanding the receptive fields to fuse multiscale context information using dilated
convolution filters of different sizes. This enriched feature map benefits the detection of objects across various scales. Inspired by YOLOF [297], each DCE block uses dilated convolutions and skip connections to achieve a larger receptive field and acquire more local context information. Finally, the features from the DCE blocks computed at different scales are summed to produce the output feature map.
The deformable Transformer detector has single-scale and multi-head attention properties. The deformable attention block attends to a small set of key sampling points, allowing the Transformer to focus on the relevant parts of the feature space and accelerate convergence. The authors used six encoder and six decoder layers with positional encoding to obtain the decoder outputs, whose dimensions correspond to the number of detected abnormalities and the decoder layer dimension. Finally, a feed-forward network outputs the category classification and location regression results. The model is trained end-to-end with a combination of bounding box loss and classification (cross-entropy) loss, adopting GIoU [298] to balance the loss between large and small bounding boxes.
The attention module in detection Transformers computes similarity scores between elements of the input to identify complex dependencies within the data. Computing the similarities of all possible positional pairs scales quadratically with the number of positions and thus becomes computationally very expensive; for this reason, Transformer-based object detection had not previously been applied to 3D images. Tao et al. [265] proposed a novel Transformer-based 3D object detection model, the **Spine-Transformers**, which casts the automatic detection of vertebrae in arbitrary Field-Of-View (FOV) scans as a one-to-one set prediction problem. The authors used a one-to-one set-based global loss that enforces a unique prediction preserving the sequential order of vertebra levels and eliminates bipartite matching between ground truth and predictions. The main modules of the Spine-Transformers are (1) a backbone network to extract features, (2) a lightweight Transformer encoder-decoder with positional embeddings and a skip connection, and (3) two feed-forward networks for detection prediction. The authors used a ResNet50 [63] architecture without the last SoftMax layer as the backbone to extract high-level features. These features are passed through a \(1\times 1\times 1\) convolutional layer to reduce the channel dimensions and then flattened into a feature sequence that is fed to the Transformer. The lightweight Transformer encoder-decoder contains only a two-layer encoder and a two-layer decoder to balance feature resolution against memory constraints; learnable positional embeddings are used in both the encoder and decoder layers. The authors found that a skip connection across the Transformer encoder-decoder helps propagate context and gradient information during training and thus improves performance. The two feed-forward networks then predict the existence of objects and regress their coordinates. The authors also proposed a sphere-based detector, called the InSphere detector, to replace rectangular bounding boxes and introduce rotational invariance. The Spine-Transformers model is trained end-to-end on fixed-size image patches to predict all vertebrae in parallel by enforcing one-to-one matching. Binary cross-entropy is used as the classification loss, and to enforce the order of the predicted vertebrae, an edge loss is introduced: an L1 distance loss between the centers of neighboring (top and bottom) vertebrae. For better localization accuracy of the bounding sphere detection, the authors
used a generalized intersection-over-union (GIoU) [294] loss. This model showed superior results compared to all the state-of-the-art methods. The authors also claim that the localization accuracy can be further improved by using a 3D CNN-based landmark regression [299].
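The edge loss described above can be sketched as an L1 penalty on the displacement between neighboring vertebra centers; a minimal illustration, under our own simplifying assumption that predictions arrive as an ordered tensor of sphere centers:

```python
import torch

def edge_loss(pred_centers, gt_centers):
    """L1 loss on edge vectors between consecutive vertebra centers.

    Sketch of the ordering constraint: rather than penalizing absolute
    positions alone, penalize the (top, bottom) neighbor displacements
    so the predicted sequence keeps the anatomical order.
    Shapes: (num_vertebrae, 3).
    """
    pred_edges = pred_centers[1:] - pred_centers[:-1]   # neighbor offsets
    gt_edges = gt_centers[1:] - gt_centers[:-1]
    return torch.abs(pred_edges - gt_edges).mean()
```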
### Discussion and Conclusion
In this chapter, several well-known Transformer architectures are analyzed to address the automatic detection challenge. Based on the Transformer model's contribution to the network structure, we grouped the literature into backbone, neck, and head strategies, and provided sample works for each category. In this respect, the core idea behind each network design, along with the pros and cons of each strategy, is highlighted in the summary tables. Vision Transformers have been shown to make more accurate diagnoses than traditional methods of analyzing medical images. These deep learning models can be trained on large datasets, such as ImageNet, and fine-tuned on medical image datasets to improve their performance in detecting abnormalities in X-rays, CT scans, and MRIs. By incorporating information from multiple modalities, Transformers can further enhance their ability to identify rare or subtle abnormalities in medical images. Many medical images are taken over time, and incorporating temporal information into the model can improve its performance; for example, the model can be designed to take into account the temporal evolution of diseases or conditions. Overall, Transformers have demonstrated their capability to significantly improve the accuracy and efficiency of medical image analysis, leading to advances in healthcare.
## 8 Medical Image Registration
Medical image registration is the task of transforming a set of two or more images of an organ or a biological process taken with different poses, time stamps, or modalities (e.g., CT and MRI) into a geometrically aligned and spatially corresponding image that can be utilized for medical analysis. The transformation can be discovered by solving an optimization problem
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Method** & **\# Params** & **Contributions** & **Highlights** \\ \hline
TR-Net [266] & - & \(\bullet\) First attempt to detect coronary artery stenosis more accurately by employing Transformers. \(\bullet\) While detecting significant stenosis, the TR-Net architecture is capable of combining the information of local areas near stenoses with the global information of coronary artery branches. \(\bullet\) The initial CNN layer prevents the overfitting of semantic information and improves the overall efficiency. & \(\bullet\) Compared with state-of-the-art methods, the TR-Net model has better results on multiple indicators. \(\bullet\) The gain in performance comes with a trade-off in the number of parameters, which affects the computational complexity. \\ \hline
RDFNet [267] & - & \(\bullet\) An image dataset of caries, annotated by professional dentists, is created. \(\bullet\) The Transformer mechanism is incorporated for better extraction of the complex features of dental caries. \(\bullet\) The PReLU activation function is adopted to significantly increase the inference speed. & \(\bullet\) Compared with existing approaches, the accuracy and speed of caries detection are better, and the method is applicable to portable devices. \(\bullet\) The method works well even when the illumination of the image is insufficient. \(\bullet\) Even though detection accuracy and speed are improved compared to the original approach, the detection speed is not the fastest. \\ \hline
CellCentroidFormer [270] & 11.5M & \(\bullet\) A novel deep learning approach that combines the self-attention of Transformers and the convolution operations of convolutional neural networks is proposed. & \(\bullet\) The model outperforms state-of-the-art fully convolutional one-stage detectors on four microscopy datasets, despite having a lower number of parameters. \\ \hline
COTR [145] & - & \(\bullet\) Proposed a convolution layer embedded into the Transformer encoder for better feature reconstruction and faster convergence compared to DETR. & \(\bullet\) Better results compared to existing Transformer-based detection models such as DETR [16] and deformable DETR [23]. \(\bullet\) Comparable results on the dataset of [277]. \\ \hline
CT-CAD [269] & - & \(\bullet\) Proposed a context-aware feature extractor, which enhances the receptive fields to encode multi-scale contextual information. \(\bullet\) Proposed a deformable Transformer detector that attends to a small set of key sampling locations, so that the Transformer can focus on a feature subspace and accelerate convergence. & \(\bullet\) CT-CAD outperforms the existing methods Cascade R-CNN [291], YOLO [292], and DETR [23]. \(\bullet\) Compared to the ChestX-Det-10 dataset, the model performs worse on the VinBigData Chest X-ray dataset, which has more categories of abnormalities with more complex patterns. \\ \hline
Spine-Transformers [265] & - & \(\bullet\) Proposed a 3D object detection model based on the Transformer architecture. \(\bullet\) Proposed a one-to-one set-based global loss that enforces unique predictions and preserves the sequential order of vertebrae. \(\bullet\) Proposed a sphere-based bounding box to enforce rotational invariance. & \(\bullet\) Obtained better results compared to state-of-the-art methods. \(\bullet\) The model has a higher Id-Rate on both datasets, but a higher localization error compared to the benchmark of [293]. \(\bullet\) Varying anatomical fields of view (FOVs) can affect the robustness of the model. \\ \hline \hline \end{tabular}
\end{table}
Table 11: A brief description of the reviewed Transformer-based medical image detection models. A dash indicates that the number of parameters was not mentioned in the paper.
that maximizes the similarity between the images to be registered [300]. A pair-wise registration of two MRI brain scans is shown in Figure 37 for illustration.
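As a toy illustration of this optimization view of registration, the sketch below fits a translation-only transform by gradient descent on an MSE (dis)similarity; real systems use richer transformation models and similarity measures, and all names here are ours:

```python
import torch
import torch.nn.functional as F

def register_translation(moving, fixed, steps=200, lr=0.05):
    """Toy pair-wise registration: fit a 2D translation aligning `moving`
    to `fixed` by minimizing MSE. Images: tensors of shape (1, 1, H, W)."""
    shift = torch.zeros(2, requires_grad=True)     # (tx, ty) parameters
    opt = torch.optim.Adam([shift], lr=lr)
    for _ in range(steps):
        # Affine matrix [I | t] in the normalized coordinates of affine_grid.
        theta = torch.cat([torch.eye(2), shift.view(2, 1)], dim=1).unsqueeze(0)
        grid = F.affine_grid(theta, moving.shape, align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        loss = F.mse_loss(warped, fixed)           # negative similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
    return shift.detach()
```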
Despite remarkable advancements in the quality of medical imaging techniques that aid professionals in better visualization and analysis of image data, a prominent challenge prevails in developing a system capable of effective integration of visual data that captures useful information from original images with high precision. Most registration procedures take into account the whole image as input by utilizing global information for spatial transformation, which leads to inefficient and slow integration of data. Furthermore, the collection process of medical images for training is slow and toilsome, performance degrades due to the presence of outliers, and local maxima entail negative effects on performance during optimization [301; 302]. The emergence of deep learning methods alleviated these problems by automatic extraction of features utilizing convolutional neural networks (CNN), optimizing a global function, and improving registration accuracy. For instance, Balakrishnan et al. [303] utilized a CNN to achieve unsupervised deformable registration by treating it as a parametric function to be optimized during training. Furthermore, Chen et al. [304] presented an unsupervised CNN-based registration algorithm to produce anthropomorphic phantoms. However, there are still limitations in capturing long-range spatial correspondence in CNN-based frameworks [305; 42].
Fueled by the strong ability of Transformers to model long-range dependencies and detect global information [306; 307; 27], they have gained the attention of researchers in the medical image registration domain in recent years. In this section, we review Transformer-based methods in medical image registration that ameliorate the aforementioned shortcomings of previous systems by utilizing the self-attention mechanism. We have organized the relevant approaches based on their type of registration:
1. Deformable registration, which employs an optimization algorithm to tune the transformation model in a way that maximizes the similarity measure between the images of interest [308];
2. Rigid registration, which achieves correspondence while preserving the relative distance between every pair of points in the patient's anatomy images [308];
3. Affine registration, which consists of the same operations as rigid registration plus non-isometric scaling (the sketch after this list contrasts the two matrix forms).
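A small illustrative sketch of the rigid and affine matrix forms in 2D homogeneous coordinates:

```python
import numpy as np

def rigid_2d(angle, tx, ty):
    """Rigid transform: rotation plus translation; distances are preserved."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def affine_2d(angle, tx, ty, sx, sy):
    """Affine transform: the rigid motion plus non-isometric scaling."""
    return rigid_2d(angle, tx, ty) @ np.diag([sx, sy, 1.0])

point = np.array([1.0, 2.0, 1.0])                       # homogeneous point
print(rigid_2d(np.pi / 4, 5.0, 0.0) @ point)            # rotated and shifted
print(affine_2d(np.pi / 4, 5.0, 0.0, 2.0, 1.0) @ point) # also stretched in x
```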
### Deformable Registration
Most existing Transformer-based algorithms focus on deformable transformations to perform medical image registration. **ViT-V-Net**[309] is the earliest work that incorporates Transformers to perform medical image registration in a self-supervised fashion. It is inspired by the integration of vision Transformer-based segmentation methods with convolutional neural networks to enhance the localization information recovered from the images. Unlike previous research that employed 2D images for spatial correspondence, ViT-V-Net was the first study to utilize ViT [22] for volumetric (i.e., 3D) medical image registration. As illustrated in Figure 39, the images are first encoded into high-level feature representations by multiple convolution blocks; these features are then split into \(P\) patches in the ViT block. The patches are mapped to a D-dimensional embedding space to provide patch embeddings, which are combined with learnable positional encodings to retain positional information. These patch embeddings are passed through the encoder block of the Transformer, followed by multiple skip connections to retain localization information, and then decoded with a V-Net style decoder [314]. Finally, a spatial Transformer [315] warps the moving image using the final output of the network. TransMorph [310] extended ViT-V-Net and proposed a hybrid Transformer-ConvNet framework that utilizes the Swin Transformer [57] as the encoder and a ConvNet as the decoder to provide a dense displacement field. Like ViT-V-Net, it employed long skip connections to retain the flow of localization information that may enhance registration accuracy. The output of the network, a nonlinear warping function, is applied to the moving image with the deformation field using the spatial transformation
Figure 38: Taxonomy of Transformer-based image registration based on their transformation type. We use the prefix numbers in the figure in ascending order and reference the corresponding paper as follows: 1. [307], 2. [309], 3. [310], 4. [311], 5. [312], 6. [313].
Figure 37: An example of pair-wise medical image registration. The goal of image registration is to geometrically align the moving image with the target or fixed image by performing the spatial transformation.
function proposed in [315]. An affine transformation Transformer network is incorporated to align the moving image with the fixed image before feeding it to the deformable registration network. This work also proposed two variants of TransMorph: diffeomorphic TransMorph (TransMorph-diff) to facilitate topology-preserving deformations and Bayesian TransMorph (TransMorph-Bayes) to promote a well-calibrated registration uncertainty estimate.
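The warping step shared by ViT-V-Net and TransMorph can be sketched as differentiable resampling of the moving image at coordinates displaced by the predicted field; the following is a simplified stand-in for the spatial transformation function of [315], with names and conventions our own:

```python
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp `moving` (N, C, H, W) with a dense displacement field `flow`
    (N, 2, H, W), given in pixels: build the identity grid, add the
    displacements, normalize to [-1, 1], and resample bilinearly."""
    _, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys)).float().unsqueeze(0)   # (1, 2, H, W), (x, y)
    coords = grid + flow                                # displaced pixel coords
    cx = 2.0 * coords[:, 0] / (w - 1) - 1.0             # x to [-1, 1]
    cy = 2.0 * coords[:, 1] / (h - 1) - 1.0             # y to [-1, 1]
    return F.grid_sample(moving, torch.stack((cx, cy), dim=-1),
                         align_corners=True)
```

Because `grid_sample` is differentiable with respect to the sampling coordinates, the registration network upstream of this layer can be trained end-to-end from an image-similarity loss alone.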
Likewise, Zhang et al. [311] introduced the dual Transformer network (**DTN**) framework to perform diffeomorphic registration. It is composed of a CNN-based 3D U-Net encoder [299] for the embedding of separate and concatenated volumetric images and a dual Transformer to capture the cross-volume dependencies. One of the Transformers is responsible for modeling the inter- and intra-image dependencies, and the other one handles the modeling of the global dependencies by employing the self-attention mechanism. The concatenation of the generated features from these Transformers results in enhanced feature embeddings, which are utilized by the CNN-based decoder to provide a diffeomorphic deformation field. The evaluation of the framework was conducted on the brain MRI scans of the OASIS dataset [316], which substantiates their improvements in diffeomorphic registration compared to the existing deep-learning-based approaches.
Furthermore, **XMorpher**[312] emphasized the significance of backbone architectures for feature extraction and matching of pair-wise images, and proposed a novel full-Transformer backbone. It consists of two parallel U-Net structures [34] as sub-networks, with their convolutions replaced by the introduced Cross Attention Transformer for feature extraction of the moving and fixed images, and cross-attention-based fusion modules that use these features to generate a representation of the moving-fixed correspondence together with fine-grained multi-level semantic information that contributes to a fine registration.
### Affine Registration
To perform affine medical image registration with Transformers, Mok et al. [313] proposed **C2FViT**, a coarse-to-fine vision Transformer that performs affine registration, a geometric transformation that preserves points, straight lines, and planes, while registering 3D medical images. Former studies relied on CNN-based affine registration focused on local misalignment or global orientation [322; 323], which limits the modeling of long-range dependencies and hinders generalizability. C2FViT, as the first work that takes into account the non-local dependencies between medical images, leverages vision Transformers instead of CNNs for 3D registration. As depicted in Figure 40, the model is split into \(L\) stages, each containing a convolutional patch embedding layer and \(N_{i}\) Transformer encoder blocks (\(i\) denotes the stage number), with the goal of learning the optimal affine registration matrix. In each stage, the fixed and moving images are downsampled and concatenated with each other, and the resulting representation is passed to the convolutional patch embedding layer to produce image patch embeddings. The Transformer then receives the embeddings and produces the feature embedding of the input. Experiments conducted on OASIS [316] and LPBA [320] demonstrated superior performance compared to existing CNN-based affine registration techniques in terms of registration accuracy, robustness, and generalization ability.
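One simplified reading of this coarse-to-fine scheme is a residual refinement of the affine estimate across resolutions; the sketch below is our schematic interpretation rather than the exact C2FViT update rule, and the `stages` modules are placeholders:

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_affine(fixed, moving, stages):
    """Schematic coarse-to-fine affine estimation for 3D volumes
    (N, 1, D, H, W). Each element of `stages` is a placeholder module
    that consumes a (fixed, warped-moving) pair at one resolution and
    predicts a residual (N, 3, 4) affine update."""
    n = fixed.shape[0]
    theta = torch.eye(3, 4).unsqueeze(0).repeat(n, 1, 1)   # identity affine
    num = len(stages)
    for i, stage in enumerate(stages):
        scale = 1.0 / 2 ** (num - 1 - i)                   # coarse -> fine
        f = F.interpolate(fixed, scale_factor=scale, mode="trilinear")
        m = F.interpolate(moving, scale_factor=scale, mode="trilinear")
        grid = F.affine_grid(theta, f.shape, align_corners=False)
        warped = F.grid_sample(m, grid, align_corners=False)
        theta = theta + stage(torch.cat((f, warped), dim=1))  # residual update
    return theta
```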
### Rigid Registration
**SVoRT**[307] addressed the necessity of slice-to-volume registration before volumetric reconstruction for the task of volumetric fetal brain reconstruction, and employed a Transformer network trained on artificially sampled 2D MR slices that learns to predict slice transformations based on the information gained from other slices. The model also estimates the underlying 3D volume from the input slices to promote higher accuracy in transformation prediction. Evaluations on synthetic data showed the superiority of the proposed method in terms of registration accuracy, and experiments on real-world MRI scans demonstrated the model's ability to perform high-quality volumetric reconstruction.
### Discussion and Conclusion
According to the research discussed in this section, vision Transformers are prominent tools for image registration tasks due to their ability to train on large-scale data, made feasible by parallel computing and the self-attention mechanism. Leveraging Transformers to encourage better identification of global dependencies improves registration compared to CNNs, in terms of both Dice scores and the determinant of the Jacobian of the deformation field.
Figure 40: Overview of C2FViT. The model has \(L\) stages, each with a convolutional patch embedding layer and \(N_{i}\) Transformer encoder blocks, to learn the optimal affine registration matrix. In each stage, the fixed and moving images are downsampled and concatenated, then passed to the convolutional patch embedding layer to produce image patch embeddings, from which the Transformer produces the feature embedding of the input [313].
Figure 39: Overview of ViT-V-Net. Multiple convolution blocks encode images into high-level features, which the ViT block splits into patches. These patches are then mapped to D-dimensional patch embeddings that get integrated with learnable positional encodings to retain positional information. Next, these patches are passed into the Transformer encoder block, followed by multiple skip connections to retain localization information, and decoded using a V-Net style decoder. Using the network’s final output, a spatial Transformer warps the moving image. Figure taken from [309].
To mitigate the burden of quadratic complexity when processing high-resolution images and to model local relationships, the reviewed studies usually employ CNNs to provide feature maps or dense displacement fields [309, 310, 311]. C2FViT [313] disregarded convolutional networks and implemented convolutional patch embeddings to promote locality. However, for deformable registration of medical content, XMorpher recently demonstrated the power of cross-attention in better capturing spatial relevancy without a CNN implementation [312], and SVoRT purely utilized Transformers to perform rigid registration [307].
The notable experimental work on brain MRI scan data, such as OASIS [316] and FeTA [321], shows the importance of accurate automatic registration for neuroimaging data. One particular work [312] evaluated its registration on cardiac datasets, including MM-WHS 2017 [318] and ASOCA [319]. As for modality, all the aforementioned methods conducted their evaluations on 3D (volumetric) imaging.
Based on the brief review of Transformer-based medical image registration research, we believe that other regions of interest (ROI) such as neurons, retina, and neck area are worth exploring to facilitate diagnostic operations in different domains with more precise registration models.
We have also specified the architectural type, modality, organ, data size, training paradigm, datasets, metrics, and year for each medical registration technique reviewed in Table 12. Furthermore, Table 13 provides a list of the contributions and highlights of the proposed works.
## 9 Medical Report Generation
Medical report generation focuses on producing comprehensive captions and descriptions pivoting on medical images for diagnostic purposes. Designing automatic methods capable of performing this task can alleviate tedious and time-consuming work in producing medical reports and promote medical automation [324]. Recently, advancements in deep learning have brought the attention of researchers to employing an intelligent system capable of understanding the visual content of an image and describing its comprehension in natural language format [325]. Research efforts in improving this area can be employed in medical imaging by implementing systems capable of providing descriptions and captions (i.e., generating medical reports) concerning medical images. These captioning systems usually utilize encoder-decoder models that encode medical im
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Method** & **Modality** & **Organ** & **Type** & **Datasets** & **Metrics** & **Year** \\ \hline \hline
ViT-V-Net [309] & MRI & Brain & 3D & Private dataset & Dice & 2021 \\ \hline
TransMorph [310] & MRI, CT & Brain, Chest-Abdomen-Pelvis & 3D & \({}^{1}\) IXI [237] \({}^{2}\) T1-weighted brain MRI scans from Johns Hopkins University \({}^{3}\) Chest-Abdomen-Pelvis CT [317] & Dice, \% of \(|J_{\phi}|\leq 0\), SSIM & 2021 \\ \hline
XMorpher [312] & CT, MRI & Heart & 3D & \({}^{1}\) MM-WHS 2017 [318] \({}^{2}\) ASOCA [319] & Dice & 2022 \\ \hline
C2FViT [313] & MRI & Brain & 3D & \({}^{1}\) OASIS [316] \({}^{2}\) LPBA [320] & Dice, Hausdorff distance & 2022 \\ \hline \hline \end{tabular}
\end{table}
Table 12: An overview of the reviewed Transformer-based medical image registration approaches.
ages and decode their understandings to provide diagnostic information in a natural language format.
Despite the success of deep learning, limitations including reliance on an immense amount of data, unbalanced data in radiology datasets (e.g., the IU Chest X-ray dataset [326]), and the black-box nature of DL models entail challenges in medical report generation [327]. The success of Transformer models in many vision-and-language tasks has drawn the attention of researchers in the medical report generation domain to this architecture. In this section, we discuss approaches that utilize Transformers to promote effective capture of long-range context dependencies and better report generation. As illustrated in Figure 41, the following is our taxonomy of these systems according to the mechanism by which they produce accurate and reliable clinical reports:
1. _Reinforcement Learning-based._ The ultimate goal of a medical report generation system is to provide clinically accurate and reliable reports. In reinforcement learning, the MRG system is considered an agent with the objective of maximizing clinical accuracy based on the feedback given by the reward signal, which is directly calculated by the evaluation metric score (e.g., CIDEr [328]).
2. _Graph-based._ Radiology reports are typically composed of a long findings section with multiple sentences, which makes report generation a challenging task. Therefore, the inclusion of prior information is beneficial for facilitating the generation of long narratives from visual data. Knowledge graphs, which are powerful models that can capture domain-specific information in a structured manner, can be used to exploit prior information for medical report generation [329; 330; 331].
3. _Memory-based._ Memory is a resource through which important information is recorded. In designing a proper MRG system, it is crucial to store vital and diagnostic information that can benefit the generation process by incorporating prior knowledge and experience. Hence, configuring a memory mechanism with Transformers as a report generation framework facilitates longer and more coherent text generation by sharing information gained through the process [332; 333].
4. _Other Systems._ Systems that introduce ideas different from those of the previous categories to improve clinical accuracy, such as curriculum learning, contrastive learning, and alternate learning, belong to this group.
### Reinforcement Learning-based Systems
The first work to implement a Transformer architecture for medical report generation is RTMIC [334]. It used a reinforcement learning strategy during training to mitigate the exposure bias problem prevailing in Seq2Seq models [358]. In their approach, the original images are fed into a DenseNet [335] as the region detector to extract bottom-up visual features. These features are then passed into a visual encoder to generate visual representations of the detected regions, which the captioning decoder then utilizes to generate captions for the specified regions. The proposed method was evaluated on the IU X-Ray dataset [326] and achieved state-of-the-art results. The integration of RL and Transformers has also been applied to surgical instruction generation, since jointly understanding surgical activity and modeling the relations linking visual and textual data is a challenging task. Zhang et al. [337] employed a Transformer-backboned encoder-decoder architecture and applied the self-critical reinforcement learning [359] approach to optimize the CIDEr score [328] as the reward. Their approach surpasses existing models on the DAISI dataset [338] according to caption evaluation metrics. This work's key difference from others is that the model is proposed to generate instructions instead of descriptions.
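Self-critical training of this kind uses the score of a greedy rollout as the baseline for sampled outputs; a condensed sketch follows, where `model.sample` and the `cider` scorer are assumed interfaces rather than concrete APIs:

```python
import torch

def scst_loss(model, images, refs, cider):
    """Self-critical sequence training step (sketch). `model.sample` is an
    assumed interface returning (token_ids, summed_log_probs); `cider`
    maps (hypotheses, references) to a per-sample score tensor."""
    with torch.no_grad():
        greedy_ids, _ = model.sample(images, greedy=True)
        baseline = cider(greedy_ids, refs)            # (batch,) scores
    sampled_ids, log_probs = model.sample(images, greedy=False)
    reward = cider(sampled_ids, refs)                 # (batch,) scores
    advantage = (reward - baseline).detach()          # greedy score as baseline
    return -(advantage * log_probs).mean()            # REINFORCE with baseline
```

Only samples that beat the model's own greedy decode receive a positive learning signal, which directly optimizes the (non-differentiable) evaluation metric while keeping the gradient variance manageable.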
### Graph-based Systems
In graph-based medical report generation, Li et al. [330] proposed **KERP**, a Graph Transformer implementation that generates robust graph structures from visual features extracted by a DenseNet [335] backbone. This approach is composed of three modules: Encode, Retrieve, and Paraphrase. First, an encoder module constructs an abnormality graph by converting the visual features extracted from the medical images. Next, a retrieve module retrieves a sequence of templates according to the detected abnormalities. Finally, the terms of the produced templates are paraphrased into a report by the paraphrase module. KERP's workflow is illustrated in Figure 42.
Additionally, Liu et al. [342] addressed the visual and textual data biases and their consequences for generating radiology reports, and proposed the **PPKED** framework to alleviate these challenges. Their work introduced three modules to perform report generation: (1) the Prior Knowledge Explorer (PrKE), which obtains prior information relevant to the input images; (2) the Posterior Knowledge Explorer (PoKE), which extracts the posterior information, including the abnormal regions of the medical image; and (3) the Multi-domain Knowledge Distiller (MKD), which distills the information obtained from the previous modules to perform the final report generation. PPKED formulates the problem with these modules in the following manner: PoKE first extracts the image features corresponding to the relevant disease topics, taking as input the visual features extracted by a ResNet-152 [63] from the input image together with abnormal-topic word embeddings. The PrKE module then uses the output of PoKE to filter, from the introduced prior working experience (a BERT encoder) and the prior medical knowledge component, the prior knowledge relevant to the abnormal regions of the input image. Finally, the MKD module, implemented on top of the Transformer decoder equipped with Adaptive Distilling Attention, generates the final medical report from this obtained information.
### Memory-based Systems
Concerning the development of systems that rely on a memory mechanism to generate medical reports, Chen et al. [332]
presented a Memory-Driven Transformer (**MDT**), a model suitable for the generation of long informative reports and one of the first works on the MIMIC-CXR dataset [343]. MDT employs a relational memory to exploit characteristics prevailing in reports of similar images, and then the memory is incorporated into the decoder section of the Transformer by implementing a memory-driven conditional layer normalization (MCLN).
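The MCLN idea can be sketched as a layer normalization whose affine parameters are offset by functions of the memory state; a compact illustration with our own dimensions and naming:

```python
import torch
import torch.nn as nn

class MemoryConditionalLayerNorm(nn.Module):
    """Sketch of memory-driven conditional layer normalization: the
    scale and shift of LayerNorm are adjusted by linear functions of
    the relational memory state instead of being fixed."""

    def __init__(self, d_model, d_memory):
        super().__init__()
        self.norm = nn.LayerNorm(d_model, elementwise_affine=False)
        self.gamma = nn.Parameter(torch.ones(d_model))   # base scale
        self.beta = nn.Parameter(torch.zeros(d_model))   # base shift
        self.to_dgamma = nn.Linear(d_memory, d_model)    # memory -> delta scale
        self.to_dbeta = nn.Linear(d_memory, d_model)     # memory -> delta shift

    def forward(self, x, memory):
        # x: (batch, seq, d_model); memory: (batch, d_memory)
        dgamma = self.to_dgamma(memory).unsqueeze(1)     # per-sample offsets
        dbeta = self.to_dbeta(memory).unsqueeze(1)
        return (self.gamma + dgamma) * self.norm(x) + (self.beta + dbeta)
```

In this reading, the memory influences every decoder layer through the normalization statistics rather than only through attention, which is what lets patterns shared across similar reports steer generation.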
Likewise, Nooralahzadeh et al. [345] introduced **M\({}^{2}\) TR-progressive**, a report generation approach that utilizes curriculum learning, a strategy of training machine learning models that starts with easy samples and gradually increases the samples' difficulty [360]. Instead of directly generating full
\begin{table}
\begin{tabular}{|c c c c c c c c|} \hline \hline
**Method** & **Modality** & **Organ** & **Type** & **Visual Backbone** & **Datasets** & **Metrics** & **Year** \\ \hline \hline
**Reinforcement Learning** & & & & & & & \\
RTMIC [334] & X-ray & Lung & 2D & DenseNet-121 [335] & IU Chest X-ray [326] & BLEU [336], CIDEr [328] & 2019 \\ \hline
SIG [337] & Ultrasound, Colonoscopy & Multi-organ & 3D & ResNet-101 [63] & DAISI [338] & CIDEr [328], ROUGE [340] & 2021 \\ \hline
**Graph** & & & & & & & \\
KERP [330] & X-ray & Lung & 2D & DenseNet-121 [335] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) CX-CHR (private dataset) & BLEU [336], CIDEr [328], ROUGE [340] & 2019 \\ \hline
PPKED [342] & X-ray & Lung & 2D & ResNet-152 [63] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], CIDEr [328], ROUGE [340] & 2021 \\ \hline
**Memory** & & & & & & & \\
MDT [332] & X-ray & Lung & 2D & ResNet-101 [63] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], ROUGE [340] & 2020 \\ \hline
AlignTransformer [344] & X-ray & Lung & 2D & ResNet-50 [63] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], ROUGE [340] & 2021 \\ \hline
M\({}^{2}\) TR-progressive [345] & X-ray & Lung & 2D & DenseNet-121 [335] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], ROUGE [340] & 2021 \\ \hline
MDT-WCL [346] & X-ray & Lung & 2D & ResNet [63] & \({}^{1}\) MIMIC-ABN [347] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], ROUGE [340] & 2021 \\ \hline
CMN [333] & X-ray & Lung & 2D & ResNet-101 [63] & \({}^{1}\) IU Chest X-ray [326] \({}^{2}\) MIMIC-CXR [343] & BLEU [336], Meteor [339], ROUGE [340] & 2022 \\ \hline
**Other** & & & & & & & \\
CRG [348] & X-ray & Lung & 2D & DenseNet-121 [335] & MIMIC-CXR [343] & BLEU [336], Meteor [339], CIDEr [328], ROUGE [340] & 2021 \\ \hline
Medical-VLBERT [349] & CT, X-ray & Lung & 2D & DenseNet-121 [335] & \({}^{1}\) Chinese COVID-19 CT [350] \({}^{2}\) CX-CHR (private dataset) & BLEU [336], CIDEr [328], ROUGE [340] & 2021 \\ \hline
CDGPT2 [351] & X-ray & Lung & 2D & DenseNet-121 [335] & IU Chest X-ray [326] & BLEU [336], Meteor [339], CIDEr [328], ROUGE [340] & 2021 \\ \hline
CGI [352] & X-ray & Lung & 2D & DenseNet-121 [335] & \({}^{1}\) MIMIC-CXR [343] \({}^{2}\) IU Chest X-ray [326] & BLEU [336], Meteor [339], ROUGE [340] & 2021 \\ \hline \hline \end{tabular}
\end{table}
Table 14: An overview of the reviewed Transformer-based Medical Report Generation approaches.
reports from medical images, their work formulates the problem in two steps: first, the Meshed-Memory Transformer (M\({}^{2}\) TR.) [361], a powerful image captioning model, receives the visual features extracted by a DenseNet [335] backbone and generates high-level global context. Second, BART [362], a Transformer-based architecture, encodes these contexts with a bidirectional encoder and decodes its output with a left-to-right decoder into coherent reports. An overview of the process is depicted in Figure 43.
Additionally, You et al. [344] proposed **AlignTransformer**, a framework composed of two modules: Align Hierarchical Attention (AHA) and Multi-Grained Transformer (MGT). In their approach, visual features and disease tags are first extracted from the medical image by an image encoder, and then aligned hierarchically in the AHA module to obtain multi-grained disease-grounded visual features. The grounded features help tackle the data bias problem by promoting a better representation of abnormal regions. Next, these grounded visual features are exploited by an adaptive exploiting attention (AEA) [361] mechanism in the MGT module to generate the medical reports. The authors also validated their model's efficiency through manual evaluation by clinical radiologists.
In **MDT-WCL**[346], the problem is approached with a weakly supervised contrastive loss, which gives more weight to the reports that are semantically close to the target reports, and a memory-driven Transformer is adopted as the backbone model to store key information in its memory module. To aid contrastive learning during training, the reports are clustered into groups with the K-Means algorithm, each report is assigned a label corresponding to its cluster, and semantically close reports are considered to be in the same cluster.
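Such a weakly supervised contrastive objective can be sketched with K-Means cluster ids acting as labels over report embeddings; a rough illustration, with temperature and shapes chosen for exposition:

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(embeddings, cluster_ids, temperature=0.1):
    """Contrastive loss with K-Means cluster ids as weak labels:
    embeddings of reports in the same cluster are pulled together,
    others pushed apart. embeddings: (batch, dim); cluster_ids: (batch,)."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                     # cosine similarities
    pos = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    self_mask = torch.eye(len(z), dtype=torch.bool)
    pos = pos & ~self_mask                            # positives, no self-pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0                            # skip anchors w/o positives
    loss = -(log_prob * pos).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()
```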
Although previous approaches have achieved promising results, they lack the ability to generate mappings between images and texts that align visual and textual information and assist medical diagnosis. To facilitate visual-textual alignment, the Cross-modal Memory Network (**CMN**) [333] extended encoder-decoder methods with a shared memory for better alignment of the information between images and texts. It uses a pre-trained ResNet [63] as the visual extractor to produce visual features, which are passed to the cross-modal memory network; the memory is a matrix in which each row stores an embedding of information linking images and texts. To access the stored information aligning the modalities, memory querying and responding are implemented in a multi-threaded manner.
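Memory querying and responding reduce, in essence, to attention over a shared memory matrix from either modality; a stripped-down sketch of the idea (not the exact CMN module):

```python
import torch
import torch.nn as nn

class CrossModalMemory(nn.Module):
    """Sketch of a shared cross-modal memory: both visual and textual
    features query the same memory matrix, and the attended rows are
    returned as memory responses that align the two modalities."""

    def __init__(self, num_slots, d_model):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_slots, d_model))
        self.query_proj = nn.Linear(d_model, d_model)

    def forward(self, features):
        # features: (batch, seq, d_model), from either modality.
        q = self.query_proj(features)
        attn = torch.softmax(q @ self.memory.t() / q.shape[-1] ** 0.5, dim=-1)
        return attn @ self.memory                     # memory responses
```

Because the memory parameters are shared across the image and text pathways, gradients from both modalities shape the same slots, which is what couples their representations.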
### Other Systems
Other MRG systems focus on solving the problem with different ideas. Lovelace et al. [348] proposed a generation framework composed of two stages. In the first stage, a Transformer
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Method** & **Contributions** & **Highlights** \\ \hline \hline \end{tabular}
\end{table}
Table 15: Contributions and highlights of the reviewed Transformer-based medical report generation approaches.
model is adopted to map the input image features extracted by a DenseNet-121 [335] to contextual annotations and to learn report generation. In the second stage, a procedure is introduced to differentiably sample a clinical report from the Transformer decoder and obtain observational clinical information from the sample. This differentiability is further employed to fine-tune the model for improved clinical coherence by applying their differentiable CheXpert to the sampled reports. Likewise, in **CDGPT2**[351], the medical image is passed into a CheXNet [363] to provide localizations of 14 types of diseases as visual features. To obtain better semantic features, the model was fine-tuned on a multi-label classification problem to extract manual tags from the IU-Xray dataset [326], replacing the final layer of the model with a layer of 105 neurons to produce 105 tags. The vector representation of the tags is then fed into a pre-trained distilGPT2 [364] decoder to generate medical reports. Moreover, Wang et al. [353] presented a confidence-guided report generation (**CGRG**) approach to support reliability in report generation by quantifying visual and textual uncertainties. It comprises an auto-encoder that reconstructs images, a Transformer encoder that encodes the input visual features extracted by ResNet-101 [63], and a Transformer decoder for report generation. Visual uncertainty is obtained from the auto-encoder, which acts as a guide for the visual feature extractor, and textual uncertainty is quantified with the introduced Sentence Matched Adjusted Semantic Similarity (SMAS), which captures the similarity between the generated reports. These uncertainties are further utilized to aid the model optimization process.
The recent outbreak of COVID-19, one of the deadliest pandemics, has motivated the research community to alleviate the tedious and time-consuming work of producing medical reports. VL-BERT [365], as an extension of BERT [36], can be employed as an intelligent medical report generation system to expedite the diagnosis process. **Medical-VLBERT**[349] introduced VL-BERT to the medical report generation domain. It defines the problem as a two-step procedure: first, it utilizes two distinct VL-BERTs as terminology encoders to produce terminology-related features (textual and visual), and then these features are fed into a shared language decoder to produce medical textbooks and reports. The proposed method takes into account predefined terminology word embeddings that represent medical domain knowledge. These embeddings are paired distinctly with two other embeddings as inputs to the encoders: textbook embeddings, which are generated with a lookup table, and spatial feature embeddings (termed "visual context") that are extracted from medical images with DenseNet-121 [335]. The encoders then integrate this pairwise information separately to produce textual and visual terminological features. Subsequently, a shared language decoder is trained with an alternate approach to properly exchange the knowledge captured by the encoders.
Figure 41: Taxonomy of Transformer-based medical report generation approaches based on the mechanism by which they generate clinical reports. We reference the papers in ascending order corresponding to their prefix number: 1. [334], 2. [337], 3. [332], 4. [344], 5. [345], 6. [346]. 7. [333], 8. [330], 9. [342], 10. [348], 11. [349], 12. [351], 13. [352], 14. [353].
Figure 42: Using an encoder module, KERP creates an abnormality graph from the extracted visual features. Then, a retrieval module retrieves a sequence of templates based on the detected abnormalities. Next, the paraphrase module paraphrases the templates' terms into a report [330].
Figure 43: Workflow of the \(M^{2}\) Tr. Progressive framework. The task is accomplished in two stages: First, the Meshed-Memory Transformer (\(\text{M}^{2}\) TR.) receives visual features extracted by a DenseNet [335] backbone and generates high-level global context. Second, the BART [362] architecture encodes contexts with a bidirectional encoder and decodes them with a left-to-right decoder to produce coherent reports [345].
Furthermore, in the work of Nguyen et al. [352], a classification, generation, and interpretation framework (**CGI**) is proposed to address clinical accuracy. Each term in the framework's name represents a different module. The classification module learns to discover diseases and generate their embeddings; it consists of an image encoder and a text encoder that extract global visual features from medical images and obtain text-summarized embeddings from clinical documents. The generation module is a Transformer model that takes the disease embeddings as input and generates medical reports from them. The interpretation module then takes these reports for evaluation and fine-tuning.
### Discussion and Conclusion
This section offers a systematic review of the Transformer architectures configured for medical report generation. Compared to previous sections that reviewed ViT-based frameworks to tackle different medical tasks and problems, this section focuses mostly on using standard Transformers as the core of a medical report generation system. A common theme prevailing in these systems is to solve the problem with an encoder-decoder architecture supported by a CNN-based visual backbone. As mentioned in previous sections, the self-attention mechanism undermines the representation of low-level details. On the other hand, since medical reports consist of long and multiple sentences, Transformers are of great significance to model long-term dependencies, which assists clinically accurate report generation [366, 352]. To exploit the power of both CNNs and Transformers simultaneously, state-of-the-art MRG systems usually embed CNNs along with Transformers in their frameworks [334, 351, 353]. We have provided information in Table 14 on the reviewed report generation methods concerning their architectural type, modality, organ, pre-trained strategy, datasets, metrics, and year. Table 15 contains summarized information about the methodologies, including their contributions and highlights. In addition, it should be noted that several survey publications have been published in this field of medicine [355, 367, 327], and the most recent one provided a technical overview of Transformer-based clinical report generation [28]. We approach our review differently by distinguishing the proposed methods based on the mechanism they used to support the prevailing concerns such as long and coherent text generation, reliability, and visual-textual biases.
The ultimate goal of these frameworks is to increase clinical accuracy to expedite the diagnosis process and reduce workloads in radiology workflows [348, 352]. Numerous works have attempted to facilitate diagnostic decision-making by aligning the correlated sections of the medical image and the textual report that provide valuable information for detecting abnormalities [333, 344]. Multiple studies have also emphasized the importance of universal knowledge and designed systems that incorporate prior information for detecting disease [330, 332]. Some research effort was also put into better representation learning by contrasting normal and abnormal samples against each other in representation space, with a contrastive loss as the objective [346]. One recent work was inspired by curriculum learning to imitate the order of the human learning process [345].
Overall, we believe that MRG systems need more research and progression to be robustly incorporated in a practical setting.
## 10 Open Challenges and Future Perspectives
So far, we have discussed the application of Transformers (especially vision Transformers) and reviewed state-of-the-art models in medical image analysis. Even though their effectiveness is exemplified in previous sections by carefully presenting their ideas and analyzing the significant aspects addressed in the proposed methods, there is still room for improvement in many areas to devise more practical and medically accurate systems based on Transformers. Consequently, we discuss the challenges and future directions, hoping to help researchers gain insight into the limitations and develop more convenient automatic medical systems based on Transformers.
### Explainability
Fueled by recent progress in XAI (explainable artificial intelligence) and the introduction of algorithms that attempt to provide interpretable prediction in DL-based systems, researchers are putting effort into incorporating XAI methods into constructing Transformer-based models to promote a more reliable and understandable system in different areas, including medical analysis [368, 369]. Existing approaches usually highlight important regions of the medical image that contribute to the model prediction by employing attention maps [370, 40]. Furthermore, Vision Transformers (ViTs) have the ability to provide attention maps that indicate the relevant correlations between the regions of the input and the prediction. However, the challenge of numerical instabilities in using propagation-based XAI methods such as LRP [371] and the vagueness of the attention maps, which leads to inaccurate token associations [75, 372], makes interpretable ViTs an open research opportunity in computer vision, especially in medical image analysis. We believe that including interpretable vision Transformers, such as ViT-NeT [372], in various medical applications can promote user-friendly predictions and facilitate decision-making in the diagnosis of medical conditions, and is a promising direction in medical research problems.
### Richer Feature Representation
An effective and suitable representation space is substantially influential in building medical analysis systems. Transformers have demonstrated their efficiency in obtaining global information and capturing long-term dependencies in many areas, such as Natural Language Processing (NLP), Computer Vision, and Speech Recognition [306], and CNNs have proven to be effective in extracting local context from visual data [373]. However,
this locality usually enables these networks to capture rich local texture representations [374; 375], but it limits their ability to model global dependencies. As a result, many approaches stack Transformers along with CNNs to leverage both local and global information simultaneously in clinical applications (e.g., medical report generation) [344; 348; 50]. Recent studies stated that the single-scale representation of ViTs hinders improvement in dense prediction tasks, so multi-scale feature representations were introduced, achieving better performance in computer vision tasks including image classification, object detection, and image segmentation [376; 377]. Generalizing this idea to medical applications of ViTs, to facilitate devising clinically suitable systems, can be considered future work.
### Video-based analysis
There has been increasing interest in the vision community in extending ViT architectures to video recognition tasks. Recently, a handful of papers have integrated standard Transformers into their models for AI-assisted dynamic clinical tasks [378; 379; 380; 381]. However, the scarcity of proposed approaches leaves video-based medical analysis at an early stage, open for future investigation. Another potential research direction is to explore the power of video vision Transformer variants, such as the Video Swin Transformer [382], for clinical video understanding and to facilitate automatic robotic surgery.
### High Computational Complexity
The robustness of Transformer models in layouts that implement large numbers of parameters is one of their strengths. While this trait makes it possible to train models of enormous scale, it also requires large resources for training and inference [27]. Particularly disadvantageous to medical image analysis is that extending ViT pretraining to new tasks and datasets comes with substantial expense, while gathering medical samples can be difficult and dataset scale is often limited. For instance, according to the empirical studies in [22], pretraining a ViT-L/16 model on the large-scale ImageNet dataset takes approximately 30 days on a standard cloud TPUv3 with 8 cores. As a result, a notable number of papers utilized the pre-trained weights of ViT models in a transfer learning strategy to alleviate the training load [44; 24; 43]; but in some cases, such as volumetric medical images, where transfer learning does not demonstrate any improvement [143; 309], pretraining is necessary to capture domain-specific features for generalization and better performance. Ultimately, designing effective Transformer systems with fewer parameters, while maintaining optimality in terms of clinical accuracy and robustness, is a preferable research direction.
### Transformer-based Registration
As reviewed in Section 8, the idea of employing Transformers to support efficient medical image registration has become popular in recent years. The self-attention mechanism assists the learning of long-range visual correlations, since its unrestricted receptive field promotes a more accurate understanding of the spatial relationship between moving and fixed images [309; 310]. However, registration systems composed of Transformer architectures are still in their infancy and require more research effort.
### Data-Driven Predictions
With supervised learning as the popular fashion for building intelligent systems, the model learns features based on the provided annotations that are suitable for accomplishing a specific task, which hinders generalizability. In other words, supervised learning shifts the bias-variance trade-off in favor of strong inductive biases: assumptions that help the model learn a particular task quicker and with higher sample efficiency. However, these hard assumptions sacrifice adaptability to other settings and unseen datasets, and the model learns to accomplish its task without an innate understanding of the data. To tackle this issue, unsupervised regimes enable algorithms to act as general descriptors and capture features that help them perform efficiently across a wide range of tasks. Similarly, in medical image analysis, adopting Transformer networks with unsupervised learning algorithms promotes robustness and generalizability to other datasets and tasks.
### Medical Software Ecosystems
A future direction for advancing automatic medical analysis is to provide an open-source environment that contains libraries suitable for solving multiple medical tasks and challenges with Transformer architectures. Developers can further contribute to the ecosystem by updating and adding tasks, bringing novelty, and proposing ideas to enhance performance and accuracy [132]. Companies and organizations can support the system by preparing the necessary computational resources and hardware requirements. Examples of software prototypes in this direction are nnU-Net [383], Ivadomed [384], and preliminary works such as [133], which provides an end-to-end pipeline for implementing deep models on medical data.
## 11 Discussion and Conclusion
In this paper, we presented a comprehensive encyclopedic review of the applications of Transformers in medical imaging. First, we provided preliminary information regarding the Transformer structures and the idea behind the self-attention mechanism in the introduction and background sections. Starting from Section 3, we reviewed the literature on Transformer architecture in diverse medical imaging tasks, namely, classification, segmentation, detection, reconstruction, synthesis, registration, and clinical report generation. For each application, we provided a taxonomy and high-level abstraction of the core techniques employed in these models along with the SOTA approaches. We also provided comparison tables to highlight the pros and cons, network parameters, type of imaging modality they are considering, organ, and the metrics they are using. Finally, we outlined possible avenues for future research directions.
**Acknowledgments** This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) under project number 191948804. We thank Johannes Stegmaier for his contribution to the proofreading of this document.
|
2307.00382 | Low-Resource Cross-Lingual Adaptive Training for Nigerian Pidgin | Developing effective spoken language processing systems for low-resource
languages poses several challenges due to the lack of parallel data and limited
resources for fine-tuning models. In this work, we aim to improve upon
both text classification and translation of Nigerian Pidgin (Naija) by
collecting a large-scale parallel English-Pidgin corpus and further propose a
framework of cross-lingual adaptive training that includes both continual and
task adaptive training so as to adapt a base pre-trained model to low-resource
languages. Our studies show that English pre-trained language models serve as a
stronger prior than multilingual language models on English-Pidgin tasks with
up to 2.38 BLEU improvements; and demonstrate that augmenting orthographic data
and using task adaptive training with back-translation can have a significant
impact on model performance. | Pin-Jie Lin, Muhammed Saeed, Ernie Chang, Merel Scholman | 2023-07-01T16:47:36Z | http://arxiv.org/abs/2307.00382v1 | # Low-Resource Cross-Lingual Adaptive Training for Nigerian Pidgin
###### Abstract
Developing effective spoken language processing systems for low-resource languages poses several challenges due to the lack of parallel data and limited resources for fine-tuning models. In this work, we aim to improve upon both text classification and translation of Nigerian Pidgin (Naija) by collecting a large-scale parallel English-Pidgin corpus, and we further propose a framework of cross-lingual adaptive training that includes both continual and task adaptive training so as to adapt a base pre-trained model to low-resource languages. Our studies show that English pre-trained language models serve as a stronger prior than multilingual language models on English-Pidgin tasks, with up to \(2.38\) BLEU improvement; and they demonstrate that augmenting orthographic data and using task adaptive training with back-translation can have a significant impact on model performance.
Pin-Jie Lin\({}^{*1,2}\), Muhammed Saeed\({}^{*1}\), Ernie Chang\({}^{*3}\), Merel Scholman\({}^{2,4}\)+\({}^{1}\)Saarland Informatics Campus, Germany
\({}^{2}\)Language Science and Technology, Saarland University, Germany
\({}^{3}\)Reality Labs, Meta Inc.
\({}^{4}\)ILS, Utrecht University, the Netherlands
{pinjie, musaeed, m.c.j.scholman}@lst.uni-saarland.de, [email protected], [email protected]
Footnote †: Equal contribution.
**Index Terms**: spoken language understanding, low-resource machine translation, low-resource language
## 1 Introduction
Over the past few years, there has been an increasing interest in developing spoken language processing systems for low-resource languages such as Nigerian Pidgin (Naija) [1, 2]. Although spoken by roughly \(75\) million people in Nigeria, Nigerian Pidgin is a low-resource language that lacks sufficient data for spoken language processing tasks. Consequently, models tend to underperform on critical tasks such as sentiment analysis [3] and machine translation [4]. Additionally, the orthographic variation of low-resource languages presents a challenge for language processing models, which can be addressed by collecting diverse datasets and performing data augmentation using the target language lexicon [5, 6, 7]. The absence of parallel Pidgin data creates a considerable obstacle to training neural models with a high number of parameters. It also poses difficulties for fine-tuning pre-trained models on tasks involving the Pidgin language with limited resources, as seen in spoken machine translation and text classification [8, 9, 10].
In this paper, we mitigate the issue of data scarcity by collecting and releasing a large-scale parallel English-Pidgin corpus (Section 2). English, being the lexifier of Pidgin, proves to be a useful high-resource language for pivoting Nigerian Pidgin to other languages [11]. We therefore use this English-Pidgin parallel dataset to train language models. Prior work proposed that using multilingual models can benefit low-resource language settings [12]. However, fine-tuning existing models [13] for specific tasks can be challenging due to their large number of parameters and sensitivity to parameter values. Thus, to more effectively leverage existing pre-trained models, we introduce a cross-lingual adaptive framework which involves two training procedures: continual adaptive training and task adaptive training with back-translation. Our approach is designed to adapt a base model to a new language, making it more effective for low-resource languages.
To this end, we introduce a cross-lingual adaptation framework for fine-tuning existing models to Nigerian Pidgin [13, 14] (Section 3). Specifically, we perform continual and task adaptation by continually pre-training language models for Naija, and then fine-tuning the models [15, 16] for the downstream tasks. In our analysis, presented in Section 4, we found that the English-based model is superior to the multilingual one, indicating the importance of training on data specific to the target language. Additionally, we found that task adaptive training has a significant impact on model performance in the low-data setting. Our results suggest that cross-lingual adaptive training is a promising approach for building effective spoken language systems for low-resource languages2.
Footnote 2: [https://github.com/muhammed-saeed/CLaT](https://github.com/muhammed-saeed/CLaT).
Footnote 3: We release the English-Pidgin dataset and 5 million synthetic parallel corpus at [https://drive.google.com/file/d/16Oi0h5y09XPzFRDYCa-hJRzf_Nx_uRE1/view](https://drive.google.com/file/d/16Oi0h5y09XPzFRDYCa-hJRzf_Nx_uRE1/view)
Our main contributions are as follows:
* We release the first large-scale English-Pidgin dataset3 to our knowledge, which consists of \(29.73\)K sentence pairs.
* Using the collected corpus, we trained a baseline machine translation model, and release a corpus with \(5\) million synthetic sentence pairs generated using this system. We further improve upon this translation model with task adaptive training [17], and demonstrate a significant BLEU improvement of \(2.28\) and \(1.69\) for Pidgin-English and English-Pidgin respectively over the baseline model.
* We show that the English-based pre-trained model (T5) [13] outperforms its multilingual variant (mT5) [18] by \(2.38\) BLEU in English-to-Pidgin translation, demonstrating the superiority of English models over multilingual ones on English-Pidgin and Pidgin-English translations.
## 2 Corpus Collection
While there have been several efforts to create datasets for Pidgin [19, 20], the language still lacks a sufficiently sized dataset
for application in machine translation models. To address this issue, we combine and enrich various parallel and monolingual texts and datasets to generate a high-quality parallel dataset. The Nigerian Pidgin corpus collection includes six resources: (1) The Holy Bible, where each verse in English was mapped to its corresponding verse in Pidgin, resulting in \(29,737\) parallel sentences4. A limited number of chapters required manual processing to ensure their quality. (2) The JW300 corpus, which contains texts from two religious magazines covering various topics. (3) The Naija Treebank, which is a parallel corpus of transcribed spoken Pidgin text with English translations. (4) The NaijaSenti corpus, which consists of \(21,017\) crawled tweets in Pidgin and three additional Nigerian languages. (5) The Pidgin subset of the Afri-BERTa dataset, which consists of \(176\)K Pidgin sentences, together with Pidgin text from \(17\)K articles from BBC Pidgin, ASR, and PidginUNMT. (6) \(5\) million synthetic sentence pairs in English-Pidgin, which were generated from the IWSLT'15 and WMT14 datasets and the Pidgin sentences in the monolingual corpus. Table 1 presents an overview of the datasets included in the current study, along with their respective sizes.
Footnote 4: We utilize the edition provided by Wycliffe Bible Translators, Inc.
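The verse-level alignment behind resource (1) can be sketched in a few lines. The example below is our illustration rather than released code: it assumes each Bible edition is stored as a JSON list of records with `book`, `chapter`, `verse`, and `text` fields, and the file names are hypothetical.

```python
import json

def load_verses(path):
    """Load a Bible edition as a {(book, chapter, verse): text} mapping."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed format: list of verse records
    return {(r["book"], r["chapter"], r["verse"]): r["text"].strip()
            for r in records}

def align_editions(english_path, pidgin_path):
    """Pair verses that exist in both editions, keyed by verse reference."""
    en = load_verses(english_path)
    pg = load_verses(pidgin_path)
    shared = sorted(set(en) & set(pg))
    return [(en[key], pg[key]) for key in shared if en[key] and pg[key]]

if __name__ == "__main__":
    pairs = align_editions("bible_en.json", "bible_pcm.json")
    print(f"{len(pairs)} parallel verse pairs")  # ~29,737 in our corpus
```

Keying on the verse reference rather than on position keeps the alignment robust to verses that are present in only one edition.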
**Orthographic analysis.** Due to the lack of a commonly accepted standard orthography in Nigerian Pidgin, we observe various forms of orthographic variation in the data. The data is characterized by both intra-textual variation (i.e. variation within texts from the same source) and inter-textual variation (i.e. between different sources). We identify four main classes of systematic variations that occur in the data: (1) alternation between similar sounds; (2) conversion of digraphs into a single letter or alternate digraphs; (3) phonetic transcription of (blended) letter pairings; and (4) deletion of silent letters. Table 2 presents examples of each of these classes.5
Footnote 5: Note that the different datasets adhere to different orthographies – some aim to stay close to English spellings and others aim for phonetic spellings. Both the English spellings and the variations, therefore, occur in our data.
These variations all have phonetic origins. For example, the alternation between "c" and "k" can be attributed to both consonants being ejective, and the conversion of "ee" to "i" can be attributed to both vowels having similar sounds in the Pidgin pronunciation of certain words. As such, we address the inconsistent input by collecting diverse datasets, highlighting the significance of our released data.
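The four variation classes also suggest a simple rule-based augmenter for generating orthographic variants at training time. The sketch below is illustrative only and encodes rough approximations of the Table 2 substitutions; real Pidgin variation is more context-dependent than these patterns.

```python
import re

# Rough approximations of the Table 2 patterns; each rule maps an
# English-like spelling toward a more phonetic Pidgin spelling.
RULES = [
    (r"c(?=[ao])", "k"),   # alternation between similar sounds: carry -> karry
    (r"ou", "o"),          # digraph conversion: your -> yor
    (r"th", "d"),          # phonetic transcription: whether -> wheder (approx.)
]

def orthographic_variants(sentence: str):
    """Yield noisy variants by applying one substitution rule at a time."""
    for pattern, replacement in RULES:
        variant = re.sub(pattern, replacement, sentence)
        if variant != sentence:
            yield variant

for variant in orthographic_variants("whether you carry our load"):
    print(variant)
```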
## 3 Cross-Lingual Adaptive Training
Considering the challenges posed by orthographic variations and the scarcity of labeled data for developing performant spoken language processing systems, we introduce two supplementary training approaches--adapting the model to the new language and task before fine-tuning on downstream tasks--that can be utilized to benchmark and enhance the performance of low-resource Pidgin sentiment classification and translation tasks: (1) CAT: Continual Adaptive Training and (2) TAT: Task Adaptive Training.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Corpus** & **Language** & \(|\textbf{Train}|\) & **Domain** \\ \hline \multicolumn{4}{c}{Parallel} \\ \hline Bible & En., Pg. & \(29,737\) & religious \\ JW300 [19] & En., Pg. & \(20,218\) & religious \\ Naija Treebank [20] & En., Pg. & \(9,240\) & misc. \\ \hline \multicolumn{4}{c}{Monolingual} \\ \hline NaijaSenti [3] & Pg. & \(8,524\) & social media \\ Afri-BERTa [21] & Pg. & \(176,843\) & news, misc. \\ BBC Pidgin & Pg. & \(4,147\) & news \\ ASR [22] & Pg. & \(7,958\) & news \\ PidginUNMT [23] & Pg. & \(5,397\) & news \\ IWSLT’15 [24] & En. & \(143,609\) & wiki, misc. \\ WMT14-En [24] & En. & \(4,468,840\) & news \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Overview of Pidgin datasets.** En. indicates the English language and Pg. the Pidgin language. Datasets included in the corpus, along with their sizes in numbers of sentences.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Type** & **Subtype** & **Example** \\ \hline \multirow{2}{*}{Alternation} & c / k & carry - karry \\ & a / o & call - coll \\ \hline \multirow{2}{*}{Conversion} & ou / a & our - awa \\ & ou / o & your - yor \\ \hline \multirow{2}{*}{Transcription} & bl / bol & trouble - trobl \\ & er / a & whether - weda \\ \hline \multirow{2}{*}{Deletion} & initial & he - e \\ & medial & different - difren \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Types of orthographic variation in Nigerian Pidgin.**
Figure 1: _Overview of the framework for low-resource sentiment classification and translation in Pidgin language: (1) **Continual adaptive training**: We consider a base model \(M\) and a set of in-domain data \(x^{domain}\) in the target language. We then train \(M\) with MLM objective which enables a base model to adapt to a new language domain. (2) **Task adaptive training**: Starting from the observed sequence in the source language, the translation model synthesizes an inference in the target language creating the pseudo sentence pair. We construct a bi-directional back-translation by involving the forward and reverse translations. Next, the combined synthetic data serves as a supplementary task for the base model which enables the language model to adapt to more complex tasks via supervised task training.
**Continual adaptive training.** Given the limited availability of labeled Pidgin data, fine-tuning the large number of weights in pre-trained language models (PLMs) is challenging. To this end, we transfer the knowledge about one language absorbed in the weights to the target language by continually adapting the model to a new language via the unlabeled Pidgin corpus. _Continual Adaptive Training_ (CAT) provides supplementary training for the base model to transfer to a specific language domain and thus improves the model's performance on the downstream task. Figure 1 depicts the training phase where the base model \(M\) conducts language adaptation via data assumed to come from the same domain, thus building an adapted model specialized in the new language. More specifically, an English-based \(M^{English}\) is adapted to the Pidgin language using large-scale unlabeled data, resulting in a language-specific \(M^{Pidgin}\). Subsequently, we fine-tune this model for the target tasks.
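A minimal sketch of the CAT step with the Hugging Face Trainer is shown below; the corpus file name and hyperparameters are illustrative, and the actual runs follow the recipe in [29].

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Monolingual Pidgin corpus, one sentence per line (file name illustrative).
corpus = load_dataset("text", data_files={"train": "pidgin.txt"})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

# Masked language modeling objective with the standard 15% masking rate.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cat-roberta-pidgin",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=collator)
trainer.train()  # the adapted encoder M^Pidgin is then fine-tuned on the task
```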
**Task adaptive training.** To enhance the model's ability to tackle more intricate tasks, we further introduce _Task Adaptive Training_ (TAT), which allows the model to adapt to the translation task through supervised learning. Our task training involves combining the two sets of synthetic data that possess shared characteristics across both source and target languages for \(M\). To create synthetic data, TAT employs back-translation, a technique that has proven effective in low-resource machine translation scenarios. By leveraging bi-directional back-translation data, we augment the volume of task-specific training data accessible to the model, which can potentially enhance performance on more complex translation tasks. Specifically, we obtain a synthetic dataset \(D^{\prime}_{x\to y^{\prime}}=\{(x,y^{\prime})|x\in D\}\) via back-translation, where the pseudo translation \(y^{\prime}\) is generated from the sequence \(x\) in the source language. We combine the two translation directions into the bi-directional back-translation data \(D^{BT}=D^{\prime}_{x\to y^{\prime}}\cup D^{\prime}_{y\to x^{\prime}}\).
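The construction of \(D^{BT}\) can be summarized by the following sketch, which pairs each monolingual sentence with a pseudo translation produced by a trained translation model; the model directories and example sentences are placeholders, not artifacts we release.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def pseudo_translate(sentences, model_dir):
    """Generate pseudo translations y' for the input sentences x."""
    tok = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_dir)
    batch = tok(sentences, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(**batch, max_length=128, num_beams=4)
    return tok.batch_decode(outputs, skip_special_tokens=True)

mono_pg = ["wetin dey happen?"]        # monolingual Pidgin side (toy example)
mono_en = ["thank you very much"]      # monolingual English side (toy example)

# D'_{x -> y'}: real Pidgin x paired with its pseudo English translation y'.
d_pg = list(zip(mono_pg, pseudo_translate(mono_pg, "pg-en-model")))
# D'_{y -> x'}: real English y paired with its pseudo Pidgin translation x'.
d_en = list(zip(mono_en, pseudo_translate(mono_en, "en-pg-model")))

d_bt = d_pg + d_en  # D^BT, the supplementary data for task adaptive training
```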
## 4 Main Results
**General setup.** We closely followed the training procedure of the Transformer [25]. We trained the transformer translation models using Fairseq [26]. For experiments with T5 [13] and mT5 [27], we use Huggingface [28]. We use the Base checkpoints of all models.
### Sentiment Classification
**Data.** We derived the low-resource dataset from NaijaSenti [3], a \(3\)-class sentiment analysis benchmark (\(6.7\)K/\(0.6\)K/\(1.2\)K)6. We report the F1 score.
**Setup.** We leverage RoBERTa [15] and BERT [16] in their base versions. We add Init baselines in which the model weights are randomly initialized, and refer to direct _fine-tuning_ of the pre-trained language model on Pidgin as FT. When performing CAT, we continually train RoBERTa and BERT on the monolingual Pidgin corpus with a masked language modeling objective following the instructions in [29], and then fine-tune on the multi-class classification ("positive", "negative", "neutral") task.
**CAT improves Pidgin comprehension.** As shown in Table 3, BERT and RoBERTa with continual adaptive training both improve upon FT after the additional pre-training epochs on Pidgin data, with gains of +\(1.0\) and +\(2.4\) F1 points, respectively. Furthermore, CAT enables significant performance gains over Init of +\(8.9\) and +\(14.1\) F1 points. This can be attributed to the poor initialization of Init, where fine-tuning a large number of randomly initialized parameters is challenging, whereas pre-training and additional adaptive training provide a highly informative language prior for the downstream task.
Footnote 6: We obtained the portion of the dataset from the authors.
### English-Pidgin Translation
**Data.** We use the JW300 translation benchmark [4]. The baseline model uses the JW300 parallel English-Pidgin dataset only. For augmented data, we consider the Bible corpus, which consists of \(29\)K sentence pairs [4]. All models are evaluated on the test set using the BLEU score.
**Setup.** To facilitate a direct comparison with the Pidgin translation benchmark on JW300 [4], we use identical model architectures for the baselines. The word-level model consists of \(4\)-\(4\) encoder-decoder layers and \(10\) heads with an embedding size of \(300\), while the BPE model has \(6\)-\(6\) layers, \(4\) heads, and an embedding size of \(256\). We use shared embeddings and a shared vocabulary of size \(4000\). We refer to Data Augmentation and Data Aug.+TAT as the model with data augmentation from the Bible corpus and the model additionally conducting task training on the bi-directional noisy data via back-translation, respectively. We exploited back-translation (BT) to produce \(430\)K synthetic parallel sentences from our collected monolingual Pidgin data for TAT. We also release the \(5\) million parallel sentences generated from the IWSLT'15 and WMT14 datasets.
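For reproducibility, the shared BPE vocabulary can be trained along the following lines with SentencePiece; the training file names are illustrative, and any equivalent subword tool would serve.

```python
import sentencepiece as spm

# Train a single BPE model on the concatenated English and Pidgin training
# sides so that both languages share one vocabulary of size 4,000.
spm.SentencePieceTrainer.train(
    input="train.en,train.pcm",   # file names illustrative
    model_prefix="shared_bpe",
    vocab_size=4000,
    model_type="bpe",
    character_coverage=1.0)

sp = spm.SentencePieceProcessor(model_file="shared_bpe.model")
print(sp.encode("how you dey?", out_type=str))
```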
**Data augmentation improves performance.** Table 4 demonstrates that the BPE model with data augmentation significantly improves over the baselines by \(6.45\) and \(15.76\) BLEU points in the two translation directions. For word-level models, augmentation increases the English-to-Pidgin BLEU score by \(6.14\), while the score for Pidgin-to-English translation decreases by \(2.06\) points. We analyzed the dataset and the model to uncover the reason for this decrease, and found that the _Bible_ dataset introduces a lot of orthographic variation when text is segmented at the word level, while BPE enables sharing more semantic units.
\begin{table}
\begin{tabular}{l c c} \hline \hline & **English-Pidgin** & **Pidgin-English** \\ \hline _Word-level_ & & \\ \hline JW300 [4] & \(17.73\) & \(\mathbf{24.67}\) \\ Data Aug. & \(\mathbf{23.87}\) & \(22.61\) \\ \hline _BPE_ & & \\ \hline JW300 [4] & \(24.29\) & \(13\) \\ Data Aug. & \(30.74\) & \(28.76\) \\ Data Aug.+TAT & \(\mathbf{32.43}\) & \(\mathbf{31.04}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: _Results on JW300 translation benchmark with data augmentation_ (Data Aug.)_ and task adaptive training_ (TAT).
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Model Type** & **Init** & **FT** & **CAT** \\ \hline BERT & \(71.8\) & \(79.7\) & \(\mathbf{80.7}\) \\ RoBERTa & \(68.4\) & \(80.1\) & \(\mathbf{82.5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: _Results of sentiment classification._
**TAT with back-translation yields further improvement.** To investigate TAT's effectiveness, we generated the corresponding parallel sentences from monolingual Pidgin data with T5 after \(3\) epochs of training. Table 4 shows that TAT further improves upon the translation models, with +2.28 and +1.69 BLEU improvements for Pidgin-English and English-Pidgin, respectively. This indicates that task adaptive training with back-translation provides a better initialization for machine translation tasks.
### Further Analysis
**The English-based model is superior to multilingual models.** To validate the hypothesis of transferability from an English monolingual model versus a multilingual counterpart for the Pidgin language, we compare T5, whose encoder-decoder is extensively trained on an English corpus, with its multilingual variant mT5, which was pre-trained on new Common Crawl datasets covering 101 languages. To ensure that fine-tuning of the T5-variant models converges smoothly, we train the base versions of both T5 and mT5 in the Data Aug. setting using JW300 and Bible. Additionally, we employ the All setting, which uses all the parallel corpora: Bible, JW300, and Naija Treebank. Table 5 demonstrates that T5, trained solely on the English language, outperforms its multilingual counterpart in all scenarios, which confirms our hypothesis. We observe BLEU improvements of +2.38 and +2.12 for the two data settings in English-Pidgin translation, and of +0.82 and +1.27 points in Pidgin-English translation. We conclude that the English-based model is superior to the multilingual one. Moreover, even though the All baseline already uses more training data, TAT still slightly improves upon the T5 baselines. Next, we delve deeper into the potential of task adaptation for improving the adaptability of the base model when faced with limited labeled data.
**TAT significantly improves performance in the low-data setting.** We compare the model with a task adaptation stage, T5+TAT, against the baseline T5 to investigate the impact of task adaptation in low-data scenarios. We used four subsets randomly sampled from the original training splits (20%, 40%, 60%, and 80%) in addition to the full training set. The experimental setting is consistent with that used for the English-based T5. Figure 2 shows that T5+TAT substantially outperforms the baselines across all \(5\) sample sizes. Employing TAT yields particularly strong gains of +3.48 and +2.64 BLEU for Pidgin-English and English-Pidgin, respectively, when only 20% of the data is available for training. Further, incorporating supervised task training into the model shows a steady increase across the \(5\) training splits, while the performance of the baseline is sensitive to the sample size. This indicates that T5+TAT acquired orthographic information from the task adaptation stage and is thus capable of achieving high performance with less labeled data. These findings suggest that a robust initialization of the language model is essential for performing well in scenarios where data availability is limited, which is often the case in low-resource machine translation applications. Overall, these results highlight the potential value of incorporating TAT into models and suggest avenues for further research into optimizing models for limited-data scenarios.
## 5 Conclusion and Future Works
In this research, we developed an effective spoken language processing framework for Nigerian Pidgin, a low-resource language. We collected the largest parallel English-Pidgin corpus, performed large-scale data augmentation, and proposed a framework for cross-lingual adaptive training. Our studies show that the approach outperforms multilingual models and significantly improves model performance. Our results suggest that cross-lingual adaptive training is a promising approach for spoken language processing systems in low-resource languages. For future work, we aim to improve upon the adaptation techniques by better leveraging English-based PLMs and making the fine-tuning process more parameter-efficient for low-resource scenarios.
## 6 Acknowledgements
This work was supported by the Deutsche Forschungsgemeinschaft, Funder Id: [http://dx.doi.org/10.13039/501100001659](http://dx.doi.org/10.13039/501100001659), Grant Number: SFB1102: Information Density and Linguistic Encoding.
Figure 2: BLEU scores on \(20\%\), \(40\%\), \(60\%\), \(80\%\) of sample size and full sample size of (a) English-Pidgin and (b) Pidgin-English translation tasks using T5+TAT framework.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Model Type** & **English-Pidgin** & **Pidgin-English** \\ \hline JW300, Bible & & \\ mT5 (base) & \(33.78\) & \(32.4\) \\ T5 (base) & **36.16** & **33.22** \\ \hline All & & \\ mT5 (base) & \(33.92\) & \(32.75\) \\ T5 (base) & **36.04** & **34.02** \\ \hline All+TAT & & \\ T5 (base) & **36.35** & **34.04** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on JW300 translation benchmark using T5 and mT5. |
2310.12349 | Developing 3D Virtual Safety Risk Terrain for UAS Operations in Complex
Urban Environments | Unmanned Aerial Systems (UAS), an integral part of the Advanced Air Mobility
(AAM) vision, are capable of performing a wide spectrum of tasks in urban
environments. The societal integration of UAS is a pivotal challenge, as these
systems must operate harmoniously within the constraints imposed by regulations
and societal concerns. In complex urban environments, UAS safety has been a
perennial obstacle to their large-scale deployment. To mitigate UAS safety risk
and facilitate risk-aware UAS operations planning, we propose a novel concept
called \textit{3D virtual risk terrain}. This concept converts public risk
constraints in an urban environment into 3D exclusion zones that UAS operations
should avoid to adequately reduce risk to Entities of Value (EoV). To implement
the 3D virtual risk terrain, we develop a conditional probability framework
that comprehensively integrates most existing basic models for UAS ground risk.
To demonstrate the concept, we build risk terrains on a Chicago downtown model
and observe their characteristics under different conditions. We believe that
the 3D virtual risk terrain has the potential to become a new routine tool for
risk-aware UAS operations planning, urban airspace management, and policy
development. The same idea can also be extended to other forms of societal
impacts, such as noise, privacy, and perceived risk. | Zhenyu Gao, John-Paul Clarke, Javid Mardanov, Karen Marais | 2023-10-18T21:51:04Z | http://arxiv.org/abs/2310.12349v1 | # Developing 3D Virtual Safety Risk Terrain for UAS Operations in Complex Urban Environments
###### Abstract
Unmanned Aerial Systems (UAS), an integral part of the Advanced Air Mobility (AAM) vision, are capable of performing a wide spectrum of tasks in urban environments. The societal integration of UAS is a pivotal challenge, as these systems must operate harmoniously within the constraints imposed by regulations and societal concerns. In complex urban environments, UAS safety has been a perennial obstacle to their large-scale deployment. To mitigate UAS safety risk and facilitate risk-aware UAS operations planning, we propose a novel concept called _3D virtual risk terrain_. This concept converts public risk constraints in an urban environment into 3D exclusion zones that UAS operations should avoid to adequately reduce risk to Entities of Value (EoV). To implement the 3D virtual risk terrain, we develop a conditional probability framework that comprehensively integrates most existing basic models for UAS ground risk. To demonstrate the concept, we build risk terrains on a Chicago downtown model and observe their characteristics under different conditions. We believe that the 3D virtual risk terrain has the potential to become a new routine tool for risk-aware UAS operations planning, urban airspace management, and policy development. The same idea can also be extended to other forms of societal impacts, such as noise, privacy, and perceived risk.
keywords: Unmanned aerial systems, Advanced air mobility, Operations planning, Third-party risk, Airspace management, Urban system +
Footnote †: journal: Transportation Research Part C: Emerging Technologies
## 1 Introduction
Advanced Air Mobility (AAM) is a novel air transport concept that integrates multiple transformational technologies, such as electric aircraft, small drones, and automated air traffic management, into the existing transportation and service system. As envisioned by the National Aeronautics and Space Administration (NASA), AAM will enable the movement of people and cargo more effectively, especially in currently underserved local and regional settings [5]. The main AAM vehicle concepts include Electric Vertical Take-Off & Landing (eVTOL) aircraft, Electric Conventional Take-Off & Landing (eCTOL) aircraft, and Unmanned Aerial Systems (UAS). The former two concepts are indispensable for Urban Air Mobility (UAM) (Garrow et al., 2021), a subset of AAM that focuses on sustainable air mobility technologies that will operate and transport passengers or cargo at lower altitudes in urban environments. UASs are highly capable across a diverse set of tasks such as goods delivery, emergency services, public safety, infrastructure inspection, agriculture monitoring, meteorological research, and aerial photography and videography. By expanding current services into a new dimension - the sky, AAM is likely to become an integral part of the future urban infrastructure system. Nevertheless, the integration of AAM into the existing urban environment is
still a challenging task (Bauranov and Rakas, 2021). The societal impacts of the system are among the key challenges that are decisive for the viability and large-scale deployment of AAM. A sustainable AAM system must operate harmoniously within the constraints imposed by public concerns such as noise pollution (Bian et al., 2021; Gao et al., 2023), emissions (Gao et al., 2022), safety (Lin and Shao, 2020; Wei et al., 2023), privacy (Ding et al., 2022), and equity (Bennaceur et al., 2022; Chin et al., 2023). Consequently, the societal/community integration of aerospace systems has become a focal research topic in recent years (Gao and Mavris, 2022; Gao et al., 2022; Nassi et al., 2021; Vascik et al., 2018; Yunus et al., 2023).
UAS safety has been a perennial challenge, and one of the principal barriers to the large-scale deployment of UASs, especially in complex urban environments. Relevant literature from the communities of UAV design, controls and robotics, reliability engineering, and transportation can be classified into two broad categories: models for evaluating UAS safety risks, and protocols for mitigating UAS safety risks. The former involves several specific risk models such as failure models, recovery models, impact location models, stress models, exposure models, and harm/damage models (Washington et al., 2017). The latter comprises aspects such as collision avoidance algorithms, regulations and policies, and risk-aware operations planning. When considering UAS (and other forms of AAM) operations in complex urban environments, airspace management is the kernel of operations planning and policy making. An urban airspace defines volumes in the 3D space where an UAS is allowed to operate. The 'no-fly' zones are areas where flying is prohibited due to urban topographies such as buildings, and societal impacts such as noise, privacy, and safety. On the flight safety side, although the UAS community has made concrete progress towards risk-aware trajectory planning in recent years (Lin and Shao, 2020; Pang et al., 2022; Primatesta et al., 2020), the existing works either (1) only consider trajectory planning in the 2D domain and not in a 3D urban environment, or (2) require a repetitive process to identify a flight path while minimizing Third Party Risk (TPR). In addition, to the best of our knowledge, no published work has investigated how TPR or ground risk could affect airspace management in complex urban environments. In this work, we propose the novel idea of 3D virtual risk terrain for UAS operations planning in complex urban environments. The core idea is to convert risk constraints in an urban environment into 3D exclusion zones that UAS operations should avoid to adequately reduce risk to the Entities of Value (EoV) in the urban space. In our view, the 3D virtual risk terrain has three advantages over other existing methods for risk-aware UAS operations planning:
1. _It enables efficient UAS trajectory planning._ A combination of the 3D virtual risk terrain and the physical urban terrain defines an overall acceptable fly zone for an UAS to operate. This turns the original 3D trajectory optimization problem into a much more straightforward terrain avoidance problem, which can be solved by a non-repetitive trajectory generation process.
2. _It can be extended to other societal impacts._ The same concept can be applied to create 'no-fly' zones for other public acceptance factors such as community noise and privacy. The union of some or all of these 'no-fly' zones will generate an overall acceptable fly zone, where UAS operations can circumvent/limit multiple or all societal impacts of the system.
3. _It can serve as a guideline to policy makers._ By specifying the spatial and temporal variation of minimum clearance distance to be maintained from people and properties, this new tool can play a significant role in the development of safety regulations for UAS operations. It can also be used by aviation airworthiness authorities as a basis for determining the reliability and equipment requirements for the UAS to operate within a certain space in an urban environment.
In this paper, we develop an integrated risk-based approach for generating a 3D virtual risk terrain in urban environments, which facilitates risk-aware UAS trajectory planning. Specifically, this work bridges the two categories of relevant literature: it combines most types of UAS safety risk sub-models to generate protocols for mitigating UAS safety risks. The integrated approach considers a total of seven sub-models from four areas - systems failure, third-party information, urban topography, and safety requirements - to achieve the overarching objective. In addition, the framework is flexible in accommodating different probabilistic models, accounting for uncertainty, and capturing temporal dependencies in third-party exposure. Overall, we summarize our four primary contributions as follows:
1. _Proposing the novel concept of 3D virtual risk terrain for UAS operations._ This concept translates TPR or ground risk considerations into acceptable fly zones and provides a new angle for risk-aware UAS trajectory planning and urban airspace management. Based on our observations, a similar concept has not appeared in the literature before.
2. _Developing a holistic computational framework to generate 3D virtual risk terrain._ A review paper (Washington et al., 2017) summarized a total of seven types of sub-models for UAS ground risk. Whilst most relevant works in the literature have a limited coverage, this framework integrates all seven sub-models (in a modified organization). A mathematical framework based on conditional probability connects the sub-models together.
3. _Conducting numerical examples and generating prototypes of the proposed concept._ Using a Chicago downtown area as the background, we generate prototypes of the 3D virtual risk terrains under different conditions. The results are presented using both data visualization and quantitative measures. This first set of prototypes serves to provide insights to the key patterns of the virtual risk terrain.
4. _Interacting with 3D virtual terrains from other societal impacts._ In a project sponsored by NASA, a companion work (Gao et al., 2023) has developed 3D virtual acoustic terrain for aerial vehicle trajectory planning with limited noise impacts. This work also presents results that combine the two virtual terrains for the same urban model and vehicle type.
The remainder of the paper is organized as follows. Section 2 reviews literature in two relevant streams and identifies the research gap. Section 3 introduces the proposed overall approach and details of the sub-models. Section 4 applies the proposed approach to a real-world 3D urban model to generate prototypes of the proposed concept. Section 5 discusses the limitations and extensions of the study before Section 6 concludes the paper.
## 2 Background
### Modeling of UAS Safety Risk
A holistic understanding of the risks UAS operations pose to people and property in urban and suburban environments is key to the development of UAS safety and airworthiness regulations. Airworthiness authorities, such as the Federal Aviation Administration (FAA) (Federal Aviation Administration, 2019) and the European Aviation Safety Authority (EASA) (European Aviation Safety Agency, 2015) have promoted the adoption of a risk-based approach to develop safety regulatory frameworks for UASs. Risk assessment typically consists of three steps: risk identification, risk analysis, and risk evaluation (International Organization for Standardization, 2018). Of the two primary UAS risk sources, collision risk and ground risk, this work focuses on the ground risk of UAS: the system's risk to people or structures on the ground due to system failure during operation. A comprehensive survey paper (Washington et al., 2017) identified seven basic models in the assessment of UAS ground risk: failure model, impact location model, recovery model, stress model, exposure model, incident stress model, and harm model. Detailed definitions and recent developments for these sub-models are provided in Section 3. Each of these seven models is a dedicated research area in its own right. For example, the impact location model is at the intersection of flight dynamics and probabilistic modeling; the exposure model is enabled by Geographic Information System (GIS) data; and the harm model is a branch of solid mechanics and biomechanics. Therefore, a comprehensive assessment of UAS safety risk is interdisciplinary in nature and should integrate the latest research outcomes from a variety of specialized research fields.
The complexity of UAS safety risk modeling presents two dimensions of challenge to this unique problem. The first dimension lies in the diversity of UASs. Compared to Conventional Piloted Aircraft (CPA), UASs comprise a more heterogeneous set of aerial vehicles (with different sizes, types, configurations, etc.), can operate under a greater variety of conditions (e.g., in complex urban environments), carry more state-of-the-art technologies (robotics, computer vision, etc.), and are more susceptible
to environmental conditions (wind, weather, local climate, etc.). It would be problematic to apply a unified set of regulatory rules to the entire system. Extensive studies are required to investigate the safety requirements for a diverse set of representative scenarios, such that tailored and more effective decisions can be made. Also due to the diversity of UASs, every existing study in its risk modeling rests on specific underlying assumptions (on the aircraft properties, operating conditions, environmental conditions, etc.). Consequently, each model in the literature has a fairly limited range of applicability, and critical attention must be paid when generalizing or transferring such results. The second dimension of challenge resides in the high levels of uncertainty in most sub-models. Treatment of uncertainty is highly critical when estimating the impact location of an Unmanned Aerial Vehicle (UAV) crash and predicting the level of harm/damage a crash can bring to people/property. Moderate uncertainty also exists in other sub-models such as the failure model and the stress model. When assessing UAS safety risk in a complex urban environment, the assessment must be conducted at high granularity such that the spatio-temporal aspect of uncertainty is considered. Overall, both diversity and uncertainty have a significant impact on risk modeling and the resulting regulations. Therefore, considering the rate of development in UAS operations, accurate modeling of UAS safety risk will remain an active research area for many years to come.
### Mitigation of UAS Safety Risk
Mitigation of safety risk is among the main objectives of UAS system design. It can be achieved through improvements in engineering design, operations management, or both. Engineering design includes both the hardware and software of the aerial vehicle (and other infrastructure in the system). The present work emphasizes the operations management aspect, i.e., the planning and optimization of UAS operations to reduce the system's TPR. The operations management workflow is analogous to that of prediction-driven optimization, where the prediction comes as a result of the modeling of UAS safety risk described in the last subsection. In most cases, risk-aware UAS operations planning is based on risk maps - 'heat maps' that characterize the spatial distribution of safety risk in an urban or suburban area. The operations planning problem then becomes a trajectory planning problem which either completely avoids high-risk areas above certain thresholds or trades off to minimize the total risk an operation poses to people and property in the area.
In most research efforts in the literature, the creation of a risk map for UAS operations planning involves the integration of multiple basic models for UAS safety risk. Table 1 lists 12 of the most relevant papers in the literature which build risk maps for UAS operations, in chronological order. These works utilized between 2 and 5 basic models to construct risk maps for different areas and/or purposes. We can observe that the failure model, impact location model, and exposure model are the most frequently chosen building blocks, whilst none of these works included the recovery model in their approach. Another important aspect to consider is the number of dimensions in the risk map. The majority of works in Table 1 build 2D risk maps, and the corresponding trajectory planning problems are therefore in the 2D domain. The two most recent works (Pang et al., 2022; Zhang et al., 2023) have started to construct 3D risk maps that consider 3D UAS trajectory planning problems. A risk map enables efficient flight trajectory planning and optimization: methods such as the Dijkstra algorithm, A\({}^{*}\), Ant Colony Optimization (ACO), and their variants can be applied to generate trajectories with limited or minimal safety risk.
\begin{table}
\begin{tabular}{l|c|l} \hline
**Paper** & **Dimensions** & **Basic Models Used** \\ \hline Lum et al. (2011) & 2D & Failure, Impact Location, Exposure \\ Bertrand et al. (2017) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ La Cour-Harbo (2017) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ Levasseur et al. (2019) & 2D & Impact Location \\ Lin \& Shao (2020) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ Hu et al. (2020) & 2D & Failure, Stress, Harm, Exposure \\ Primatesta et al. (2020) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ Kim \& Bae (2022) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ Berard et al. (2022) & 2D & Failure, Impact Location, Stress, Harm, Exposure \\ \hline Pang et al. (2022) & 3D & Failure, Stress, Harm, Exposure \\ Zhang et al. (2023) & 3D & Failure, Exposure \\ \hline \end{tabular}
\end{table}
Table 1: A summary of literature on the use of risk maps for operations planning
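To make the planning step described above concrete, the sketch below runs A\({}^{*}\) over a toy 2D risk map, adding a weighted cell risk to the usual step cost; the grid, weight, and map values are illustrative and not drawn from any of the surveyed papers.

```python
import heapq
import numpy as np

def a_star_on_risk_map(risk, start, goal, risk_weight=10.0):
    """A* over a 2D grid; path cost = number of steps + weighted cell risk."""
    rows, cols = risk.shape
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0.0, start, [start])]
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                ng = g + 1.0 + risk_weight * risk[nr, nc]
                heapq.heappush(frontier, (ng + heuristic((nr, nc)), ng,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no feasible path

# Toy 5x5 risk map with a high-risk column in the middle (illustrative).
risk = np.zeros((5, 5))
risk[1:4, 2] = 1.0
print(a_star_on_risk_map(risk, start=(0, 0), goal=(4, 4)))
```

With a positive risk weight, the returned path detours around the high-risk cells; setting the weight to zero recovers the shortest path, which is exactly the trade-off that risk-map-based planners navigate.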
### Research Gap
The existing studies on the modeling and mitigation of UAS safety risk have made considerable contributions to the advancement of UAS and AAM. In an attempt to further the state-of-the-art in UAS operations planning practices, we identify a research gap at the intersection of three limitations in the current literature. The first limitation is that very few existing approaches have comprehensively integrated the basic models of UAS safety risk. Although some works in Table 1 have considered as many as five basic models in their risk maps, several basic models include over-simplified assumptions, such as a uniform distribution of impact location and certainty (probability \(=1\)) of serious injury after an impact. The lack of basic models and the use of oversimplified assumptions can have a significant impact on a model's precision. The second notable limitation is that most relevant works in the literature only create 2D risk maps for UAS operations planning. In a complex urban environment, however, UAS trajectory planning must be informed by a high-fidelity 3D risk map. To date, 3D risk mapping remains a widely unexplored research area. Third, and most important to our research, none of the existing works provides explicit guidance for risk-aware urban airspace management. Converting risk constraints into space constraints has many benefits, such as insights for regulatory planning and consistency with other societal impact constraints. We use a concept called a 3D "virtual terrain" to construct 3D 'no-fly' zones for better airspace management. Among the existing literature, the closest concept to the 3D virtual terrain is the one described in (Zhang et al., 2023), where the authors developed a 3D collision risk heatmap for an airport terminal area. In contrast, our work focuses on the TPR in complex urban environments and builds a risk map of a different nature. To address this research gap, we propose an integrated risk-based approach for generating a 3D virtual risk terrain for UAS operations planning in complex urban environments. This approach includes all basic models of UAS safety risk, develops the risk terrain in the 3D domain, and offers explicit guidance on urban airspace management.
## 3 The Proposed Approach
### Overview
In this section we introduce the details of the proposed risk-based approach for formulating a 3D virtual risk terrain for airspace management and UAS trajectory planning. Figure 1 displays the main modules and sub-models in our approach. The overall approach consists of four primary modules and seven sub-models. The systems failure module includes the failure model, recovery model, impact location model, and impact stress model to characterize the ground risk of UAS operations. The analytical frameworks in the systems failure module are flexible enough to accommodate the variabilities and uncertainties in the domain. The third-party information module focuses on the TPR posed to people and properties on the ground. It considers the harm model and the exposure model, which can accommodate the temporal dependencies in population and vehicle exposure in an urban environment. This provides the flexibility necessary to investigate safety terrains and requirements at different times of day and at different locations in a city. The urban topography module employs a 3D urban model as the physical background of the virtual risk terrain. Eventually, the physical and virtual terrains will collectively determine a 'no-fly' zone in the urban airspace for UAS trajectory planning. Lastly, the safety requirements determine the position of a 3D virtual surface, above which UAS operations can maintain a satisfactory risk level. The final risk terrain result provides inputs to the airworthiness requirements of the system, i.e., given a safety requirement (risk standard), what failure and recovery reliability levels (in sub-models 1 and 2) are required for the UAS to operate at a certain altitude above the ground in complex urban environments.
In this risk-based framework, the first five sub-models are conditional probability models; the exposure model can be obtained through analyzing mobility and traffic data in the city; the 3D urban model can
be obtained from public sources such as OpenStreetMap (Haklay & Weber, 2008), NASA, and the United States Geological Survey (USGS). Because a probability model is defined by sample space, events, and probabilities associated with each event, we first formally define the sample spaces that are involved in this flight risk analysis.
* **Aircraft type**: \(\mathcal{A}=\{A_{1},...,A_{n}\}\). The type of aircraft has a discrete sample space. At macro level, there are several general categories of UAV, such as fixed-wing, multi-rotor, and helicopter. Within each category, there are UAVs with various detailed configurations, weights, parameters, etc. Each aircraft type in \(\mathcal{A}\) can have different features and behaviors related to safety analysis.
* **Operating conditions**: \(\mathcal{O}=\{O_{1},...,O_{n}\}\). Typical operating conditions of a UAV include takeoff, landing, hovering, and level flight at various speeds. These conditions compose a discrete sample space.
* **Environmental conditions**: \(\mathcal{E}=\{E_{1},...,E_{n}\}\). The environmental conditions can broadly include wind, temperature, other weather conditions, and urban topography. Therefore, the sample space of \(\mathcal{E}\) is inherently multivariate and continuous. However, in reliability and safety related analysis, it is more realistic to discretize \(\mathcal{E}\) into a finite number of representative scenarios.
* **Failure mode**: \(\mathcal{F}=\{F_{1},...,F_{n}\}\). This is also a discrete sample space. Some general types of failure mode include Loss of Control (LOC), Unpremeditated Descent Scenario (UDS), Controlled Flight into Terrain (CFIT), and Dropped or Jettisoned Components (DOJC). Each general failure type can be divided into multiple detailed scenarios. For example, LOC includes partial and complete losses; DOJC can happen on different types and/or numbers of components on the aerial vehicle.
* **Recovery outcome**: \(\mathcal{R}=\{R_{1},R_{2}\}\). The recovery model in the framework takes into account state-of-the-art aerial vehicle technologies in robotics, control, and detection that can help a vehicle avoid catastrophic consequences when certain failure types occur. In this work, the recovery outcome indicates whether the aerial vehicle can recover (by its own system or with the help of a remote pilot) and land safely. \(R_{1}\) refers to successful recovery; \(R_{2}\) refers to unsuccessful recovery, which means that the aerial vehicle will enter into a crash trajectory.
Figure 1: Flowchart of the overall integrated risk-based approach for generating 3D virtual risk terrain.
* **Contingency capabilities**: \(\mathcal{C}=\{C_{1},...,C_{n}\}\). This sample space indicates which contingency capabilities are onboard the aircraft. Typical examples include parachutes, air bags, and emergency landing functions. An event \(C_{i}\in\mathcal{C}\) indicates the availability of each option. For example, given a total of six possible contingency capabilities, a vector \(C_{i}=[1,0,1,0,0,0]\) indicates that the first and third options are onboard.
* **Kinetic energy**: \(\mathcal{K}\). Kinetic energy is the most frequently used property to indicate the incident stress of a falling object. It is a continuous property, yet can be discretized to facilitate the analysis.
* **Harm level**: \(\mathcal{H}=\{H_{1},H_{2},H_{3},H_{4},H_{5},H_{6}\}\). The six harm levels on people or properties are Minor, Moderate, Serious, Severe, Critical, Unsurvivable, according to the Abbreviated Injury Scale (AIS).
* **Initial location**: \((x_{0},y_{0})\) indicates the 2-D coordinates of aircraft when the failure occurs.
* **Initial altitude**: \(h_{0}\) is the Above Ground Level (AGL) altitude of aircraft when the failure occurs.
* **Failure time**: \(\mathcal{T}\). Time is a crucial factor in the determination of third-party risk, as the presence and density of people and vehicles vary at different times of the day.
For simplicity, we further denote the 'current level' in each discrete sample space as \(\tilde{A}=a\in\mathcal{A}\), \(\tilde{O}=o\in\mathcal{O}\), \(\tilde{E}=e\in\mathcal{E}\), \(\tilde{F}=f\in\mathcal{F}\), \(\tilde{R}=r\in\mathcal{R}\), \(\tilde{C}=c\in\mathcal{C}\), \(\tilde{K}=k\in\mathcal{K}\), \(\tilde{H}=h\in\mathcal{H}\), and \(\tilde{T}=t\in\mathcal{T}\) respectively. Now, we define the form of each sub-model and relate them to the final flight risk metric that will be utilized to construct the 3D virtual flight risk terrain in an urban space. Below are the formal conditional probability forms of the first six sub-models.
1. **Failure model**: an important indicator of a UAV's reliability, the uncertainty in the occurrence of a failure mode \(\tilde{F}\) is dependent mainly on the aircraft type/configuration \(\tilde{A}\), the operating condition \(\tilde{O}\), and the environmental condition \(\tilde{E}\), given by \[P_{F}\left(f|a,o,e\right)=\Pr\left\{\tilde{F}=f|\tilde{A}=a,\tilde{O}=o,\tilde {E}=e\right\}\] (1)
2. **Recovery model**: the uncertainty in the ability of a UAV to recover from the failure mode and avoid catastrophic outcomes such as ballistic descent \(\tilde{R}\) is dependent on the aircraft type/configuration \(\tilde{A}\), the failure mode \(\tilde{F}\), the contingency capability \(\tilde{C}\), and the initial altitude \(h_{0}\). Note that sub-models 3 to 6 are considered only if the recovery is unsuccessful, i.e., \(\tilde{R}=R_{2}\). Denoting \(\mathbf{p}_{0}=(x_{0},y_{0},h_{0})\) for simplicity, the recovery model is given by \[P_{R}\left(r|a,f,c,\mathbf{p}_{0}\right)=\Pr\left\{\tilde{R}=r|\tilde{A}=a,\tilde{F}=f,\tilde{C}=c,\mathbf{p}_{0}\right\}\] (2)
3. **Impact Location model**: the spatial uncertainty in the ground location of a UAV's ground impact once an unrecoverable failure occurs \((x,y,0)\) is influenced by many factors, including the initial location and altitude \((x_{0},y_{0},h_{0})\), the aircraft type/configuration \(\tilde{A}\), the operating condition \(\tilde{O}\), the environmental condition \(\tilde{E}\), the failure mode \(\tilde{F}\), and the recovery outcome \(\tilde{R}\). The probability density function on the ground impact location is given by \(f_{G}\left((x,y,0)|(x_{0},y_{0},h_{0}),\tilde{A},\tilde{O},\tilde{E},\tilde{F },\tilde{R}\right)\). We further denote \(\mathbf{p}\) as a small area around \((x,y,0)\), then the probability that the UAV falls into \(\mathbf{p}\) is given by \[P_{G}\left(\mathbf{p}|\mathbf{p}_{0},a,o,e,f,r\right)=\Pr\left\{\mathbf{p}| \mathbf{p}_{0},\tilde{A}=a,\tilde{O}=o,\tilde{E}=e,\tilde{F}=f,\tilde{R}=r\right\}\] (3)
4. **Impact stress model**: like most relevant works in the literature, we use kinetic energy as the metric to measure a UAV's stress characteristic. The uncertainty in impact stress level \(\tilde{K}\) depends on the impact location \(\mathbf{p}\), the initial failure location \(\mathbf{p}_{0}\), the aircraft type/configuration \(\tilde{A}\), the operating condition \(\tilde{O}\), the failure mode \(\tilde{F}\), and the contingency capability \(\tilde{C}\), given by \[P_{S}\left(k|\mathbf{p},\mathbf{p}_{0},a,o,f,c\right)=\Pr\left\{\tilde{K}=k| \mathbf{p},\mathbf{p}_{0},\tilde{A}=a,\tilde{O}=o,\tilde{F}=f,\tilde{C}=c\right\}\] (4)
5. **Harm model**: the uncertainty in an EoV's harm level \(\tilde{H}\) is influenced by the aircraft type/configuration \(\tilde{A}\), the contingency capability \(\tilde{C}\), and the kinetic energy level \(\tilde{K}\), given by \[P_{H}\left(h|a,c,k\right)=\Pr\left\{\tilde{H}=h|\tilde{A}=a,\tilde{C}=c,\tilde{K }=k\right\}\] (5)
6. **Exposure model**: characterizes the density of a specific EoV at time \(t\) and location \(\mathbf{p}\) in the domain. The model is given by \[E\left(\mathbf{p},t\right)=E\left(\mathbf{p},\tilde{T}=t\right)\] (6)
We define the general model of the individual risk \(R^{i}\) at ground location \(\mathbf{p}\) as the probability that the UAV failure at location \(\mathbf{p}_{0}\) can cause a certain harm level \(h\) to the EoV at ground location \(\mathbf{p}\) and time \(t\). With all six sub-models, we have
\[R^{i}\left(h,\mathbf{p},t|\mathbf{p}_{0}\right)=\sum_{f\in\mathcal{F}}P_{F}\left(f|a,o,e\right)P_{R}\left(R_{2}|a,f,c,\mathbf{p}_{0}\right)P_{G}\left(\mathbf{p}|\mathbf{p}_{0},a,o,e,f,R_{2}\right)\sum_{k\in\mathcal{K}}P_{S}\left(k|\mathbf{p},\mathbf{p}_{0},a,o,f,c\right)P_{H}\left(h|a,c,k\right)\] (7) where the outer sum runs over the failure modes and the inner sum over the kinetic energy levels. The collective ground risk at \(\mathbf{p}\) and time \(t\) is then obtained by weighting the individual risk with the exposure model \(E\left(\mathbf{p},t\right)\).
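A small numerical sketch of how the sub-models compose is given below. The probability tables are made-up placeholders for one fixed context \((a,o,e,c)\), not calibrated values; for intuition, under a free-fall assumption a \(5\) kg vehicle failing at \(h_{0}=100\) m impacts at roughly \(44\) m/s, i.e., a kinetic energy near \(4.9\) kJ, which would fall in the "high" stress bin here.

```python
# Illustrative, made-up sub-model tables for one fixed (a, o, e, c) context;
# calibrated values come from the sub-models described in this section.
P_F = {"LOC": 1e-5}                       # failure probability per flight hour
P_R_FAIL = {"LOC": 1.0}                   # P(recovery unsuccessful | failure)
P_G = {"cell_17": 0.02}                   # P(impact in this ground cell)
P_S = {"cell_17": {"high_KE": 0.9, "low_KE": 0.1}}  # impact stress levels
P_H = {"high_KE": 0.6, "low_KE": 0.05}    # P(harm >= serious | stress level)

def individual_risk(cell):
    """Per-flight-hour probability of at least serious harm at `cell`."""
    risk = 0.0
    for failure, p_f in P_F.items():
        chain = p_f * P_R_FAIL[failure] * P_G[cell]
        risk += chain * sum(P_S[cell][k] * P_H[k] for k in P_S[cell])
    return risk

print(f"R_i = {individual_risk('cell_17'):.2e} per flight hour")
```

Weighting this quantity by the exposure \(E(\mathbf{p},t)\) and thresholding it against a safety requirement is what carves the 3D virtual risk terrain out of the urban airspace.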
### Failure Model
As the first sub-model in the framework, the failure model characterizes the probability and/or uncertainty in the occurrence of specific failure modes. The failure model is dependent on the aircraft type and configuration, operating conditions, and environmental conditions. According to (Clothier et al., 2018), failure modes of UASs can be broadly classified into four categories:
1. Unpremeditated Descent Scenario (UDS): a failure (or combination of failures), which results in the inability of the aerial vehicle to maintain a safe altitude above the surface or distance from objects and structures.
2. Loss of Control (LOC): a failure (or combination of failures), which results in the loss of control of the aerial vehicle and may lead to impact at high velocity.
3. Controlled Flight into Terrain (CFIT): when an airworthy aerial vehicle is flown, under the control of a qualified remote pilot or certified autopilot system, unintentionally into terrain (water, structures, or obstacles).
4. Dropped or Jettisoned Components (DOJC): failures that result in a component of the aerial vehicle (including its payload or stores) being dropped or jettisoned from the aerial vehicle.
Each failure mode can be attributed to a list of failures. For example, UDS can be caused by propulsion system failure (Burke et al., 2011), and the components involved in DOJC can include propellers, cameras, and packages. The type of failure mode is a crucial factor in the safety analysis of a UAS, as it largely influences the three subsequent sub-models - the recovery model, impact location model, and impact stress model. For instance, operations with UDS offer better controllability of the impact location than those experiencing LOC or CFIT (Washington et al., 2017). Overall, failure models can be developed using (1) historical data (from failures, accidents, and incidents), (2) expert opinions, and (3) reliability information on the components, subsystems, and systems of the UAS. Due to the limited historical data on UAS failures, functional and structural decomposition approaches, as well as the elicitation of expert opinions, have been employed to assess the failure rate of a vehicle or system.
Although dependencies exist between different failure modes, most models in the literature consider a single failure mode in their analyses. The majority of models also assume constant failure rates, although uncertainty exists because of a lack of data and knowledge on UAS failure. Going forward, with improved access to UAS reliability data, a system-informed data-driven approach can advance the assessment of UAS system failure. Among the existing failure models, (Clothier et al., 2007) assumed \(10^{-5}\) per flight hour for an unrecoverable flight critical event. (Ford and McEntee, 2010) assumed \(10^{-5}\) per flight hour for catastrophic failure (with uncontrolled flight termination) and \(10^{-4}\) per flight hour for hazardous failure. (Stevenson et al., 2015) assumed the Mean Time Between Failures (MTBF) to be \(10^{5}\) hours for sub-urban and \(10^{6}\) hours for urban areas. (Petritoli et al., 2018) conducted a more detailed reliability evaluation of UAVs and produced a Failure In Time (FIT) rate table for commercial drone systems and components. Table 2 summarizes their estimates of UAV reliability.
| System Description | System FIT (Failures/\(10^{6}\) hrs) | MTBF (hours) | Incidence (%) |
| --- | --- | --- | --- |
| Ground control system | 2.00 | 500,000.0 | 6.62 |
| Mainframe | 2.77 | 360,984.8 | 9.16 |
| Power plant | 9.94 | 100,603.6 | 32.88 |
| Navigation system | 9.41 | 106,269.9 | 31.13 |
| Electronic system | 5.01 | 199,600.8 | 16.57 |
| Payload | 1.10 | 909,090.9 | 3.64 |
| TOTAL | 30.23 | | |

Table 2: Reliability information of a commercial drone (from (Petritoli et al., 2018))
In the case study described in Section 4, we will create 3D virtual risk terrains for catastrophic failures such as LOC, because of improved accessibility to models and evidence in the literature. Since most relevant works in the literature have adopted \(10^{-5}\) to \(10^{-6}\) per flight hour for catastrophic system or component failures, and considering the existence of uncertainty in this process, we will conduct a failure rate sensitivity analysis by using a range of failure rates and then compare the results.
### Recovery Model
The recovery model characterizes the uncertainty in the UAV's ability to recover to a nominal or degraded operational state immediately after the failure occurs. In general, there are two types of recovery from UAV failure or incident. The first type of recovery refers to the UAV's ability to execute an emergency landing given the occurrence of a failure. This requires the UAV system (with or without a remote pilot) to at least maintain partial function such that a crash or other catastrophic consequences can be avoided. Under certain failure modes, it is possible for the UAV to land safely without causing damage to any EoV on the ground. When this type of recovery is attained, there is zero risk to EoVs on the ground and the remaining sub-models are not invoked. The second type of recovery refers to the mitigation of ground impact via contingency equipment such as air bags. Such capabilities are helpful in mitigating damage when an uncontrollable descent is unavoidable, which can happen under certain failure modes such as LOC. The recovery model in this framework considers the first type of recovery, while the effect of ground impact mitigation equipment is considered later in the harm model.
The recovery model is built upon the existence and reliability of a failure recovery system. It depends on factors such as the failure type, contingency capability, and the success rate of the recovery system. When determining the amount of risk reduction that would be achieved through the failure recovery system, very few works in the literature have provided a quantitative assessment. One significant feature of the recovery model in this framework is the space it provides to incorporate state-of-the-art control and robotics technologies. New developments in both hardware and software could facilitate the first type of recovery and reduce the probability of a crash during particular failure types or incidents. For example, at the start of UDS, the recovery capability in the control system could restart the engine to avoid further descent. When a component failure (e.g., rotor, blade) occurs, advanced control algorithms could help maintain an acceptable flight altitude, resulting in a safe landing. When a loss of communication happens, some UAVs can rely on their sensing and perception capabilities to make decisions that protect public safety. However, when a critical failure occurs, such as LOC or a massive component failure, the probability of success for the first type of recovery is close to zero, i.e., \(P_{R}(\text{the recovery is unsuccessful})=1\). In the case study described in this paper, we build risk terrains for the LOC failure mode, which is the worst-case scenario of UAV failure for public safety. During LOC, the UAV cannot recover to a safe landing. Use of contingency equipment, such as the correct deployment of a parachute, can increase the probability of successfully mitigating the ground impact. In the case study we assume the following recovery model for LOC. When the UAV is not equipped with a parachute:
\[P_{R}\left(\text{Unsuccessful recovery}|\text{Multi-rotor UAV},\text{ LOC},\text{Without parachute},\mathbf{p}_{0}\right)=1 \tag{11}\]
which follows the model format in Equation (2). The recovery model with a parachute takes into account the AGL altitude \(h_{0}\) of the UAV operation. The deployment of a safety parachute system on UAVs has been considered by many studies and designs in the literature (Al-Madani et al., 2018; Hasan et al., 2019; Panta et al., 2018). Ballistic parachute deployment requires less deployment time and is therefore well suited to multi-rotor UAVs in the urban environment. Each parachute system has a minimum deployment altitude, such that the chance of a successful recovery (in terms of not damaging EoVs on the ground) increases with altitude. At the time of this writing, the minimum deployment altitude for relevant parachute systems ranges from 20 m to 50 m. Assuming that, under LOC, a UAV with a parachute system has a maximum probability of 50% of avoiding a crash, the sigmoid function in Figure 2 models the probability of successful recovery versus altitude. Consequently, the probability of unsuccessful recovery is given by
\[P_{R}\left(\text{Unsuccessful recovery}|\text{Multi-rotor UAV},\text{ LOC},\text{With parachute},\mathbf{p}_{0}\right)=1-\frac{0.5}{1+1.35\exp(45-h_{0})} \tag{12}\]
The conditional probabilities in the recovery model can be further updated when more information on the failure recovery system's effectiveness or Subject-Matter Expert (SME) opinions are available.
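As a minimal illustration, the Python sketch below evaluates the recovery model of Equations (11) and (12); the function name and interface are ours and are not part of the framework itself.

```python
import math

def p_unsuccessful_recovery(h0: float, parachute: bool) -> float:
    """Probability that a multi-rotor UAV under LOC fails to recover,
    given the failure altitude h0 (m AGL). Implements Eqs. (11)-(12)."""
    if not parachute:
        return 1.0  # Eq. (11): no recovery is possible under LOC
    # Eq. (12): the parachute caps the recovery probability at 50%,
    # approached as the altitude rises above the deployment threshold.
    return 1.0 - 0.5 / (1.0 + 1.35 * math.exp(45.0 - h0))

# Example: at 100 m AGL, a parachute halves the crash probability.
print(p_unsuccessful_recovery(100.0, parachute=True))   # ~0.5
print(p_unsuccessful_recovery(100.0, parachute=False))  # 1.0
```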
### Impact Location Model
The impact location model is a critical component in the integrated risk-based approach and one that dominates the spatial pattern of the 3D virtual risk terrain. Formally, an impact location model in UAS risk analysis characterizes the spatial-temporal uncertainty in the location and area of a UAV's ground impact once a failure occurs (Washington et al., 2017). In our approach, the impact location model is invoked under the assumption that the result of the recovery model is unsuccessful - the system (with or without human interference) is unable to recover to a nominal or degraded operational state such that the UAV can safely land. The potential impact locations due to system failure are dependent on the crash trajectory of the UAV, which can be influenced by the following five factors:
* Type of UAV: a significant influencer on the impact distribution. For example, a fixed-wing UAV without power has the capability to glide a certain distance before impacting with the ground; a multi-rotor UAV under various failure modes is likely to start a free fall, leading to a smaller impact area.
* Failure mode: an indicator of the remaining controllability that the UAV system or the remote pilot can maintain. The simple ballistic trajectory model could be appropriate for DOJC or LOC; more complex dynamic models are required for CFIT or UDS (Washington et al., 2017).
* Initial conditions at failure: the initial altitude and locations are key factors in determining the impact range; the initial velocity determines the skewness or movement of the impact distribution.
* Environmental conditions: prevailing wind condition is an influencer on the trajectory; urban topography defines the physical constraints in the 3D space.
* Contingency capabilities: such as parachutes, air bags, and other damage control functions, will also influence the spatial and temporal patterns of the impact distribution.
Because (1) the crash trajectory is jointly determined by multiple factors in this list, and (2) considerable variabilities and uncertainties exist in these factors, high-fidelity modeling of the ground impact location of a UAV under off-nominal conditions has been a challenging task. In the meantime, the importance of the problem has made it an active research area in the UAV and system safety communities. Due to the lack of experimental and real-world data for applying a statistical approach to characterize the impact model, most of the existing works in the literature utilize a combination of modeling-and-simulation and analytical (probabilistic) approaches for estimation. The simulation approach predicts the crash trajectory of the UAV using aerodynamics models, flight dynamics models, and laws of physics. The probabilistic approach
Figure 2: The probability of successful recovery vs. altitude when the multi-rotor UAV is deployed with parachute under LOC
accounts for the epistemic and aleatory uncertainties in the process. A range of specific impact location models are inevitably required to capture the diversity and uncertainty in the impact location for different combinations of factors above. Some works (Dalamagkidis et al., 2008; Foster and Hartman, 2017) modeled a single point impact with a low level of uncertainty. Because of the high degree of uncertainty associated with the potential impact locations under the occurrence of a failure, we focus on the impact location models in the literature which use a probability distribution to characterize the impact location of a UAV.
The current literature includes impact location models for both fixed-wing and multi-rotor configurations. As previously mentioned, we use multi-rotor UAV in this work because its VTOL capability has a unique advantage to operate in complex urban environments. Most earlier works have been limited to ballistic descent, which is applicable only to catastrophic failure conditions. Some works (Ancel et al., 2017; Kim and Bae, 2022; Wu and Clothier, 2012) assumed uniform distribution of impact location inside a certain crash area for simplicity, which cannot reflect the higher impact probabilities at locations surrounding the crash trajectory. Among some of the representative works in recent years, (Aalmoes et al., 2015) used bivariate normal distribution to model the potential impact area of general types of system failure. (Haartsen et al., 2016) used flight simulation to determine the potential impact locations for quadcopter UAVs under various failure types and flight conditions and found that the impact areas are elliptical shaped, depending on the altitude and initial velocity. (Cour-Harbo, 2020) developed an analytical solution to estimate the 2D ground impact probability distribution for the ballistic descent of a UAV, while considering uncertainty sources from aircraft and wind parameters. (Lin and Shao, 2020) simulated Newton's laws of motion and Galileo's free fall to assess a UAV's crash probability density (CPD) under loss of power. (Man et al., 2022) used simulation to investigate the crash trajectory area for different failure modes of a Quadrotor UAV, such as when only one or two motors/blades fail with or without control system. They found that even when crashing to the ground is unavoidable, the control system helps reduce the size of the potential impact area.
Although variabilities exist in failure modes, initial flight conditions, and environmental conditions, from a probabilistic modeling perspective, we think that the impact location models can generally be classified as either a Gaussian model or a Non-Gaussian model. Below is a summary of the reasoning behind each of these two probabilistic models.
* Gaussian model: a Gaussian impact location model is useful in both individual and cumulative ground impact estimations. On the individual case, when the UAV is hovering (Man et al., 2022), during takeoff and landing, or when the initial velocity and wind play a weak role, it is most likely that its ground impact location is immediately below the initial failure location, since a multi-rotor UAV's crash trajectory is dominated by spin and autorotation while falling to the ground (Lin and Shao, 2020). On the cumulative case, researchers (Haartsen et al., 2016; Lum et al., 2011) also found that when simulating a large number of failure types and flight conditions, the vast majority of crashes occur in close proximity to the initial failure location.
* Non-Gaussian model: a non-Gaussian impact location model mainly applies to individual failure cases when the initial velocity and/or wind condition play a considerable role in the crash trajectory (Cour-Harbo, 2020; Primatesta et al., 2020). In these individual cases, it is unlikely that the ground impact location is immediately below the initial failure location. Instead, the UAV would travel along a declining horizontal trajectory such that the impact area is at a distance from the initial failure event. Then, a non-Gaussian probabilistic model is required to characterize the probable impact locations.
In this work, we propose two probabilistic frameworks to accommodate the Gaussian and non-Gaussian impact location models. Specifically, we add two new considerations in these frameworks to better serve the generation of 3D virtual risk terrains for UAS trajectory planning. The first consideration is the development of 3D impact location models. Each of the impact location models mentioned in this section is a 2D model, i.e., in a probabilistic sense, the density function of the ground impact location \(f(x,y)\) is always obtained for a specific initial altitude, such that it is only a function of the 2D initial location \((x_{0},y_{0})\). Here, we add altitude as the third dimension and propose model forms that are functions of the 3D initial location \((x_{0},y_{0},h_{0})\), where \(h_{0}\) is the initial altitude. The 3D impact location models can better capture the variation in an impact area as altitude changes. The second consideration is that, for each initial failure location \((x_{0},y_{0},h_{0})\), we
aim to obtain an impact location model for all flight directions (360 degrees). This is because the resulting 3D virtual risk terrain used for flight trajectory planning should be independent of flight direction, i.e., a UAV can pursue any flight direction at a "safe" location in the air. The following two subsections introduce details of the two probabilistic frameworks: the 3D Gaussian impact model, and the Rayleigh impact model for the non-Gaussian case. Both parametric models are flexible enough to accommodate variations of different UAV types and flight conditions. The model parameters can be obtained by fitting the model to real-world experiment or simulation data.
#### 3.4.1 Gaussian Impact Model
The 3D Gaussian impact model captures cases where the majority of crashes occur in close proximity to the ground location that is immediately below the initial failure location. Because the ground impact area generally becomes larger with increasing initial altitude, a visualization of the 3D Gaussian impact model is similar to a 3D cone, as displayed in the left plot of Figure 3. Below we define the Gaussian impact model.
Suppose that the UAV's failure happens at location \((\mathbf{p}_{0},h_{0})\), where \(\mathbf{p}_{0}=(x_{0},y_{0})\) and \(h_{0}\) is the AGL altitude in the 3D space, and that the UAV itself is unable to recover. We then use the following modified form of the multivariate Gaussian distribution to represent the impact location density on the ground. Letting \(\mathbf{p}=(x,y)\), we have
\[f_{G}(\mathbf{p}|\mathbf{p}_{0},\boldsymbol{\Sigma})=\frac{1}{\sqrt{(2\pi)^{2 }|\boldsymbol{\Sigma}|}}\exp\left(-\frac{1}{2}\left(\mathbf{p}-\mathbf{p}_{0} \right)^{\top}\boldsymbol{\Sigma}^{-1}\left(\mathbf{p}-\mathbf{p}_{0}\right)\right) \tag{13}\]
where \(\boldsymbol{\Sigma}=f(h_{0})\boldsymbol{\Sigma}_{0}\), \(f(h_{0})\) has the form \(f(h_{0})=\alpha h_{0}^{2}\), and \(\alpha\) is a scaling constant. We further assume that the impact model has circular symmetry. Therefore, \(\boldsymbol{\Sigma}_{0}=\mathbf{I}\), and \(\boldsymbol{\Sigma}=\alpha h_{0}^{2}\mathbf{I}\). Equation (13) then reduces to the simplified model form
\[f_{G}(\mathbf{p}|\mathbf{p}_{0},h_{0})=\frac{1}{\sqrt{(2\pi)^{2}\alpha^{2}h_{ 0}^{4}}}\exp\left(-\frac{1}{2\alpha h_{0}^{2}}\left(\mathbf{p}-\mathbf{p}_{0} \right)^{\top}\left(\mathbf{p}-\mathbf{p}_{0}\right)\right) \tag{14}\]
Now, it can be seen that the only unknown parameter in Equation (14) is the scaling constant \(\alpha\), which depends on properties such as vehicle type and flight conditions. In this work, we estimate \(\alpha\) using results and data from the existing literature. With the bivariate Probability Density Function (PDF) in Equation (14), the next step is to calculate the probability that the UAV crashes into a specific area on the ground. This can be computed by integrating the bivariate PDF over the specific area, as shown in the right plot of Figure 3. In this work, we define the specific area of interest as a square with an area of 4 square meters
Figure 3: Illustrative Gaussian impact model: change of impact distribution with altitude (left), and calculation of crash probability into a specific area through integration (right).
in the proximity of point \(\mathbf{p}\). Let \(\delta\) be half the side length of the square, i.e., the distance from \(\mathbf{p}\) to the midpoint of each side (\(\delta=1\) when the area is 4 m\({}^{2}\)); we can then express the probability as follows
\[P_{G}^{g}(\mathbf{p}|\mathbf{p}_{0},h_{0})=\frac{1}{\sqrt{(2\pi)^{2}\alpha^{2}h_ {0}^{4}}}\int_{x-\delta}^{x+\delta}\int_{y-\delta}^{y+\delta}\exp\left(-\frac{1 }{2\alpha h_{0}^{2}}\left(\mathbf{p}-\mathbf{p}_{0}\right)^{\top}\left(\mathbf{ p}-\mathbf{p}_{0}\right)\right)dydx \tag{15}\]
where the value can be obtained through numerical integration methods and tools. The Gaussian model in Equation (15) computes the probability that the UAV will crash into the small square area around ground point \(\mathbf{p}\) if the initial failure location is \((\mathbf{p}_{0},h_{0})\).
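As an illustration, the following Python sketch evaluates Equation (15) by numerical integration; the value of \(\alpha\) is the one reported in the case study of Section 3.4.3, and the function interface is our own.

```python
import numpy as np
from scipy.integrate import dblquad

def gaussian_impact_prob(p, p0, h0, delta=1.0, alpha=0.0244):
    """Eq. (15): probability that the UAV crashes into the
    2*delta x 2*delta square around ground point p, given an
    unrecoverable failure at (p0, h0); alpha from Section 3.4.3."""
    var = alpha * h0 ** 2                 # Sigma = alpha * h0^2 * I
    norm = 1.0 / (2.0 * np.pi * var)      # 1 / sqrt((2 pi)^2 |Sigma|)

    def density(y, x):                    # bivariate Gaussian PDF, Eq. (14)
        dx, dy = x - p0[0], y - p0[1]
        return norm * np.exp(-(dx * dx + dy * dy) / (2.0 * var))

    prob, _ = dblquad(density, p[0] - delta, p[0] + delta,
                      lambda x: p[1] - delta, lambda x: p[1] + delta)
    return prob

# Probability of impact within the 4 m^2 square directly below a
# failure at 50 m altitude:
print(gaussian_impact_prob((0.0, 0.0), (0.0, 0.0), 50.0))
```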
#### 3.4.2 Rayleigh Impact Model
The 3D Rayleigh impact model captures cases where the impact area is at a horizontal distance from the initial failure event. When different flight directions are considered, the nature of this model is a multi-modal spatial impact distribution. Figure 4 provides a simple illustration of the Rayleigh impact model on a 2D plane. Suppose that the initial failure location is at \(\mathbf{p}_{0}\) in the air; the mean location of the impact location density then depends on the direction of the initial velocity. Assuming level flight, the direction of the initial velocity can be either left or right. Therefore, the impact area has two possibilities: it is centered around a location at a certain horizontal distance from \(\mathbf{p}_{0}\), either to the left or to the right. This results in a bi-modal impact distribution. When this simple illustration is extended to the complete 3D case, where we consider all 360 degrees in the horizontal plane as possible flight directions, the Rayleigh impact model for each initial altitude is analogous to a 2D ring on the ground. In this concept, the most likely impact locations reside on the circumference at distance \(\Delta\) from the origin immediately below \(\mathbf{p}_{0}\). Below we define the Rayleigh impact model.
The Rayleigh impact model has the following analytical form. Given the failure location \((\mathbf{p}_{0}=(x_{0},y_{0}),h_{0})\), the Rayleigh probability density function for the crash location is circularly symmetric around the origin \((\mathbf{p}_{0}=(x_{0},y_{0}),0)\) that is directly below the failure location. For any point on the ground \(\mathbf{p}=(x,y)\), we have
\[f_{R}(\mathbf{p}|\mathbf{p}_{0},\sigma,\Delta)=\frac{1}{2\pi\sigma^{2}}\exp \left(-\frac{1}{2\sigma^{2}}\left(\|\mathbf{p}-\mathbf{p}_{0}\|_{2}-\Delta \right)^{2}\right) \tag{16}\]
where \(\Delta=\beta h_{0}\) is the radial displacement of the ring from the origin, and \(\sigma=l(h_{0})=\gamma h_{0}\) is the scale parameter controlling the radial spread of the impact distribution around that ring. Therefore, Equation (16) can be further written as
\[f_{R}(\mathbf{p}|\mathbf{p}_{0},h_{0})=\frac{1}{2\pi\gamma^{2}h_{0}^{2}}\exp \left(-\frac{1}{2\gamma^{2}h_{0}^{2}}\left(\|\mathbf{p}-\mathbf{p}_{0}\|_{2}- \beta h_{0}\right)^{2}\right) \tag{17}\]
Figure 4: Illustrative Rayleigh impact model: change of impact distribution with altitude and flying direction.
For the Rayleigh impact model, two unknown parameters \(\beta\) and \(\gamma\) need to be obtained by fitting statistical models to data from computer simulations or real-world flight tests for specific vehicle types and flight conditions. As with the Gaussian impact model, in this work we obtain these two parameters from the crash distributions of representative aerial vehicles in the literature. Under the Rayleigh model, the probability that the UAV crashes into a specific area on the ground is computed through the integration
\[P_{G}^{r}(\mathbf{p}|\mathbf{p}_{0},h_{0})=\frac{1}{2\pi\gamma^{2}h_{0}^{2}} \int_{x-\delta}^{x+\delta}\int_{y-\delta}^{y+\delta}\exp\left(-\frac{1}{2\gamma ^{2}h_{0}^{2}}\left(\|\mathbf{p}-\mathbf{p}_{0}\|_{2}-\beta h_{0}\right)^{2} \right)dydx \tag{18}\]
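A hedged sketch of Equation (18) follows, mirroring the Gaussian case above; \(\beta\) and \(\gamma\) are the fitted values reported in Section 3.4.3, and the helper name is illustrative.

```python
import numpy as np
from scipy.integrate import dblquad

def rayleigh_impact_prob(p, p0, h0, delta=1.0, beta=0.2790, gamma=0.0918):
    """Eq. (18): crash probability into the 2*delta x 2*delta square
    around ground point p under the Rayleigh impact model."""
    sigma2 = (gamma * h0) ** 2

    def density(y, x):                    # ring-shaped PDF, Eq. (17)
        r = np.hypot(x - p0[0], y - p0[1])
        return np.exp(-(r - beta * h0) ** 2 / (2.0 * sigma2)) / (2.0 * np.pi * sigma2)

    prob, _ = dblquad(density, p[0] - delta, p[0] + delta,
                      lambda x: p[1] - delta, lambda x: p[1] + delta)
    return prob
```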
Figure 5 displays a set of Rayleigh impact models in which the parameters \(\sigma\) and \(\Delta\) of Equation (16) vary. With \(\mathbf{p}_{0}\) at (0, 0), the circular bivariate Rayleigh distribution at different parameter settings illustrates how this framework can accommodate non-Gaussian impact location densities with different horizontal displacements and variances.
#### 3.4.3 The Individual Risk Terrain
Both the Gaussian and Rayleigh impact models are parametric probabilistic models. The Gaussian impact model in Equation (14) has one parameter \(\alpha\); the Rayleigh impact model in Equation (17) has two parameters \(\beta\) and \(\gamma\). These parameters can be obtained by analyzing real-world experiment data or simulation data. A qualified dataset must have: (1) information regarding the mode and dispersion of the potential impact locations at a certain altitude, and (2) results from multiple altitudes. In the case study, we use a medium sized multi-rotor delivery UAV that weighs 25 kg and normally flies at a speed of 10 m/s. Although the current regulation requires that such UAVs only fly up to a maximum of 400 feet (122 m), we will also explore the flight risk above this altitude to gain better insights on urban airspace management for safety purposes. To pursue close estimations to such conditions, we use simulation results from (Lin and Shao, 2020) and (Cour-Harbo, 2020) to obtain the parameter(s) for the Gaussian and Rayleigh impact models, respectively. Both works investigated, at multiple altitudes, the 2D ground impact distributions of multi-rotor UAV under a major in-flight incident, such as (near) complete loss of lift. The three model parameters used in the case study are: \(\alpha=0.0244\), \(\beta=0.2790\), \(\gamma=0.0918\).
With the impact location models, we can build some illustrative visualizations of the individual risk terrains to demonstrate the spatial variation of ground impact risk. Figure 6 displays a group of 2D illustrations of the individual risk terrain. The left plot of Figure 6 depicts a grid in the 2D space with dimensions 20 m \(\times\) 60 m. Suppose that a pedestrian is at \(\mathbf{p}=(0,0)\), while a quadrotor UAV can fly in the space above the pedestrian. In this example a UAV impacts at a point on the ground if it falls within 1 meter of the point (for \(\mathbf{p}\), this interval is \([-1,1]\)). For each location \(\mathbf{p}_{0}\) on the grid, with an impact location model, the UAV's risk to \(\mathbf{p}\) is defined as
\[R^{i}=P(\text{Impact location is }\mathbf{p}|\text{Unrecoverable failure happens at }\mathbf{p}_{0}) \tag{19}\]
Figure 5: Illustrative examples of the circular bivariate Rayleigh distribution at different parameter settings
The individual risk terrain for \(\mathbf{p}\) consists of such probability values at all points on the grid. The middle plot of Figure 6 visualizes the individual risk terrain under the Gaussian impact location model, where the black dashed lines indicate contours at levels of 0.05 and 0.1. Under the Gaussian impact location model, locations with the highest risks are the grid points that are immediately above and not too distant from (0, 0). Along the vertical line that goes through (0, 0) - the "centerline", the risk is below 0.1 when the altitude is higher than 25 meters; the risk further decreases to below 0.05 when the altitude is over 50 meters. The right plot of Figure 6 visualizes the individual risk terrain under the Rayleigh impact location model, which shows a different pattern. Under the Rayleigh impact location model, points on the "centerline" generally have very low risks. The high-risk regions and the contours become oblique, and the two symmetric regions capture risk terrains for both flight directions - left and right.
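A grid evaluation such as the one sketched below (reusing `gaussian_impact_prob` from the earlier sketch; grid dimensions follow Figure 6, while the 2 m spacing is our own choice) reproduces the individual risk terrain of Equation (19).

```python
import numpy as np

# Evaluate Eq. (19) on the 20 m x 60 m vertical grid of Figure 6:
# the risk posed to the pedestrian at p = (0, 0) by a UAV at each
# candidate position (x0, h0) in the plane above the street.
xs = np.arange(-10.0, 10.0 + 1e-9, 2.0)   # horizontal offsets (m)
hs = np.arange(2.0, 60.0 + 1e-9, 2.0)     # altitudes (m AGL)
terrain = np.array([[gaussian_impact_prob((0.0, 0.0), (x, 0.0), h)
                     for x in xs] for h in hs])
# Contours of `terrain` can be compared with the dashed 0.05 and 0.1
# contours in the middle plot of Figure 6.
```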
### Impact Stress Model
The impact stress model describes the probability or uncertainty in the impact's consequential harmful conditions (stresses) at a given location and time (Washington et al., 2017). Common metrics for measuring stress include the kinetic energy (KE), momentum, and energy density of the UAV. The levels of impact stress can then be related to the harm levels through the harm model. The impact stress model is mainly characterized by the type of UAV, initial failure location and ground location, operating conditions (e.g., initial velocity), and contingency equipment (e.g., air bags). More complex stress models also consider secondary effects such as debris scattering, explosions, and the release of hazardous materials. Most stress models in the literature used the KE associated with the UAV's impact as the primary impact stress metric. Thus, the primary harm mechanism considered is trauma caused through blunt force impact. Sources of uncertainty in the impact stress model include the mass, speed, and orientation at the point of impact. Because of the limited data available to develop the impact stress model, physics-based models have been used to determine the amount of KE potentially transferred on impact. Below we follow a procedure used in some recent works (Koh et al., 2018; Pang et al., 2022) to calculate the impact KE.
The fall of a UAV from the initial failure location is governed by two main forces: (1) the gravitational force \(F_{g}=mg\), where \(m\) is the mass of the UAV and \(g=9.8\) m/s\({}^{2}\), and (2) the (vertical) drag force \(F_{d}\), which can be determined by
\[F_{d}=\frac{1}{2}\rho v_{\perp}^{2}SC_{D} \tag{20}\]
where \(\rho=1.225\) kg/m\({}^{3}\) at sea level is the air density, \(v_{\perp}\) is the vertical velocity of the falling UAV, \(S\) is the cross-sectional area of the UAV in the direction of falling, and \(C_{D}\) is the drag coefficient. Then, the acceleration of the falling UAV can be calculated as
\[a=\frac{F_{g}-F_{d}}{m}=g-\frac{\rho v_{\perp}^{2}SC_{D}}{2m} \tag{21}\]
Figure 6: Example of individual risk terrains in the 2D space: the risk terrain grid (left), risk terrain under Gaussian impact model (middle), and risk terrain under Rayleigh impact model (right)
When the initial vertical velocity is zero, the final impact velocity of the UAV falling from altitude \(h\) AGL (which approaches the terminal velocity as \(h\) grows) can be obtained as
\[u=\int_{0}^{T}\left(g-\frac{\rho v_{\perp}^{2}SC_{D}}{2m}\right)dt=\sqrt{\frac{2mg}{\rho SC_{D}}\left(1-\exp\left(-\frac{\rho SC_{D}h}{m}\right)\right)} \tag{22}\]
Finally, the falling UAV's KE at the ground impact location can be obtained as
\[K_{g}=\frac{1}{2}mu^{2}=\frac{m^{2}g}{\rho SC_{D}}\left(1-\exp\left(-\frac{\rho SC _{D}h}{m}\right)\right) \tag{23}\]
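A direct implementation of Equation (23), using the case-study parameters listed later in Table 5, might look as follows (the helper name is ours).

```python
import math

def impact_kinetic_energy(h, m=25.0, S=0.2, C_D=1.8, rho=1.225, g=9.8):
    """Eq. (23): kinetic energy (J) at ground impact for a UAV of mass
    m (kg) falling from altitude h (m) with zero initial vertical
    velocity; S (m^2), C_D, and rho (kg/m^3) set the drag term."""
    k = rho * S * C_D
    return (m ** 2 * g / k) * (1.0 - math.exp(-k * h / m))

# The 25 kg case-study UAV falling from 100 m impacts with ~11.5 kJ,
# approaching the ~13.9 kJ terminal-velocity limit for large h.
print(impact_kinetic_energy(100.0) / 1000.0)
```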
In addition to the impact stress model, some literature (Washington et al., 2017) also mentioned a similar term called "incident stress model", which describes the uncertainty in the magnitude of stress that is actually transferred to an EoV when a particular attenuating or amplifying factor is present. The attenuating factors include various types of shelters (e.g., building structures), vehicles, and personal protective equipment (e.g., helmets). The probability of people being protected by these attenuating factors varies with the space and time that the overflight occurs. Many impact models in the literature do not consider this dimension and assume that 100% of the falling UAV's KE is transferred to an EoV. In this work, we focus our consideration on two types of EoVs in an urban environment - pedestrians and vehicles. For pedestrians, we assume that no attenuating factor is present in daily situations. This framework can further accommodate the incident stress model and relevant considerations when more knowledge becomes available.
### Harm Model
The harm model describes the uncertainty in the level of harm/damage caused to an EoV by an impact stress. More specifically, it relates an impact stress of a certain magnitude to the type and severity of the unwanted outcome. Two key factors involved in a harm model are EoV type and harm mechanism. Common EoV types include people, animals, vehicles, other properties, and the environment. An individual EoV's characteristics, such as a person's height, weight, and age, can also affect the EoV's physical response to an impact stress. Common harm mechanisms include penetration and laceration for small multi-rotor UAVs, crushing and blunt force trauma for larger UAVs, as well as blast and burns (Washington et al., 2017). Compared to physical harms, psychological harms have not been adequately considered in the literature. The failure of a UAV could cause harm to an EoV through one or more harm mechanisms. The most commonly studied harm mechanism in UAS safety analysis is blunt force trauma (Shelley, 2016), which can cause serious head injuries such as skull fracture. Some works (CAS Authority, 2013; Weibel & Hansman, 2006) have also investigated certain aspects of cutting and penetration. It is more challenging to assess the combined effects of multiple harm mechanisms, which could happen when there is no dominating harm mechanism for some specific UAV configurations. It is also common in previous harm response investigations (CAS Authority, 2013; Feinstein et al., 1968) to assume a specific demographic EoV model, such as an adult male with average physical fitness.
Harm models in the literature are built upon expert judgment, historical data, and impactor studies. Models based on inputs from SMEs output fixed probabilities of fatality, such as 100% (Clothier et al., 2007; Weibel & Hansman, 2006) or 50% (Lum et al., 2011), for certain types of UAV or all UAV strikes. The use of models informed by historical data (CAS Authority, 2013; Melnyk et al., 2014) from accidents and incidents should take into account information such as the specific UAV type and configuration. Models based on experimental or simulation data (Ball et al., 2012; Dalamagkidis et al., 2008) can provide information on specific harm mechanisms, such as blunt force trauma, on specific body parts such as the head (Raymond et al., 2009). Overall, modeling harm caused by a falling object is a highly complex problem (Melnyk et al., 2014) with multiple sources of uncertainty. Every model in the literature has both its assumptions and limitations.
We next discuss two categories of harm models in the literature - fatality models and casualty models. Some harm models directly relate impact energy to the probability of fatality. (Shelley, 2016) modeled the
probability of fatality using the following logistic curve:
\[P_{1}(\text{fatality}|E_{i})=\frac{1}{1+\exp\left(-k(E_{i}-E_{0})\right)} \tag{24}\]
where \(E_{0}\) is the impact energy associated with a 50% probability of a fatality, \(E_{i}\) is the impact energy, and \(k\) is a constant. In a more recent work, (Primatesta et al., 2020) further included the sheltering factor and suggested the computation of the fatality rate using
\[P_{2}(\text{fatality}|E_{i})=\frac{1-k}{1-2k+\sqrt{\frac{\alpha}{\beta}}\left( \frac{\beta}{E_{i}}\right)^{\frac{1}{4C_{S}}}} \tag{25}\]
where \(k=\min(1,(\beta/E_{i})^{1/4C_{S}})\), \(C_{S}\in(0,1]\) is the sheltering coefficient, \(\alpha\) is the impact energy needed to cause 50% probability of fatality with \(C_{S}=0.5\), and \(\beta\) is the impact energy required to cause fatality as \(C_{S}\) approaches zero. In addition to the probability of fatality, works (Ball et al., 2012; Burke et al., 2011; Dalamagkidis et al., 2009; Melnyk et al., 2014) also suggested energy level cut-offs for the nonlethal impact KE, which are mostly in the range of 70 to 90 J.
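For reference, the two fatality models of Equations (24) and (25) translate directly into code; since the text does not fix values for \(E_{0}\), \(k\), \(\alpha\), or \(\beta\), they are left as arguments in this sketch.

```python
import math

def p_fatality_logistic(E_i, E_0, k):
    """Eq. (24): logistic fatality curve (Shelley, 2016); E_0 is the
    impact energy giving a 50% fatality probability."""
    return 1.0 / (1.0 + math.exp(-k * (E_i - E_0)))

def p_fatality_sheltered(E_i, alpha, beta, C_S):
    """Eq. (25): fatality model with sheltering (Primatesta et al.,
    2020); C_S in (0, 1] is the sheltering coefficient."""
    exponent = 1.0 / (4.0 * C_S)
    k = min(1.0, (beta / E_i) ** exponent)
    return (1.0 - k) / (1.0 - 2.0 * k
                        + math.sqrt(alpha / beta) * (beta / E_i) ** exponent)
```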
On the other hand, studies on non-fatal injuries of varying severity (Arterburn et al., 2017; Barr et al., 2017), also called casualty models, are useful resources for assessing the harm of a falling UAV. Different criteria that relate the impact KE to the severity of various types of injuries have been established. In this work we highlight and use two most relevant harm criteria: Abbreviated Injury Scale (AIS) and Blunt Criterion (BC). The AIS (Gennarelli et al., 2008; Greenspan et al., 1985) is the most widely-used criterion to assess the severity of individual injury based on medical diagnosis. This global, anatomical-based coding system, initially proposed to classify injuries sustained in vehicle accidents by the Association for the Advancement of Automotive Medicine (AAAM) and later on widely adopted by other industries, defines the severity of injuries throughout the body. Table 3 includes details of six injury classifications, where a higher AIS level indicates an increased threat to life. An advantage of the AIS system, as a mature assessment tool, is that the AIS score can be calculated or converted from a wide range of impact stress metrics, such as KE, forces, and acceleration. In the safety analysis of UAS systems and beyond, most literature (Arterburn et al., 2017; CAS Authority, 2013; Magister, 2010) used AIS level 3 as a reference level, i.e., any injury greater than AIS level 3 is considered substantial.
The BC correlates the KE deforming the body on impact with the body's ability to tolerate the energy on impact (Bir and Viano, 2004; Magister, 2010; Sturdivan et al., 2004), and has been extensively used to predict the level of injury due to blunt impacts. A model to compute the magnitude of BC is given by
\[BC=\ln\left(\frac{E_{i}}{TDM^{\frac{1}{3}}}\right)=\ln\left(\frac{E_{i}}{kDM^{\frac{2}{3}}}\right) \tag{26}\]
where \(M\) (kg) is the mass of the struck body, \(T\) (cm) is the combined thickness of the soft tissue, \(D\) (cm) is the UAV characteristic diameter (impact diameter), and \(k\) is the coefficient for determining the body wall thickness via \(T=kM^{\frac{1}{3}}\), with \(k=0.593\) for females and \(k=0.711\) for males. As a stronger criterion than KE, the BC
| AIS Code | Injury | Example | Probability of Death |
| --- | --- | --- | --- |
| 1 | Minor | Superficial Laceration (Skin cut) | 0% |
| 2 | Moderate | Minor Skull Fracture | 1–2% |
| 3 | Serious | Major Skull Fracture | 8–10% |
| 4 | Severe | Severe Life-Endangering Fracture | 5–50% |
| 5 | Critical | Ruptured Liver with Tissue Loss | 5–50% |
| 6 | Unsurvivable | Death | 100% |

Table 3: Levels and details in the Abbreviated Injury Scale (AIS)
is recognized by previous works (Magister, 2010) as a suitable UAS design and airworthiness criterion for minimizing ground injuries in unsheltered populated areas. The final puzzle piece in the harm model is a relationship between BC and AIS. Researchers (Bir and Viano, 2004) studied injury data from ballistic impacts and developed a logistic regression model which predicts the probability of AIS 2-3 injuries using BC. The logistic regression model is given by
\[P(AIS=3)=\frac{1}{1+\exp\left(17.76-38.50BC\right)} \tag{27}\]
By combining Equations (26) and (27), we obtain an estimation of the probability that an individual will sustain an AIS level 3 injury given the impact KE of the UAV. This casualty model is used as the human injury model in this risk-based framework.
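Combining Equations (26) and (27) with the case-study body parameters from Table 5 (M = 70 kg, k = 0.652, D = 50 cm) gives the following sketch of the human injury model; as is conventional in the BC literature, we assume \(E_{i}\) in joules with \(T\) and \(D\) in centimeters.

```python
import math

def blunt_criterion(E_i, M=70.0, D=50.0, k=0.652):
    """Eq. (26): Blunt Criterion for impact energy E_i (J) on a struck
    body of mass M (kg); T = k * M^(1/3) is the body wall thickness
    (cm) and D is the impact diameter (cm)."""
    T = k * M ** (1.0 / 3.0)
    return math.log(E_i / (T * D * M ** (1.0 / 3.0)))

def p_ais3_injury(E_i):
    """Eq. (27): probability of an AIS level-3 injury given the impact
    kinetic energy E_i (J), via the Bir and Viano (2004) regression."""
    bc = blunt_criterion(E_i)
    return 1.0 / (1.0 + math.exp(17.76 - 38.50 * bc))
```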
Another EoV we consider in this risk model is ground vehicles. There exist two types of treatments when considering the impact of UAV failure on ground vehicles. In the first treatment, the focus is on the probability that people sitting in a vehicle will get injured; vehicles are treated as people with a sheltering factor. In the second treatment, the focus is on the damage to the vehicle and possible secondary hazards, such as vehicle accidents. Research efforts on the evaluation of UAVs' impacts on ground vehicles are still at an early stage. Some existing studies have utilized methods such as the Finite Element Method (FEM) (Che Man et al., 2022) and collision tests (Lee et al., 2019; Zhang et al., 2021) to investigate the damage resulting from a UAV collision with vehicles and glass panels. The extent of damage can be influenced by many factors such as the UAV type, UAV weight, impact angle, size and material of the impact location, and temperature. Therefore, a mature and generalized model is still lacking. Because our modeling scope considers small to medium sized multi-rotor UAVs, a great amount of their impact energy can be absorbed by the windshield or metal structures of the vehicle, such that penetration rarely happens (Che Man et al., 2022). As a result, the probability that the UAV impact will cause direct (and serious) harm to people in the car is very low.
However, if a UAV crashes into a car windshield, the damage could reduce driver visibility and lead to a serious traffic accident. This consideration is utilized as the criterion to assess the level of damage to ground vehicles. Using the Impact Effect Assessment (IEA) proposed by EASA in their 'Drone Collision' Task Force (European Aviation Safety Agency, 2016), standards for three drone collision damage levels (High, Medium, and Low) are displayed in Table 4. We refer to the IEA standards and use Medium level damage (similar to AIS level 3 in the human case) as the threshold for serious damage to ground vehicles.
By referring to results and model forms from (Che Man et al., 2022) and (Lee et al., 2019), we use a sigmoid function to model the probability that a UAV crash will cause medium level damage to a car windshield, assuming that the vehicle speed is 50 km/h (31 mi/h) in an urban setting and that the impact angle is 90 degrees. The ground vehicle damage model is given by
\[P(\text{Medium level damage})=\frac{1}{1+0.5\exp\left(6-5E_{i}\right)} \tag{28}\]
where \(E_{i}\) is the impact energy (KE in kJ) of the UAV. This ground vehicle damage model is based on the impact energy of the UAV, while some works (e.g., (Lee et al., 2019)) also suggested the use of BC adjusted to car windshield. Figure 7 displays the shape of this function within 0 to 2 kJ. For example, a multi-rotor UAV with a weight of 2 kg and an impact velocity 40 m/s results in an impact KE of 1.6 kJ, which has a probability of 0.937 to cause medium level damage to a car windshield under this representative condition.
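Equation (28) also translates directly into code; the example reproduces the 2 kg / 40 m/s illustration from the text.

```python
import math

def p_windshield_damage(E_kJ):
    """Eq. (28): probability of medium-level (IEA) windshield damage
    for impact kinetic energy E_kJ in kilojoules, assuming a vehicle
    speed of 50 km/h and a 90-degree impact angle."""
    return 1.0 / (1.0 + 0.5 * math.exp(6.0 - 5.0 * E_kJ))

print(p_windshield_damage(1.6))  # ~0.937, as in the text
```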
| Component/Effects | High | Medium | Low |
| --- | --- | --- | --- |
| General Components | Penetration, major deformation, part detachment | No penetration but limited deformation | Only dents or scratches |
| Windshield | Penetration or total loss of visibility | No penetration, partial loss of visibility | No or limited damage, nonsignificant loss of external visibility |

Table 4: Impact Effect Assessment (IEA) at Component Level
### 3D Urban Model
This sub-model aims to develop a 3D CAD model for a complex real-world urban environment. Here, the preparation of a 3D urban model involves two steps. The first step is to extract urban information in the form of geospatial data. In this work, we obtain the general building and terrain information of a representative city from the online platform CADMAPPER. CADMAPPER is a tool commonly utilized by architects, urban planners, and designers to create 3D CAD models of a city's terrain, buildings, roads, etc. CADMAPPER can transform data from public sources such as OpenStreetMap (Haklay and Weber, 2008), NASA, and USGS into organized CAD files. In the second step, the urban information from CADMAPPER is further processed with the software Autodesk 3ds Max. This step modifies the model format and refines the 3D urban model for better usage and visualization purposes.
Although the proposed approach also applies to suburban and rural environments, in this study we choose a complex urban scene for demonstration. Our case study focuses on a downtown urban model with tall and high-density buildings. Figure 8 displays the 3D urban model used in this study, which is from the Chicago downtown area. The left plot of Figure 8 shows the 3D axonometric view from CADMAPPER; the right plot of Figure 8 shows the 2D traffic view of the same region from Google Maps. The model is part of the vibrant and famous Chicago Loop area, a neighborhood that is comprised of high-rises and a combination of various facilities commonly found in a city. The dimensions of the model are (approximately) 500 m \(\times\) 250 m \(\times\) 250 m. Although our method is scalable to larger urban models (as demonstrated in Section 4), this model size is the most appropriate for data visualization and comparison. On the distribution of buildings,
Figure 8: Views of the representative 3D urban model used in this study
Figure 7: The probability of medium level damage to the car windshield vs. impact energy
this model has more high-rises on the periphery and less height and density in the center, which is beneficial for the effective visualization of the 3D virtual risk terrains.
### Exposure Model
The exposure model estimates the probability of the presence of an EoV at time \(t\) and location \(\mathbf{p}\) in the evaluation domain. The EoVs involved in a UAS operation fall into three classifications (Clothier et al., 2018). First parties are individuals and assets that are directly involved in the operation; second parties are ones that are not directly involved in the operation but gain direct benefits from its usage; third parties are ones that are neither involved in, nor derive any direct benefit from, the operation of the UAS. This study (and exposure model) focuses on third parties, which mainly include pedestrians and vehicles on the ground. The exposure model takes into account the population density (\(P\)), the vehicle density (\(V\)), and the temporal effect (\(T\)). A general mathematical representation of the exposure model is given by:
\[E\left(\mathbf{p},t\right)=P(\mathbf{p})T_{p}(t)+V(\mathbf{p})T_{v}(t) \tag{29}\]
where:
* \(E\left(\mathbf{p},t\right)\) is the exposure at location \(\mathbf{p}\) and time \(t\).
* \(P(\mathbf{p})\) is the base population (or crowd, pedestrian) density at location \(\mathbf{p}\).
* \(T_{p}(t)\) is a temporal factor for population density at time \(t\), derived from time series data.
* \(V(\mathbf{p})\) is the base vehicle density at location \(\mathbf{p}\).
* \(T_{v}(t)\) is a temporal factor for vehicle density at time \(t\), derived from time series data.
The temporal factor represents the relative density at a specific time point compared to the base density. For example, if the temporal factor for population density at 2 pm is 1.5, it means that the population density at 2 pm is 50% higher than the base population density. In our case study, we consider that people and vehicles have separate spaces (sidewalks and roads) in an urban environment. While most related works in the literature assumed uniform exposure models within a specific geographic area in a city, in our work we integrate a comprehensive exposure model to capture the spatial and temporal variations in the density of pedestrians and vehicles in a complex urban environment.
Because our exposure model needs to operate at a very fine-scale granular level (e.g., at the street level distribution of people and vehicles), the development of an accurate spatial and temporal model requires finer data and models on the complex dynamics and fluctuations of mobility in urban environments. Fortunately, in this big data era, data-driven approaches can provide powerful means of deriving high-fidelity exposure models. Urban transportation researchers have utilized data sources such as mobile phone data, traffic data, and techniques such as statistical models, deep learning, and computer vision to obtain accurate estimations of people and vehicle densities in a city. In this work, we are particularly interested in modeling and comparing the 3D virtual risk terrains at the following three representative times of a weekday:
* Midday (12 pm): This time represents the middle of the day when many people are taking their lunch breaks. Moderate densities of people and vehicles are expected on the ground.
* Evening rush hour (5 pm): This time is selected to represent the peak evening commuting hours, when people are leaving their workplaces or schools to return home. Just as with the morning commute, public transportation hubs and office-dense areas are expected to have high densities of people and vehicles on the ground.
* Night time (10 pm): This time is chosen to represent late evening activities. At this time most people stay indoors, and the ground densities of people and vehicles are at the lowest point of the day. We do not consider midnight and the early morning because there is hardly any business activity during those hours.
For the Chicago urban area in Figure 8, there is currently no publicly available data with the required spatiotemporal granularity. Precise pedestrian/crowd density data is hardly publicly available, because of the high cost of data collection and processing, and concerns about privacy and public security. Traffic data, on the other hand, is more readily available, although the granularity of many publicly available datasets is still inadequate for our analysis. Here, we utilize a combination of publicly available Chicago transportation datasets and the latest research outcomes on urban analytics to estimate the exposure model for this Chicago neighborhood case. We conduct separate analyses for the pedestrian density and the vehicle density.
The estimation of pedestrian/crowd density in urban areas has been advanced through mobile phone data analytics (Fu et al., 2021; Huo et al., 2021; Weppner and Lukowicz, 2011, 2013) and deep learning (Ding et al., 2020; Fu et al., 2015; Jiang et al., 2021; Zhu et al., 2020). We first use the existing data and results to estimate \(P(\mathbf{p})\), the base pedestrian density on the sidewalks of the Chicago downtown area. In this work, the base pedestrian density is set as the density at the peak hour of the day (5 pm). Using the computer vision literature (Fu et al., 2015; Weppner and Lukowicz, 2011, 2013), we first establish the following five classes of pedestrian density: very low (\(<0.05\) people/m\({}^{2}\)), low (0.05-0.1 people/m\({}^{2}\)), moderate (0.1-0.2 people/m\({}^{2}\)), high (0.2-0.3 people/m\({}^{2}\)), and very high (\(>0.3\) people/m\({}^{2}\)). The very high pedestrian density (\(>0.3\) people/m\({}^{2}\), up to 2 people/m\({}^{2}\)) usually occurs during crowd gathering activities such as a square rally. The high pedestrian density can also occur as a daily routine on the business streets of a densely populated city, such as some megacities in Asia, during rush hours. For the Chicago downtown area (and many similar Central Business District (CBD) areas in North America), the average pedestrian density during rush hours is at the moderate level (0.1-0.2 people/m\({}^{2}\)), excluding special events. Hence, we use 0.15 people/m\({}^{2}\) as the estimation of pedestrian density on the sidewalks of the Chicago downtown area during evening rush hour. For the estimation of \(T_{p}(t)\), the temporal factor for population density at time \(t\), it can be derived from several previous works in the literature (Fu et al., 2021; Huo et al., 2021; Jiang et al., 2021). These works reported time series data which uncover the trends of pedestrian density at different times of the day. Since we set the pedestrian density at 5 pm as the base, we have \(T_{p}(5\text{ pm})=1\). For the temporal factor at midday and night time, using data from (Huo et al., 2021) and (Jiang et al., 2021), we conclude that \(T_{p}(12\text{ pm})=0.5\) and \(T_{p}(10\text{ pm})=0.1\) are valid projections.
The estimation of traffic density is enabled by similar data types and techniques. Compared to pedestrian density, traffic density data is more publicly available. For example, the U.S. Department of Transportation (DOT) Highway Performance Monitoring System (HPMS) database contains the Chicago area traffic data. For example, Figure 9 is a set of visualizations which display the Chicago area traffic patterns at 12 pm, 5 pm, and 10 pm, respectively. However, at the current stage, the publicly available traffic data does not adequately provide the desired traffic density information. We therefore estimate the detailed traffic density in our selected area using a combination of publicly available traffic data and urban analytics results in the literature. On the vehicle density \(V(\mathbf{p})\), a commonly used measure in the literature is the number of vehicles per unit length (e.g., 100 m, 1000 m) (Raj et al., 2016; Sakai et al., 2019; Zeroual et al., 2019) or the number of vehicles per unit length per lane (Lim et al., 2022). Based on vehicles/100 m/lane (veh/100m/lane), we
Figure 9: Visualization of Chicago traffic patterns by time of the day: 12 pm (left), 5 pm (middle), 10 pm (right) (Sources: data from U.S. DOT HPMS Public Release, visualization from Illinois Vehicle Auto Insurance)
again establish five classes of vehicle density: very low (\(<\) 2 veh/100m/lane), low (2-5 veh/100m/lane), moderate (5-10 veh/100m/lane), high (10-15 veh/100m/lane), and very high (\(>\) 15 veh/100m/lane). For the downtown area of Chicago, 10 veh/100m/lane is a valid estimation of vehicle density at the peak hour (5 pm). With an urban car lane width of 10 ft (3 m) and an average windshield projection area of 14 ft\({}^{2}\) (1.28 m\({}^{2}\)), we have the base vehicle density \(V(\mathbf{p})=\frac{10\times 1.28\text{ m}^{2}}{100\text{ m}\times 3\text{ m}}\approx 0.04\) counts/m\({}^{2}\). For the temporal factor, we use time series data in the literature (Po et al., 2019; Raj et al., 2016; Zeroual et al., 2019) to establish the following estimates: \(T_{v}(5\text{ pm})=1\), \(T_{v}(12\text{ pm})=0.6\), and \(T_{v}(10\text{ pm})=0.2\).
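Putting the estimates of this subsection together, a minimal sketch of the exposure model in Equation (29) might look as follows; the dictionary-based interface is illustrative, and because pedestrians and vehicles occupy disjoint ground space in our setting, each location contributes only one of the two terms.

```python
# Base densities at the 5 pm peak and temporal factors (Section 3.7).
P_BASE = 0.15   # pedestrians per m^2 on sidewalks
V_BASE = 0.04   # windshield-area counts per m^2 on roads
T_P = {"12pm": 0.5, "5pm": 1.0, "10pm": 0.1}
T_V = {"12pm": 0.6, "5pm": 1.0, "10pm": 0.2}

def exposure(is_sidewalk: bool, t: str) -> float:
    """E(p, t) of Eq. (29) for a ground location that is either a
    sidewalk (pedestrians) or a motor vehicle lane (vehicles)."""
    return P_BASE * T_P[t] if is_sidewalk else V_BASE * T_V[t]

print(exposure(True, "5pm"), exposure(False, "10pm"))  # 0.15, 0.008
```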
## 4 Case Study
### Study Set-up
In this section we conduct a comprehensive case study and generate prototypes of the 3D virtual risk terrains for the Chicago downtown example. Table 5 is a summary of the settings and parameters used in the case study. Overall, we are simulating cargo delivery operations in a complex urban environment, enabled by a medium sized multi-rotor UAV. The failure type is LOC, a catastrophic failure that is unrecoverable and has little to no controllability on the impact location. Therefore, the resulting 3D virtual risk terrains are conservative and represent the 'worst case scenarios' on recovery and impact location. Both Gaussian and Rayleigh impact location models are considered, and their results are compared. On the type of EoV, we model the ground risk of both pedestrians and vehicles. Average human body characteristics are used, although a more conservative study could model more vulnerable populations. On the reference human injury level, our investigation centers around AIS level 3, which is more conservative because most relevant works in the literature have used the probability of fatality in the harm model. Likewise, we use Medium level damage under IEA as the threshold for vehicle windshield damage, which represents a probable risk for causing a traffic accident.
| Factors | Details |
| --- | --- |
| UAV type | Multi-rotor configuration |
| UAV weight \(m\) | 25 kg |
| UAV flying speed | 10 m/s |
| UAV drag coefficient \(C_{D}\) | 1.8 (estimated via method in (Hattenberger et al., 2023)) |
| UAV cross-sectional area size \(S\) | 0.2 m\({}^{2}\) |
| UAV characteristic diameter \(D\) | 50 cm |
| UAV failure type | Loss of Control (LOC) |
| UAV contingency equipment | With and without parachute |
| Impact location models | Gaussian model, Rayleigh model |
| EoVs and their locations | Pedestrians and vehicles on the ground |
| Mass of the struck body \(M\) | 70 kg |
| Human body wall thickness coefficient \(k\) | 0.652 |
| Reference human injury level | AIS Level 3 |
| Reference vehicle damage level | Medium – no penetration, partial loss of visibility |
| Air density \(\rho\) | 1.225 kg/m\({}^{3}\) |
| Standard gravity \(g\) | 9.8 m/s\({}^{2}\) |
| Weather (used in the noise part) | Standard day weather |
| Size of unit area around \(\mathbf{p}\) | 4 m\({}^{2}\) (2 m \(\times\) 2 m) |
| Density of \(\mathbf{p}\) on the ground | One every 2 meters |
| Size of individual risk terrain around \(\mathbf{p}\) | 40 m \(\times\) 40 m \(\times\) 200 m |
| Size of the urban model | 500 m \(\times\) 250 m \(\times\) 250 m |
| Times of the day | 12 pm, 5 pm, 10 pm |

Table 5: Factors and settings of the case study
For a specific location \(\mathbf{p}\) on the ground, we use an area of 4 m\({}^{2}\) (2 m \(\times\) 2 m) around \(\mathbf{p}\) as its impact range. Within the available ground space of the urban model in Figure 8 (left), we place a grid of \(\mathbf{p}\) with a density of one point every 2 meters. Therefore, the evaluation range is continuous on the ground. For the range of the individual risk terrain around a certain \(\mathbf{p}\), we consider a cube with size 40 m \(\times\) 40 m \(\times\) 200 m, which, according to our analysis, is a large enough volume to cover locations in the air that have notable risks to \(\mathbf{p}\). Although the current regulations have set a maximum altitude of 400 ft (122 m) for the operations of similar UAS in an urban space, we compute the risk terrain for up to 200 m for a more sufficient exploration. Each ground location \(\mathbf{p}\) represents either a pedestrian or a vehicle, but not both. Figure 10 is a visualization of the exposure model that shows the distributions of pedestrians and vehicles in the urban model at the three representative times of the day. The background of each subfigure in Figure 10 is the overhead view of the urban model in Figure 8 (left). In Figure 10, the pedestrians (red) and vehicles (blue) appear on the sidewalks and roads (motor vehicle lanes) of the urban model, respectively. The intensity of color is an indicator of the density of an EoV at different times of the day.
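Under the assumption that the sub-models combine multiplicatively along the event chain of Figure 1 (failure, then unsuccessful recovery, then impact at \(\mathbf{p}\), then harm, weighted by exposure), a per-location risk evaluation could be sketched as follows, reusing the helper functions from the earlier sketches. This is a simplification for illustration, not the exact implementation used to generate the figures.

```python
def ground_risk(p, p0, h0, t, is_sidewalk,
                failure_rate=1e-5, parachute=False):
    """Hourly risk of an AIS level-3 injury (pedestrian) or medium
    windshield damage (vehicle) at ground point p from a UAV at
    (p0, h0), as a product of the sub-model probabilities."""
    e_i = impact_kinetic_energy(h0)                   # Eq. (23), in J
    harm = (p_ais3_injury(e_i) if is_sidewalk
            else p_windshield_damage(e_i / 1000.0))   # Eqs. (27)/(28)
    return (failure_rate                              # failure model
            * p_unsuccessful_recovery(h0, parachute)  # Eqs. (11)-(12)
            * gaussian_impact_prob(p, p0, h0)         # Eq. (15)
            * harm                                    # harm model
            * exposure(is_sidewalk, t))               # Eq. (29)
```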
### Visualizations of Virtual Risk Terrains
In a computational environment, we implement our integrated approach in Figure 1 with every sub-model described in Section 3 and apply the computations to the Chicago urban model. This subsection first displays visualizations of the 3D virtual risk terrains under different parameter settings. Specifically, we are interested in investigating how the characteristics of the 3D virtual risk terrain change in relation to the risk requirement, UAS reliability level, time of the day, and impact location model. In the first group of visualizations, Figure 11 shows the 3D virtual risk terrains at three different risk levels: \(10^{-6}\), \(10^{-7}\), and \(10^{-8}\), assuming maximum risk settings in both the failure model (\(10^{-5}\)/flight hour) and the exposure model (evening rush hour). The interpretation of a virtual risk terrain is straightforward: to avoid a certain level of safety risk to crowds and vehicles on the ground, the UAS operation must avoid and fly above the corresponding virtual risk terrain. From the perspective of airspace management, the 3D virtual risk terrain clearly defines the "no-fly" zones, the space within the 3D surface, where the ground risk exceeds a certain threshold. In Figure 11, we can observe that the height of the virtual risk terrain increases as the risk requirement becomes more stringent - from \(10^{-6}\)/flight hour to \(10^{-8}\)/flight hour. This indicates that UAS must maintain a larger clearance distance (and altitude) from crowds and vehicles on the ground under more stringent safety regulations. In other words, with all other conditions held the same, more stringent safety policies will result in less space for UAS to operate in an urban environment.
We next demonstrate how improved UAS reliability can affect the virtual risk terrain. In Figure 12, while assuming the maximum risk settings for risk requirement (\(10^{-8}\)/flight hour) and exposure model (evening rush hour), we generate 3D virtual risk terrains for three different UAS failure rates: \(10^{-5}\)/flight hour, \(5\cdot 10^{-6}\)/flight hour, and \(10^{-6}\)/flight hour. More airspace becomes available as the system failure rate decreases. Therefore, UAS can operate closer to crowds and vehicles on the ground when the system itself becomes more reliable. This feature allows policy makers to work backwards to derive the required UAS reliability level for a specific airspace design scheme. For example, one can answer the question: if UAS is allowed to fly at a minimum of 40 meters above pedestrians during evening rush hour, what would be the
Figure 10: Spatial distribution and density of pedestrians (red) and vehicles (blue) on the ground, at 12 pm (left), 5 pm (middle), and 10 pm (right)
required reliability of the system? In the same format, Figure 13 explores how the virtual risk terrain varies with the time of the day. Under the maximum risk settings in risk requirement (\(10^{-8}\)/flight hour) and failure model (\(10^{-5}\)/flight hour), the UAS can operate in a broader urban airspace and at a lower altitude when there are fewer people and vehicles on the ground. For illustrative purposes, we compare three settings for each of the risk requirement, UAS reliability level, and time of the day. Similar studies can be conducted over wider parameter ranges and at finer granularity. Although many other factors can also play a critical role, risk requirement, UAS reliability level, and time of the day are potentially the three most significant factors in this decision-making problem.
In the last set of visual comparisons, we observe the differences between the Gaussian impact location model and the Rayleigh impact location model in Figure 14. Even though the two impact location models make different assumptions and represent different impact location patterns, their resulting virtual risk terrains have certain features in common, due to the continuous distribution of pedestrians and vehicles on the ground. In Figure 14, the overall trends and shapes of the virtual risk terrains are similar between the
Figure 11: Virtual risk terrains at different risk levels: \(10^{-6}\) (left), \(10^{-7}\) (middle), and \(10^{-8}\) (right), under the failure rate \(10^{-5}\) and evening rush hour (5 pm).
Figure 12: Virtual risk terrains at different failure rates: \(10^{-5}\) (left), \(5\cdot 10^{-6}\) (middle), and \(10^{-6}\) (right), under the risk level \(10^{-8}\) and evening rush hour (5 pm).
Figure 13: Virtual risk terrains at different times of the day: 12 pm (left), 5 pm (middle), and 10 pm (right), under the risk level \(10^{-8}\) and failure rate \(10^{-5}\).
two impact location models. The two main differences are the height and the detailed shape of the virtual terrain. At all three risk levels, the virtual terrain under the Rayleigh impact location model is higher than its counterpart under the Gaussian impact location model. With the Rayleigh impact location model, the surface of the virtual terrain is also more complex at the \(10^{-8}\) risk level and smoother at the \(10^{-7}\) risk level.
### Quantitative Comparisons
In addition to the visual comparisons, we quantitatively compare virtual risk terrains under different settings. Among the three risk levels explored in the last subsection, \(10^{-8}\)/flight hour or below is a reference requirement level for policy makers regarding UAS operations in urban environments. Therefore, in this subsection we focus on the \(10^{-8}\) virtual risk terrain. To quantitatively evaluate the magnitude of a virtual risk terrain, we use its minimum clearance altitude (or height). Under the assumption of a continuous and uniform EoV distribution on the ground, the minimum clearance altitude is a prominent feature of the virtual risk terrain.
Figure 14: Comparison between virtual risk terrains under Gaussian (top) and Rayleigh (bottom) impact location models.
Figure 15: The risk level \(10^{-8}\) minimum clearance altitudes for pedestrians, under Gaussian (left) and Rayleigh (right) impact location models.
Figure 15 shows the patterns of the \(10^{-8}\) risk level minimum clearance altitude for pedestrians. The left plot of Figure 15 shows results under the Gaussian impact location model. In the 'worst case' scenario, a \(10^{-5}\)/flight hour failure rate (right end of the x-axis) and evening rush hour (blue curve), the UAS must fly at around 125 meters above the ground, exceeding the altitude limitation in the current policy. This minimum clearance altitude can be relaxed to below 100 meters if the failure rate is improved to \(5\cdot 10^{-6}\)/flight hour or the operation takes place at the less congested 12 pm. If the UAS operation were to be allowed at around 40 meters above the ground, that would require one of the following conditions: (1) the failure rate is \(10^{-6}\)/flight hour, (2) the time is 10 pm, or (3) the failure rate is \(2\cdot 10^{-6}\)/flight hour AND the time is 12 pm. The right plot of Figure 15 shows results under the Rayleigh impact location model. As observed in the visual comparisons, the general trends between the two groups of results are similar. The virtual risk terrain under the Rayleigh impact location model has a higher minimum clearance altitude at every setting, with differences between 10 and 40 meters. Figure 16 shows the patterns of the \(10^{-8}\) risk level minimum clearance altitude for vehicles. One can observe that the minimum clearance altitudes above vehicles are generally lower than those for pedestrians. For example, under the Gaussian impact location model, the 'worst case' minimum clearance altitude for vehicles is one half of that for pedestrians (60 m vs. 120 m). Therefore, in many virtual risk terrain results, such as the middle plot of Figure 11, the terrain is higher above the pedestrian sidewalks and lower above the motor vehicle lanes. These results indicate that, under those conditions, UAS can fly at lower altitudes above motor vehicle lanes and should avoid flying above pedestrians at the same altitudes. Overall, Figures 15 and 16 provide a (rough) reference for UAS operations planning in a typical urban terrain.
The same concept extends beyond safety risk: Figure 17 presents virtual acoustic terrains generated to limit the UAS operations' noise impact on people on the ground. On the selection of noise constraint levels, 70 dB is often considered noisy in an outdoor urban environment, while 40 dB is considered very quiet. The interpretation of a virtual acoustic terrain is the same: to stay below a certain noise level on the ground, the UAV must fly above the corresponding virtual acoustic terrain.
In the last numerical example provided in Figure 18, we demonstrate the integration of two types of virtual terrains. The left plot of Figure 18 is a representative 3D virtual risk terrain selected from the middle plot of Figure 12; the middle plot of Figure 18 is a representative virtual acoustic terrain selected from the middle plot of Figure 17. Their combined virtual terrain is shown in the right plot of Figure 18. This combined 3D virtual terrain is the union of both "no-fly" zones. When planning UAS operations in this selected area, flight trajectories that avoid this combined virtual terrain can limit the operation's impacts on both ground risk and ground noise. In fact, the virtual societal impact terrains can enable efficient community-aware UAS trajectory planning in an urban environment in a broader sense. In an optimization paradigm depicted in Figure 19, societal constraints from four different aspects - noise, safety, privacy, and perceived risk, can all be converted into 3D virtual terrains. The combination of all virtual terrains with the physical urban terrain defines an overall acceptable fly zone for UAS operations and enables an efficient non-repetitive 3D trajectory optimization process. This will considerably impact and facilitate the community integration of UAS and AAM in urban environments.
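As a minimal sketch of this combination step (assuming each virtual terrain is represented as a height field sampled over the same ground grid), the union of the "no-fly" volumes below two surfaces is the volume below their pointwise maximum:

```c
#include <stddef.h>

/* Combine two virtual terrains, each given as a height field over the same
 * ground grid: a point in the air lies in the combined "no-fly" zone exactly
 * when it is below either surface, so the combined height is the pointwise
 * maximum of the two heights. */
void combine_terrains(const double *risk_height, const double *noise_height,
                      double *combined_height, size_t n_points)
{
    for (size_t i = 0; i < n_points; ++i)
        combined_height[i] = risk_height[i] > noise_height[i]
                                 ? risk_height[i]
                                 : noise_height[i];
}
```

The same pointwise-maximum rule extends to any number of societal-impact terrains in the optimization paradigm of Figure 19.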
## 5 Remarks
### Limitations
The main contributions of this work are the novel concept of 3D virtual risk terrain and the integrated modeling approach outlined in Figure 1. The framework is a coherent confluence of multiple sub-models for modeling the risks of UAS. For each sub-model, we have adopted either the latest research outcomes in the literature or widely embraced standard practices. Each sub-model belongs to a specialized research field that is progressing rapidly. In this regard, every sub-model of the framework must be continually
Figure 17: The virtual acoustic terrains with different noise constraint levels: 50 dB (left), 45 dB (middle), and 40 dB (right)
Figure 18: The combination of virtual risk terrain (left) and virtual acoustic terrain (middle) into the combined virtual terrain (right).
reinforced so that the overall modeling incorporates and benefits from future advancements in each field. We expect that the modeling fidelity can greatly benefit from continued research efforts in the recovery model, impact location model, and exposure model. Accurate modeling of UAS recovery and impact location remains challenging due to a lack of precise data and a number of uncertain factors; the exposure model can be further refined by using increasingly available high-resolution mobility data.
In addition, there are some limitations in the present case study. While the settings and parameters were selected to be as representative as possible of a package delivery scene in a complex urban environment, LOC is the 'worst case' UAV failure type among the possible scenarios, and the resulting 3D virtual terrains are therefore conservative with respect to the type of failure. The choice of LOC is currently constrained by the limited knowledge and models of a UAV's behavior under more complex failure scenarios. On the other hand, the case study has so far only considered ground risks. When operating UAS in a complex urban environment, one extra type of TPR to consider is the surface of the buildings. Although buildings are such a strong sheltering factor that people within them are normally safe from small UAS operations, possible property damage to the buildings can be taken into account. The secondary effects of a UAV collision with the surface of a building can pose additional ground risks.
### Future Work
As the very first effort to build 3D virtual risk terrains for UAS operations and urban airspace management, this work will open up many future research avenues for further extensions. Here we briefly mention three opportunities that will enhance the operations planning and community integration of AAM. The first essential avenue is to extend the same concept and apply it to eVTOL aircraft, a key player in on-demand UAM. Because of the differences between UAVs, eVTOL aircraft, and their operations, the virtual risk terrain for eVTOL aircraft requires new sub-models in almost every aspect. The ultimate objective is to plan eVTOL aircraft trajectories considering TPR and contingency management (e.g., emergency landing). The second opportunity is to generate the virtual risk terrain on a larger scale. In the case study we generated virtual risk terrains for a portion of the Chicago downtown area. Later on, such virtual terrains can be generated to cover the entire city. This requires the integration of the proposed framework with a GIS capability and can eventually enable real-time risk terrains, analogous to the traffic layer on Google Maps. Third, some ongoing efforts are applying the idea of virtual terrain to other forms of societal impacts/concerns, which include
Figure 19: Efficient AAM trajectory optimization enabled by virtual societal impact terrains
privacy and perceived risk. A fusion of various virtual terrains will provide comprehensive insights into the community integration of AAM and useful references for regulatory policies.
## 6 Conclusions
In this paper we introduced virtual risk terrain, a novel concept for UAS operations planning and urban airspace management with risk considerations. By converting public risk constraints in an urban environment into 3D 'no-fly' zones, the virtual risk terrain enables efficient UAS trajectory planning and can provide clear guidance to safety regulations for UAS operations in complex urban environments. The computational framework is a conditional probability approach which integrates six sub-models for UAS safety risk and a 3D urban model. We conducted a case study on the Chicago downtown area and generated 3D virtual terrains for the ground risk of multi-rotor UAV cargo delivery operations. We showed how the characteristics of the 3D virtual risk terrain could change under different safety risk levels, UAV reliability levels, impact location models, and times of the day. We also summarized and compared the more general minimum clearance distances/altitudes from EoVs in those scenarios. At the end of the case study, we demonstrated how the virtual risk terrain can also be developed for other societal impact constraints. The amalgamation of all virtual societal impact terrains will advance the operations planning and societal integration of UAS and AAM. We look forward to continuously upgrading this framework with new research outcomes in each sub-model and conducting studies for larger urban models.
## Acknowledgements
This work was sponsored by the National Aeronautics and Space Administration (NASA) University Leadership Initiative (ULI) program under project "Autonomous Aerial Cargo Operations at Scale", via grant number 80NSSC21M071 to the University of Texas at Austin. The authors are grateful to NASA project technical monitors and project partners for their support. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the project sponsor. The authors would also like to thank Dr. Mirmojtaba Gharibi for helpful discussions that contributed to aspects of this work. |
2303.12271 | The homotopy of the KU_G-local equivariant sphere spectrum | We compute the homotopy Mackey functors of the $KU_G$-local equivariant
sphere spectrum when $G$ is a finite $q$-group for an odd prime $q$, building
on the degree zero case from arXiv:2204.03797. | Tanner N. Carawan, Rebecca Field, Bertrand J. Guillou, David Mehrle, Nathaniel J. Stapleton | 2023-03-22T02:38:14Z | http://arxiv.org/abs/2303.12271v1 | # The homotopy of the \(Ku_{g}\)-local equivariant sphere spectrum
###### Abstract.
We compute the homotopy Mackey functors of the \(KU_{G}\)-local equivariant sphere spectrum when \(G\) is a finite \(q\)-group for an odd prime \(q\), building on the degree zero case from [BGS].
Guillou was supported by NSF grant DMS-2003204. Stapleton was supported by NSF grant DMS-1906236 and a Sloan Fellowship. This collaboration was made possible by NSF RTG grant DMS-1839968.
Introduction
### Representation rings and Green functors
Recall the following commutative rings associated to \(G\):
* The complex (resp. rational) representation ring \(RU(G)\) (resp. \(R\mathbb{Q}(G)\)) is the Grothendieck group of isomorphism classes of finite-dimensional complex (resp. rational) \(G\)-representations under direct sum. The product is induced by the tensor product of \(G\)-representations.
* The ring of complex-valued class functions \(\operatorname{Cl}(G,\mathbb{C})\) is the ring of functions \(G\to\mathbb{C}\) which are constant on conjugacy classes of elements in \(G\).
These rings are related by the following pair of ring homomorphisms:
\[R\mathbb{Q}(G)\xrightarrow{}RU(G)\xrightarrow{\chi}\operatorname{Cl}(G, \mathbb{C})\]
The first of these homomorphisms is base change from \(\mathbb{Q}\) to \(\mathbb{C}\), and the second is the character map. In particular, note that the character map \(\chi\colon RU(G)\to\operatorname{Cl}(G,\mathbb{C})\) is injective and embeds the complex representation ring as a subring of the ring of class functions. It will occasionally be convenient to calculate in the image of the character map rather than with complex representations themselves.
These four commutative rings can all be upgraded to Green functors. We denote the Green functor versions of these rings by underlining them; for example, \(\underline{R\mathbb{Q}}\) is the Green functor with \(\underline{R\mathbb{Q}}(G/H)=R\mathbb{Q}(H)\). The same relationships hold among the Green functors as do among the commutative rings: there is a sequence of Green functor homomorphisms
\[\underline{R\mathbb{Q}}\longrightarrow\underline{RU}\xrightarrow{\chi}\underline{\operatorname{Cl}}(-,\mathbb{C}).\]
Let \(A(G)\) be the Burnside ring of \(G\), and let \(\underline{A}\) be the Burnside ring \(G\)-functor. There is a Green functor homomorphism \(\underline{A}\to\underline{R\mathbb{Q}}\) given levelwise by taking a finite \(G\)-set to the associated permutation representation. When \(G\) is a \(p\)-group, the Ritter-Segal theorem [Ri, Seg2] says that \(\underline{A}\to\underline{R\mathbb{Q}}\) is surjective; we name its kernel \(\underline{J}\) and deduce an isomorphism \(\underline{A}/\underline{J}\cong\underline{R\mathbb{Q}}\). The ideal \(J(G)\) admits a nice description as the ideal of \(A(G)\) generated by all virtual \(G\)-sets \(X\) such that \(|X^{g}|=0\) for all \(g\in G\). The article [BGS] uses the notation \(\underline{A}/\underline{J}\) throughout, but here we use the simpler notation \(\underline{R\mathbb{Q}}\).
### Equivariant homotopy theory
Let \(\operatorname{Sp}^{G}\) denote the category of genuine equivariant \(G\)-spectra. Examples include the \(G\)-equivariant sphere spectrum \(\mathbb{S}_{G}\) and the \(G\)-spectrum of \(G\)-equivariant complex topological \(K\)-theory \(KU_{G}\).
The homotopy of genuine \(G\)-spectra is naturally Mackey-functor valued. For the primary spectra in question in this paper, we have
\[\underline{\pi}_{0}\mathbb{S}_{G}=\underline{A}\quad\text{ and }\quad\underline{\pi}_{ *}KU_{G}=\underline{RU}[\beta,\beta^{-1}]\text{ with }|\beta|=2.\]
If \(E\) and \(X\) are \(G\)-spectra, let \(L_{E}X\) denote the Bousfield localization of \(X\) at \(E\). In particular, when \(E=\mathbb{S}_{G}/p\) and \(X\) is any spectrum, this localization is the \(p\)-completion of \(X\), denoted by \(X_{p}^{\wedge}:=L_{\mathbb{S}_{G}/p}X\). If \(X\) is already a localization \(X=L_{E}Y\), the localization of \(X\) at \(\mathbb{S}_{G}/p\) may be written
\[L_{E/p}Y=L_{\mathbb{S}_{G}/p}L_{E}Y.\]
When \(E=\mathbb{S}_{G}\wedge H\mathbb{Q}\) is the rational equivariant sphere, we obtain the rationalization of \(X\), denoted \(X\otimes\mathbb{Q}:=L_{H(\mathbb{Q}\otimes\underline{A})}X\).
The \(p\)-completion and the rationalization are related by a homotopy pullback square of \(G\)-spectra, called the arithmetic fracture square. When \(X=L_{KU_{G}}\mathbb{S}_{G}\), this is the square:
(2.1)
See [DFHH, Proposition 2.2 of Chapter 6] for a general version of the arithmetic square, from which (2.1) can be deduced. This is a useful tool for computing homotopy of \(G\)-spectra.
## 3. The cokernel of \(\psi^{\ell}-1\) acting on \(\underline{\pi}_{*}KU_{G}\)
Recall that for \(\ell\in\mathbb{Z}\) the Adams operation \(\psi^{\ell}\colon KU_{G}(X)\to KU_{G}(X)\) is a ring homomorphism natural in the \(G\)-space \(X\). In this section, we analyze \(\psi^{\ell}-1\) as a map on the complex representation ring of \(G\) and on related objects. Recall Section 1.1: the integer \(\ell\) will always be assumed to be coprime to the order of \(G\), and at times \(\ell\) will furthermore be assumed to be primitive modulo \(|G|=q^{j}\). The following is Exercise 9.4 of [Ser].
**Lemma 3.1**.: _The Adams operation \(\psi^{\ell}\colon RU(G)\to RU(G)\) permutes the basis of irreducible representations if \(\ell\) is coprime to \(|G|\)._
Proof.: Recall that a class function \(\chi\) is the character of an irreducible representation if and only if \(\chi(e)\geq 0\) and \(\langle\chi,\chi\rangle=1\). On a class function \(f\), the Adams operation \(\psi^{\ell}\) acts as \(\psi^{\ell}(f)(g)=f(g^{\ell})\). Since \(\ell\) is coprime to \(|G|\), every element has an \(\ell\)th root, so that the \(\ell\)th power determines a bijection on \(G\). It follows that the Adams operation preserves the inner product.
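Explicitly, for a class function \(f\), the substitution \(h=g^{\ell}\) gives
\[\langle\psi^{\ell}f,\psi^{\ell}f\rangle=\frac{1}{|G|}\sum_{g\in G}f(g^{\ell})\overline{f(g^{\ell})}=\frac{1}{|G|}\sum_{h\in G}f(h)\overline{f(h)}=\langle f,f\rangle,\]
where the middle equality uses that \(g\mapsto g^{\ell}\) is a bijection of \(G\); moreover \(\psi^{\ell}(f)(e)=f(e^{\ell})=f(e)\), so \(\psi^{\ell}\) carries characters of irreducible representations to characters of irreducible representations.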
**Lemma 3.2**.: _Suppose that \(\ell\) is coprime to \(|G|\). The Adams operation \(\psi^{\ell}\) on \(\underline{RU}\) is a homomorphism of Green functors._
Proof.: The Adams operation \(\psi^{\ell}\) is a levelwise ring homomorphism, and it is straightforward that \(\psi^{\ell}\) commutes with restriction. The main point is to show that \(\psi^{\ell}\) commutes with induction of representations. To see this, we can use the character map to embed \(\underline{RU}\) into the Green functor of class functions. As this is levelwise an injection, it suffices to see that \(\psi^{\ell}\) commutes with induction for class functions. Here, the formula (see [Ser, Section 7.2]) is
\[\operatorname{Ind}_{\operatorname{H}}^{\operatorname{G}}(f)(g)=\frac{1}{|H|} \sum_{\begin{subarray}{c}\gamma\in G,\\ \gamma^{-1}g\gamma\in H\end{subarray}}f(\gamma^{-1}g\gamma).\]
As \(\psi^{\ell}(f)(g)=f(g^{\ell})\), comparing the formula for \(\psi^{\ell}\operatorname{Ind}_{\operatorname{H}}^{\operatorname{G}}(f)\) at \(g\) with \(\operatorname{Ind}_{\operatorname{H}}^{\operatorname{G}}(\psi^{\ell}f)\) at \(g\), one finds that they differ only in that the former sums over \(\gamma^{-1}g^{\ell}\gamma\) in \(H\), whereas the latter sums over \(\gamma^{-1}g\gamma\) in \(H\). Since the \(\ell\)th power is a bijection on \(H\), as in the proof of Lemma 3.1, the two sums are the same.
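Explicitly, the two composites evaluate as
\[\psi^{\ell}\big(\operatorname{Ind}_{H}^{G}f\big)(g)=\frac{1}{|H|}\sum_{\begin{subarray}{c}\gamma\in G,\\ \gamma^{-1}g^{\ell}\gamma\in H\end{subarray}}f(\gamma^{-1}g^{\ell}\gamma)\qquad\text{and}\qquad\operatorname{Ind}_{H}^{G}\big(\psi^{\ell}f\big)(g)=\frac{1}{|H|}\sum_{\begin{subarray}{c}\gamma\in G,\\ \gamma^{-1}g\gamma\in H\end{subarray}}f\big((\gamma^{-1}g\gamma)^{\ell}\big),\]
and since \((\gamma^{-1}g\gamma)^{\ell}=\gamma^{-1}g^{\ell}\gamma\) and \(\langle\gamma^{-1}g\gamma\rangle=\langle\gamma^{-1}g^{\ell}\gamma\rangle\) (as \(\ell\) is coprime to the order of \(g\)), the two index sets and the corresponding summands agree.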
We next consider the endomorphism \(\psi^{\ell}-1\) on \(\underline{RU}\). This will later appear as the endomorphism \(\psi^{\ell}-1\) on the nonnegative homotopy Mackey functors of \(KU_{G}\), whose \(\mathbb{Z}\)-graded homotopy Mackey functors are \(\underline{RU}[\beta^{\pm 1}]\), with \(\beta\) in degree \(2\). Recall that \(\psi^{\ell}\) acts on \(\beta^{d}\) as multiplication by \(\ell^{d}\) [A, Proposition 3.2.2].
**Proposition 3.3**.: _Suppose that \(\ell\) is coprime to \(|G|\). The Mackey functor homomorphism \(\psi^{\ell}-1\colon\underline{RU}\{\beta^{d}\}\to\underline{RU}\{\beta^{d}\}\) is injective for \(d>0\)._
Proof.: This proceeds as the proof of [BGS, Proposition 6.8]. It suffices to show that this homomorphism is levelwise injective. By Lemma 3.1, \(\psi^{\ell}\) acts by permuting the basis of irreducibles in \(RU(G)\). If \(S\) is the associated permutation matrix, then \(\psi^{\ell}-1\) acts by \(\ell^{d}S-I\), where \(I\) is the identity matrix. To show that this matrix is injective as a linear transformation, it suffices to show that it has a nonzero determinant.
If \(d>0\), this is a matrix with integer entries and \(\det(\ell^{d}S-I)\equiv(-1)^{m}\pmod{\ell}\), where \(m\) is the number of rows of \(S\). Therefore, \(\det(\ell^{d}S-I)=a\ell+(-1)^{m}\) for some \(a\in\mathbb{Z}\) (note that \(\ell\geq 2\)). In particular, it is nonzero.
**Remark 3.4**.: The statement of Proposition 3.3 in the case \(d=0\) does not hold. Indeed, [BGS, Proposition 6.7] identifies the kernel of \(\psi^{\ell}-1\) on \(RU(G)\) as \(R\mathbb{Q}(G)\).
The result also holds for negative \(d\), but there \(\ell\) must be invertible in order to define \(\psi^{\ell}\) on \(\underline{RU}\{\beta^{d}\}\). We therefore pass to \(q\)-completion, as this will be the case in which this homomorphism is later considered.
**Corollary 3.5**.: _Suppose that \(\ell\) is coprime to \(|G|\). The Mackey functor homomorphism \(\psi^{\ell}-1\colon\underline{RU}_{q}^{\wedge}\{\beta^{d}\}\to\underline{RU} _{q}^{\wedge}\{\beta^{d}\}\) is injective for \(d\neq 0\)._
Proof.: In the case that \(d\) is positive, this follows from Proposition 3.3 by flat base change along \(\mathbb{Z}\hookrightarrow\mathbb{Z}_{q}^{\wedge}\). For \(d<0\), we argue as in Proposition 3.3. First, \(\det(\ell^{d}S-I)=(\ell^{d})^{r}\det(S-\ell^{-d}I)\), where \(r\) is the number of rows in the matrix. Now \(S-\ell^{-d}I\) is an integer matrix with \(\det(S-\ell^{-d}I)\equiv\det(S)\pmod{\ell}\). The permutation matrix \(S\) has nonzero determinant, so \(\ell^{d}S-I\) does as well.
Having considered the kernel, we now turn to the cokernel. In order to get a closed form answer, we again pass to completions, first completing at \(q\) in Proposition 3.7 and then completing away from \(q\) in Proposition 3.11.
**Notation 3.6**.: We will write \(\underline{\operatorname{cok}}\{d\}\) for the cokernel of \(\psi^{\ell}-1\colon\underline{RU}\{\beta^{d}\}\to\underline{RU}\{\beta^{d}\}.\) We will also write \(\underline{\operatorname{cok}}^{\wedge}_{p}\{d\}=\underline{\operatorname{cok} \{d\}}\otimes\mathbb{Z}_{p}^{\wedge}\) for the cokernel of \(\psi^{\ell}-1\colon\underline{RU}_{p}^{\wedge}\{\beta^{d}\}\to\underline{RU}_{p }^{\wedge}\{\beta^{d}\}\), and similarly for the \(q\)-complete version. When \(d=0\), we sometimes drop the degree from the notation and simply write \(\underline{\operatorname{cok}}\) or \(\underline{\operatorname{cok}}^{\wedge}_{p}\).
**Proposition 3.7**.: _Let \(\ell\) be primitive mod \(|G|=q^{j}\). The Mackey functor \(\underline{\operatorname{cok}}^{\wedge}_{q}\{d\}\) is given at level \(G/H\) by:_
1. _for_ \(d\neq 0\)_,_ \[\underline{\operatorname{cok}}^{\wedge}_{q}\{d\}\cong\bigoplus_{\text{\rm cyclic }[C]}\mathbb{Z}/q^{\nu_{q}(\ell^{d\varphi(|C|)}-1)},\] _where the direct sum runs over conjugacy classes of cyclic subgroups_ \(C\) _of_ \(H\)_,_ \(\varphi\) _is Euler's totient function, and_ \(\nu_{q}\) _is the_ \(q\)_-adic valuation. When_ \(|C|=q^{k}\) _with_ \(k\neq 0\)_, then_ \[\nu_{q}(\ell^{d\varphi(|C|)}-1)=k+\nu_{q}(d).\]
2. _for_ \(d=0\)_,_ \[\underline{\operatorname{cok}}^{\wedge}_{q}\cong\bigoplus_{\text{\rm cyclic }[C]}\mathbb{Z}_{q}^{\wedge},\] _where the direct sum again runs over conjugacy classes of cyclic subgroups_ \(C\) _of_ \(H\)_._
_The restriction and transfer in the cokernel are inherited from those in \(\underline{RU}^{\wedge}_{q}\)._
Proof.: The cokernel is computed levelwise; at level \(G/H\), we have
\[\psi^{\ell}-1\colon RU(H)^{\wedge}_{q}\{\beta^{d}\}\to RU(H)^{\wedge}_{q}\{ \beta^{d}\}.\]
By Lemma 3.1, the Adams operation \(\psi^{\ell}\) permutes the basis of irreducibles of \(RU(H)\), and it continues to do so after flat base change along \(\mathbb{Z}\to\mathbb{Z}_{q}^{\wedge}\). As in the proof of Proposition 3.3, \(\psi^{\ell}-1\) acts by a matrix \(\ell^{d}S-I\), where \(S\) is a permutation matrix and \(I\) the identity matrix. Reordering the basis of irreducibles if necessary, this becomes a block-diagonal matrix with blocks
\[\begin{bmatrix}-1&\ell^{d}&&\\ &-1&\ell^{d}&&\\ &&\ddots&\ddots&\\ &&&-1&\ell^{d}\\ \ell^{d}&&&-1\end{bmatrix}\sim\begin{bmatrix}1&&&&\\ &1&&\\ &&\ddots&&\\ &&&1&\\ &&&\ell^{dt}-1\end{bmatrix}\]
which are equivalent to diagonal matrices as shown above, using a combination of row and column operations, where \(t\) is the number of rows in this block. When \(d\neq 0\), each block contributes a factor of \(\mathbb{Z}_{q}^{\wedge}/(\ell^{dt}-1)\) to the cokernel. When \(d=0\), each block contributes a factor of \(\mathbb{Z}_{q}^{\wedge}\).
It remains to count the number of blocks and their sizes. Each block corresponds to a \(\psi^{\ell}\)-orbit of irreducibles in \(RU(H)\). Since \(RU(H)\) is a free \(\mathbb{Z}\)-module of finite rank, we may base change to \(\mathbb{C}\) and view the resulting ring as a \(\mathbb{C}[\psi^{\ell}]\)-module. Since the character map \(\mathbb{C}\otimes RU(H)\to\operatorname{Cl}(H,\mathbb{C})\) is a map of \(\mathbb{C}[\psi^{\ell}]\)-modules and \(\mathbb{C}[\psi^{\ell}]\) is a PID, it suffices to understand the orbits of the \(\psi^{\ell}\) action on a basis for class functions. The Adams operation acts on class functions by \(\psi^{\ell}(f)(g)=f(g^{\ell})\). Consider the basis of \(\operatorname{Cl}(H,\mathbb{C})\) given by the indicator functions \(1_{[g]}\). Since \(\ell\) is primitive mod \(|H|\), two indicator functions \(1_{[g]}\) and \(1_{[h]}\) are in the same \(\psi^{\ell}\)-orbit if and only if \(g\) and \(h\) generate conjugate cyclic subgroups of \(H\). Hence, there are as many \(\psi^{\ell}\)-orbits as the number of conjugacy classes of cyclic subgroups of
\(H\). The size of an orbit is the number of generators of the corresponding cyclic subgroup; if \(C\) is a nontrivial cyclic subgroup of \(H\) with \(|C|=q^{k}\), this is \(\varphi(q^{k})=q^{k}-q^{k-1}=q^{k-1}(q-1)\).
Finally, we must understand
\[\mathbb{Z}_{q}^{\wedge}\big/\big(\ell^{d(q-1)q^{k-1}}-1\big)\cong\mathbb{Z}\big/q^{\nu_{q}(\ell^{d(q-1)q^{k-1}}-1)}.\]
For this, we need to know the largest value \(r\) such that \(\ell^{d(q-1)q^{k-1}}\equiv 1\pmod{q^{r}}\). It helps to work additively. There is an isomorphism of abelian groups \((\mathbb{Z}/q^{r})^{\times}\cong\mathbb{Z}/((q-1)q^{r-1})\). Since \(\ell\) is a generator of \((\mathbb{Z}/q^{r})^{\times}\), it maps to a generator of the right hand side. Since \(dq^{k-1}(q-1)\equiv 0\pmod{(q-1)q^{r-1}}\) precisely when \(r\leq k+\nu_{q}(d)\), we have
\[\nu_{q}(\ell^{d(q-1)q^{k-1}}-1)=k+\nu_{q}(d).\]
So if \(C\) is a nontrivial cyclic subgroup of order \(q^{k}\), then the \(\psi^{\ell}\)-orbit corresponding to the conjugacy class of \(C\) contributes a factor of
\[\mathbb{Z}_{q}^{\wedge}\big/\big(\ell^{d(q-1)q^{k-1}}-1\big)\cong\mathbb{Z}\big/q^{k+\nu_{q}(d)}.\]
The trivial cyclic subgroup contributes \(\mathbb{Z}_{q}^{\wedge}/(\ell^{d}-1)\cong\mathbb{Z}/q^{\nu_{q}(\ell^{d}-1)}\).
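As a concrete check against Example 3.10 below, take \(q=3\), \(\ell=2\) (which is primitive mod \(9\)), \(d=1\), and \(k=2\): then
\[\nu_{3}\big(2^{1\cdot(3-1)\cdot 3^{2-1}}-1\big)=\nu_{3}(2^{6}-1)=\nu_{3}(63)=2=k+\nu_{3}(d),\]
so the conjugacy class of \(C_{9}\) contributes a factor \(\mathbb{Z}/9\).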
**Remark 3.8**.: The formula for the cokernel of \(\psi^{\ell}-1\) in the case \(d=0\) holds integrally, before passage to \(q\)-completion, as can be seen from the proof.
Levelwise, the formula for \(\underline{\operatorname{cok}}_{q}^{\wedge}\) suggests that it is a quotient of \((\underline{R}\mathbb{Q})_{q}^{\wedge}\). We show in Example 3.9 that this Mackey functor is not a cyclic \(\underline{A}_{q}^{\wedge}\)-module.
**Example 3.9**.: We calculate \(\underline{\operatorname{cok}}_{q}^{\wedge}\) for \(G=C_{q^{2}}\). Recall the representation ring Green functor \(\underline{RU}_{q}^{\wedge}\):
its value at \(G/G\) is \(\mathbb{Z}_{q}^{\wedge}[x]/(x^{q^{2}}-1)\), at \(G/C_{q}\) it is \(\mathbb{Z}_{q}^{\wedge}[y]/(y^{q}-1)\), and at \(G/e\) it is \(\mathbb{Z}_{q}^{\wedge}\) (Lewis diagram with restrictions and transfers omitted).
Here, \(y\) is the class of the \(C_{q}\)-representation where the generator acts on the complex plane by a \(q\)-th root of unity, and \(x\) is the class of the \(C_{q^{2}}\)-representation where the generator acts by a primitive \(q^{2}\) root of unity. Since these are one-dimensional complex representations, the Adams operation takes \(y\) to \(y^{\ell}\) and \(x\) to \(x^{\ell}\). Hence, the Mackey functor homomorphism
\(\psi^{\ell}-1\) acts levelwise by \(x^{n}\mapsto x^{\ell n}-x^{n}\) and \(y^{m}\mapsto y^{\ell m}-y^{m}\) (diagram omitted).
At the \(C_{q}\)-level, the quotient by the image of \(\psi^{\ell}-1\) identifies all nontrivial representations because \(\ell\) is primitive mod \(q\). At the top level, the quotient places the nontrivial representations into two classes: the class of \(x\) and the class of \(x^{q}\). Hence, the cokernel is:
the Mackey functor with value \(\mathbb{Z}_{q}^{\wedge}\{1,x,x^{q}\}\) at \(G/G\), \(\mathbb{Z}_{q}^{\wedge}\{1,y\}\) at \(G/C_{q}\), and \(\mathbb{Z}_{q}^{\wedge}\) at \(G/e\), with restrictions and transfers inherited from \(\underline{RU}_{q}^{\wedge}\) (Lewis diagram omitted).
This Mackey functor is not free; it contains a copy of the Burnside Mackey functor generated by the element \(1\) at the top level, and the quotient by this subfunctor has \(q\)-torsion.
**Example 3.10**.: Let \(G=C_{9}\). Below we present the Mackey functors \(\underline{\operatorname{cok}}_{3}^{\wedge}\{1\}\) and \(\underline{\operatorname{cok}}_{3}^{\wedge}\{2\}\), which are the cokernels of \(\psi^{\ell}-1\) on \(\underline{RU}_{3}^{\wedge}\{\beta\}\) and \(\underline{RU}_{3}^{\wedge}\{\beta^{2}\}\), respectively.
The top level of \(\underline{\operatorname{cok}}_{3}^{\wedge}\{1\}\) is \(\mathbb{Z}/3\{x^{3}\}\oplus\mathbb{Z}/9\{x\}\), and the top level of \(\underline{\operatorname{cok}}_{3}^{\wedge}\{2\}\) is \(\mathbb{Z}/3\{1,x^{3}\}\oplus\mathbb{Z}/9\{x\}\); the lower levels are the corresponding cokernels for \(C_{3}\) and the trivial group (Lewis diagrams omitted).
If \(H\) is not cyclic, there exists a surjection \(\theta\colon H\to C_{q}\times C_{q}\). In Lemma 3.14, we will show that \(q\in RU(C_{q}\times C_{q})\) lies in the image of transfers from proper subgroups. A double coset formula yields a commuting diagram (omitted) relating \(\theta^{*}\) with these transfers.
Since \(\theta^{*}\) is a ring homomorphism, \(\theta^{*}(q)=q\). This shows that \(q\in RU(H)\) lies in the image of transfers from proper subgroups. Since \(q\) becomes a unit after \(p\)-completion, \(V_{H}(\underline{RU^{\wedge}_{p}})\) is the quotient of \(\underline{RU^{\wedge}_{p}}(H)\) by the unit ideal and therefore vanishes. The rational version of this statement appears, for example, in [T, Section 9].
We have seen that
\[V_{H}(\underline{RU^{\wedge}_{p}})=\begin{cases}\mathbb{Z}^{\wedge}_{p}[x]/ \phi_{q^{k}}(x)&\text{$H$ cyclic and $|H|=q^{k}$},\\ 0&\text{otherwise},\end{cases}\]
where \(\phi_{q^{k}}(x)\) is the \(q^{k}\)-th cyclotomic polynomial. Hence, it remains to determine the cokernel of
\[\mathbb{Z}^{\wedge}_{p}[x]/\phi_{q^{k}}(x)\xrightarrow{\psi^{\ell}-1}\mathbb{ Z}^{\wedge}_{p}[x]/\phi_{q^{k}}(x) \tag{3.13}\]
We may write \(\mathbb{Z}^{\wedge}_{p}[x]/\phi_{q^{k}}(x)\cong\mathbb{Z}^{\wedge}_{p}\{x,x^{ 2},x^{3},\dots,x^{(q-1)q^{k-1}}\}\). The Adams operation \(\psi^{\ell}\) cyclically permutes the \(q-1\) powers of \(x^{q^{k-1}}\) in this basis. Thus we may decompose
\[\mathbb{Z}^{\wedge}_{p}[x]/\phi_{q^{k}}(x)\cong A\oplus B\]
as a \(\mathbb{Z}^{\wedge}_{p}[\psi^{\ell}]\)-module, where
\[A=\mathbb{Z}^{\wedge}_{p}\{x^{iq^{k-1}}\ |\ 1\leq i\leq q-1\}\]
and
\[B=\mathbb{Z}^{\wedge}_{p}\{x^{n}\ |\ q^{k-1}\text{ does not divide }n,1\leq n<(q-1)q^{k-1}\}.\]
It then follows that the cokernel of \(\psi^{\ell}-1\) on \(A\) is \(\mathbb{Z}^{\wedge}_{p}\). We claim that on \(B\) the cokernel vanishes.
Primitivity of \(\ell\) ensures that in \(RU(C_{q^{k}})\cong\mathbb{Z}[x]/(x^{q^{k}}-1)\), two monomials \(x^{n_{1}}\) and \(x^{n_{2}}\) are in the same \(\psi^{\ell}\)-orbit if and only if \(n_{1}\) and \(n_{2}\) have the same \(q\)-adic valuation, where \(n_{1}\) and \(n_{2}\) are both assumed to be less than \(q^{k}\). It then follows that in the cokernel of \(\psi^{\ell}-1\) on \(RU(C_{q^{k}})\), the polynomial \(\phi_{q^{k}}(x)\cdot x^{n}\) is equivalent to \(q\cdot x^{n}\), so long as \(n\) is not divisible by \(q^{k-1}\). Thus in the cokernel of \(\psi^{\ell}-1\) on the quotient ring \(\mathbb{Z}[x]/\phi_{q^{k}}(x)\), there is a relation \(q\cdot x^{n}=0\) when \(n\) is not divisible by \(q^{k-1}\). In particular, after completing at \(p\), which is different from \(q\), it follows that \(x^{n}\) vanishes in the cokernel of \(\psi^{\ell}-1\) on \(\mathbb{Z}[x]/\phi_{q^{k}}(x)\).
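Indeed, the relation \(\phi_{q^{k}}(x)\cdot x^{n}\equiv q\cdot x^{n}\) comes from writing out the cyclotomic polynomial in \(\mathbb{Z}[x]/(x^{q^{k}}-1)\):
\[\phi_{q^{k}}(x)\cdot x^{n}=\sum_{i=0}^{q-1}x^{\,n+iq^{k-1}},\]
and when \(q^{k-1}\nmid n\) every exponent \(n+iq^{k-1}\) has the same \(q\)-adic valuation as \(n\), so each of the \(q\) summands is identified with \(x^{n}\) in the cokernel of \(\psi^{\ell}-1\).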
Hence, \(\underline{\operatorname{cok}}^{\wedge}_{p}\) corresponds to \(\mathbb{Z}^{\wedge}_{p}\) supported on the cyclic subgroups, and under the equivalence of categories, this is the same as \((\underline{RQ})^{\wedge}_{p}\). Since the equivalence (3.12) preserves and creates cokernels, we are done.
**Lemma 3.14**.: _The ideal of \(RU(C_{q}\times C_{q})\) generated by transfers from proper subgroups contains \(q\)._
Proof.: Let \(H=C_{q}\times C_{q}\). The representation ring of \(H\) is isomorphic as a commutative ring to \(\mathbb{Z}[x,y]/(x^{q}-1,y^{q}-1)\), where \(x\) and \(y\) are the classes of rotation representations of the left and right factors, respectively. If \(K\) is the subgroup of \(H\) generated by an element \((\gamma^{i},\gamma^{j})\) with \(i\neq 0\), then
\[\operatorname{tr}_{K}^{H}(1)=\sum_{k=0}^{q-1}(x^{i}y^{j})^{k}.\]
We claim that
\[q=\sum_{K\leq H}\operatorname{tr}_{K}^{H}(1)-\operatorname{tr}_{L}^{H}(1) \cdot\operatorname{tr}_{R}^{H}(1),\]
where \(L\) is the subgroup generated by \((\gamma,e)\) and \(R\) is the subgroup generated by \((e,\gamma)\). This is a calculation. Recall that \(H\) has \(q+1\) distinct subgroups of order \(q\): the subgroup \(R\) generated by \((e,\gamma)\), and subgroups generated by elements \((\gamma,\gamma^{j})\) for \(j=0,1,\ldots,q-1\).
\[\sum_{K\leq H}\operatorname{tr}_{K}^{H}(1) -\operatorname{tr}_{L}^{H}(1)\cdot\operatorname{tr}_{R}^{H}(1)\] \[=\sum_{j=0}^{q-1}\operatorname{tr}_{(\langle\gamma,\gamma^{j} \rangle)}^{H}(1)+\operatorname{tr}_{R}^{H}(1)-\operatorname{tr}_{L}^{H}(1) \cdot\operatorname{tr}_{R}^{H}(1)\] \[=\sum_{j=0}^{q-1}\operatorname{tr}_{(\langle\gamma,\gamma^{j} \rangle)}^{H}(1)+\operatorname{tr}_{R}^{H}(1)\cdot(1-\operatorname{tr}_{L}^{H} (1))\] \[=\sum_{j=0}^{q-1}\left(1+xy^{j}+x^{2}y^{2j}+\ldots+x^{q-1}y^{j(q-1) }\right)+\] \[\qquad\quad\left(1+y+y^{2}+\ldots+y^{q-1}\right)\left(-x-x^{2}- \ldots-x^{q-1}\right)\] \[=q+\sum_{j=0}^{q-1}\left(xy^{j}+x^{2}y^{2j}+\ldots+x^{q-1}y^{j(q-1 )}\right)\] \[\qquad\quad-\sum_{k=0}^{q-1}\left(xy^{k}+x^{2}y^{k}+\ldots+x^{q-1 }y^{k}\right)\] \[=q.\]
The last equality follows by a reindexing, recalling that these equations live in the ring \(\mathbb{Z}[x,y]/(x^{q}-1,y^{q}-1)\).
**Example 3.15**.: Let \(H=C_{3}\times C_{3}\), and let \(L=\langle(\gamma,e)\rangle\), \(C=\langle(\gamma,\gamma)\rangle\), \(D=\langle(\gamma,\gamma^{2})\rangle\), and \(R=\langle(e,\gamma)\rangle\) be its four subgroups of order \(3\). Consider the representation ring Green functor for \(C_{3}\times C_{3}\). In this Green functor, we have:
\[\operatorname{tr}_{L}^{H}(1) =1+x+x^{2}\] \[\operatorname{tr}_{C}^{H}(1) =1+xy+x^{2}y^{2}\] \[\operatorname{tr}_{D}^{H}(1) =1+x^{2}y+xy^{2}\] \[\operatorname{tr}_{R}^{H}(1) =1+y+y^{2}\]
We can directly see that 3 is contained in the ideal of \(RU(H)\) generated by images of transfers from proper subgroups of \(H\):
\[\sum_{K\leq H}\operatorname{tr}_{K}^{H}(1)-\operatorname{tr}_{L}^{H}(1)\cdot\operatorname{tr}_{R}^{H}(1)\] \[=\big(\operatorname{tr}_{L}^{H}(1)+\operatorname{tr}_{R}^{H}(1)+\operatorname{tr}_{C}^{H}(1)+\operatorname{tr}_{D}^{H}(1)\big)-\operatorname{tr}_{L}^{H}(1)\cdot\operatorname{tr}_{R}^{H}(1)\] \[=\big(1+x+x^{2}\big)+\big(1+y+y^{2}\big)+\big(1+xy+x^{2}y^{2}\big)+\big(1+x^{2}y+xy^{2}\big)\] \[\qquad\qquad-\big(1+x+x^{2}\big)\,\big(1+y+y^{2}\big)\] \[=\big(4+x+y+x^{2}+xy+y^{2}+x^{2}y+xy^{2}+x^{2}y^{2}\big)\] \[\qquad\qquad-\big(1+x+x^{2}+y+xy+x^{2}y+y^{2}+xy^{2}+x^{2}y^{2}\big)\] \[=3.\]
## 4. The homotopy Mackey functors of \(L_{KU_{G}}\mathbb{S}_{G}\)
Our strategy for understanding \(L_{KU_{G}}\mathbb{S}_{G}\) is to use the fracture square (2.1). We begin by describing the homotopy Mackey functors of the local factors \(L_{KU_{G}/p}\mathbb{S}_{G}\), both in the case \(p=q\) and \(p\neq q\), using the work of Section 3. With the local computations in hand, we then use the long exact sequence
\[\cdots\to\mathbb{Q}\otimes\prod_{p}\underline{\pi}_{n+1}L_{KU_{G}/p}\mathbb{S}_{G}\to\underline{\pi}_{n}L_{KU_{G}}\mathbb{S}_{G}\to\underline{\pi}_{n}\big(\mathbb{Q}\otimes L_{KU_{G}}\mathbb{S}_{G}\big)\times\prod_{p}\underline{\pi}_{n}L_{KU_{G}/p}\mathbb{S}_{G}\to\cdots \tag{4.1}\]
arising from the fracture square (2.1) to obtain the homotopy Mackey functors \(\underline{\pi}_{n}L_{KU_{G}}\mathbb{S}_{G}\). We will use the fact that the rationalization \(\mathbb{Q}\otimes L_{KU_{G}}\mathbb{S}_{G}\) is the Eilenberg-Mac Lane spectrum for the rational Mackey functor \(\mathbb{Q}\otimes\underline{R}\mathbb{Q}\) [BGS, Lemma 9.1].
### Local computations for \(p=q\)
In this section, we compute the homotopy Mackey functors for \(L_{KU_{G}/q}\mathbb{S}_{G}\). The key tool is the following.
**Proposition 4.2** ([Bgs, Propositions 5.3, 6.3]).: _If \(\ell\) is primitive modulo \(|G|\), then the Adams operation \(\psi^{\ell}\colon(KU_{G})_{q}^{\wedge}\to(KU_{G})_{q}^{\wedge}\) is a well-defined map of \(G\)-spectra that participates in a fiber sequence_
\[L_{KU_{G}/q}\mathbb{S}_{G}\longrightarrow(KU_{G})_{q}^{\wedge}\xrightarrow{ \psi^{\ell}-1}(KU_{G})_{q}^{\wedge}. \tag{4.3}\]
Note that the fiber is independent of \(\ell\) in the fiber sequence above.
**Remark 4.4**.: The original Proposition 5.3 in [BGS] contains the assumption that \(\ell\) is primitive mod \(|G|\), but by [HiKo, Corollary 2.5], in order to show that \(\psi^{\ell}\) extends to a map of \(G\)-spectra, it suffices to assume that \(\ell\) is coprime to \(|G|=q^{k}\). On the other hand, in order to identify the fiber as the \(KU_{G}/q\)-local equivariant sphere, the additional primitivity assumption is required.
Since \(\underline{\pi}_{*}KU_{G}\cong\underline{RU}[\beta,\beta^{-1}]\) with \(\beta\) in degree 2, the long exact sequence of homotopy Mackey functors associated to the fiber sequence (4.3) splits into four-term exact sequences:
\[0\to\underline{\pi}_{2d}L_{KU_{G}/q}\mathbb{S}_{G}\to\underline{RU}^{\wedge}_{q}\{\beta^{d}\}\xrightarrow{\psi^{\ell}-1}\underline{RU}^{\wedge}_{q}\{\beta^{d}\}\to\underline{\pi}_{2d-1}L_{KU_{G}/q}\mathbb{S}_{G}\to 0.\]
Thus the homotopy Mackey functors of \(L_{KU_{G}/q}\mathbb{S}_{G}\) follow from the work of Section 3.
Proposition 3.3 immediately implies the following.
**Corollary 4.5**.: _For \(d\neq 0\), \(\underline{\pi}_{2d}L_{KU_{G}/q}\mathbb{S}_{G}=0\)._
The \(d=0\) case was previously computed:
**Proposition 4.6** ([BGS, Proposition 6.8]).: _\(\underline{\pi}_{0}L_{KU_{G}/q}\mathbb{S}_{G}\cong(\underline{R\mathbb{Q}})^{\wedge}_{q}\)_.
**Corollary 4.7**.: _The Mackey functor \(\underline{\pi}_{2d-1}L_{KU_{G}/q}\mathbb{S}_{G}\) is_
\[\underline{\pi}_{2d-1}L_{KU_{G}/q}\mathbb{S}_{G}\cong\underline{\mathrm{cok}^{ \wedge}_{q}}\{d\}=\mathrm{coker}\Big{(}\underline{R}\underline{U}^{\wedge}_{q} \{\beta^{d}\}\xrightarrow{\psi^{\ell}-1}\underline{R}\underline{U}^{\wedge}_{q }\{\beta^{d}\}\Big{)}.\]
This cokernel was computed in Proposition 3.7.
**Example 4.8**.: The Mackey functors \(\underline{\mathrm{cok}^{\wedge}_{3}}\{1\}\) and \(\underline{\mathrm{cok}^{\wedge}_{3}}\{2\}\) were computed for \(G=C_{9}\) in Example 3.10. According to Corollary 4.7, these agree with the homotopy Mackey functors \(\underline{\pi}_{1}L_{KU_{C_{9}}/3}\mathbb{S}_{C_{9}}\) and \(\underline{\pi}_{3}L_{KU_{C_{9}}/3}\mathbb{S}_{C_{9}}\).
### Local computations for \(p\neq q\)
Let \(p\) be a prime that does not divide \(|G|=q^{k}\). The calculation of the nonzero \(p\)-local homotopy groups of \(L_{KU_{G}}\mathbb{S}_{G}\) was done in [BGS]. For an odd prime \(p\), recall the homotopy groups of \(L_{KU/p}\mathbb{S}\) as originally calculated by Adams-Baird [Ad] and Ravenel [Ra] and described more recently in [Z, Equation 2.3.8]:
\[\pi_{n}L_{KU/p}\mathbb{S}\cong\begin{cases}\mathbb{Z}^{\wedge}_{p}&\text{if $n \in\{0,-1\}$,}\\ \mathbb{Z}/p^{\nu_{p}(k)+1}&\text{if $n=2k-1$ and $(p-1)\mid k$,}\\ 0&\text{otherwise.}\end{cases}\]
For \(p=2\), the homotopy groups of \(L_{KU/2}\mathbb{S}\) are in [Z, Equation 2.3.13]:
\[\pi_{n}L_{KU/2}\mathbb{S}\cong\begin{cases}\mathbb{Z}^{\wedge}_{2}\oplus \mathbb{Z}/2&\text{if $n=0$,}\\ \mathbb{Z}^{\wedge}_{2}&\text{if $n=-1$,}\\ \mathbb{Z}/2\oplus\mathbb{Z}/2&\text{if $n\equiv 1\pmod{8}$,}\\ \mathbb{Z}/2&\text{if $n\equiv 0,2\pmod{8}$, and $n\neq 0$,}\\ \mathbb{Z}/2^{\nu_{2}(k)+3}&\text{if $n=4k-1$ and $n\neq-1$,}\\ 0&\text{otherwise.}\end{cases}\]
**Proposition 4.9** ([BGS, Proposition 8.5]).: _Let \(p\neq q\). There is an isomorphism of graded Green functors_
\[\underline{\pi}_{*}L_{KU_{G}/p}\mathbb{S}_{G}\cong\underline{R}\underline{Q} \otimes\pi_{*}L_{KU/p}\mathbb{S}.\]
The above is a complete description of the \(p\)-complete homotopy Mackey functors of \(L_{KU_{G}}\mathbb{S}_{G}\), but Proposition 3.11 then gives the following description in the case \(n=-1\):
**Corollary 4.10**.: _For \(p\neq q\), we have \(\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\cong\underline{R}\underline{Q} \otimes\mathbb{Z}^{\wedge}_{p}\cong\underline{\mathrm{cok}^{\wedge}_{p}}\)._
### Local to global reassembly
Here we use the work of Section 4.1 and Section 4.2, in combination with the long exact sequence (4.1), to deduce the homotopy Mackey functors \(\underline{\pi}_{n}L_{KU_{G}}\mathbb{S}_{G}\). The case \(n=0\) was the focus of [BGS]. The cases \(n=-1\) and \(n=-2\) behave quite differently from the rest, so we begin by considering the cases of \(n\) different from \(0\), \(-1\), or \(-2\).
**Proposition 4.11**.: _Let \(n=2k\) be different from \(0\) and \(-2\). Then_
\[\underline{\pi}_{2k}L_{KU_{G}}\mathbb{S}_{G}\cong\underline{R}\underline{Q} \otimes\pi_{2k}L_{KU}\mathbb{S}\cong\underline{R}\underline{Q}\otimes\mathbb{Z }/2\]
_for \(2k\equiv 0,2\pmod{8}\). This Mackey functor vanishes otherwise._
Proof.: Fix \(2k\) different from \(0\) and \(-2\). By Corollary 4.5 and Proposition 4.9, for any odd prime \(p\) (including \(p=q\)) the Mackey functor \(\underline{\pi}_{2k}\left(L_{KU_{G}/p}\mathbb{S}_{G}\right)\) vanishes. In the case of \(p=2\), we have
\[\underline{\pi}_{2k}\left(L_{KU_{G}/2}\mathbb{S}_{G}\right)\cong\begin{cases} \underline{R}\underline{\mathbb{Q}}\otimes\mathbb{Z}/2&2k\equiv 0,2\pmod{8},\\ 0&\text{else}.\end{cases}\]
Similarly, we find that \(\underline{\pi}_{2k+1}L_{KU_{G}/p}\mathbb{S}_{G}\) is nonzero (and levelwise finite) only for finitely many primes \(p\). It follows that \(\mathbb{Q}\otimes\prod_{p}\underline{\pi}_{2k+1}L_{KU_{G}/p}\mathbb{S}_{G}\) vanishes. The result now follows from (4.1).
In the case of \(n\) odd and different from \(-1\), the answer is stated in terms of the cokernel of \(\psi^{\ell}-1\), where as usual \(\ell\) is primitive modulo the order of \(G\).
**Proposition 4.12**.: _Let \(2k-1\neq-1\). Then_
\[\underline{\pi}_{2k-1}L_{KU_{G}}\mathbb{S}_{G}\cong\underline{R}\underline{ \mathbb{Q}}\otimes\pi_{2k-1}L_{KU}\mathbb{S}\big{[}\tfrac{1}{q}\big{]}\oplus \underline{\mathrm{cok}}_{q}^{\wedge}\{k\}.\]
Proof.: According to Section 4.2, the homotopy Mackey functors of \(L_{KU_{G}/p}\mathbb{S}_{G}\) are levelwise finite in degrees \(2k\) and \(2k-1\) for \(p\neq q\). Corollary 4.5 gives that \(\underline{\pi}_{2k}L_{KU_{G}/q}\mathbb{S}_{G}\) vanishes, while Corollary 4.7 identifies \(\underline{\pi}_{2k-1}L_{KU_{G}/q}\mathbb{S}_{G}\) with \(\underline{\operatorname{cok}}_{q}^{\wedge}\{k\}\), which is levelwise finite by Proposition 3.7. The rationalized terms in the long exact sequence (4.1) therefore vanish in these degrees, so \(\underline{\pi}_{2k-1}L_{KU_{G}}\mathbb{S}_{G}\) is the product of the local contributions, which is the displayed sum.
We now turn our attention to the case \(n=-1\).
**Proposition 4.13**.: \(\underline{\pi}_{-1}L_{KU_{G}}\mathbb{S}_{G}=0\)_._
Proof.: By Proposition 3.7(b) and Corollary 4.7, the Mackey functor \(\underline{\pi}_{-1}L_{KU_{G}/q}\mathbb{S}_{G}\) is torsion-free. The same is true of \(\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\) for \(p\neq q\) by Section 4.2. It follows that the map
\[\prod_{p}\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\longrightarrow\mathbb{ Q}\otimes\left(\prod_{p}\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\right)\]
is injective. The long exact sequence (4.1) then shows that \(\underline{\pi}_{-1}L_{KU_{G}}\mathbb{S}_{G}\) is the cokernel of
\[\mathbb{Q}\otimes\underline{R}\underline{\mathbb{Q}}\times\prod_{p}\underline {\pi}_{0}L_{KU_{G}/p}\mathbb{S}_{G}\longrightarrow\mathbb{Q}\otimes\left( \prod_{p}\underline{\pi}_{0}L_{KU_{G}/p}\mathbb{S}_{G}\right),\]
which may be rewritten as
\[\mathbb{Q}\otimes\underline{R}\underline{\mathbb{Q}}\oplus\mathbb{Z}/2\otimes \underline{R}\underline{\mathbb{Q}}\oplus\prod_{p}(\underline{R}\underline{ \mathbb{Q}})_{p}^{\wedge}\longrightarrow\mathbb{Q}\otimes\left(\prod_{p}( \underline{R}\underline{\mathbb{Q}})_{p}^{\wedge}\right).\]
It suffices to show that this is levelwise surjective. As the values of the Mackey functor \(\underline{R}\underline{\mathbb{Q}}\) are all free abelian groups of finite rank, the result follows from Lemma 4.14.
**Lemma 4.14**.: _Let \(B\) be a free abelian group of finite rank. Then the map_
\[f\colon(\mathbb{Q}\otimes B)\oplus\prod_{p}B_{p}^{\wedge}\longrightarrow \mathbb{Q}\otimes\left(\prod_{p}B_{p}^{\wedge}\right)\]
_defined by_
\[f\left(\frac{b_{0}}{n},(b_{p})\right)=\frac{1}{n}(b_{0}-nb_{p})\]
_is surjective._
Proof.: Left to the reader.
Finally, we deal with the case \(n=-2\).
**Proposition 4.15**.: \(\underline{\pi}_{-2}L_{KU_{G}}\mathbb{S}_{G}\cong\mathbb{Q}/\mathbb{Z}\otimes \underline{\operatorname{cok}}\)_._
Proof.: By Corollary 4.5 and Section 4.2, the Mackey functors \(\underline{\pi}_{-2}L_{KU_{G}/p}\mathbb{S}_{G}\) vanish for all primes \(p\). It follows from the long exact sequence (4.1) that \(\underline{\pi}_{-2}L_{KU_{G}}\mathbb{S}_{G}\) is the cokernel of the rationalization map
\[\prod_{p}\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\longrightarrow\mathbb{ Q}\otimes\left(\prod_{p}\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\right).\]
In other words, we have that
\[\underline{\pi}_{-2}L_{KU_{G}}\mathbb{S}_{G}\cong\mathbb{Q}/\mathbb{Z}\otimes \left(\prod_{p}\underline{\pi}_{-1}L_{KU_{G}/p}\mathbb{S}_{G}\right).\]
By Corollary 4.7 and Corollary 4.10, this may be rewritten as
\[\underline{\pi}_{-2}L_{KU_{G}}\mathbb{S}_{G}\cong\mathbb{Q}/\mathbb{Z}\otimes \left(\prod_{p}\underline{\operatorname{cok}}_{p}^{\wedge}\right).\]
Each Mackey functor \(\underline{\operatorname{cok}}_{p}^{\wedge}\) is (levelwise) \(p\)-local, so that according to Lemma 4.16 we have an isomorphism
\[\mathbb{Q}/\mathbb{Z}\otimes\left(\prod_{p}\underline{ \operatorname{cok}}_{p}^{\wedge}\right) \cong\bigoplus_{p}\left(\mathbb{Q}_{p}/\mathbb{Z}_{p}\otimes \underline{\operatorname{cok}}_{p}^{\wedge}\right)\cong\bigoplus_{p}\left( \mathbb{Q}_{p}/\mathbb{Z}_{p}\otimes\underline{\operatorname{cok}}\right)\] \[\cong\left(\bigoplus_{p}\mathbb{Q}_{p}/\mathbb{Z}_{p}\right) \otimes\underline{\operatorname{cok}}\cong\mathbb{Q}/\mathbb{Z}\otimes \underline{\operatorname{cok}}.\]
**Lemma 4.16**.: _Suppose for each prime \(p\), \(A_{p}\) is an abelian group such that all primes different than \(p\) act invertibly on \(A_{p}\). Then_
\[\mathbb{Q}/\mathbb{Z}\otimes\left(\prod_{p}A_{p}\right)\cong\bigoplus_{p}\left( \mathbb{Q}_{p}/\mathbb{Z}_{p}\otimes A_{p}\right).\]
Proof.: This follows from the decomposition of \(\mathbb{Q}/\mathbb{Z}\) as \(\bigoplus_{r}\mathbb{Q}_{r}/\mathbb{Z}_{r}\) as \(r\) runs over primes, the expression of \(\mathbb{Q}_{r}/\mathbb{Z}_{r}\) as \(\operatorname{colim}_{k}\mathbb{Z}/r^{k}\), and the fact that tensor product commutes with colimits.
|
2308.10714 | CXL Memory as Persistent Memory for Disaggregated HPC: A Practical
Approach | In the landscape of High-Performance Computing (HPC), the quest for efficient
and scalable memory solutions remains paramount. The advent of Compute Express
Link (CXL) introduces a promising avenue with its potential to function as a
Persistent Memory (PMem) solution in the context of disaggregated HPC systems.
This paper presents a comprehensive exploration of CXL memory's viability as a
candidate for PMem, supported by physical experiments conducted on cutting-edge
multi-NUMA nodes equipped with CXL-attached memory prototypes. Our study not
only benchmarks the performance of CXL memory but also illustrates the seamless
transition from traditional PMem programming models to CXL, reinforcing its
practicality.
To substantiate our claims, we establish a tangible CXL prototype using an
FPGA card embodying CXL 1.1/2.0 compliant endpoint designs (Intel FPGA CXL IP).
Performance evaluations, executed through the STREAM and STREAM-PMem
benchmarks, showcase CXL memory's ability to mirror PMem characteristics in
App-Direct and Memory Mode while achieving impressive bandwidth metrics with
Intel 4th generation Xeon (Sapphire Rapids) processors.
The results elucidate the feasibility of CXL memory as a persistent memory
solution, outperforming previously established benchmarks. In contrast to
published DCPMM results, our CXL-DDR4 memory module offers comparable bandwidth
to local DDR4 memory configurations, albeit with a moderate decrease in
performance. The modified STREAM-PMem application underscores the ease of
transitioning programming models from PMem to CXL, thus underscoring the
practicality of adopting CXL memory. | Yehonatan Fridman, Suprasad Mutalik Desai, Navneet Singh, Thomas Willhalm, Gal Oren | 2023-08-21T13:27:27Z | http://arxiv.org/abs/2308.10714v1 | # CXL Memory as Persistent Memory for Disaggregated HPC: A Practical Approach
###### Abstract.
In the landscape of High-Performance Computing (HPC), the quest for efficient and scalable memory solutions remains paramount. The advent of Compute Express Link (CXL) introduces a promising avenue with its potential to function as a Persistent Memory (PMem) solution in the context of disaggregated HPC systems. This paper presents a comprehensive exploration of CXL memory's viability as a candidate for PMem, supported by physical experiments conducted on cutting-edge multi-NUMA nodes equipped with CXL-attached memory prototypes. Our study not only benchmarks the performance of CXL memory but also illustrates the seamless transition from traditional PMem programming models to CXL, reinforcing its practicality.
To substantiate our claims, we establish a tangible CXL prototype using an FPGA card embodying CXL 1.1/2.0 compliant endpoint designs (Intel FPGA CXL IP). Performance evaluations, executed through the STREAM and STREAM-PMem benchmarks, showcase CXL memory's ability to mirror PMem characteristics in _App-Direct_ and _Memory Mode_ while achieving impressive bandwidth metrics with Intel 4th generation Xeon (Sapphire Rapids) processors.
The results elucidate the feasibility of CXL memory as a persistent memory solution, outperforming previously established benchmarks. In contrast to published DCPMM results, our CXL-DDR4 memory module offers comparable bandwidth to local DDR4 memory configurations, albeit with a moderate decrease in performance. The modified STREAM-PMem application underscores the ease of transitioning programming models from PMem to CXL, thus underscoring the practicality of adopting CXL memory.
The sources of this work are available at: [https://github.com/Scientific-Computing-Lab-NRCN/STREAMer](https://github.com/Scientific-Computing-Lab-NRCN/STREAMer).
CXL, Memory disaggregation, Persistent Memory (PMem), Intel Optane DCPMM, HPC, STREAM, STREAM-PMem, STREAMer
As a result, the data transfer rate between the processor and board-mounted memory becomes a bottleneck, hindering the overall performance of the system (Han et al., 2019).
To increase memory capacity beyond a single node, advanced communication technologies such as Remote Direct Memory Access (RDMA)-based Message Passing Interface (MPI) optimize inter-node communication (Shi et al., 2017). However, these sophisticated frameworks are not devoid of challenges (Kirshman et al., 2017): MPI, a cornerstone for distributed computing communication, contends with latency and overhead issues during message transmission, disproportionately affecting efficiency for applications requiring frequent communication. Furthermore, management complexity escalates with the cluster's scale due to heightened contention for network resources among a larger node count (Beng et al., 2017).
### Persistent Memory in HPC
A proposed solution aimed at bridging the gap between memory and storage is Persistent Memory (PMem) (Shi et al., 2017; Shi et al., 2017). PMem implementations such as BBU (battery backed up) DIMM or Non-Volatile RAM (NVRAM) aim to deliver rapid byte-addressable data access alongside persistent data retention across power cycles. PMem technologies establish a new tier within the memory-storage hierarchy by combining memory and storage characteristics (Shi et al., 2017; Shi et al., 2017). Basic solutions include battery-backed DRAM and have been accessible from diverse vendors over a significant timeframe, representing an established concept (Shi et al., 2017; Shi et al., 2017; Shi et al., 2017). However, these solutions face challenges due to limited scalability and potential data loss risks. The reliance on batteries introduces concerns regarding power failures, leading to potential data corruption or loss if batteries deplete. Moreover, the approach's scalability is hampered by the need for individual batteries for each module, impacting cost-effectiveness and overall system performance.
Yet, in recent years new PMem technologies have emerged, with 3D-Xpoint (Kirshman et al., 2017) being the main technology and Intel Optane DCPMM (Kirshman et al., 2017; Shi et al., 2017) the prominent product on the market. These modern PMem technologies offer byte-addressable memory in larger capacities compared to DRAM while maintaining comparable access times (Shi et al., 2017). Moreover, as these technologies are non-volatile in nature, they enable data retrieval even in instances of power failures. Moreover, PMem offers two configuration options based on these characteristics: (1) It can be utilized as main memory expansion, providing additional volatile memory, and (2) it can serve as a persistent memory pool that can be accessed by applications via a PMem-aware file system (Shi et al., 2017) or be managed and accessed directly by applications (Shi et al., 2017). To simplify and streamline PMem programming and management, the Persistent Memory Development Kit (PMDK) was created (Shi et al., 2017).
During recent years, PMem has gained significant traction in HPC applications (Kirshman et al., 2017; Shi et al., 2017; Shi et al., 2017), with two direct use cases of PMem for scientific applications that require no (or minimal) changes to applications. The first use-case involves PMem as memory expansion to support the execution of large scientific problems (Shi et al., 2017). The second use case involves leveraging PMem as a fast storage device accessed by a PMem-aware file system (mainly based on the POSIX API), primarily for application diagnostics and checkpoint restart (C/R) mechanisms (Shi et al., 2017), but also for increasing the performance and inherent fault tolerance of scientific applications (Kirshman et al., 2017).
In addition to the direct use cases of PMem in scientific applications, various frameworks and algorithms were developed to access and manage data structures on PMem (Beng et al., 2017). Among these are primary methods that are built on top of the PMDK library (Kirshman et al., 2017; Shi et al., 2017). For example, persistent memory object storage frameworks such as MOSIQS (Shi et al., 2017) and the NVM-ESR recovery model for exact state reconstruction of linear iterative solvers using PMem (Kirshman et al., 2017).
Nevertheless, as HPC workloads advance, computing units evolve, and onboard processing elements increase, the demand for heightened memory bandwidth becomes essential (Shi et al., 2017). Existing PMem solutions demonstrate notable shortcomings in meeting these requirements, showing limitations in scalability beyond a certain threshold (Kirshman et al., 2017). Specifically, PMem devices exhibit limited bandwidth. For instance, the bandwidth of Optane DCPMM for reading and writing is several times lower than that of DRAM (Shi et al., 2017). This, in part, is connected with the hybrid, in-between properties of a PMem module (Kirshman et al., 2017), as schematically described in Table 1.
Adding to these challenges, a significant limitation arises from the physical attachment of most PMem devices, like Optane DCPMM, to the CPU board through memory DIMMs. This configuration restricts the potential for memory expansion, as PMem contends for DIMM slots alongside conventional DRAM cards, presenting a bottleneck to achieving optimal memory configurations (Shi et al., 2017; Shi et al., 2017). The HPC community as a whole -- both the super and cloud computing (Shi et al., 2017) -- recognizes the drawbacks associated with tight integrating memory and compute resources, particularly in relation to capacity, bandwidth, elasticity, and overall system utilization (Han et al., 2019; Shi et al., 2017). PMem technologies that are tightly coupled with the CPU inherit these limitations. Now, as prominent PMem technologies are phased out (Optane DCPMM, for example, as announced in 2022 (Kirshman et al., 2017; Shi et al., 2017)), there is an active and prominent pursuit for the adoption of novel memory solutions in particular, and a strive to achieve more disaggregated computing in general (Shi et al., 2017).
### Disaggregated Memory with CXL
The emergence of discrete memory nodes housing DRAM and network interface controllers (NICs) is anticipated to revolutionize conventional memory paradigms, facilitating distributed and shared memory access and reshaping HPC landscapes (Han et al., 2019). This shift aligns with the concept of disaggregation, where compute resources and memory units are decoupled for optimized resource utilization, scalability, and adaptability.
The concept of memory disaggregation has been facilitated recently by the development of advanced interconnect technologies, exemplified by Compute Express Link (CXL) (Shi et al., 2017). CXL is an open standard to support cache-coherent interconnects between a variety of devices (Shi et al., 2017). After its introduction in 2019, the standard has evolved and continues to be enhanced. CXL 1.1 defines the protocol for three major device types (Shi et al., 2017): accelerators with cache only (type 1), cache with attached memory (type 2), and memory expansion (type 3). CXL 2.0 expands the specification - among other capabilities - to memory pools using CXL switches on a device level. CXL 3.0 introduces fabric capabilities and management,
improved memory sharing and pooling with dynamic capacity capability, enhanced coherency, and peer-to-peer communication. Bandwidth-wise, CXL 1.1 and 2.0 employ PCIe 5.0, achieving 32 GT/s for transfers up to 64 GB/s in each direction via a 16-lane link. On the other hand, CXL 3.0 utilizes PCIe 6.0, doubling the speed to 64 GT/s, supporting 128 GB/s bi-directional communication via an x16 link.
Since the market of CXL memory modules is emerging, several vendors have announced products using the CXL protocol. For example, Samsung (Samsung, 2018) and SK Hynix (Hynix, 2018) introduce CXL DDR5 modules, AsteraLabs (Aste et al., 2019) announced a CXL memory accelerator, and Montage Technology (Mentes et al., 2019) will offer a CXL memory expander controller.
Leveraging CXL, memory nodes will be interconnected through high-speed links, enabling adaptive memory provisioning to compute nodes in real time (Samsung, 2018). The practice of intra-rack disaggregation holds the potential to effectively address the memory demands of applications while concurrently ensuring an adequate supply of efficient remote memory bandwidth (Samsung, 2018; Samsung, 2018). Figure 1 demonstrates the expected phase change from the processor's point of view, from previous years' DDR4+PMem memory access, equipped with NVMe SSDs via the PCIe Gen4, to the upcoming future of DDR5 local memory equipped with local or remote NVMe SSDs and CXL memory for memory expansion or persistency over the new generations of PCIe.
Nevertheless, while the concept of memory disaggregation with technologies like CXL holds significant promise, it is important to acknowledge that there are still challenges and considerations that need to be addressed (Beng et al., 2019; Chen et al., 2020); challenges and considerations that resemble the ones of persistent memory integration in HPC (Chen et al., 2020). For example, software and programming models need to evolve to take advantage of disaggregated memory fully; Applications and algorithms must be designed or adapted to work seamlessly across distributed memory nodes; and efficient data placement and movement strategies are crucial to minimize the impact of network latencies and ensure that data-intensive workloads can effectively utilize CXL-based disaggregated memory resources, especially when cache-coherence or direct access is enabled. Notwithstanding, when comparing CXL memory aspects to the ones of PMem as non-volatile RAM (NVRAM), in general, it can be observed (Table 2) that from the disaggregated HPC usage perspective, there should be a prevalence to CXL over NVRAM considering bandwidth, data transfer, and scalability, but also considering memory coherency, integration, pooling and sharing.
### Contribution
In this work, based on actual physical experiments with multi-NUMA nodes and multi-core high-performance SOTA hardware (subsection 2.1) and CXL-remote memory (subsection 2.2), we claim not only that CXL memory can fully exhibit most characteristics of persistent memory modules (as described in Table 1), but also that, in terms of performance, it can achieve much better bandwidth than previously published Optane DCPMM results (such as in (Samsung, 2018), which, for a single Optane DCPMM, reports a max read bandwidth of 6.6 GB/s and a max write bandwidth of 2.3 GB/s). In fact, we show (Figure 4) that by approaching our CXL-DDR4 memory module - much cheaper than DDR5 - we achieve results comparable to the local DDR4 module, with a bandwidth degradation of only about 60% in comparison to local DDR5 module access (noting that DDR4 has about 50% of the bandwidth of DDR5). Our tests were made in multiple configurations (subsection 3.2) in relation to the memory distance from the working threads, using the well-known STREAM benchmark (subsection 3.1).
In order to demonstrate the non-volatile properties of the memory as PMem, the CXL memory was located outside of the node, in an FPGA device (subsection 2.2), potentially backed by a battery, like previous battery-backed DIMMs. As many nodes can access the device, we no longer consider the battery backing a major overhead, since it is applied only once for the memory modules and not in each compute node.
Moreover, besides the cache-coherent performance benchmarks with STREAM (Samsung, 2018; Samsung, 2018), we retested the memory bandwidth in an equivalent of the _App-Direct_ approach with a modified STREAM application, named STREAM-PMem (Samsung, 2018), in which all of the main arrays are allocated as PMDK _pmemobj_ objects and manipulated accordingly (Samsung, 2018). _pmemobj_ provides an assurance that the state of objects will remain internally consistent regardless of when the program concludes. Additionally, it offers a _transaction_ function that can encompass various modifications made to persistent objects. This function ensures that either all of the modifications are successfully applied or none of them take effect.
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
**Property** & **As a main memory extension** & **As a direct access to persistent memory** \\ \hline
Volatility & Volatile in memory extension mode & Non-volatile in direct access mode \\ \hline
Access & Cache-coherent memory expansion & Transactional byte-addressable object store \\ \hline
Capacity & Higher than main memory volume & Lower than storage volume \\ \hline
Cost & Cheaper than the main memory & More expensive than storage \\ \hline
Performance & Several factors below main memory bandwidth & High bandwidth compared to storage \\ \hline
\end{tabular}
\end{table}
Table 1. Properties of PMem modules, either as a memory extension (_Memory Mode_) or as a direct-access PMem (_App-Direct_).
Figure 1. The migration from PMem as hardware to CXL memory as PMem in future systems.
We stress that as our CXL memory module is located outside of the node and can be backed by a battery, the ability to transactionally and directly access the memory, exactly as previously done with Optane DCPMM, while achieving even better performances, is a key to our practical approach, which consider CXL memory as a persistent memory for the future of disaggregated HPC.
Finally, we open-sourced the entire benchmarking methodology as an easy-to-use and automated tool named STREAMer for future CXL memory device evaluations for HPC purposes.
## 2. Physical Experimental Setup
### HPC hardware
Our HPC hardware experimental environment is based on 2 setups:
1. Node equipped with two Intel 4\({}^{th}\) generation Xeon (Sapphire Rapids) processors with a base frequency of 2.1GHz and 48 cores each, plus Hyper-Threading. BIOS was updated to support only 10 cores per socket. Each processor has one memory DIMM (64GB DDR5 4800MHz DIMM). The system is equipped with a CXL prototype device, implemented as DDR4 memory on a PCIe-attached FPGA (see Figure 2).
2. Node equipped with two Intel Xeon Gold 5215 processors with a base frequency of 2.5GHz and 10 cores each, plus Hyper-Threading. Each processor has 96GB DRAM in total across 6 channels (one 16GB DDR4 2666MHz DIMM per channel) (see Figure 3).
### CXL prototype
We provide an in-depth overview of our CXL prototype's implementation on an FPGA card (Hotz et al., 2017). Figure 2 and Figure 4 give a more detailed view into the implementation of our CXL memory pool on the FPGA card (while Figure 3 shows the reference system, without any CXL attachment, with DDR4 main memory). The prototype aims to harness the capabilities of the R-Tile Intel FPGA IP for CXL, encompassing critical functionalities for CXL link establishment and transaction layer management. This comprehensive solution facilitates the construction of FPGA-based CXL 1.1/2.0 compliant endpoint designs, including Type 1, Type 2, and Type 3 configurations. It is built upon a previously proven prototype, with slight modifications necessary for PMem activity (Peng et al., 2018).
The architecture of our CXL implementation revolves around a synergistic pairing of protocol Soft IP within the FPGA main fabric die and the Hard IP counterpart, the R-Tile. This cohesive arrangement ensures effective management of CXL link functions, which are pivotal for seamless communication. Specifically, the R-Tile interfaces with a CPU host via a PCIe Gen5x16 connection, delivering a theoretical bandwidth of up to 64GB/s. As a key facet of our implementation, the FPGA device is duly enumerated as a CXL endpoint within the host system.
Complementing this link management, the Soft IP assumes the mantle of transaction layer functions, vital for the successful execution of different CXL endpoint types. For Type 3 configurations, the CXL_mem transaction layer adeptly handles incoming CXL_mem requests originating from the CPU host. It orchestrates the generation of host-managed device memory (HDM) requests directed toward an HDM subsystem. Simultaneously, the CXL.io transaction layer undertakes the responsibility of processing CXL.io requests. These requests encompass both configuration and memory space inquiries initiated from the CPU host, seamlessly forwarding them to their designated control and status registers. A noteworthy augmentation is the User Streaming Interface, offering a conduit for custom CXL.io features that can be seamlessly integrated into the user design.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|}
\hline
**Aspect** & **CXL Memory** & **NVRAM** \\ \hline
Bandwidth \& Data Transfer & Significantly higher bandwidth enabling fast data transfers between processors and memory devices. & Non-volatile storage with potential data transfer rate limitations due to underlying interface and technology. \\ \hline
Memory Coherency & Provides memory-coherent links, ensuring consistent data across different memory tiers. & Requires additional mechanisms for memory coherency, except with local RAM, when integrated with other memory technologies. \\ \hline
Heterogeneous Memory Integration & Allows seamless integration of various memory technologies within a unified architecture. & Effective for extending memory capacity, but integration may require additional considerations due to unique characteristics. \\ \hline
Memory Pooling and Sharing & Facilitates memory pooling and sharing, enabling efficient resource utilization and dynamic allocation based on workload requirements. & Extends memory capacity, but inherent flexibility in memory sharing and pooling may be limited. \\ \hline
Industry Standardization & Open industry standard supported by major technology players, ensuring compatibility, interoperability, and broader adoption. & Solutions may vary, potentially leading to compatibility challenges and limited integration options. \\ \hline
Scalability & Architecture designed for scalability with multiple lanes and protocols, catering to evolving data center needs. & Scalability may be constrained by underlying technology characteristics, such as DIMM count and RAM/NVRAM tradeoff. \\ \hline
Relevance to HPC & Higher bandwidth, memory coherency, and memory pooling capabilities enhance HPC workload performance. Standardization compatibility in heterogeneous environments and scalability cater to evolving demands. & Offers non-volatility but is constrained by limitations in bandwidth, coherency management, and scalability, affecting its applicability to complex HPC memory needs. \\ \hline
\end{tabular}
\end{table}
Table 2. General comparison between common aspects of CXL memory and NVRAM for disaggregated HPC.
Integral to our FPGA card is the inclusion of two onboard DDR4 memory modules, each boasting a capacity of 8GB and operating at a clock frequency of 1333 MHz. These modules are accessible from the host system as conventional memory resources. It is imperative to highlight a distinctive attribute of this prototype configuration: the CXL link facilitates access to an identical memory volume. In essence, this means that the same far memory segment can be made available to two distinct NUMA nodes, eliminating any concerns of address overlap. However, due to the absence of a unified cache-coherent domain, the onus of maintaining coherency between the two NUMA nodes assigned to the shared far memory rests with the applications leveraging this configuration.
Notably, the bandwidth attainable from this prototype configuration is subject to current implementation constraints and does not reflect an intrinsic limitation of the CXL standard. Potential avenues for enhancing bandwidth include several considerations. First, transitioning to a higher-speed FPGA, supporting DDR4 speeds of 3200 Mbps or even embracing the capabilities of DDR5 at 5600 Mbps, could appreciably enhance throughput. Additionally, scaling the resources allocated to the CXL IP by increasing the number of slices is a viable strategy. Furthermore, expanding the FPGA's capacity to accommodate multiple independent DDR channels, possibly transitioning from one channel to four, holds promise in augmenting the prototype's bandwidth potential.
In our discussion, the fact that the CXL memory device is DDR4 and not DDR5 is key: PMem is usually slower and cheaper than the main memory. By using DDR4 CXL memory rather than DDR5, while the main memory is DDR5, we preserve this important relation.
## 3. Performance Evaluation
### Stream and Stream-PMem Benchmarks
The STREAM benchmark (Zhao et al., 2018) is a synthetic benchmark program that measures sustainable memory bandwidth for simple vector kernels in high-performance computers. STREAM was developed as a proxy for the basic computational kernels in scientific computations (Zhao et al., 2018) and includes Copy, Scale, Add, and Triad kernels. STREAM has a dedicated version to benchmark PMem modules by allocating and accessing PMem via PMDK (STREAM-PMem (Kumar et al., 2019)).
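As a rough illustration of what the four kernels do (a sketch only, not the benchmark itself: NumPy arrays stand in for the C arrays of Listing 1, the array size is reduced here, and the initial values follow the usual STREAM conventions):

```
import numpy as np

N = 10_000_000          # the paper's runs use 100M elements per array
a = np.full(N, 1.0)     # STREAM-style initialization
b = np.full(N, 2.0)
c = np.zeros(N)
scalar = 3.0

c[:] = a                # Copy
b[:] = scalar * c       # Scale
c[:] = a + b            # Add
a[:] = b + scalar * c   # Triad
```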
The excerpt presented in Listing 1 constitutes a portion of the initial codebase that has since been extracted from the current version of the code.
```
#ifndef STREAM_TYPE
#define STREAM_TYPE double
#endif
static STREAM_TYPE a[STREAM_ARRAY_SIZE+OFFSET],
                   b[STREAM_ARRAY_SIZE+OFFSET],
                   c[STREAM_ARRAY_SIZE+OFFSET];
```
The content represented in Listing 1 has been substituted in STREAM-PMem (Kumar et al., 2019) with the code demonstrated in Listing 2. The code commences by accessing the memory pool. Furthermore, a function named _initiate_ is employed to initialize the three arrays. Following this initialization, the code proceeds to execute the remaining segments of the STREAM benchmark code, mirroring the structure of the original STREAM benchmark code.
```
PMEMobjpool *pop;
POBJ_LAYOUT_BEGIN(array);
POBJ_LAYOUT_TOID(array, double);
POBJ_LAYOUT_END(array);
TOID(double) a, b, c;                     /* Declaring the arrays */

void initiate() {                         /* Initiating the arrays */
    POBJ_ALLOC(pop, &a, double,
               (STREAM_ARRAY_SIZE + OFFSET) * sizeof(STREAM_TYPE),
               NULL, NULL);               /* Same for b and c. */
}

int main() {
    const char path[] = ".../pool.obj";
    pop = pmemobj_create(path, LAYOUT_NAME, 10737418240, 0666);
    if (pop == NULL)
        pop = pmemobj_open(path, LAYOUT_NAME);
    if (pop == NULL) {
        perror(path);
        exit(1);
    }
    initiate();
    /* The rest of the STREAM benchmark after this. */
}
```

Figure 2. Setup #1 with DDR5 on-node memory and DDR4 CXL-attached memory.
Figure 3. Setup #2 with DDR4 on-node memory.
Figure 4. Overview of CXL IP for Intel® Agilex® 7 I-Series FPGA (Zhao et al., 2018), demonstrated in Figure 2 (setup #1).
In this work, we employ STREAM in those two versions to showcase the shift from PMem to CXL. Throughout this demonstration, we illustrate how programs designed for PMem can seamlessly operate on CXL-enabled devices. Furthermore, we provide performance assessments to anticipate the impact of CXL on performance in relation to local RAM (DDR4 and DDR5) and local PMem-like devices (emulation of remote sockets either for memory expansion or as a direct access device, as done in (Bordes et al., 2017; Bordes et al., 2017)).
In contrast to previous research that primarily emphasizes demonstrating the use of CXL memory for in-memory database queries or file system operations (Bordes et al., 2017; Bordes et al., 2017), STREAM memory access involves accessing and manipulating large arrays, making it particularly applicable and significant for scientific computations in HPC systems. Moreover, STREAM is implemented with OpenMP threads, which is the common shared-memory paradigm in scientific computing for parallelism (Bordes et al., 2017).
### Test Configurations
The methodology of this work is to employ STREAM and STREAM-PMem in various CPU and memory configurations, taking into account the availability of DRAM and CXL memory on the HPC setups, as described next. The results presented in Figure 5, Figure 6, Figure 7 and Figure 8 refer to STREAM executions with 100M array elements for the Scale, Add, Copy, and Triad operations correspondingly. For each STREAM method, the results of our tests are presented in two classes, with a total of 5 groups, divided conceptually for unique comparisons. The first class (Class 1, (a)-(c)) refers to the equivalent of the _App-Direct_ mode in PMem, in which we directly access the local or remote memory (either on the alternate socket or in the CXL memory), and the second class (Class 2, (a)-(b)) refers to the _Memory Mode_ in PMem, in which we increase the available memory using other CC-NUMA nodes:
**Class 1 - App-Direct:**
1. **Local memory access as PMem:** Configurations within this group involve accessing local memory (on-socket memory) in _App-Direct_ mode (thus benchmarking STREAM-PMem).
2. **Remote memory access as PMem:** Configurations within this group involve computing cores on a single socket that access remote memory in _App-Direct_ mode (thus benchmarking STREAM-PMem). The term "remote memory" in this context encompasses both CXL-attached memory and on-node memory accessed from the alternative CPU socket (i.e., memory accessed through the UPI).
3. **Remote memory as PMem (thread affinity):** Configurations within this group involve computing cores in both sockets that access remote memory in _App-Direct_ mode (thus benchmarking STREAM-PMem) using two distinct thread affinity methods: _close_ and _spread_. The _close_ method populates an entire socket first and then adds cores from the second socket. The _spread_ method, on the opposite, adds cores alternately from both sockets.
**Class 2 - Memory Mode:**
1. **Remote CC-NUMA:** Configurations within this group involve computing cores on a single socket that access remote memory as CC-NUMA.
2. **Remote CC-NUMA (all cores):** Configurations within this group involve cores on both CPU sockets accessing remote memory as CC-NUMA. This includes configurations where both sockets operate and access memory on one of them since these workloads include remote accesses.
For better clarity, the data flow for each test configuration is demonstrated in Figure 9. Each row in Figure 9 contains the data flow examinations of the test groups of the two classes. Thus, in each of our test groups, for each of the STREAM operations (Figures 5, 6, 7, 8), the way to understand each trend, and its correspondence to the relevant dataflow, is given in the trend itself by a combination of three elements: symbol, color and memory annotation. The symbol distinguishes between accessing on-node DDR4, on-node DDR5 (\(\bullet\)) or CXL-attached DDR4 (\(\times\)). The color indicates the active compute cores -- either in socket0, socket1, or both. The annotations \(pmem\#\{0,1,2\}\) or \(numa\#\{0,1,2\}\) accompanying each trend give the accessed memory location: 0 for socket0; 1 for socket1; and 2 for CXL memory. \(numa\) signifies STREAM accessing memory as NUMA memory expansion, while \(pmem\) represents STREAM-PMem accessing memory using PMDK.
## 4. Results and Analysis
Figure 5, Figure 6, Figure 7 and Figure 8 present STREAM results for the Scale, Add, Copy, and Triad operations correspondingly, and for the test configurations defined in subsection 3.2, as will be described next. Figure 5a, Figure 6a, Figure 7a and Figure 8a through Figure 5e, Figure 6e, Figure 7e and Figure 8e present results for the Class 1.(a) group through the Class 2.(b) group correspondingly.
The results explain the costs associated with memory access across varied configurations distinguished by parameters such as memory type (on-node or CXL-attached), memory placement (local to the socket, on the alternate CPU socket, or the CXL-attached memory), access mode (_App-Direct_ vs. _Memory Mode_), and thread affinity (Close or Spread).
Next, we will examine and analyze the achieved results in relation to the configuration classes and groups presented in subsection 3.2:
**Class 1 - App-Direct:**
1. **Local memory access as PMem:** It is possible to observe that among all of the STREAM operations, the _App-Direct_ access using PMDK to the local DDR5 memory saturates around 20-22 GB/s. This test is a reference for the remote accesses presented in the following group, either to a nearby remote socket or to the CXL memory (with PMDK).
2. **Remote memory access as PMem:** _App-Direct_ access to the emulated remote PMem (DDR5 on the alternate socket) results in a performance decrease of 30% (~15 GB/s) on average across all STREAM operations, in comparison to local _App-Direct_ access. In the case of _App-Direct_ access to remote CXL memory (DDR4), we experience a 50% decrease in performance in comparison to the emulated PMem on DDR5. However, we note that DDR5 inherently has about 50% higher bandwidth than DDR4, meaning that the rest of the overhead - about 2-3 GB/s loss in bandwidth - can be attributed to the CXL fabric.

Figure 5. SCALE -- Various STREAM test configurations. Refer to Section 3.2 for definition of test groups 1.(a), 1.(b), 1.(c), 2.(a), 2.(b) and legend clarifications.
Figure 8. TRIAD -- Various STREAM test configurations. Refer to Section 3.2 for definition of test groups 1.(a), 1.(b), 1.(c), 2.(a), 2.(b) and legend clarifications.
Figure 9. Data flow demonstrations for the two classes (_App-Direct_ and _Memory Mode_). Each test group is evaluated in the corresponding subfigures of Figure 5, Figure 6, Figure 7, Figure 8. Each row corresponds to a test group.
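To make the fabric-overhead attribution in group 1.(b) concrete, a back-of-envelope decomposition with the round numbers above (a sketch only; the exact figures vary per STREAM operation):

```
remote_ddr5_pmem = 15.0                   # GB/s, App-Direct to DDR5 on the alternate socket
cxl_ddr4_pmem = 0.5 * remote_ddr5_pmem    # observed ~50% drop -> ~7.5 GB/s
ddr4_equivalent = remote_ddr5_pmem / 1.5  # DDR5 has ~50% higher bandwidth -> ~10 GB/s
fabric_overhead = ddr4_equivalent - cxl_ddr4_pmem
print(f"~{fabric_overhead:.1f} GB/s attributable to the CXL fabric")  # ~2.5 GB/s
```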
3. **Remote memory as PMem (thread affinity):** As observed in previous groups, local _App-Direct_ accesses result in higher bandwidth than remote accesses. In the case of _close_ thread affinity, after populating the entire socket, adding remote accesses of compute cores to the workload negatively impacts the bandwidth, whereas adding local accesses contributes positively. With _spread_ affinity, the performance demonstrates an average between local and remote accesses due to the inclusion of alternating accesses. Eventually, when both sockets are operating with the entire core count, the results converge for on-node DDR5 and remote CXL memory, separately. Notably, accessing remote CXL memory (DDR4) leads to a 50% observed degradation compared to on-node DDR5.
**Class 2 - _Memory Mode_:**
1. **Remote CC-NUMA:** Evaluating DDR4 CC-NUMA, whether on the remote socket or CXL-attached memory, yields comparable figures (with average gaps of up to 2-5 GB/s). However, beyond a small number of threads, a slight advantage is observed for accessing CXL memory. This advantage can be attributed to the larger caches in Setup #1 utilizing CXL (Sapphire Rapids), as opposed to Setup #2 (Xeon Gold) with on-node DDR4 (subsection 2.1). This indicates that the CXL fabric overhead is constrained by the performance reduction when transitioning back from Sapphire Rapids to Xeon Gold. Moreover, the gap between the _CC-NUMA_ to DDR5 and DDR4 (on-node or CXL-attached) stands at a factor of two, as already observed in 1.(b) and 1.(c). In addition, in comparison to the results of the _App-Direct_ tests in 1.(b), it is observed that PMDK overheads over CC-NUMA are 10%-15% (in all STREAM methods).
2. **Remote CC-NUMA (all cores):** The observed gap between DDR4 and DDR5 repeats here. Moreover, accessing on-node DDR4 using all cores converges to the same results as accessing DDR4 CXL memory.
To conclude, the analysis reveals that direct access to local DDR5 memory using PMDK saturates at 20-22 GB/s, while direct remote access to emulated PMem and CXL memory results in 30% and 50% performance decreases, respectively, with about 2-3 GB/s bandwidth loss attributed to CXL fabric. In terms of memory expansion, accessing remote DDR4 CC-NUMA and DDR4 CXL-attached memory exhibit similar performance gaps of 2-3 GB/s, while DDR5 CC-NUMA maintains an advantage gap of a factor of 1.5 compared to DDR4, and on-node DDR4 access converges with off-node DDR4 access under varying thread affinities.
## 5. Conclusions
In this study, we embarked on a comprehensive exploration of the potential of CXL memory as a promising candidate for serving as a persistent memory solution in the context of disaggregated HPC systems. By conducting physical experiments on state-of-the-art multi-NUMA nodes equipped with high-performance processors and CXL-attached memory prototypes, we have provided empirical evidence that supports the feasibility of using CXL memory to exhibit all the characteristics of persistent memory modules while achieving impressive performance metrics.
Our findings demonstrate that CXL memory has the capability to outperform previously published benchmarks for Optane DCPMM in terms of bandwidth. Specifically, by employing a CXL-DDR4 memory module, which is a cost-effective alternative to DDR5 memory, we achieved bandwidth results comparable to local DDR4 memory configurations, with a decrease of around 50% when compared to local DDR5 memory configurations. These results, attained across various memory distances from the working threads, were assessed through the well-established STREAM benchmark, underscoring the reliability and versatility of CXL memory in the HPC landscape.
The shift from PMem to CXL was not only demonstrated through performance evaluations but was also highlighted through the modification of the STREAM application into STREAM-PMem. We showcased the seamless transition of programming models from PMem to CXL, leveraging the PMDK's _pmemobj_ to ensure transactional integrity and consistency of operations on persistent objects. Furthermore, the ability to access CXL memory directly and transactionally, akin to Optane DCPMM, was underscored as a key advantage for practical implementation.
Our study extends beyond theoretical considerations by implementing a practical CXL prototype on an FPGA card. This prototype embodies CXL 1.1/2.0 compliant endpoint designs, demonstrating effective link establishment and transaction layer management through a combination of Soft and Hard IP components. The prototype's performance, while constrained by current implementation limitations, stands as a testament to the extensibility of this solution and offers a blueprint for potential enhancements, including higher-speed FPGAs and increased resources.
## 6. Future Work
While this study provides valuable insights into the feasibility and potential benefits of using CXL-enabled memory in HPC systems, several avenues for future research and exploration remain:
* **Scalability and Performance Optimization**: Further investigation is warranted to explore the scalability of CXL-enabled memory in larger HPC clusters, with more than one node accessing the CXL memory. Optimizing communication protocols and memory access patterns can help maximize memory disaggregation benefits.
* **Hybrid Architectures**: Combining different memory technologies, such as DDR, PMem, and CXL memory, in a hybrid memory architecture could offer a balanced solution that leverages the strengths of each technology. The CXL memory could also use DDR5 and even Optane DCPMM, and as such, revisiting the results with those CXL memories would be beneficial.
* **Real-World Applications**: Extending the evaluation to real-world HPC applications beyond benchmarks can provide a clearer understanding of how CXL memory performs in practical scenarios.
* **Fault Tolerance and Reliability**: Investigating fault tolerance mechanisms and data reliability in the context of CXL-enabled memory is crucial, especially in large-scale distributed environments, and specifically for code systems that were previously built upon PMDK and the presence of Optane DCPMM in the HPC system.
## Acknowledgments
This work was supported by Pazy grant 226/20, the Lynn and William Frankel Center for Computer Science, and Intel Corporation (oneAPI Center of Excellence program). Computational support was provided by the NegevHPC project [54] and Intel Developer Cloud [23]. The authors would like to thank Gabi Dadush, Israel Hen, and Emil Malka for their hardware support on NegevHPC. The authors also want to thank Jay Mahalingam and Guy Tamir of Intel for their great help in forming this collaboration.
|
2305.10930 | On the Off-Target Problem of Zero-Shot Multilingual Neural Machine
Translation | While multilingual neural machine translation has achieved great success, it
suffers from the off-target issue, where the translation is in the wrong
language. This problem is more pronounced on zero-shot translation tasks. In
this work, we find that failing in encoding discriminative target language
signal will lead to off-target and a closer lexical distance (i.e.,
KL-divergence) between two languages' vocabularies is related with a higher
off-target rate. We also find that solely isolating the vocab of different
languages in the decoder can alleviate the problem. Motivated by the findings,
we propose Language Aware Vocabulary Sharing (LAVS), a simple and effective
algorithm to construct the multilingual vocabulary, that greatly alleviates the
off-target problem of the translation model by increasing the KL-divergence
between languages. We conduct experiments on a multilingual machine translation
benchmark in 11 languages. Experiments show that the off-target rate for 90
translation tasks is reduced from 29\% to 8\%, while the overall BLEU score is
improved by an average of 1.9 points without extra training cost or sacrificing
the supervised directions' performance. We release the code at
https://github.com/PKUnlp-icler/Off-Target-MNMT for reproduction. | Liang Chen, Shuming Ma, Dongdong Zhang, Furu Wei, Baobao Chang | 2023-05-18T12:43:31Z | http://arxiv.org/abs/2305.10930v3 | # On the Off-Target Problem of Zero-Shot Multilingual
###### Abstract
While multilingual neural machine translation has achieved great success, it suffers from the off-target issue, where the translation is in the wrong language. This problem is more pronounced on zero-shot translation tasks. In this work, we find that failing in encoding discriminative target language signal will lead to off-target and a closer lexical distance (i.e., KL-divergence) between two languages' vocabularies is related with a higher off-target rate. We also find that solely isolating the vocab of different languages in the decoder can alleviate the problem. Motivated by the findings, we propose Language Aware Vocabulary Sharing (LAVS), a simple and effective algorithm to construct the multilingual vocabulary, that greatly alleviates the off-target problem of the translation model by increasing the KL-divergence between languages. We conduct experiments on a multilingual machine translation benchmark in 11 languages. Experiments show that the off-target rate for 90 translation tasks is reduced from 29% to 8%, while the overall BLEU score is improved by an average of 1.9 points without extra training cost or sacrificing the supervised directions' performance. We release the code at [https://github.com/PKUnlp-icler/Off-Target-MNMT](https://github.com/PKUnlp-icler/Off-Target-MNMT) for reproduction.
## 1 Introduction
Multilingual NMT makes it possible to translate among multiple languages using only one model, even for zero-shot directions Johnson et al. (2017); Aharoni et al. (2019). It has been gaining increasing attention since it can greatly reduce the MT system's deployment cost and enable knowledge transfer among different translation tasks, which is especially beneficial for low-resource languages. Despite its success, off-target is a harsh and widespread problem in zero-shot translation with existing multilingual models. For the zero-shot translation directions, the model translates the source sentence to a wrong language, which severely degrades the system's credibility. As shown in Table 1, the average off-target rate on 90 directions is 29% and even up to 95% for some language pairs (tr->gu) on the WMT'10 dataset.
Researchers have been noticing and working on solving the problem from different perspectives. For models trained on an English-centric dataset, a straightforward method is to add pseudo training data on the zero-shot directions through back-translation Gu et al. (2019); Zhang et al. (2020). Adding pseudo data is effective since it directly turns zero-shot translation into a weakly supervised task. Despite its effectiveness, it brings a lot more training cost during data generation and training on the augmented corpus, and the supervised directions' performance is also reported to decrease due to the model capacity bottleneck Zhang et al. (2020); Yang et al. (2021). Rios et al. (2020) find that instead of regarding all languages as one during the vocabulary building process, language-specific BPE can alleviate the off-target problem, yet it still costs the supervised directions' performance.
In this work, we perform a comprehensive analysis of the off-target problem, finding that failure in encoding discriminative target language signal
will lead to off-target, and we also find a strong correlation between the off-target rate of a direction and the lexical similarity of the involved languages. A simple solution that separates the vocabulary of different languages in the decoder can decrease lexical similarity among languages, and it proves to improve zero-shot translation performance. However, it also greatly increases the model size (308M->515M) because a much larger embedding matrix is applied to the decoder.
For a better performance-cost trade-off, we further propose Language-Aware Vocabulary Sharing (LAVS), a novel algorithm to construct the multilingual vocabulary that increases the KL-divergence of token distributions among languages by splitting particular tokens into language-specific ones.
LAVS is simple and effective. It does not introduce any extra training cost and maintains the supervised performance. Our empirical experiments prove that LAVS reduces the off-target rate from 29% to 8% and improves the BLEU score by 1.9 points on the average of 90 translation directions. Together with back-translation, the performance can be further improved. LAVS is also effective on larger dataset with more languages such as OPUS-100 (Zhang et al., 2020) and we also observe that it can greatly improve the English-to-Many performance (+0.9 BLEU) in the large-scale setting.
## 2 Related Work
**Off-Target Problem in Zero-Shot Translation.** Without parallel training data for zero-shot directions, the MNMT model is easily caught up in the off-target problem (Ha et al., 2016; Aharoni et al., 2019; Gu et al., 2019; Zhang et al., 2020; Rios et al., 2020; Wu et al., 2021; Yang et al., 2021), where it ignores the target language signal and translates to a wrong language. Several methods are proposed to eliminate the off-target problem. Zhang et al. (2020); Gu et al. (2019) resort to different back-translation techniques to generate data for non-English directions. The back-translation method is straightforward and effective since it provides pseudo data on the zero-shot directions, but it brings a lot more additional cost during data generation and training on the augmented corpus. Gu et al. (2019) introduced decoder pretraining to prevent the model from capturing spurious correlations; Wu et al. (2021) explored how language tag settings influence zero-shot translation. However, the cause of off-target still remains underexplored.
**Vocabulary of Multilingual NMT.** The vocabulary building method is essential for multilingual NMT since it decides how texts from different languages are turned into tokens before being fed to the model. Several word-split methods like Byte-Pair Encoding (Sennrich et al., 2016), Wordpiece (Wu et al., 2016) and Sentencepiece (Kudo and Richardson, 2018) are proposed to handle rare words using a limited vocab size. In the background of multilingual NMT, most current studies and models (Conneau et al., 2019; Ma et al., 2021; team et al., 2022) regard all languages as one and learn a shared vocabulary for different languages. Xu et al. (2021) adopted optimal transport to find the vocabulary with the most marginal utility. Chen et al. (2022) study the relation between vocabulary sharing and label smoothing for NMT. Closely related to our work, Rios et al. (2020) find that training with language-specific BPE that allows token overlap can improve the zero-shot scores at the cost of supervised directions' performance and a much larger vocab, while our method does not bring any extra cost.
To the best of our knowledge, we are the first to explore how vocabulary similarity of different languages affects off-target in zero-shot MNMT and reveal that solely isolating vocabulary in the decoder can alleviate the off-target problem without involving extra training cost or sacrificing the supervised directions' performance.
## 3 Delving into the Off-Target Problem
### Multilingual NMT System Description
We adopt the Transformer-Big (Vaswani et al., 2017) model as the baseline model. For multilingual translation, we add a target language identifier <XX> at the beginning of the input tokens to incorporate direction information. We train the model on an English-centric dataset, WMT'10 (Callison-Burch et al., 2010). Zero-shot translation performance is evaluated on the Flores-101 (Goyal et al., 2021) dataset. We use a public language detector1 to identify the sentence-level language and compute the off-target rate (OTR), which denotes the ratio of translations that deviate to wrong languages. Full information about training can be found in Section 5.1.
Footnote 1: [https://github.com/Mimino666/langdetect](https://github.com/Mimino666/langdetect)
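A minimal sketch of this measurement (the helper name and the fixed seed are our own choices; langdetect returns ISO 639-1 codes such as 'de'):

```
from langdetect import detect, DetectorFactory

DetectorFactory.seed = 0  # langdetect is randomized; fix the seed for reproducibility

def off_target_rate(hypotheses, target_lang):
    """Fraction of translations whose detected language differs from the target."""
    off = sum(1 for h in hypotheses if detect(h) != target_lang)
    return off / len(hypotheses)
```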
### Off-Target Statistics Safari
**Off-Target Rate Differs in Directions.** We first train the multilingual NMT model in 10 EN-X directions and 10 inverse directions from WMT'10
simultaneously. Then we test the model on 90 X-Y zero-shot directions using semantic parallel sentences from the previous 10 languages provided by Flores-101. We compute the off-target rate of all directions and list the result in Table 1.
In addition to the individual scores, we next split the languages into High (cs, fr, de, fi), Mid (lv, et), and Low (ro, tr, hi, gu) resources according to data abundance. Then we compute the average OTR of High-to-High, High-to-Low, Low-to-High, and Low-to-Low directions and rank the result. The ranked result is: Low-to-Low (50.28%) > High-to-High (27.16%) > Low-to-High (23.18%) > High-to-Low (20.78%). Based on the observation, we can see that the language with the lowest resource (gu) contributes to a large portion of off-target cases. This is reasonable since the model might not be familiar with the language identifier <GU>, and the same situation goes for Low-to-Low translations.
However, it is surprising to see that translations between high-resource languages suffer from more severe off-target than those directions involving one low-resource language. There seem to be other factors influencing the off-target phenomena.
In other words, if data imbalance is not the key factor for off-targets between high-resource languages, what are the real reasons and possible solutions? To answer these questions, we need to delve deeper into the real off-target cases.
**The Major Symptom of Off-Target.** When the model encounters an off-target issue, a natural question is which language the model most probably deviates to. We find that among different directions, a majority (77%) of the off-target cases are wrongly translated to English, which is the centric language in the dataset. A small part (15%) of the cases copies the input sentence as output. Our observation also agrees with the findings of Zhang et al. (2020). It raises the question of why most off-target cases deviate to English.
### Failing in Encoding Discriminative Target Language Signal Leads to Off-Target
Considering the encoder-decoder structure of the model, we hypothesize that:
_The encoder fails to encode discriminative target language information to the hidden representations before passing to the decoder._
To test the hypothesis, we start by analyzing the output of the trained transformer's encoder:
1) We choose French as the source language and conduct a French-to-Many translation (including all languages in WMT'10) on Flores-101.
2) We collect all the pooled encoder output representations of the French-to-Many translation and project them to 2D space using TSNE. The visualization result is shown in Figure 2.
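A minimal sketch of step 2 (function and variable names are ours; we assume mean pooling over encoder states and scikit-learn's TSNE):

```
import numpy as np
from sklearn.manifold import TSNE

def project_encoder_outputs(pooled):
    """pooled: dict mapping a direction like "fr-de" to an (n_sentences, d_model)
    array of mean-pooled encoder outputs; returns the 2D coordinates per point."""
    directions = list(pooled)
    stacked = np.concatenate([pooled[d] for d in directions], axis=0)
    coords = TSNE(n_components=2, perplexity=30).fit_transform(stacked)
    return directions, coords
```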
The visualization result justifies our hypothesis. We can tell from the distribution that only representations belonging to "fr-tr" and "fr-ro" directions have tight cluster structures with boundaries. _The representations from high/mid-resource language pairs are completely in chaos and they are also mixed with fr-en representations._ And those languages generally have a higher off-target rate in French-to-Many Translation according to Table 1.
The decoder cannot distinguish the target language signal from the encoder's output when it receives representations from the "chaos" area. Moreover, during the training process, the decoder generates English far more frequently than other languages and it allocates a higher prior for English.

Figure 1: A real Off-Target case observed in our multilingual NMT system. In this case, the output is literally English while the real target is German.
Figure 2: Encoder pooled output visualization using TSNE for French-to-Many translations. The input French sentences are the same for all directions. Note that there are only French sentences on the encoder side.
Passing a hidden representation similar to an English one will possibly confuse the decoder into generating English no matter what the given target language is. This could explain why most off-target cases deviate to English. The decoder struggles to tell the correct direction from the encoder's output.
Now we have a key clue for the off-target issue. The remaining questions are what causes the degradation of the target language signal in some directions, and whether we can make the representations of different target languages more discriminative to eliminate the off-target cases.
### Language Proximity Correlates with Zero-Shot Off-Target Rate
To explore how off-target occurs differently in different language pairs, we conduct experiments using a balanced subset of the WMT'10 dataset, where we hope to preclude the influence of data size. We randomly sampled 500k sentences from different directions to form a balanced training set and removed the directions (hi, tr and gu) that do not have enough sentences.
**Language Proximity is an Important Characteristic of Translation Direction.** Our motivation is intuitive: if two languages are rather close, the probability distributions of different n-grams in the two languages' tokenized corpora should be nearly identical. Considering the large number of different n-grams in the corpus, we only consider 1-grams to compute the distribution. We call the result "Token Distribution".
We use Kullback-Leibler divergence from Token Distribution of Language B to Language A to reflect the degree of difficulty if we hope to encode sentence from B using A, which can also be interpreted as "Lexical Similarity".
\[D_{\mathrm{KL}}(A\|B)=\sum_{x\in\mathcal{V}}A(x)\log\left(\frac{A(x)}{B(x)}\right) \tag{1}\]
where \(\mathcal{V}\) denotes the shared vocabulary, \(A(x)\) is the probability of token \(x\) in language \(A\). To avoid zero probability during computing Token Distribution, we add 1 to the frequency of all tokens in the vocabulary as a smoothing factor.
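A minimal sketch of Eq. 1 with the add-one smoothing described above (helper names are ours):

```
import math
from collections import Counter

def token_distribution(tokens, vocab):
    """Add-one smoothed 1-gram "Token Distribution" of a tokenized corpus."""
    counts = Counter(tokens)
    total = sum(counts[t] + 1 for t in vocab)
    return {t: (counts[t] + 1) / total for t in vocab}

def kl_divergence(a, b):
    """D_KL(A || B) over the shared vocab, as in Eq. 1."""
    return sum(a[t] * math.log(a[t] / b[t]) for t in a)
```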
**Lexical Similarity is Related to Off-Target Rate.** We compute the KL divergence between language pairs with the training data. After training on the balanced dataset, the zero-shot translation is conducted on the Flores-101 dataset. We visualize the results of the top-3 languages (fr, cs, de) with most resources in the WMT'10 dataset for analysis.
As shown in Figure 3, we can observe from the statistics that language proximity is highly related to the off-target rate. The Pearson correlation coefficients between the off-target rate and the KL-divergence from target to source for the three x-to-many translations are -0.75\(\pm\)0.02, -0.9\(\pm\)0.03 and -0.92\(\pm\)0.03. The average Pearson correlation over all x-to-many directions is -0.77\(\pm\)0.11. This indicates that a language pair with higher lexical similarity from target to source has a higher chance of encountering off-target than a pair of less similar languages.
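The correlation itself can be computed directly; the values below are hypothetical stand-ins for one x-to-many group of Figure 3:

```
from scipy.stats import pearsonr

kl_target_to_source = [0.9, 1.4, 2.1, 2.8, 3.5]   # hypothetical KL(target || source) values
otr_per_pair = [0.55, 0.40, 0.30, 0.18, 0.10]     # hypothetical per-pair off-target rates

r, p_value = pearsonr(kl_target_to_source, otr_per_pair)
print(f"Pearson r = {r:.2f}")  # strongly negative, matching the reported trend
```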
### Shared Tokens in the Decoder Might Bias the Zero-Shot Translation Direction
The previous section shows a correlation between lexical similarity and off-target rate within a language pair. We are more interested in whether the lexical similarity causes the representation degradation in Figure 2, which further causes off-target. In fact, larger lexical similarity suggests more shared tokens between languages and will let the decoder output more overlapped tokens during supervised training. **The token overlap for different targets in the output space is harmful for zero-shot translation.** During training, the decoder might not be aware of the language it is generating directly from the output token because of the existence of shared tokens. In other words, the relation between target language and output tokens is weakened because of the shared tokens among different target languages, which might cause representation degradation in the encoder and further lead to off-target at zero-shot test time.
### Separating Vocab of Different Languages is Effective yet Expensive
Based on the previous discussion, we now have an idea that maybe we can ease the off-target problem by decreasing the lexical similarity among languages, i.e. decreasing the shared tokens.
When building the vocab for multilingual NMT model, most work regard all languages as one and learn a unified tokenization model. We argue that this leads to low divergence of token distribution since many sub-words are shared across languages.
There is an easy method to decrease the shared tokens without changing the tokenization. We can
separate the vocab of different languages, as shown in Figure 9 in the Appendix. Under this scheme, no two languages share the same token.
As shown in Table 2, with a separate decoder vocab the average off-target rate over 90 directions is reduced from 29% to 5% and the BLEU score is raised from 10.2 to 12.4. We conduct the same probing experiment on encoder representations with the original WMT'10 dataset. As shown in Figure 4, representations for different targets are clearly divided. The "chaos" area does not exist anymore.
We also train the model with separated encoder&decoder vocab and find it suffers from worse zero-shot performance compared to the baseline. This also agrees with the findings of Rios et al. (2020).
We think that without any vocabulary sharing among languages, the model will learn a wrong correlation between input language and output language and ignore the target language identifier during the English-centric training process.
The experimental result justifies our assumption in Section 3.5 that the shared tokens in the decoder lead to the representation problem. Though isolating all vocabulary achieves a great improvement, it is much more parameter-consuming. In fact, in our experiment, the number of parameters increases from 308M to 515M.
## 4 Language-Aware Vocabulary Sharing
### Adding Language-Specific Tokens
Based on the previous observations, lexical similarity causes the representation degradation problem and further leads to off-target translation. Thus, our goal is to decrease lexical similarity. We can achieve this without changing the original tokenizer by splitting shared tokens into language-specific ones.
As shown in Figure 5, instead of splitting all shared tokens, we can choose specific tokens to split.
Figure 4: Encoder pooled output visualization using TSNE for French-to-Many translation using separate vocab. The result is comparable to Figure 2, which shows the result with a shared vocab.
Figure 5: Illustration of LAVS. Tokens with higher shared frequency are split into language-specific ones.
Figure 3: Scatter plot of off-target rate and KL-divergence for different language pairs. We draw the linear regression result with 95% confidence interval.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Size & OTR & BLEU \\ \hline Vocab Sharing & 308M & 29\% & 10.2 \\ Separate Vocab (Dec) & 515M & **5\%** & **12.4** \\ Separate Vocab (Enc,Dec) & 722M & 84\% & 2.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average zero-shot result for models with different vocab. (Dec) means only the decoder uses the separate vocab. (Enc,Dec) means both the encoder and the decoder use the separate vocab.
After decoding, we simply remove all language-specific tags to restore the literal output sentence. By adding language-specific tokens, the number of tokens shared between different languages decreases, making the token distributions more distinct and thus increasing the KL divergence.
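A minimal sketch of the resulting encode/decode path is shown below; the set of split (language, token) pairs, the tag format, and all names are our own illustration rather than the exact implementation:

```python
# Sketch of the LAVS encode/decode path; `split_tokens` holds the shared
# tokens that were given language-specific copies (illustrative values).
split_tokens = {("fr", "_de"), ("de", "_de")}

def to_lang_specific(tokens, lang):
    """Replace split tokens with their language-specific variants."""
    return [f"{t}<{lang}>" if (lang, t) in split_tokens else t for t in tokens]

def strip_tags(tokens, lang):
    """After decoding, remove language tags to restore the literal sentence."""
    suffix = f"<{lang}>"
    return [t[: -len(suffix)] if t.endswith(suffix) else t for t in tokens]

encoded = to_lang_specific(["_le", "_de"], "fr")    # ['_le', '_de<fr>']
assert strip_tags(encoded, "fr") == ["_le", "_de"]  # literal output restored
```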
### Optimization Goal
Given the original vocab set \(V^{\prime}\) and language list \(L\), we aim to create a new vocab \(V\) that maximizes the average KL divergence within each language pair under the new vocabulary, subject to the restriction of adding \(N\) new language-specific tokens. Thus, our objective becomes:
\[\begin{split} V^{*}=&\operatorname*{arg\,max}_{V} \frac{1}{|L|^{2}}\sum_{m\in L}\sum_{n\in L}D_{KL}(P_{m}^{V}||P_{n}^{V})\\ s.t.&\quad V^{\prime}\subseteq V,\quad|V|-|V^{\prime }|=N\end{split} \tag{2}\]
where \(P_{m}^{V}\) denotes the \(m\)-th language's token distribution over vocabulary \(V\); add-one smoothing is applied to avoid zero probabilities. This is a combinatorial optimization problem: the search space of \(V\) has an astronomical size of \(C_{|V^{\prime}|\cdot|L|}^{N}\).
### Greedy Selection Algorithm that Maximizes Divergence Increment
Based on the previous discussion, we propose the Language-Aware Vocabulary Sharing (LAVS) algorithm, listed in Algorithm 1, to add language-specific tokens. Intuitively, LAVS prefers to split shared tokens that have high frequency across different languages, which directly reduces the appearance of shared tokens in the decoder to the maximum extent.
First, we adopt a priority queue to keep the token candidates. Second, for each token in the shared vocabulary, we compute the shared token frequency for each language pair and add the (frequency, languageA, languageB, token) tuple to the queue. Finally, since the queue ranks elements by frequency, we create language-specific tokens for the top \(N\) tuples and return the new vocab. We give more details about the algorithm in Appendix B.
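A compact sketch of this greedy selection is given below, using Python's heapq as the priority queue; taking the minimum of the two per-language counts as the shared frequency is our simplifying assumption, and the toy counts are illustrative:

```python
# Greedy LAVS selection sketch: split the most frequently shared tokens first.
import heapq
from collections import Counter
from itertools import combinations

def lavs_splits(counts, n_new):
    """Return up to n_new (langA, langB, token) tuples ranked by shared freq."""
    heap = []
    for a, b in combinations(sorted(counts), 2):
        for tok in counts[a].keys() & counts[b].keys():
            shared = min(counts[a][tok], counts[b][tok])  # our stand-in
            heapq.heappush(heap, (-shared, a, b, tok))    # max-heap via negation
    splits = []
    while heap and len(splits) < n_new:
        _, a, b, tok = heapq.heappop(heap)
        splits.append((a, b, tok))  # create language-specific copies of `tok`
    return splits

counts = {"fr": Counter({"_de": 900, "_le": 800}),
          "de": Counter({"_de": 700, "_der": 600})}
print(lavs_splits(counts, n_new=1))  # [('de', 'fr', '_de')]
```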
The whole tokenization process with LAVS is illustrated in Figure 6. In practice, given an original shared vocab with \(M\) tokens, we can always first learn a vocab with \(M-N\) tokens and then apply LAVS to add \(N\) language-specific tokens, keeping the vocab size \(M\) unchanged.
## 5 Experiments
### Datasets
Following Wang et al. (2020), we collect the WMT'10 datasets for training. The devtest split of Flores-101 is used for evaluation. Full information about the datasets is given in Appendix C.
### Vocabulary Building
**Vocab Sharing.** We adopt SentencePiece (Kudo and Richardson, 2018) as the tokenization model. We randomly sample 10M examples from the training corpus with a temperature of 5 (Arivazhagan et al., 2019) across different directions and learn a shared vocabulary of 64k tokens.
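The sampling step can be sketched as follows; corpus sizes and file names are placeholders, and the commented-out training call assumes the sentencepiece Python package:

```python
# Temperature-based sampling across languages before learning the vocab.
import numpy as np

sizes = {"fr": 10_000_000, "cs": 5_000_000, "gu": 100_000}  # toy corpus sizes
T = 5.0
q = np.array(list(sizes.values()), dtype=float)
q /= q.sum()                      # raw data proportions
p = q ** (1.0 / T)
p /= p.sum()                      # temperature-smoothed sampling probabilities
n_samples = (10_000_000 * p).round().astype(int)
print(dict(zip(sizes, n_samples)))

# After writing the sampled sentences to `mixed.txt`:
# import sentencepiece as spm
# spm.SentencePieceTrainer.train(input="mixed.txt",
#                                model_prefix="shared64k",
#                                vocab_size=64000)
```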
**Separate Vocab.** Based on the shared vocab of the baseline model, we separate the vocab of each language, forming a 266k vocab.
**LAVS.** We first learn a 54k vocabulary using the same method as the baseline model's and add 10k language-specific tokens using LAVS.
### Training Details of MNMT
**Architecture.** We use the Transformer-big model (Vaswani et al., 2017) implemented in fairseq (Ott et al., 2019) with \(d_{model}=1024\), \(d_{hidden}=4096\), \(n_{heads}=16\), \(n_{layers}=6\). We add a target language identifier <XX> at the beginning of the input tokens to indicate the translation direction, as suggested by Wu et al. (2021).
**Optimization.** We train the models using Adam (Kingma and Ba, 2015) with a total batch size of 524,288 tokens for 100k steps in all experiments on 8 Tesla V100 GPUs. The sampling temperature, learning rate and warmup steps are set to 5, 3e-4 and 4000, respectively.

Figure 6: Illustration of the tokenization and detokenization process with Language-Aware Vocabulary Sharing.
**Back-Translation.** Back-translation is effective in improving zero-shot performance by adding pseudo-parallel data generated by the model (Gu et al., 2019; Zhang et al., 2020). For simplicity, we apply off-line back-translation to both the baseline and LAVS. With the trained model, we sample 100k English sentences and translate them into the other 10 languages, which creates 100k parallel sentences for every zero-shot language pair and results in a fully-connected corpus of 9M sentence pairs. We add the generated data to the training set and train the model for another 100k steps.
**Evaluation.** We report detokenized BLEU using sacrebleu². We also report the off-target rate with a language detector³ and conduct model-based evaluation using BERTScore⁴ (Zhang et al., 2020).
Footnote 2: nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.1.0
Footnote 3: [https://github.com/Mimino666/langdetect](https://github.com/Mimino666/langdetect)
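A hedged sketch of the off-target-rate computation is given below, assuming lists of detokenized hypotheses and references; counting detector failures on very short strings as off-target is our own convention:

```python
# Sketch of the evaluation loop with sacrebleu and langdetect.
import sacrebleu
from langdetect import detect

def off_target_rate(hyps, tgt_lang):
    """Fraction of hypotheses not detected as the intended target language."""
    bad = 0
    for h in hyps:
        try:
            if detect(h) != tgt_lang:
                bad += 1
        except Exception:  # langdetect can fail on empty/very short strings
            bad += 1
    return bad / max(len(hyps), 1)

hyps = ["Das ist ein Haus.", "This is a house."]     # second one is off-target
refs = [["Das ist ein Haus.", "Das ist ein Hund."]]  # one reference stream
print(off_target_rate(hyps, "de"))                   # 0.5
print(sacrebleu.corpus_bleu(hyps, refs).score)
```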
### Results
**LAVS improves zero-shot translation by a large margin.** Tables 3 and 4 list the overall results for both zero-shot and supervised directions. According to Table 3, LAVS improves all the x-to-many and many-to-x directions, with a maximum average improvement of -61.6% OTR, +3.7 BLEU and +0.036 BERTScore compared to the baseline vocab.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Size} & \multicolumn{5}{c}{Zero-Shot Off-Target Rate} & \multicolumn{7}{c}{BLEU Score} \\ \cline{3-14} & & x-y & H-H & L-L & H-L & L-H & x-y & H-H & L-L & H-L & L-H & en-x & x-en \\ \hline Vocab Sharing & 308M & 29\% & 27\% & 50\% & 21\% & 23\% & 10.2 & 11.26 & 5.03 & 9.18 & 9.95 & 24.8 & 30.2 \\ Separate Vocab (Dec) & 515M & **5\%** & 4\% & 19\% & **1\%** & **1\%** & 12.4 & 14.69 & 6.54 & **10.10** & **12.22** & 24.6 & **30.5** \\ LAVS (Enc, Dec) & 308M & 12\% & **3\%** & 33\% & 13\% & 6\% & **12.5** & **15.90** & 6.26 & 9.91 & 12.14 & 24.8 & 30.3 \\ LAVS (Dec) & 308M & 8\% & 13\% & **14\%** & 3\% & 4\% & 12.1 & 13.33 & **7.81** & 9.80 & 12.01 & **24.9** & 30.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Overall performance comparison. x-y denotes all zero-shot directions; H and L denote high/low-resource languages. All evaluations are done with the Flores-101 dataset. (Dec) indicates that the vocab changes only in the decoder, and (Enc, Dec) in both the encoder and the decoder. LAVS outperforms the baseline in the zero-shot setting on both BLEU and OTR by a large margin while maintaining en-x and x-en performance.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Metric & Method & cs-x & fr-x & de-x & fi-x & lv-x & et-x & ro-x & hi-x & tr-x & gu-x \\ \hline \multirow{3}{*}{OTR} & Vocab Sharing & 18.8\% & 28.3\% & 22.6\% & 19.5\% & 19.2\% & 17.1\% & 22.0\% & 35.2\% & 30.1\% & 52.8\% \\ & LAVS(Dec) & **4.2\%** & **14.4\%** & **11.5\%** & **6.2\%** & **3.7\%** & **4.7\%** & **2.9\%** & **9.7\%** & **10.2\%** & **6.1\%** \\ & \(\Delta\downarrow\) & -14.6\% & -13.9\% & -11.1\% & -13.3\% & -15.5\% & -12.4\% & -19.1\% & -25.5\% & -19.9\% & -46.7\% \\ \hline \hline \multirow{3}{*}{BLEU} & Vocab Sharing & 10.9 & 10.5 & 11.3 & 9.0 & 9.4 & 10.0 & 11.7 & 6.9 & 7.3 & 4.7 \\ & LAVS(Dec) & **12.0** & **12.0** & **12.2** & **9.6** & **10.9** & **11.0** & **14.0** & **9.3** & **9.1** & **8.4** \\ & \(\Delta\uparrow\) & +1.1 & +1.5 & +0.9 & +0.6 & +1.5 & +1.0 & +2.3 & +2.4 & +1.8 & +3.7 \\ \hline \hline \multirow{3}{*}{BERT Score} & Vocab Sharing & 0.781 & 0.808 & 0.787 & 0.766 & 0.783 & 0.774 & 0.791 & 0.771 & 0.643 & 0.677 \\ & LAVS(Dec) & **0.799** & **0.829** & **0.806** & **0.786** & **0.790** & **0.798** & **0.796** & **0.777** & **0.660** & **0.713** \\ & \(\Delta\uparrow\) & 0.018 & 0.021 & 0.019 & 0.020 & 0.007 & 0.024 & 0.005 & 0.006 & 0.017 & 0.036 \\ \hline \hline \multirow{3}{*}{Metric} & Method & x-cs & x-fr & x-de & x-fi & x-lv & x-et & x-ro & x-hi & x-tr & x-gu \\ \hline \multirow{3}{*}{OTR} & Vocab Sharing & 22.4\% & 17.8\% & 23.9\% & 26.0\% & 21.9\% & 28.1\% & 8.9\% & 25.4\% & 14.0\% & 77.0\% \\ & LAVS(Dec) & **8.7\%** & **5.9\%** & **6.6\%** & **9.2\%** & **8.4\%** & **7.8\%** & **3.0\%** & **1.7\%** & **7.0\%** & **15.4\%** \\ & \(\Delta\downarrow\) & -13.7\% & -11.9\% & -17.3\% & -16.8\% & -13.5\% & -20.3\% & -5.9\% & -23.7\% & -7.0\% & -61.6\% \\ \hline \hline \multirow{3}{*}{BLEU} & Vocab Sharing & 11.0 & 17.9 & 13.2 & 8.3 & 12.2 & 9.9 & 14.0 & 8.3 & 8.8 & 3.3 \\ & LAVS(Dec) & **12.5** & **20.1** & **15.7** & **9.4** & **13.3** & **11.7** & **14.2** & **9.9** & **9.0** & **6.7** \\ & \(\Delta\uparrow\) & +1.5 & +2.2 & +2.5 & +1.1 & +1.1 & +1.8 & +0.2 & +1.6 & +0.2 & +3.4 \\ \hline \hline \multirow{3}{*}{BERT Score} & Vocab Sharing & 0.772 & 0.776 & 0.781 & 0.749 & 0.757 & 0.759 & 0.771 & 0.743 & 0.750 & 0.723 \\ & LAVS(Dec) & **0.791** & **0.799** & **0.796** & **0.770** & **0.777** & **0.774** & **0.797** & **0.756** & **0.768** & **0.726** \\ \cline{1-1} & \(\Delta\uparrow\) & 0.019 & 0.023 & 0.015 & 0.021 & 0.020 & 0.015 & 0.026 & 0.013 & 0.018 & 0.003 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot translation performance (off-target rate, BLEU and BERTScore) averaged over x-to-many and many-to-x directions, for LAVS (Dec) compared to the baseline.
It gains an average of -21% OTR, +1.9 BLEU and +0.02 BERTScore improvement over 81 zero-shot directions. Compared with the Separate Vocab (Dec) method, which also leads to a significant improvement in x-y directions, LAVS does not increase the model size at all.
**LAVS with Back-Translation further improves zero-shot performance.** As shown in Table 5, and as expected, back-translation improves zero-shot performance by a large margin. Under this setting, LAVS still outperforms Vocab Sharing by 0.4 average BLEU on zero-shot directions.
We also observe performance degradation in English-to-Many directions for both models compared to not using back-translation, which agrees with the results of Zhang et al. (2020) and Rios et al. (2020). A possible reason is that English-to-Many performance suffers interference as the number of translation tasks increases. Back-translation also brings considerable extra cost: the total training time with back-translation is almost twice that of vanilla training. Applying LAVS alone brings no extra training cost and does not affect supervised performance.
## 6 Discussion
### How does LAVS calibrate the direction?
We visualize the encoder-pooled representations of the model with LAVS (Dec) in Figure 7. The distribution of representations is similar to Figure 4, where representations for different targets are almost fully separated, suggesting that LAVS works similarly to separating all the vocabulary for different languages. We also give a case study in Section 6.2.
We further visualize the language identifiers' hidden outputs among high-resource languages and compare the results of the original Vocabulary Sharing and LAVS. As shown in Figure 10 in the Appendix, LAVS encodes more discriminative target-language information into the <XX> token's hidden output.
### Case Study
We compare the outputs of different models in Figure 8. The baseline output exhibits the off-target problem, while the LAVS output is generated in the correct language. From the direct token output of LAVS, we can see that many of the tokens are language-specific. Models with LAVS can learn the relation between the target-language signal and the corresponding language-specific tokens, which further decreases the probability of going off-target.
### Scalability of LAVS
As shown in Table 6, we explore how the number of language-specific (LS) tokens influences zero-shot performance. The OTR keeps decreasing as the number of LS tokens increases, suggesting that more LS tokens can better relieve the off-target issue **without harming the supervised performance.**
Figure 8: Case study of DE\(\rightarrow\)FR zero-shot translation. The baseline model goes off-target to English. Tokens in blue are language-specific tokens.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Data & OTR & x-y & en-x & x-en & Extra Cost \\ \hline Vocab Sharing & 29\% & 10.2 & 24.8 & 30.2 & - \\ + B.T. & 1\% & 16.4 & 23.4 & 30.0 & 24 GPU Days \\ LAVS (Dec) & 8\% & 12.1 & **24.9** & 30.3 & 0 \\ + B.T. & **0\%** & **16.8** & 23.7 & **30.4** & 24 GPU Days \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results with Back-Translation.
Figure 7: The encoder-pooled representations learned by multilingual NMT with LAVS on fr-x directions.
To test how LAVS generalizes to a dataset with more languages, we compare LAVS and VS on OPUS-100 (Zhang et al., 2020); more details of the experiment can be found in Appendix D. To alleviate the inference burden, we select all 42 languages with 1M training examples for evaluation, which results in 1722 zero-shot directions and 84 supervised directions (en-x and x-en). As shown in Table 7, LAVS improves zero-shot performance (-14% OTR, detailed results in Table 12 in the appendix) under this setting. Still, the overall performance is much lower compared to training on WMT'10: with more languages, the lack of supervision signal becomes more problematic for zero-shot translation. LAVS also improves en-x performance by a large margin (+0.9 BLEU, detailed scores in Table 13 in the appendix); we think separating the vocabularies of different languages on the decoder side may have a positive influence on general en-x performance.
### LAVS's Compatibility with Masked Constrained Decoding
We propose another method to prevent off-target outputs: masked constrained decoding (MCD). During decoding, the decoder only considers tokens that belong to the target-language vocab in the softmax. The target vocab can be computed from the training corpus. We implement MCD for both the original vocab sharing and LAVS, and list the sizes of the different target vocabs in Table 11 in the appendix.
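A minimal PyTorch sketch of the masking step follows, with shapes and names of our own choosing; the list of allowed token ids would come from the per-language target vocab computed on the training corpus:

```python
# Masked constrained decoding sketch: zero out probability mass of tokens
# outside the target-language vocabulary before sampling/argmax.
import torch

def mask_logits(logits, target_vocab_ids):
    """logits: (batch, vocab). Keep only the target language's token ids."""
    mask = torch.ones_like(logits, dtype=torch.bool)
    mask[:, target_vocab_ids] = False           # allowed positions stay unmasked
    return logits.masked_fill(mask, float("-inf"))

logits = torch.randn(1, 8)
probs = torch.softmax(mask_logits(logits, target_vocab_ids=[0, 3, 5]), dim=-1)
assert probs[0, 1].item() == 0.0                # disallowed token gets zero mass
```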
As shown in Table 8, MCD further improves the zero-shot performance of LAVS (+1.2 BLEU for de-cs, +0.6 BLEU for fr-de). It is worth noting that in some directions, such as FR\(\rightarrow\)DE, the benefit of MCD is rather small for the baseline model (+0.1 BLEU). We believe the reason is that the original vocab sharing produces many tokens shared between languages, which weakens the effect of the constraint. Thus, with more language-specific tokens, LAVS works better with constrained decoding.
## 7 Conclusion
In this paper, we delve into the hidden reason for the off-target problem in zero-shot multilingual NMT and propose Language-Aware Vocabulary Sharing (LAVS), which significantly alleviates the off-target problem without extra parameters. Our experiments show that LAVS creates a better multilingual vocab for multiple languages than the original Vocabulary Sharing method.
## 8 Limitation
LAVS is proposed to overcome the off-target problem among languages that share alphabets, because those languages tend to have more shared tokens after sub-word tokenization. For language pairs that do not share tokens, LAVS might not have a direct influence on zero-shot translation, though it can still increase the overall performance for those languages; this merits further exploration.
## 9 Acknowledgements
This paper is supported by the National Key Research and Development Program of China under Grant No. 2020AAA0106700 and the National Science Foundation of China under Grant No. 61936012.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Shared Tokens(M) & LS Tokens(N) & OTR\(\downarrow\) & Sup. BLEU\(\uparrow\) \\ \hline
64k & 0 & 29.4\% & 27.5 \\
54k & 0 & 33.1\% & 26.9 \\
54k & 10k & 8.2\% & 27.6 \\
54k & 20k & 7.4\% & **27.8** \\
54k & 50k & 5.9\% & 27.6 \\
54k & 212k & **5\%** & 27.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Exploration of the number of language-specific tokens in LAVS (Dec) and the off-target rate on Flores-101. We report the average OTR over zero-shot directions and the average BLEU over supervised directions.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Data & OTR\(\downarrow\) & x-y\(\uparrow\) & en-x\(\uparrow\) & x-en\(\uparrow\) \\ \hline Vocab Sharing & 72\% & 1.9 & 12.6 & 19.8 \\ LAVS (Dec) & **58\%** & **2.3** & **13.5** & **20.1** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Results on the OPUS dataset. We evaluate 1722 zero-shot directions and 84 supervised directions.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{DE\(\rightarrow\)CS} & \multicolumn{2}{c}{FR\(\rightarrow\)DE} \\ \cline{2-5} & OTR & BLEU & OTR & BLEU \\ \hline Vocab Sharing & 45.1\% & 9.7 & 38.3\% & 12.7 \\ w/ MCD & 30.9\% & 11.4 & 36.4\% & 12.8 \\ LAVS (Dec) & 18.9\% & 13.0 & 15.4\% & 17.2 \\ w/ MCD & **11.1\%** & **14.2** & **11.3\%** & **17.8** \\ \hline \hline \end{tabular}
\end{table}
Table 8: The results of masked constrained decoding (MCD) combined with LAVS. Constrained decoding could further improve the performance of LAVS.
We also thank all reviewers for their valuable suggestions.
|
2310.05635 | Nanoscale engineering and dynamical stabilization of mesoscopic spin
textures | Thermalization phenomena, while ubiquitous in quantum systems, have
traditionally been viewed as obstacles to be mitigated. In this study, we
demonstrate the ability, instead, to harness thermalization to dynamically
engineer and stabilize structured quantum states in a mesoscopically large
ensemble of spins. Specifically, we showcase the capacity to generate, control,
stabilize, and read out 'shell-like' spin texture with interacting ${}^{13}\mathrm{C}$ nuclear spins in diamond, wherein spins are polarized oppositely
on either side of a critical radius. The texture spans several nanometers and
encompasses many hundred spins. We capitalize on the thermalization process to
impose a quasi-equilibrium upon the generated texture; as a result, it is
highly stable, immune to spin diffusion, and endures over multiple-minute long
periods -- over a million times longer than the intrinsic interaction scale of
the spins. Additionally, the texture is created and interrogated without
locally controlling or probing the nuclear spins. These features are
accomplished using an electron spin as a nanoscale injector of spin
polarization, and employing it as a source of spatially varying dissipation,
allowing for serial readout of the emergent spin texture. Long-time
stabilization is achieved via prethermalization to a Floquet-induced
Hamiltonian under the electronic gradient field. Our work presents a new
approach to robust nanoscale spin state engineering and paves the way for new
applications in quantum simulation, quantum information science, and nanoscale
imaging. | Kieren Harkins, Christoph Fleckenstein, Noella D'Souza, Paul M. Schindler, David Marchiori, Claudia Artiaco, Quentin Reynard-Feytis, Ushoshi Basumallick, William Beatrez, Arjun Pillai, Matthias Hagn, Aniruddha Nayak, Samantha Breuer, Xudong Lv, Maxwell McAllister, Paul Reshetikhin, Emanuel Druga, Marin Bukov, Ashok Ajoy | 2023-10-09T11:46:53Z | http://arxiv.org/abs/2310.05635v1 | # Nanoscale engineering and dynamical stabilization of mesoscopic spin textures
###### Abstract
Thermalization phenomena, while ubiquitous in quantum systems, have traditionally been viewed as obstacles to be mitigated. In this study, we demonstrate the ability, instead, to harness thermalization to dynamically engineer and stabilize structured quantum states in a mesoscopically large ensemble of spins. Specifically, we showcase the capacity to generate, control, stabilize, and read out "shell-like" spin texture with interacting \({}^{13}\)C nuclear spins in diamond, wherein spins are polarized oppositely on either side of a critical radius. The texture spans several nanometers and encompasses many hundred spins. We capitalize on the thermalization process to impose a quasi-equilibrium upon the generated texture; as a result, it is highly stable, immune to spin diffusion, and endures over multiple-minute long periods -- over a million times longer than the intrinsic interaction scale of the spins. Additionally, the texture is created and interrogated without locally controlling or probing the nuclear spins. These features are accomplished using an electron spin as a nanoscale injector of spin polarization, and employing it as a source of spatially varying dissipation, allowing for serial readout of the emergent spin texture. Long-time stabilization is achieved via prethermalization to a Floquet-induced Hamiltonian under the electronic gradient field. Our work presents a new approach to robust nanoscale spin state engineering and paves the way for new applications in quantum simulation, quantum information science, and nanoscale imaging.
Footnote †: These authors contributed equally to this work
## I Introduction
Thermalization is a pervasive phenomenon in all of physics. The quest to elucidate how isolated microscopic quantum systems approach equilibrium has spurred a thriving field at the intersection of contemporary theoretical and experimental research, marked by ongoing developments in nonequilibrium dynamics, such as the Eigenstate Thermalization Hypothesis for closed quantum systems [1; 2; 3], anomalous transport and emergent hydrodynamics [4; 5; 6], and the creation of prethermal ordered states of matter [7; 8].
Thermalization in quantum systems proceeds analogously to its classical counterpart: entropy grows with time, erasing traces of the system's prior history. Physically, this translates into a gradual reduction of the accessible information that can be obtained from local measurements until, in thermal equilibrium, a maximum-entropy state is reached. In most cases, such loss of information is irreversible, impeding the capacity to harness quantum systems. Consequently, a broad endeavor has been underway to retard, or even preclude, these thermalization processes. Techniques range from physically isolating quantum systems (e.g., in vacuum) or protecting them through quantum control [9], to cooling them to near-absolute-zero temperatures [10], or, in a complementary manner, exploiting theoretical paradigms of many-body localization to inhibit the onset of thermalization [11].
Though the thermalization process is often depicted as leading to mundane, featureless states, in this work, we demonstrate its utility in preparing structured mesoscopic quantum states. Specifically, in a system of interacting nuclear spins at high temperature (\(>\)100K), we exploit out-of-equilibrium thermalizing dynamics to controllably engineer and stabilize mesoscopic _shell-like_ nuclear spin polarization textures that span several nanometers and envelope hundreds of nuclear spins (Fig. 1A). Simultaneously, we continuously observe the formation and stabilization of these textures with high temporal resolution over prolonged, multiple-minute-long periods. This spatiotemporal control, facilitated by thermalization, obviates the need for local spin manipulation or readout, intrinsically protects against control errors, and bypasses technical challenges of differentiating spins with near-identical resonance frequencies. Given these methodological advantages, our approach has direct implications for quantum memories [12; 13], spintronics [14; 15], and nanoscale magnetic resonance imaging [16; 17].
Our experiments are in a model system of Nitrogen Vacancy (NV) center electrons in diamond surrounded by \({}^{13}\)C nuclear spins (Fig. 1A) [18]. The sparsely distributed NVs encompass \(\sim\)10\({}^{4}\)\({}^{13}\)C nuclei, spanning a radius of \(\approx\)12 nm [19]. Our strategy is built upon three elements. Firstly, the optically polarizable NV electron serves as a "polarization injector" and a nanoscale "antenna". In its role as an antenna, it generates a nanoscale gradient through the hyperfine interaction, resulting in localized displacement of the nuclear spin resonance frequencies [20]. Under this gradient field, a time-periodic (Floquet) drive stabilizes the nuclei into a metastable state [21], characterized by a shell-like polarization texture (Fig. 1B). Finally, spatially dependent dissipation from the NV is exploited to observe these textures: \({}^{13}\)C nuclei in the electronic proximity relax faster, providing a means to _serially_ probe spins farther away from the electron, without locally measuring them (Fig. 1B).
To demonstrate this new toolbox, we first present a simple means to produce spin textures. Here the NV is exploited to successively inject "hyperpolarization" - polarization that is orders of magnitude greater than Boltzmann levels - of alternating sign into the \({}^{13}\)C nuclei. However, the resulting **State-Engineered** spin texture lacks stability: its domain boundary melts due to nuclear spin diffusion [22].
Exploiting thermalization, however, offers a solution -- creating polarization textures with domain boundaries protected
against spin diffusion. We create an effective inhomogeneous parent Hamiltonian with spatial characteristics derived from the NV antenna field (Fig. 1C). Irrespective of the initial state [23], the spin polarization naturally _prethermalizes_[24, 25, 26] to its drive-induced quasi-equilibrium state, forming spin shells with constant-in-time domain boundaries. We observe the formation of this generated **Hamiltonian-Engineered**[27, 28, 29] texture continuously with unprecedented resolution over a lifetime spanning several minutes. Numerical simulations corroborate these observations and illustrate a stable spin texture spanning several nanometers.
## II Spin texturing via hyperpolarization injection
**State-Engineered** shells are created by alternately injecting positive and negative hyperpolarization [32, 33] for periods \(\tau_{+}\) and \(\tau_{-}\), respectively (Fig. 2A inset). Direct \({}^{13}\)C hyperpolarization occurs rapidly over a short range, with distant nuclei polarized more slowly via spin diffusion. Consequently, \({}^{13}\)C nuclei close to the NV center are negatively polarized, while more distant nuclei are left positively polarized, yielding the shell-like texture. Varying \(\tau_{-}\) at a fixed \(\tau_{+}\) provides a means to tune the shell size.
Spin diffusion is driven by internuclear dipolar interactions with the Hamiltonian
\[\mathcal{H}_{\text{dd}}=\sum_{k<\ell}b_{k\ell}\Big(3I_{k}^{z}I_{\ell}^{z}-\mathbf{I}_{k}\cdot\mathbf{I}_{\ell}\Big), \tag{1}\]
where \(b_{k\ell}=J_{\text{exp}}(3\cos^{2}(\vartheta_{k\ell})-1)/r_{k\ell}^{3}\), \(J_{\text{exp}}=\mu_{0}\hbar\gamma_{n}^{2}/4\pi\), \(\cos(\vartheta_{k\ell})=\mathbf{B}_{0}\cdot\mathbf{r}_{k\ell}/(|\mathbf{B}_{0}|r_{k\ell})\), \(\mathbf{r}_{k\ell}\) is the inter-spin vector, and \(\mathbf{B}_{0}=B_{0}\hat{\mathbf{z}}\) is the external field. The median \({}^{13}\)C coupling strength is \(\langle b_{k\ell}\rangle=J\simeq 0.6\) kHz.
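For illustration, the couplings can be evaluated numerically; the prefactor, random positions, and units in this sketch are arbitrary stand-ins, not the parameters of the actual sample:

```python
# Toy evaluation of the dipolar couplings b_kl for randomly placed spins.
import numpy as np

J_exp = 1.0                                # mu0*hbar*gamma_n^2/(4*pi), arb. units
rng = np.random.default_rng(0)
pos = rng.uniform(-5.0, 5.0, size=(50, 3)) # spin positions around the NV
B0_hat = np.array([0.0, 0.0, 1.0])         # external field direction (z)

def b_coupling(r1, r2):
    r = r2 - r1
    d = np.linalg.norm(r)
    cos_t = np.dot(B0_hat, r) / d          # cos(theta_kl) w.r.t. B0
    return J_exp * (3.0 * cos_t**2 - 1.0) / d**3

couplings = [b_coupling(pos[k], pos[l])
             for k in range(len(pos)) for l in range(k + 1, len(pos))]
print("median |b_kl| ~ J:", np.median(np.abs(couplings)))
```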
Fig. 2A depicts the _net_ \({}^{13}\)C polarization during spin injection, for \(\tau_{+}\)=60 s, after which the sign of the injected polarization is reversed (dashed vertical line). In the boxed region around \(t_{\text{pol}}\)=100 s (\(\tau_{-}\)=40 s), the total net polarization approaches zero. Although there is limited net polarization, the system can still exhibit locally large spin expectation values at different sites. To observe this, the \({}^{13}\)C nuclei are subject to a Floquet protocol at \(\mathbf{B}_{0}\)=7 T (Fig. 2B(i) inset) involving a series of spin-locking \(\theta\)-pulses of duration \(t_{p}\), at Rabi frequency \(\Omega\), and separated by time intervals \(\tau\). Spin-locking with \(\theta\neq\pi\) yields the leading-order effective Floquet Hamiltonian [29] (see SI S7 B),
\[\mathcal{H}_{\text{SL}}=-\frac{1}{2}\sum_{k,\ell}b_{k\ell}\big(3I_{k}^{x}I_{\ell}^{x}-\mathbf{I}_{k}\cdot\mathbf{I}_{\ell}\big)\,. \tag{2}\]
Here \(>\)560,000 pulses are applied over a period of \(t\gtrsim\)60 s. The drive-induced dynamics (quasi-)conserve the \(\hat{\mathbf{x}}\)-polarization, as is evident from the leading-order effective Hamiltonian Eq. (2) (see SI S7 B), resulting in long transverse lifetimes \(T_{2}^{*}\) [34].
This process can be quasi-continuously tracked in real time. Between the pulses, rapid and non-destructive (inductive) interrogation of the \({}^{13}\)C Larmor precession occurs at a rate of \(\tau^{-1}\sim\)10 kHz. We record the nuclear polarization amplitude \(S\) and phase \(\varphi_{R}\) in the \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{y}}\) plane in the rotating frame, where \(\varphi_{R}\)=0 (\(\pi\)) refers to a vector along \(\hat{\mathbf{x}}\) (\(-\hat{\mathbf{x}}\)).
Each point in Fig. 2A can therefore be expanded into a secondary dimension (\(S,\varphi_{R}\)), as shown in Fig. 2B, considerably increasing the information content compared to previous approaches [35]. Consider first the point _(a)_ at the top of the polarization buildup curve in Fig. 2A (\(\tau_{-}\)=0). For the normalized signal \(S\) in Fig. 2B (black dashed line), a long-lived decay with lifetime \(T_{2}^{*}\approx\)93.4 s is evident, reflective of prethermalization to \(\mathcal{H}_{\text{SL}}\) and the resulting \(\hat{\mathbf{x}}\) (quasi-)conservation [34]. In comparison to this slow, featureless decay, the normalized signal profiles in the zero total net polarization region (boxed in Fig. 2A) exhibit distinct differences. For clarity, we consider first the representative trace (bold orange) in Fig. 2B(i) at \(\tau_{-}\)=44.8 s. The normalized \(S\) here features a sharp zero-crossing at \(t\)=\(t_{\text{zc}}\) (\(\approx\)11.34 s), accompanied by a simultaneous reversal of the total net polarization sign from \(-\hat{\mathbf{x}}\) to \(+\hat{\mathbf{x}}\) (Fig. 2B(ii)). The region in the vicinity of the zero-crossing (1% variation) itself encompasses \(\sim\)10\({}^{4}\) points, showcasing the rapid and non-invasive sampling of the dynamics, \(\tau^{-1}>\)14\(J\), in these experiments. Additionally, the data capture dynamics up to very long times, \(Jt>\)1.2\(\times\)10\({}^{6}\), surpassing previous experiments by several orders of magnitude [36; 37; 38].
The other colored traces in Fig. 2B(i) show the corresponding signals \(S\) for different \(\tau_{-}\) (see colorbar). The zero-crossing point \(t_{\text{zc}}\) shifts to the right with increasing \(\tau_{-}\). The extracted \(t_{\text{zc}}\) values are elaborated in Fig. 2B(iii). A movie showcasing data for 151 values of \(\tau_{-}\) is accessible at [30].
Quasi-conservation of the net \(\hat{\mathbf{x}}\)-polarization naively suggests constant-in-time signal curves, in contradiction to our observations. To rationalize the data in Fig. 2, recall that the NV electron is strongly coupled to a phonon bath while the \({}^{13}\)C spins are only weakly coupled to it. Consequently, the NV acts as the dominant local relaxation source for the \({}^{13}\)C nuclei. Proximal nuclei dissipate polarization (\(\sim\)1/\(r^{6}\)) faster than more distant ones, as can be derived from the Lindblad master equation in the strong-coupling limit (see Methods and Sec. S6 B).
Figure 1: **System and readout.** (A) _Spin textures._ A controllable "shell-like" spin texture of positive and negative polarization (shaded red and blue, respectively) is generated in a nanoscale ensemble of \({}^{13}\)C nuclear spins surrounding a central NV electron (manipulated by an external laser, shown in green). The domains are separated by the critical radius \(r_{c}\) at the domain boundary (dashed white line). The texture encompasses \(\sim O(100)\) spins within the critical radius and remains stable for minutes. The optically-pumped electron serves as a spin injector, and produces a nanoscale magnetic field gradient \(B\) (shaded light pink) that stabilizes the spin texture in a prethermal state. (B) _Experimental schematic_ showing prethermalization caused by Floquet driving at high magnetic field (HF) \(\mathbf{B}_{0}\geq\)7 T, with a train of spin-locking \(\theta\)-pulses of length \(t_{p}\). \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{y}}\) spin polarization is interrogated in windows between the pulses for total readout times \(t>\)60 s (\(>\)0.5M pulses). (C) The \({}^{13}\)C spin texture is generated and stabilized by a spatially varying potential \(\phi(r)\) created upon Floquet driving with \(\theta\)=\(\pi\) in the presence of the NV gradient (cf. Sec. III).
Measurements at _longer_ times \(t\), therefore, serially probe nuclei _further_ away from the NV.
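This serial-readout mechanism can be mimicked with a toy calculation (entirely our own construction): a shell-like profile decaying under a rate \(\Gamma(r)\propto 1/r^{6}\) yields a total signal that crosses zero as the proximal contribution dies first:

```python
# Toy model of NV-driven serial readout of a shell-like polarization texture.
import numpy as np

r = np.linspace(1.0, 5.0, 400)            # distance from the NV (arb. units)
p = np.where(r < 2.0, -1.0, 0.3)          # negative core, positive outer shell
gamma = 5.0 / r**6                        # spatially varying dissipation rate

t = np.linspace(0.0, 60.0, 2000)
S = np.array([np.sum(p * np.exp(-gamma * ti)) for ti in t])
t_zc = t[np.argmin(np.abs(S))]            # signal flips sign from - to +
print(f"S(0) = {S[0]:.1f}, zero-crossing near t = {t_zc:.1f}")
```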
Fig. 2C schematically represents the polarization distribution during the interrogation period \(t\), focusing on specific points (I-IV) along the bold orange trace (\(\tau_{-}\)=44.8 s) in Fig. 2B. For simplicity, we depict the generated texture as spherical shells, although in reality it possesses an angular dependence inherited from the hyperfine interaction (see Fig. 5 C-D). Starting from the initial texture (\(t\)=0), proximal \({}^{13}\)C polarization undergoes dissipation from the NV center (shown white), gradually revealing polarization at greater distances as \(t\) increases. The zero-crossing at \(t\)=\(t_{\text{zc}}\) corresponds to an equal distribution of positive and negative polarization (Fig. 2C II). Further dissipation leads to a polarization sign inversion (Fig. 2C III-IV). Overall, Fig. 2 illustrates the ability to discriminate spins without relying on the electronic "frozen core", constituting a departure from previous work [35; 39].
The different stages in Fig. 2C exhibit distinct signatures in the \({}^{13}\)C NMR spectrum. In Fig. 2D we report the \({}^{13}\)C spectrum for 91 \(\tau_{-}\) values, separated by 0.2 s intervals, obtained by applying a Fourier transform (FT) to the sign-corrected data in Fig. 2B. The Fourier intensity is shown on a logarithmic scale spanning over nine orders of magnitude. The wide dynamic range reflects the high signal-to-noise ratio in our experiments. \({}^{13}\)C spins closer to the NV produce broader spectral lines because they experience faster relaxation, manifesting as stronger contributions to the spectral wings around 0Hz. Conversely, more distant \({}^{13}\)C nuclei are centrally located in the spectrum due to longer relaxation times. With increasing \(\tau_{-}\), depolarization initially affects the spectral wings, resulting in an apparent narrowing of the spectrum at \(\tau_{-}\)\(\approx\)37 s. Subsequently, the inversion of the central feature, corresponding to bulk \({}^{13}\)C nuclei, follows suit.
Spin texture generated via **State Engineering** (Fig. 2) is not intrinsically stable. Over time, any imprinted domain boundaries
Fig. 2: **Spin textures via hyperpolarization injection (State Engineering).** (A) Net \({}^{13}\)C spin polarization under positive hyperpolarization buildup (\(\tau_{+}\), red points), followed by negative hyperpolarization injection after \(\tau_{+}\)=60 s (\(\tau_{-}\), blue points) at room temperature. Solid lines represent a biexponential fit. The boxed region around \(t_{\text{pol}}\approx\)97 s corresponds to close to zero net polarization. (B) Normalized spin-lock decays under the Floquet drive with \(\theta\approx\pi\)/2, showing (i) signal \(S\) and (ii) rotating-frame phase \(\varphi_{R}\). _Dashed black line:_ spin-lock decay corresponding to \(\tau_{-}\)=0 (_(a)_ in A), displaying a long lifetime \(T_{2}^{*}\approx\)93.4 s. _Colored lines:_ Data for different \(\tau_{-}\) values (colorbar). For \(\tau_{-}\) corresponding to the boxed region in (A), decay profiles exhibit a sharp zero-crossing at \(t\)=\(t_{\text{zc}}\) (II) and an associated phase inversion (see (ii)). Each trace has \(>\)0.5\(\times\)10\({}^{6}\) data points, with \(\sim\)10\({}^{4}\) points in the vicinity (1% variation) of \(t_{\text{zc}}\). _Dark orange line:_ the \(\tau_{-}\)=44.8 s slice is emphasized for clarity. A full movie of this dataset can be viewed at [30]. (iii) Movement of the zero-crossing \(t_{\text{zc}}\) with \(\tau_{-}\), demonstrating the ability to control the spin texture. (C) _Schematic representation_ of spin textures with increasing experiment progression time \(t\) (arrow). Representative points (I-IV) are marked on the \(\tau_{-}\)=44.8 s slice in (B). The NV is located at \(r\)=0. Positive (negative) polarization is shaded red (blue). Electron-driven dissipation during \(t\) is indicated by a decrease in polarization (white region) around \(r\)=0. Spin diffusion is indicated by the growing texture size and the increasing white region at the boundary between polarization layers of different sign, with the dashed line as a guide to the eye for the zero-polarization regime. (D) \({}^{13}\)C _NMR spectrum_ obtained via Fourier transforms of the full data in (B). Data are shown for varying \(\tau_{-}\) (vertical axis) and plotted on a logarithmic scale, comprising 91 \(\tau_{-}\) slices separated by 0.2 s, and display a \(>\)10\({}^{9}\) variation in intensity dynamic range (color bar). The signature of spin-shell texture formation is the narrowing of the FT spectrum at \(\tau_{-}\approx\)37 s, corresponding to the boxed region in (A).
begin to dissolve due to spin diffusion. Such "melting" of the spin texture can be observed in experiments where a delay period \(t_{\text{wait}}\) (\(<\)30 s) is introduced before the (shell-like) \(\hat{\mathbf{z}}\)-polarized state is rotated into the \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{y}}\) plane and the Floquet driving is started (see Fig. 3A). During \(t_{\text{wait}}\), NV-driven dissipation of the \(\hat{\mathbf{z}}\)-polarized texture is negligible. This is due to the increased energy gap (\(\gamma_{n}B_{0}\), as opposed to \(\Omega\), see SI Sec. S2.1), and is reflected in the extremely long nuclear lifetimes \(T_{\text{1n}}\approx\)1 h \(\gg T_{2}^{*}\) under these conditions. Consequently, spin dynamics during the \(t_{\text{wait}}\) period is primarily governed by spin diffusion. Simulations based on a Lindblad master equation confirm this picture (see Methods).
Fig. 3B shows the dependence of the signal \(S\) on the delay period \(t_{\text{wait}}\), starting from a spin texture produced with \(\tau_{+}\)=60 s and \(\tau_{-}\)=42 s. Spin diffusion gradually homogenizes (flattens) the polarization distribution as \(t_{\text{wait}}\) increases (Fig. 3B). This leads to a rightward shift of the zero-crossing point \(t_{\text{zc}}\) and a decrease in signal amplitude, as evident in Fig. 3B and as schematically depicted in Fig. 3C.
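The melting can be mimicked with a minimal one-dimensional diffusion toy (our own construction, not the Lindblad simulation used for Fig. 3F-G), in which the domain wall broadens with waiting time:

```python
# Toy 1D diffusion of a polarization domain wall during t_wait.
import numpy as np

x = np.linspace(0.0, 10.0, 200)
p0 = np.where(x < 3.0, -1.0, 1.0)          # sharp domain wall at x = 3
D, dx, dt = 0.1, x[1] - x[0], 1.0e-2       # D*dt/dx^2 < 0.5 for stability

def diffuse(p, steps):
    p = p.copy()
    for _ in range(steps):
        lap = np.roll(p, 1) + np.roll(p, -1) - 2.0 * p
        lap[0] = lap[-1] = 0.0             # closed (frozen) boundaries
        p += D * dt / dx**2 * lap
    return p

for steps in (0, 2000, 8000):              # increasing t_wait
    width = np.sum(np.abs(diffuse(p0, steps)) < 0.5) * dx
    print(f"steps={steps:5d}, boundary width = {width:.2f}")
```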
The spin-diffusion-mediated flattening of the polarization distribution can be directly observed by considering the signal decrease with \(t_{\text{wait}}\) for a fixed value of \(t\). Fig. 3E shows \(S(t_{\text{wait}})\) at \(t\)=50 s, normalized against the value at \(t_{\text{wait}}\)=0 to highlight changes with \(t_{\text{wait}}\). For a homogeneously polarized state (\(\tau_{-}\)=0), the normalized signal increases slightly with increasing \(t_{\text{wait}}\) due to polarization diffusing away from the NV, and hence being subject to lower effective relaxation from it. These observations are supported by numerical simulations (Fig. 3F-G) obtained by solving the Lindblad equation for a simplified toy model (see Fig. 5 and SI S8.1). All qualitative features and the intuition from the microscopic dynamics are well reproduced.
## III Robust spin shells by Hamiltonian engineering
To stabilize the generated spin texture, we introduce a second approach: **Hamiltonian Engineering**. This stems from a surprising observation: when deploying the Floquet pulse train with flip angle \(\theta\approx\pi\) (Fig. 4A) and starting with \({}^{13}\)C spins positively polarized (\(\tau_{+}\)=60 s, \(\tau_{-}\)=0 s), we observe that over time the signal \(S\) exhibits a sharp zero-crossing and a subsequent sign inversion that can persist for longer than 180 s. This is highlighted in the representative traces in Fig. 4B-C, on linear (B) and logarithmic (C) scales, respectively, both with \(>\)10\({}^{5}\) points. The emergence of spin shells here is counterintuitive: since for \(\theta\approx\pi\) no conservation law shields the initial state from rapid heat-death, one would expect the interacting nuclear spin system to quickly relax to a featureless infinite-temperature state [22; 40]. Despite the extensive use of \(\pi\)-trains (CPMG experiments [41; 42]) in various contexts, including dynamical decoupling and quantum sensing, to the best of our knowledge this phenomenon has not been previously reported.
We attribute the emergence of the long-lived signal to Hamiltonian engineering facilitated by the simultaneous action of the nanoscale electronic field gradient dressed by the \(\pi\)-train Floquet drive. In particular, the NV electron spin, thermally polarized (\(\approx\)12.6% at 9.4 T and 100 K), induces a hyperfine gradient \(\eta(\mathbf{r})\) on the nuclear spins. As a consequence, the direction and magnitude of the Rabi field \(\Omega\) experienced by the nuclei depend on their proximity to the electron (see Fig. 1A).
Fig. 3: **Melting of spin texture due to spin diffusion (State Engineering).** (A) _Experiment schematic._ A waiting period \(t_{\text{wait}}\) at high field is introduced after successive \(\{\tau_{+},\tau_{-}\}\) spin injection, and prior to application of the Floquet drive in Fig. 2. (B) Representative signal traces showing changes in decay profiles with variable \(t_{\text{wait}}\) for texture generated with \(\tau_{+}\)=60 s and \(\tau_{-}\)=42 s (bolded trace in Fig. 2B). The decrease in signal amplitude between \(t_{\text{wait}}\)=0 and \(t_{\text{wait}}\)=30 s evidences the instability of the state-engineered spin texture. A full movie of this data set, with phase information, can be found at [31]. (C) _Schematic representation_ showing the melting of the spin texture during \(t_{\text{wait}}\) due to spin diffusion at the boundary between polarization layers; the dashed line serves as a guide to the eye for the domain-wall boundary. (D) Points show the movement of the zero-crossing position \(t_{\text{zc}}\) with \(t_{\text{wait}}\) for the data in (B). The solid line is a linear fit. (E) Signal intensity at \(t\)=50 s in (B) plotted for different values of \(t_{\text{wait}}\), normalized to its value for the \(t_{\text{wait}}\)=0 case. Melting of the spin texture due to diffusion manifests as a decrease in signal. The solid line is a biexponential fit. The light blue line corresponds to the case without spin texture (\(\tau_{+}\)=60 s, \(\tau_{-}\)=0). (F-G) _Simulations_ corresponding to (D-E) showing zero-crossing times and \(|\text{min}(\mathcal{I}_{x})|\) extracted from numerical time evolution in a one-dimensional short-range model using a similar shell-like initial state (see Fig. 5 and Methods). Simulations show qualitative agreement with the experimental results.
When \(\theta\neq\pi\) (cf. **State Engineering**, Fig. 2-Fig. 3), this merely alters the prethermalization axis \(\hat{\mathbf{x}}\rightarrow\hat{\mathbf{x}}^{\prime}(r)\) (see SI S7 B.1). On the other hand, when \(\theta\)=\(\pi\) exactly and we ignore the NV-induced gradient field, the absence of \(\hat{\mathbf{x}}\)-polarization conservation leads to a rapid, yet trivial, decay of the spins within \(T_{2}^{*}\) [34].
However, in the presence of the NV gradient field \(\eta(\mathbf{r})\), for \(\theta\)=\(\pi\)+\(\epsilon\) (with \(\epsilon\ll\pi\)), the spin dynamics is governed by the effective Floquet Hamiltonian (see SI S7 B.2)
\[\mathcal{H}_{\text{eff}}=\mathcal{H}_{\text{dd}}+\sum_{\mathbf{r}}\phi_{\mathbf{r}}I_{\mathbf{r}}^{x}, \tag{3}\]
where \(\phi_{\mathbf{r}}=\eta(\mathbf{r})+(\theta-\pi)/\tau\) denotes the effective spatially varying on-site potential (Fig. 1C). We operate in the regime of a weak electronic field gradient (\(\phi_{\mathbf{r}}<\)1 kHz), far from the frozen-core limit -- a significant departure from previous experiments [16]. Notably, the spatial inhomogeneity induced by \(\phi_{\mathbf{r}}\) separates into two regimes at a critical radius \(r_{c}\), where \(\phi_{\mathbf{r}}\) flips sign: \(\phi_{\mathbf{r}}\gtrless 0\) for \(r=\|\mathbf{r}\|\lessgtr r_{c}\). The exact position of \(r_{c}\) is determined by the \({}^{13}\)C slice for which \(\phi_{\mathbf{r}}\)=0 (Fig. 1C). Note that here we ignore the angular \(\vartheta\) dependence for simplicity and return to it in Fig. 5C-D.
Crucially, the sign change of \(\phi_{\mathbf{r}}\) on either side of \(r_{c}\) means that \(\mathcal{H}_{\text{eff}}\) internally encodes spatial structure, with \(r_{c}\) serving as the domain boundary. Then, irrespective of the initial state, applying the Eigenstate Thermalization Hypothesis [1; 2; 3], one predicts that the spins thermalize to the quasi-equilibrium state described by the density matrix \(\rho_{\text{eq}}\propto\exp(-\beta\mathcal{H}_{\text{eff}})=\mathbb{1}-\beta\mathcal{H}_{\text{eff}}+\mathcal{O}(\beta^{2})\), with an inverse temperature \(\beta\) set by the energy density of the initial state. The local expectation value of the \(\hat{\mathbf{x}}\)-polarization in the thermal state, \(\mathcal{I}_{\mathbf{r}}^{x}=\text{Tr}(I_{\mathbf{r}}^{x}\rho)\simeq-\beta\phi_{\mathbf{r}}/2+\mathcal{O}(\beta^{2})\), then attains opposite signs on either side of \(r_{c}\) -- imprinting the spatial structure of a quasi-equilibrium shell stabilized within the prethermal plateau (Fig. 1B).
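A short numerical sketch of this prediction is given below; the gradient prefactor, pulse spacing, and \(\beta\) are chosen arbitrarily for illustration (in the experiment \(\beta\) is set by the energy density of the initial state):

```python
# Sketch of the quasi-equilibrium texture: local polarization ~ -beta*phi(r)/2,
# with phi(r) = eta(r) + (theta - pi)/tau changing sign at r_c.
import numpy as np

tau, theta, beta = 1.0e-4, 0.94 * np.pi, 1.0  # pulse spacing (s), flip angle
A = 1.0e4                                      # hyperfine prefactor (arb. units)
r = np.linspace(1.0, 10.0, 500)               # distance from the NV (arb. units)
eta = A / r**3                                 # radial part of the NV gradient
phi = eta + (theta - np.pi) / tau              # effective on-site potential

pol = -beta * phi / 2.0                        # local x-polarization profile
r_c = r[np.argmin(np.abs(phi))]                # domain boundary: phi(r_c) = 0
print(f"r_c = {r_c:.2f}; polarization flips sign across it")
```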
The highlighted traces in Fig. 4B-C track such shell formation. The initial polarization profile \(\rho_{i}\) is far from the prethermal equilibrium \(\rho_{\text{eq}}\): it possesses a finite amount of energy with respect to \(\mathcal{H}_{\text{eff}}\) in the thermodynamic limit, which is initially localized within the polarized region. Over time this energy diffuses through the system due to energy (quasi-)conservation (see SI S8 B.3). Consequently, spins in the region \(r>r_{c}\) begin to sequentially flip, driving the system towards quasi-equilibrium, \(\rho_{i}\rightarrow\rho_{\text{eq}}\) (see the schematic depiction in Fig. 4D). As the negative polarization in the outer region (\(r>r_{c}\)) surpasses the positive polarization in the inner region (\(r<r_{c}\)) in magnitude, the total polarization along the \(\hat{\mathbf{x}}\) direction undergoes a sign inversion, resulting in a zero-crossing. Importantly, since the spin texture emerges through thermalization and the total polarization is not conserved, the domain boundary \(r_{c}\) remains stable (on prethermal timescales).
Figure 4: **Robust spin textures by Hamiltonian engineering.** (A) _Experiment schematic._ Spins are hyperpolarized for \(\tau_{+}\)=90 s and subject to a spin-locking train with \(\theta\approx\pi\). (B) The measured signal \(S\) undergoes a sharp zero-crossing and associated sign inversion (the phase signal is analogous to Fig. 2B(ii) and not shown). Colored lines show the variation with \(\theta\) (see colorbar). Each trace has \(\approx\)0.5M points. The dark purple trace highlights representative data at \(\theta/\pi\)=0.94 for clarity, with a sharp zero-crossing at \(t_{\text{zc}}\). (C) Signal analogous to the bolded purple trace in (B), with a small frequency offset (see SI S5). The signal is plotted on a logarithmic scale and extends for \(t>\)150 s. A dramatic signal zero-crossing is evident. (D) _Schematic representation_ of the formed spin texture for key points of the bolded trace in (B) (marked I-IV). The signal zero-crossing arises due to thermalization to an effective Hamiltonian \(\mathcal{H}_{\text{eff}}\) bearing spatial texture arising from the NV-imposed gradient (Fig. 1C). The dashed black line indicates the domain boundary at \(r_{c}\). The spin texture remains robust against spin diffusion with \(t\) (see Fig. 5A). (E) _Movement of the zero-crossing with \(\theta\)._ 2D color plot showing a logarithmic-scale visualization of the data in (B) plotted with respect to \(\theta\) (horizontal slices). The zero-crossing appears as an abrupt decrease in signal (colored blue). The \(\theta\)=\(\pi\) slice is marked and corresponds to a rapid signal decay; \(t_{\text{zc}}\) occurs at later times for smaller \(\theta\). Point III corresponding to the zero-crossing in (B) is marked. (F) _Numerical simulations_ performed with LITE for the simplified short-range model show good qualitative agreement with the experimental data (see Fig. 5 and SI S8).
This stands in contrast to **State Engineering** (see Fig. 5). Furthermore, the signal drop observed here is orders of magnitude larger compared to **State Engineering**, since we do not operate in a regime of nearly zero total net polarization, indicating a larger spin-texture gradient (Methods).
Experiments in Fig. 4B-C, therefore, uniquely enable real-time observation of spin flipping dynamics during the prethermalization process. This capability, combined with the large accessible values of \(Jt>\)10\({}^{6}\), allows us to investigate thermalizing spin dynamics with exceptional resolution over extended time periods, surpassing the capabilities of previous experiments by many orders of magnitude [36, 37, 38].
Colored traces in Fig. 4B show similar experiments for different values of \(\theta\) in the vicinity of \(\theta\)=\(\pi\) (see colorbar). The zero-crossing position is observed to change with \(\theta\). A movie of this data is accessible at [43]. Fig. 4E recasts this movie for 120 different values of \(\theta\) in a 2D plot, on a logarithmic scale in intensity, where the darker blue colors highlight the zero-crossing. This visualization clearly elucidates the movement of \(t_{\text{zc}}\) with \(\theta\). Additionally, a distinct slice in the plot displays rapid signal decay (indicated by the dashed line); this is attributed to bulk \({}^{13}\)C nuclei, far removed from the NV influence, for which \(\theta\)=\(\pi\) exactly. The qualitative behaviour can be reproduced in numerical simulations (see Methods and Refs. [44; 45]). While the exact shape of the zero-crossing arc depends on the details of the model (such as \(\phi_{\mathbf{r}}\)), the simulations capture the diverging behaviour of \(t_{\text{zc}}\) found for \(\theta\rightarrow\pi^{-}\).
Based on the data in Fig. 4, we estimate (see Methods) that the domain spans \(r_{c}\approx 2.7\,\mathrm{nm}\cdot\sqrt[3]{\left|3\cos^{2}\theta-1\right|}\), encompassing \(\approx\)150 spins for the representative line in Fig. 4B (\(\theta\)=\(0.94\,\pi\)) [46]. We are able to exert control over the \(r_{c}\) domain boundary by changing the Floquet-engineered Hamiltonian (3). By adjusting the kick angle \(\theta\) we can modify the effective on-site potential \(\phi_{\mathbf{r}}\), and hence \(r_{c}\), allowing us to tune the shell size between tens (\(\approx\)50) and mesoscopic numbers (\(\gtrsim\)300) of spins (see SI Sec. S7 D) [47]. We observe that the movement of \(t_{\text{zc}}\) matches the theoretical prediction (Fig. 4E,F). Varying the Rabi frequency \(\Omega\) or the frequency offset can modify the Hamiltonian and allow further control over \(r_{c}\) (SI Sec. S5). Furthermore, thermalization to the shell-like texture is found to be robust to the lattice orientation (SI Sec. S5 B) and to the initial states employed (SI Sec. S5 A).
## IV Numerical results
Quantum simulations here use Local-Information Time Evolution (LITE) [44; 45], a recently developed algorithm suited to investigating diffusive and dissipative quantum dynamics. Numerical tractability constrains us to work with a simplified one-dimensional toy-model Hamiltonian, which nevertheless captures all relevant features of \(\mathcal{H}_{\text{eff}}\) (Methods).
Fig. 5A(i) shows simulations corresponding to the **State Engineering** experiments. Time is represented on the horizontal axis in units of \(J^{-1}\), the vertical axis represents the distance from the NV (both axes are on a logarithmic scale), and the colorbar shows the local \(\hat{\mathbf{x}}\)-polarization \(\mathcal{I}_{\mathbf{r}}^{x}\). Considering an initial shell-like texture produced by spin injection, we observe that the subsequent dynamics is characterized by melting domain walls, causing the magnetization gradient to diminish over time.
Figure 5: **Simulations of spin texture formation and evolution** using the effective model Hamiltonians for (A) **State Engineering** and (B-D) **Hamiltonian Engineering**. Panels (A,B) are obtained from _1D short-range simulations_ for infinitely extended systems including the effects of dissipation, using the LITE algorithm (see SI S8 A). The system extends in both directions, but only positive values are shown. The horizontal axis displays the interrogation time \(t\) in units of \(J^{-1}\); the vertical axis displays the distance \(r\) from the NV in arbitrary units (a.u.). (A) _Spin texture via State Engineering._ We imprint the spin texture in the initial state using 11 positively polarized spins close to the NV center at \(r\)=0, followed by 18 negatively polarized spins. (i) Colors display the polarization \(\mathcal{I}_{\mathbf{r}}^{x}(t)\) (colorbar) at different sites at time \(t\) (vertical slices). The panel displays the spreading dynamics with \(t\). The formed shells melt under diffusion (see SI S8 C). (ii) Polarization integrated over the spin ensemble, corresponding to the signal \(S\) measured in Fig. 2. Simulations reveal the formation of a sharp zero-crossing in either case (\(t_{\text{zc}}\) marked), occurring at the instance of zero total net polarization. (B) _Spin texture via Hamiltonian Engineering._ The initial state at \(t\)=0 contains 11 positively polarized spins within a background of spins in a fully mixed state. (i) Polarization \(\mathcal{I}_{\mathbf{r}}^{x}(t)\) (colorbar) showing that the spin texture forms via thermalization. The late-time behavior (dashed line) follows energy diffusion \(\propto\sqrt{t}\). The shell critical radius \(r_{c}\) (horizontal dashed line) is stabilized. (ii) Integrated ensemble polarization showing the formation of a zero-crossing analogous to Fig. 4. (C, D) _Formed shells in three dimensions_ via Hamiltonian Engineering, obtained from the late-time dynamics of classical three-dimensional long-range simulations (see SI Fig. S28 for full time traces). For the classical simulation, dissipation is not considered and spins within the frozen core (white region in D) are not simulated.
Fig. 5A(ii) shows the total net polarization \(S=\left|\sum_{\mathbf{r}}\mathcal{I}_{\mathbf{r}}^{x}\right|\) -- analogous to that measured in experiment. We observe a zero-crossing similar to Fig. 2B. Notably, in **State Engineering**, we rely on dissipation to enable the observation of the sign inversion of the total polarization: in the absence of dissipation, the total \(\hat{\mathbf{x}}\) polarization is (quasi-)conserved and would thus appear constant on prethermal timescales.
In contrast, Fig. 5B considers **Hamiltonian Engineering**, with polarization initially confined to the vicinity of the NV. Over time, by virtue of energy diffusion, the spins prethermalize to a polarization gradient under the NV-induced potential. Peripheral spins become endowed with negative polarization (blue region), and ultimately the total polarization inverts as the majority of spins acquire negative polarization. Importantly, the domain boundary \(r_{c}\) (dashed horizontal line) separating the local positive and negative polarization regions is solely set by the on-site potential \(\phi_{\mathbf{r}}\) present in \(\mathcal{H}_{\text{eff}}\) and is, therefore, stationary. As the spins thermalize, the (non-conserved) polarization adapts to the _stable_ gradient profile imposed by \(\mathcal{H}_{\text{eff}}\). Irrespective of the initial state, the diffusion-propelled dynamics therefore induces a polarization gradient with a time-invariant domain boundary.
Considering the net polarization (Fig. 5B(ii)), we observe a steep zero-crossing in qualitative agreement with experiments. We note that the zero-crossing \(t_{\mathrm{zc}}\) arises even in the absence of dissipation (see SI S8.2), as observed in the long-time slices in Fig. 5B(i). Dissipation only serves to extinguish polarization at distances close to the NV (i.e., accelerating the inevitable diffusion-induced polarization inversion).
To demonstrate that these results hold beyond one-dimensional short-range models, we additionally perform classical simulations based on a three-dimensional long-range model comprising \(\approx\)10\({}^{3}\) dipolar-interacting spins on a diamond lattice (Methods). In contrast to the one-dimensional quantum simulations, here the system is finite and free of dissipation so that equilibrium is reached in finite time. Notably, here, the system thermalizes to a three-dimensional spin texture which exhibits the analytically predicted \(\vartheta\)-dependence of the NV gradient field with \({\it{r}}_{c}\) spanning several nanometers (cf. Sec. III).
## V Discussion & Outlook
Our experiments introduce several novel features. The creation, stabilization, and observation of spin texture are achieved through _global_ spin control and readout. Despite this, a good degree of local nuclear discrimination is shown to be attainable by utilizing an electron as a _controllable_ spin injector, gradient source, and dissipator. Notably, unlike other experiments [16], our approach is not confined to the constraints presented by diffusion barrier limits, enabling examination of a mesoscopically large number of nuclear spins around each electron. The injected hyperpolarization exceeds the Boltzmann polarization levels by orders of magnitude compared to previous studies [39].
Our method of continuous interrogation in the rotating frame, as opposed to point-by-point probes in the lab-frame [39], offers a distinct methodological advantage. It facilitates a rapidly sampled (\(J\tau\)\(<\)0.07) visualization of polarization dynamics while reaching into the very-long-time regime (\(Jt\)\(>\)1.2\(\times\)10\({}^{6}\)). The latter is orders of magnitude beyond previous experiments [36; 37; 38], permitting direct observation of emergent stabilization. The dynamics observed complement recent experiments with cold atoms [5; 48; 49; 51], but occur in a distinct and novel context: at the nanoscale, in the solid-state, and in the limit of strongly interacting dipolar-coupled spins (for which \(JT_{2}^{*}\)\(\approx\)1).
Our dynamical stabilization protocol operates distinctly out of equilibrium. The dynamics is governed by an emergent quasi-conservation of energy and explicitly breaks polarization conservation, allowing us to inhibit diffusion in the spin polarization channel. In conjunction with the electronic hyperfine field, this protocol stabilizes localized, shell-like spin textures at a finite energy density, even in the absence of local control. Note that no obvious static protocol could yield similar outcomes in our system. The underlying physics of this dynamical stabilization deviates from traditional state engineering approaches, is versatile, and can encompass a variety of models (short- or long-range, clean or weakly-disordered interacting systems, quantum or classical models), across different dimensions.
The elucidated **Hamiltonian Engineering** protocol, therefore, highlights the untapped capabilities of nonequilibrium control techniques in manipulating quantum matter. For example, our approach can be utilized within quantum simulators to induce a magnetization domain wall in a close-to-infinite-temperature state: the starting point to investigate sub/superdiffusive spin dynamics [5; 52; 53]. Depending on the structure of the effective Hamiltonian and the initial state, one can also stabilize and investigate spin textures at negative temperature [54]. Our work, therefore, presents a first instance of a far-reaching idea that demonstrates the applicability of concepts from the emerging field of nonequilibrium quantum dynamics to engineer stable many-body states with tailored attributes [55].
Lastly, the manipulation and control of the orientation of spin polarization at sub-nanometer length scales itself, opens up several promising future directions. The use of ubiquitously occurring nuclear spins, as elucidated in this work, broadens the application of spin texturing to diverse systems, moving beyond previously considered magnetic materials [56; 57; 58; 12; 14; 15]. This suggests applications in quantum memories [12; 13], spintronics [14; 15], and spatiotemporal quantum sensing harnessing hyperpolarized nuclei as sensors [59]. Controllable spin textures may also be applied to non-invasive nanoscale chemical imaging in materials science and biology. We envision employing targeted electron spin labels to map the radial spin densities of different nuclei (e.g., \({}^{1}\)H and \({}^{13}\)C) within molecules.
## VI Acknowledgements
We thank J. H. Bardarson, T. Klein Kvorning, R. Moessner, C. Ramanathan, J. Reimer, D. Suter for insightful discussions, and J. Mercade (Tabor Electronics) for technical assistance. This work was supported by ONR (N00014-20-1-2806), AFOSR YIP (FA9550-23-1-0106), AFOSR DURIP (FA9550-22-1-0156), a Google Faculty Research Award and the CIFAR Azrieli Foundation (GS23-013). CF and CA acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 101001902). The computations with the LITE algorithm were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Tetralith partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
Classical simulations were performed on the MPI PKS HPC cluster.
## VII Author Contributions
KH, CF, ND, PMS and DM contributed equally to this work. Order in which these authors are listed was decided by dice roll.
DM, KH, ND, MM and ED built the experimental apparatus. ND, DM, KH, SB and AA collected data and performed data analysis. KH, ND and DM conceived of experimental design for Hamiltonian Engineering, and QRF, WB, XL and AA conceived of experimental design for State Engineering. UB, AP, ND, KH and AN made data visualizations.
PMS and CF developed the Floquet analysis. PMS did the numerically exact quantum simulations and the classical simulations. CF did the approximate quantum simulations using LITE and the diffusion analysis. CA performed the open quantum systems analysis and contributed to LITE simulations. MB supervised the theory work. PMS, CF, CA, and MB helped interpreting and analyzing the numerical and experimental data. AA, MB, PMS, CF, CA, ND and KH wrote the paper.
## VIII Methods
### Experiment
#### Sample
Experiments here employ a single-crystal diamond sample (3\(\times\)3\(\times\)0.3 mm) from Element6. It contains \({}^{13}\)C spins at natural abundance (1.1%), and \(\approx\)1 ppm NV\({}^{-}\) concentration, corresponding to an inter-NV spacing of \(\sim\)25 nm; each NV center has \(\sim\)10\({}^{4}\) \({}^{13}\)C spins surrounding it. The sample also hosts \(\gtrsim\)20 ppm of substitutional nitrogen impurities (P1 centers). For experiments in Fig. 2-Fig. 4, the sample is oriented such that its [100] face is approximately parallel to \(B_{0}\). However, as elucidated in SI, Sec. S5.2, these results are qualitatively independent of the sample orientation.
### Experimental Apparatus
State Engineering experiments (Fig. 2-Fig. 3), carried out at room temperature, employ an apparatus described before in [60]. For Hamiltonian Engineering (Fig. 4), on the other hand, we introduce a novel instrument (Fig. 6) for \({}^{13}\)C hyperpolarization and interrogation at cryogenic temperatures. Low temperatures allow access to higher Boltzmann electronic polarization, consequently stronger electronic gradient fields, and slower electronic relaxation rates. The instrument supplies for the first time (to our knowledge) a method for _"cryogenic field cycling"_ for dynamic nuclear polarization (DNP), allowing simultaneous operation at variable fields (1mT-9.4T), and controllable cryogenic temperatures down to 4K (although we restrict ourselves to 77K in this work).
The device utilizes an Oxford SpectrostatNMR cryostat under continuous-flow cryogenic cooling, which is mechanically moved ("shuttled") from lower (few mT) fields into a \(B_{0}\)=9.4 T NMR magnet (Oxford). The low-field position situated 640 mm above the magnet center (\(\mathbf{B}_{\mathrm{pol}}\)=36 mT) is employed for hyperpolarization; the cryostat is then shuttled to high field (9.4 T) where the \({}^{13}\)C nuclei are interrogated. Shuttling occurs via a belt-driven actuator (Parker) powered by a motor (ACS) fitted with a high-torque gearbox for enhanced load-bearing capacity for the heavy (25 lb) cryostat. The actuator carries a movable stage to which two custom-designed clamps secure the cryostat. A 1.6 m flexible transfer line allows for continuous cooling during shuttling and operation over \(>\)1 week. Shuttling occurs at 7 mm/s and takes \(\approx\)90 s. Long \({}^{13}\)C \(T_{1}\) lifetimes (\(>\)1 hr) at fields exceeding 0.1 T mean that there is minimal loss of polarization during shuttling. We note that room temperature experiments (Fig. 2-Fig. 3), in contrast, involve shuttling in \(<\)1 s.
The diamond sample is secured at the bottom portion of the cryostat in a custom-built NMR/DNP probe (Fig. 7). The probe is top-mounted in the cryostat and includes two coils: a loop through which microwaves (MWs) are applied for DNP, and a saddle coil for NMR detection. A novel arrangement employing O-rings at the probe top allows the NMR coil to be frequency-tuned and impedance matched without breaking vacuum.
For optical DNP at \(\mathbf{B}_{\mathrm{pol}}\), an optical window at the cryostat bottom permits illumination by a laser (\(\lambda\)=532nm, Coherent) directed into the bore using a 45\({}^{\circ}\) mirror. A pair of piezo-driven Zaber mirrors ensure optimal alignment into the cryostat center, aided by a camera mounted with a 637nm long-pass filter at the top of the cryostat. A TTL-triggered mechanical shutter controls illumination timing to within 1 \(\mu\)s.
Hyperpolarization employs MW chirps applied across the NV EPR spectrum. MWs are generated by a Tabor Proteus arbitrary waveform transceiver (AWT) and gated by a Mini-Circuits ZASWA-2-50DR+ switch. MWs are amplified in two stages by ZHL 2W-63-S+ and ZHL-100W-63+ amplifiers. A Varian VNMRS console generates the RF pulses at the \({}^{13}\)C Larmor frequency (\(\approx\)100 MHz), while the NMR signal is detected in windows between the pulses, filtered, amplified by a Varian preamplifier, and digitized by the AWT. We apply \(>\)0.5M pulses, yielding a continuously interrogated NMR signal for up to \(t\)=180 s that is non-destructive since spins are only weakly coupled to the RF coil. Timing of all events -- lasers, MW application, mechanical shuttling, triggering NMR detection, and signal digitization -- is synchronized by the Swabian pattern generator, controlled by MATLAB.
Fig. 6: **Novel instrumentation for cryogenic DNP.** (A) _Device construction._ CAD model showing instrument and highlighting main components. It consists of a 9.4 T superconducting magnet, surrounded by an aluminum frame to which a 4 K cryostat is mounted on a belt-driven actuator, bearing a high-torque motor. Laser illuminates sample from the bottom. (B) Photograph showing key features of the instrument. Cryostat is shown at low-field position \(\mathbf{B}_{\mathrm{pol}}\); actuator truck and mounts are visible.
### Hyperpolarization Methodology
Hyperpolarization follows [33]. Application of a continuous-wave (CW) 532 nm green laser induces polarization in the NV electrons to the \(m_{\mathrm{s}}\)=0 state at \(\mathbf{B}_{\mathrm{pol}}\). This is transferred to \({}^{13}\)C nuclei through successive traversals of rotating-frame Landau-Zener level anti-crossings. Practically, this is accomplished by utilizing MW chirps generated by a Tabor Proteus AWT, sweeping across the NV center EPR spectral bandwidth (25 MHz) at a rate of 200 Hz.
### Data Acquisition and Processing
The \({}^{13}\)C Larmor precession signal induced in the RF saddle coil is captured by a Tabor Proteus AWT, and digitized every 1 ns in \(t_{\mathrm{acq}}\) windows between the pulses. It is decimated (here 64-fold) to preserve memory and improve acquisition and processing speed. A Fast Fourier Transform (FFT) is applied to extract signal amplitude \(S\) and phase \(\varphi\) in each \(t_{\mathrm{acq}}\) window. Phase signals in each window are linearly offset by the phase accrued during the \(t_{p}\) pulse periods; a phase-unwrapping algorithm is used to unfold this trivial phase development, yielding the phase \(\varphi_{R}\) of the spins in the rotating frame. \((S,\varphi_{R})\) forms the basis for the experimental data in this work.
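To make this processing chain concrete, the following is a minimal numpy sketch of the pipeline (decimation, per-window FFT, removal of the trivial pulse-period phase, and unwrapping). The function and parameter names (`process_windows`, `f_offset`, `t_p`) are illustrative stand-ins, not our acquisition code.

```python
import numpy as np

def process_windows(raw, n_win, decimate=64, f_offset=0.0, t_p=0.0):
    """Per-window FFT processing of a pulsed NMR acquisition.

    raw   : 1D array of voltage samples (equal-length acquisition windows)
    n_win : number of acquisition windows between pulses
    """
    win = raw.reshape(n_win, -1)[:, ::decimate]    # split into windows, decimate
    spec = np.fft.rfft(win, axis=1)
    peak = np.abs(spec).mean(axis=0).argmax()      # 13C Larmor line
    S = np.abs(spec[:, peak])                      # amplitude per window
    phi = np.angle(spec[:, peak])
    # subtract the trivial phase accrued during each pulse period t_p,
    # then unwrap to obtain the rotating-frame phase phi_R
    phi = phi - 2 * np.pi * f_offset * t_p * np.arange(n_win)
    phi_R = np.unwrap(phi)
    return S, phi_R
```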
### Estimation of Kick Angle \(\theta\)
To estimate the kick angle \(\theta\), important for Fig. 4, we conduct \({}^{13}\)C Rabi experiments and fit the obtained nutation to a sinusoidal curve. A spin-locking train of \(\sim\)\(\pi/2\) pulses and continuous readout (see Fig. 8B) for \(t\)=70 s is employed to enhance measurement SNR. The pulse length of the initial pulse \(\theta_{\mathrm{init}}\) is varied, and the integrated signal is measured, as shown in Fig. 8C. High SNR yields a high-precision estimate of \(\theta\), with an error within 1%.
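As an illustration, the sinusoidal fit could be set up with `scipy` as below; the helper name, initial guesses, and the off-resonance offset parameter are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def rabi_signal(t_init, A, Omega, phi0, C):
    # integrated signal vs. length of the first pulse; the offset C captures
    # the finite signal at t_init = 0 arising from slight off-resonance
    return A * np.sin(Omega * t_init + phi0) + C

# t_init : array of first-pulse lengths [s]; S_int : measured integrated signals
# popt, pcov = curve_fit(rabi_signal, t_init, S_int,
#                        p0=[1.0, 2 * np.pi * 20e3, 0.0, 0.0])
# theta = popt[1] * t_p                   # kick angle for pulse length t_p
# theta_err = np.sqrt(pcov[1, 1]) * t_p   # ~1 % at the SNR achieved here
```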
### Comparison of Signals in State and Hamiltonian Engineering
For clarity, we contrast in Fig. 9 the measured signal \(S\) amplitude for experiments of State Engineering (Fig. 2-Fig. 3) and Hamiltonian Engineering (Fig. 4). Spin texture created by State Engineering starts from a regime of _low_ net polarization (Fig. 2A), and sign inversion (characterized by the \(t_{\rm zc}\) zero-crossing) occurs due to electronic dissipation. Hamiltonian Engineering, on the other hand, starts with large net polarization, and sign inversion occurs on account of thermalization into the imposed potential. As a result, the measured signal and the magnitude of the drop at the zero-crossing are over one order of magnitude larger in the Hamiltonian Engineering method than in State Engineering (see Fig. 9). The strong signal and robustness of the domain boundaries highlight the advantages of Hamiltonian Engineering for stable spin texture generation and readout.
Fig. 7: **Probe for \({}^{13}\)C hyperpolarization and readout**. (A) CAD model showing the schematic of the NMR/DNP probe. It fits snugly within the cryostat, providing additional RF shielding. \(>\)3 ft long tuning rods extend through the top of the probe and allow for RF cavity tuning and impedance matching, even under vacuum and cryogenic conditions. (B) _Photograph_ of constructed probe. A close-up of the coil arrangement is shown, highlighting the RF saddle coil and centrally placed microwave loop employed for hyperpolarization. Aperture at probe base enables optical access to the sample.
Fig. 8: **Estimation of kick angle \(\theta\)**. (A) Schematic of spin-locking Rabi experiment, consisting of a train of \(\pi/2\) pulses, where the angle of the first pulse \(\theta_{\mathrm{init}}\) is varied. (B) Typical signals in case of \(\theta_{\mathrm{init}}\)=\(\pi/2\) and \(\theta_{\mathrm{init}}\)=\(\pi\), with total measurement time of \(t\)=70 s, corresponding to \(>\)0.5M pulses. (C) Variation of integrated signal \(S_{\mathrm{int}}\) with length of the first pulse. Solid line represents a sinusoidal fit. Slight off-resonance in the pulses leads to a finite signal even at \(\theta_{\mathrm{init}}\)=0.
Fig. 9: **Comparison of State and Hamiltonian Engineering signals**. Comparison of signal \(S\) for State Engineering data in main text Fig. 2 (taken at room temperature) and Hamiltonian Engineering data from main text Fig. 4 (taken at 100 K) for comparable zero-crossing times \(t_{\mathrm{zc}}\)\(\approx\)31 s. Data is shown on a logarithmic scale for clarity. Measured signal is over an order of magnitude greater in the Hamiltonian Engineering method.
### Additional Results on Hamiltonian Engineering
To further validate the physical picture in Fig. 4 and Fig. 5B for Hamiltonian engineering, additional studies are conducted to probe the thermalization process. A summary of these findings is provided here, with further details available in the SI. First, to confirm that the zero-crossing observed in Fig. 4B does not arise from spins tipping towards the \(\hat{\mathbf{z}}\)-axis where they are unobservable, we interrupted the Floquet drive with a \(\pi/2\) pulse at \(t\)=\(t_{\rm zc}\). The data (SI Sec. S4) reveal no generation of \(\hat{\mathbf{z}}\) polarization during \(t\), supporting the thermalization model above. Next, we investigated the relative roles of diffusion and dissipation leading up to \(t_{\rm zc}\) by studying profiles similar to Fig. 4B at varying temperatures (Sec. S5 C). Temperature serves as a control parameter for relaxation, as it strongly lengthens (\(\gtrsim\)50-fold) the electronic \(T_{1e}\) while only linearly changing the strength of the electronic gradient. Decreasing the temperature results in a rightward shift of \(t_{\rm zc}\) due to the slower rate of electron dissipation, consistent with theoretical expectations (Fig. 5B).
### Theory
### Effective Hamiltonians for State and Hamiltonian Engineering protocols
As explained in the main text, the experimental system is subject to a Floquet drive which consists of a periodic train of \(\hat{\mathbf{x}}\)-pulses. When the period of switching is small compared to the energy scales of the physical system, this driving induces prethermalization wherein the dynamics is governed by an effective Hamiltonian [21], before the system eventually heats up to a featureless infinite-temperature state.
In the SI (Sec. S7), we provide a detailed derivation of the approximate effective Hamiltonians \(\mathcal{H}_{\rm{SL}}\), Eq. (2), and \(\mathcal{H}_{\rm{eff}}\), Eq. (3), that govern the dynamics of the nuclear spin system for the State and Hamiltonian Engineering approaches, respectively. In particular, using exact simulations on system sizes up to \(N\)=16 quantum spins, we obtain an excellent agreement between the dynamics generated by the effective Hamiltonian and the exact Floquet system in all temporal regimes of interest (see SI, Sec. S7). Therefore, for the theoretical analysis in the main text, we work with static effective Hamiltonians. We emphasize that their applicability is limited to the duration of the prethermal plateau.
### Dissipation induced by the NV center
In the strong coupling limit between the NV center and \({}^{13}\)C spins, the dissipation induced by the NV-center can be rigorously derived from the Hamiltonian of the full system in three dimensions (see Sec. S6 B). Unlike previous works [61], we integrate out the phonon bath and the NV electron obtaining a Markovian master equation for the \({}^{13}\)C spins only. Such a Markovian master equation involves on-site Lindblad jump operators with coupling constants decaying as \(\sim 1/r^{6}\) as a function of the distance \(r\) from the NV center (which is effectively short-ranged). Thus, as a simplification, in our approximate one-dimensional quantum simulations, we approximately solve the Lindblad equation with local jump operators \(L\) acting only on the site with index \(r=0\) (representing the location of the NV) with isotropic coupling constants,
\[L_{+}=\frac{1}{2}\left(I_{0}^{x}+iI_{0}^{y}\right),\ \ L_{-}=\frac{1}{2}\left(I_{ 0}^{x}-iI_{0}^{y}\right),\ \ L_{z}=I_{0}^{z}. \tag{4}\]
Such operators generate both dephasing and dissipation in the system. Both effects are produced by the jump operators derived in Sec. S6 B for the Hamiltonian of the full system in three dimensions.
### Numerical simulations
#### Quantum simulations
We use the novel algorithm LITE (local-information time evolution, see Refs. [44, 45]) to simulate the dynamics of the \({}^{13}\)C-spins with respect to the effective Hamiltonians subject to dissipation. LITE is designed to investigate the out-of-equilibrium dynamics of quantum systems, including open systems governed by the Lindblad equation. Its adaptive system size allows us to effectively simulate infinite systems. Motivated by the experimental setup, the system is initialized in a spatially inhomogeneous partially polarized state, where a relatively small number of spins in proximity to the NV center (\(r\)=0) carry a finite partial \(\hat{\mathbf{x}}\)-polarization, while spins located far from \(r\)=0 are fully mixed \(\rho_{r\gg a}\)=\(\mathds{1}_{2}/2\).
To use the LITE toolbox effectively, we investigate numerically tractable one-dimensional short-range toy models akin to the effective (three-dimensional) Hamiltonians of the coupled \({}^{13}\)C system
\[\mathcal{H}_{\pi/2} = -\frac{1}{2}\sum_{r}J_{r}\big{(}3I_{r}^{x}I_{r+a}^{x}-\mathbf{I}_{r} \cdot\mathbf{I}_{r+a}\big{)},\] \[\mathcal{H}_{\pi} = \sum_{r}J_{r}\big{(}3I_{r}^{z}I_{r+a}^{z}-\mathbf{I}_{r}\cdot\mathbf{I}_{r +a}\big{)}+\phi_{r}I_{r}^{x}. \tag{5}\]
The subscripts indicate the corresponding partner Hamiltonian in the actual system: \(\pi/2\) refers to the system kicked with \(\theta=\pi/2\), resulting in the effective Hamiltonian \(\mathcal{H}_{\rm SL}\); likewise, the subscript \(\pi\) refers to a kick angle \(\theta\approx\pi\) with the associated Hamiltonian \(\mathcal{H}_{\rm eff}\). In the simulations, \(J_{r}\)=\(J_{0}\)+\(W\) is taken weakly disordered, with \(W\) drawn uniformly at random from the interval \([-0.3|J_{0}|,0.3|J_{0}|]\); \(a\) denotes the lattice constant. The on-site potential follows \(\phi_{r}\sim\frac{1}{r^{3}}-\delta\theta\), with \(r=0\) corresponding to the location of the NV center, and \(\delta\theta\) the deviation of the kick angle from \(\pi\) (see SI S7 B2). For the simulations in Fig. 5 we use \(a=0.2/\sqrt{\pi}\), \(\delta\theta=0.05\pi\), \(J_{0}=-0.025\), and \(\gamma\)=\(0.1|J_{0}|\).
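For reference, the disorder and potential just described can be generated as in the following numpy sketch; the chain length, the regularization of \(\phi_{r}\) at \(r=0\), and the random seed are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # chain sites; r = 0 hosts the NV
a = 0.2 / np.sqrt(np.pi)                 # lattice constant (Fig. 5 value)
J0, dtheta = -0.025, 0.05 * np.pi

# weakly disordered couplings J_r = J0 + W, W ~ U[-0.3|J0|, 0.3|J0|]
J = J0 + rng.uniform(-0.3 * abs(J0), 0.3 * abs(J0), size=N - 1)

# NV-induced on-site potential phi_r ~ 1/r^3 - dtheta,
# regularized at the NV site r = 0 (an illustrative choice)
r = np.arange(N) * a
phi = 1.0 / np.maximum(r, a) ** 3 - dtheta

# crossing radius r_c: first site where the potential changes sign
r_c = r[np.argmax(phi < 0)]
```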
While these toy models may appear simplistic compared to the actual experimental system, as they lack the complexity of higher dimensions and long-range spin-spin couplings, they capture the essential physics of diffusion (and dissipation), since they obey the same conservation laws as their three-dimensional counterparts. Therefore, the conclusions drawn from our numerical results can be qualitatively extended to the experimental system.
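To illustrate how the dissipative toy-model dynamics can be reproduced at small scale, here is a minimal QuTiP sketch evolving \(\mathcal{H}_{\pi/2}\) of Eq. (5) (with uniform couplings, for brevity) under the jump operators of Eq. (4); `qutip.mesolve` on \(N=6\) spins stands in for the LITE algorithm used in our actual simulations, and all parameter values are illustrative.

```python
import numpy as np
import qutip as qt

N = 6                                   # small chain; LITE reaches far larger N
J0, gamma = -0.025, 0.1 * 0.025

def site(op1, r):                       # embed a single-site operator at site r
    ops = [qt.qeye(2)] * N
    ops[r] = op1
    return qt.tensor(ops)

Ix = [site(0.5 * qt.sigmax(), r) for r in range(N)]
Iy = [site(0.5 * qt.sigmay(), r) for r in range(N)]
Iz = [site(0.5 * qt.sigmaz(), r) for r in range(N)]

# H_{pi/2} of Eq. (5), uniform nearest-neighbour couplings
H = sum(-0.5 * J0 * (3 * Ix[r] * Ix[r + 1]
        - (Ix[r] * Ix[r + 1] + Iy[r] * Iy[r + 1] + Iz[r] * Iz[r + 1]))
        for r in range(N - 1))

# local jump operators of Eq. (4), acting only at the NV site r = 0
c_ops = [np.sqrt(gamma) * 0.5 * (Ix[0] + 1j * Iy[0]),
         np.sqrt(gamma) * 0.5 * (Ix[0] - 1j * Iy[0]),
         np.sqrt(gamma) * Iz[0]]

# partially x-polarized near the NV, fully mixed elsewhere
rho0 = qt.tensor([(qt.qeye(2) + 0.6 * qt.sigmax()) / 2 if r < 3
                  else qt.qeye(2) / 2 for r in range(N)])

times = np.linspace(0, 200, 101)
result = qt.mesolve(H, rho0, times, c_ops=c_ops, e_ops=Ix)  # site-resolved <I_r^x>
```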
#### Classical simulations
In addition, we also performed classical simulations analyzing the dynamics of the system with hundreds of dipolar-interacting spins of the full three-dimensional long-range interacting model. Spins are placed randomly on the vertices of a 3D diamond lattice of a finite extent, with lattice constant \(a\)=0.356 nm, and the classical evolution, generated under the effective Hamiltonian \(\mathcal{H}_{\text{eff}}\) (3), is studied. In the classical limit, the evolution of the spins \(\mathcal{I}_{k}\) is described by (see SI, Sec. S10.1)
\[\frac{\text{d}}{\text{d}t}\,\mathcal{I}_{k}(t)=\mathcal{I}_{k}(t)\times \boldsymbol{\nabla}_{\mathcal{I}_{k}}\mathcal{H}_{\text{eff}}(\mathcal{I}_{k}( t))\;. \tag{6}\]
In particular, for the classical simulation we consider only closed-system dynamics, i.e., we do not include dissipation as is done in the LITE simulations. However, as we also comment on later, dissipation is not an essential ingredient for Hamiltonian Engineering.
As in the experiment, we average the simulation data over many (here 100) different lattice configurations. For each lattice configuration, we consider an ensemble of 150 partially polarized initial states drawn from a spatially inhomogeneously polarized distribution, where all spins within a radius \(r\)\(<\)7 nm are polarized and all others unpolarized, without correlations between the spins. We then evolve this ensemble using the classical Hamilton equations (Eq. (6)), and compute the ensemble-averaged polarization, see Sec. S10.
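A minimal sketch of such a classical integration is given below; it propagates Eq. (6) for a 1D chain with nearest-neighbour couplings and an NV-like on-site potential (rather than the full 3D dipolar lattice), so the geometry, initial state, and parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical precession dI_k/dt = I_k x grad_{I_k} H_eff (Eq. 6), sketched for
# H = sum_r J (3 I_r^z I_{r+1}^z - I_r . I_{r+1}) + phi_r I_r^x.
def local_field(spins, J, phi):
    h = np.zeros_like(spins)                 # h_k = grad_{I_k} H_eff
    h[:-1, 2] += 3 * J * spins[1:, 2]        # 3 I^z I^z term, right neighbour
    h[1:, 2] += 3 * J * spins[:-1, 2]        # ... left neighbour
    h[:-1] -= J * spins[1:]                  # -I . I term
    h[1:] -= J * spins[:-1]
    h[:, 0] += phi                           # on-site potential along x
    return h

def rhs(t, y, J, phi):
    spins = y.reshape(-1, 3)
    return np.cross(spins, local_field(spins, J, phi)).ravel()

N, J = 32, -0.025
phi = 1.0 / np.maximum(np.arange(N), 1.0) ** 3 - 0.05 * np.pi
spins0 = np.zeros((N, 3))
spins0[:8, 0] = 0.5                          # partially polarized near the NV
sol = solve_ivp(rhs, (0, 400), spins0.ravel(), args=(J, phi), max_step=1.0)
# In practice this is repeated over an ensemble of random initial
# configurations and lattice realizations, then averaged.
```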
### Details on the Theoretical Results for State Engineering
#### Zero crossing radius for State Engineering
In Fig. 5A(i) the polarization dynamics close to the NV center seem to show an abrupt sign inversion from positive to negative polarization. However, this is only an artefact originating from the logarithmic time scale. In Fig. 10 we show the region close to \(r\)=0 and around the potential sign inversion on a linear scale. Here, it becomes clear that the strong dissipation close to the NV leads to a quick decay of positive polarization. Therefore, the positive polarization cannot diffuse outwards. In fact, the negative polarization diffuses symmetrically in both directions filling the polarization hole left by the decayed positive polarization. Thus, the positive polarization does not abruptly invert; rather it is absorbed by the NV and the negative polarization can freely diffuse into what initially was a positively polarized regime. In the absence of dissipation, both positive and negative polarization would diffuse outwards leading to an increase in crossing radius (see SI Fig. S27).
#### Waiting time analysis
Typically, the dissipation strength is indirectly controlled by tuning the temperature of the sample. Here, the peculiar form of the dissipation obtained in the singular coupling limit (see SI, Sec. S6.2), where jump operators are only the spin operators along the \(\hat{\mathbf{z}}\)-axis, opens up another possibility. While the strength of the dissipation itself remains immutable, the effect of the Lindblad jump operators on the system strongly depends on the direction of polarization: spins polarized along \(\hat{\mathbf{z}}\) experience only dephasing, whereas spins polarized in any other direction undergo both dephasing and dissipation. Experimentally, this is reflected in the extremely long \(T_{1}\) lifetimes of \(\hat{\mathbf{z}}\)-polarized states; during this period, the system still evolves under diffusion and any initial domain wall starts to melt over time (cf. SI, Fig. S25). Since dissipation does not affect the spins uniformly, different signals are expected to emerge depending on the waiting time \(t_{\text{wait}}\) after which the system is finally rotated out of the \(\hat{\mathbf{z}}\)-axis.
To simulate this effect we initialize the system in domain wall states akin to Fig. S25 and evolve it with \(\mathcal{H}_{\pi/2}\) up to times \(t_{\text{wait}}\) after which we switch on dissipation. The results are displayed in Fig. 11. As the waiting time increases, the zero-crossing time grows accordingly. This is expected since the system diffuses polarization away from \(r=0\) during \(t_{\text{wait}}\) and becomes less sensitive to dissipation (here acting only at \(r=0\)). A less obvious observation is the decrease of the absolute value of the minimally reached total net polarization \(|\min(\mathcal{I}_{x})|\) as a function of the waiting time (Fig. 11 (c)). A possible explanation is that with increasing waiting time, diffusion causes domain walls to melt, flattening the polarization profiles; hence, the net amount of polarization left at large distances (i.e., probed at long times) is reduced with increasing waiting time. This is also supported by simulations performed with a uniformly polarized (within a small region around the NV) initial state where no such decay appears as a function of waiting time. In fact, in this scenario, even a slight increase can be observed. Intuitively, this is expected as for increased \(t_{\text{wait}}\) more polarization is able to diffuse away from the dissipation-active region around the NV. All these results are qualitatively consistent with experimental observations.
Figure 10: **Zero crossing radius for State Engineering.** Zoom-in to \(Jt<200\) region in Fig. 5A(i) with adjusted color map scale to reveal polarization dynamics close to NV (\(r=0\)). The dynamics of positive polarization close to NV are dominated by dissipation such that it decays to zero before the crossing radius can move outwards due to diffusion.
The numerical simulations shown in Fig. 3F-G have been performed by means of the LITE algorithm. During the waiting time \(t_{\mathrm{wait}}\), the system evolves under the one-dimensional short-range version of \(\mathcal{H}_{\mathrm{dd}}\) and is subject to spin diffusion. When the readout protocol is activated, dissipation comes into play (see above). Therefore, we initialize the system with a different number of positively polarized spins and activate dissipation after \(t_{\mathrm{wait}}\). To plot Fig. 3F we measure the zero-crossing point \(t_{\mathrm{zc}}\), while for Fig. 3G we compute the minimum of the total net polarization.
### Details on the Theoretical Results for Hamiltonian Engineering
#### Energy vs. polarization diffusion
In the Hamiltonian Engineering simulations, we find that the variance of the energy distribution grows as \(\sim\)\(\sqrt{t}\) during the late-time dynamics (\(Jt\)\(>\)10), as expected for diffusive processes (see SI S8.2.3). Since the energy operator has a large overlap with the \(\hat{\mathbf{x}}\)-polarization at long times, the diffusive scaling becomes visible in the spreading of the polarizing front (dashed tilted line, Fig. 5B); however, we stress that in Hamiltonian Engineering, proper diffusion of polarization is absent due to a lack of \(\hat{\mathbf{x}}\)-polarization conservation.
#### Effect of dissipation
Let us emphasize that in contrast to State Engineering, the \(t_{\mathrm{zc}}\) zero-crossing for Hamiltonian Engineering arises even in the absence of dissipation (see SI S8.2). As observed in the long-time slices in Fig. 5B, dissipation only serves to extinguish polarization close to the NV (i.e., accelerating the inevitable diffusion-induced polarization inversion).
#### Critical radius \(r_{c}\)
In the main text, we argued that the polarization zero-crossing radius is given by the zeroes, \(\phi(r_{c})=0\), of the effective on-site potential \(\phi\). To leading order, this on-site potential is determined solely from the microscopic gradient field induced by the NV together with the details of the kick sequence (see SI S7.2). Thus, assuming that we have full knowledge about the gradient field, we can estimate the crossing radius for the experimental data, see Fig. 4C-D.
In particular, we assume that the gradient field induced by the NV electron follows the dipole-dipole form \(B=2PK_{\mathrm{exp}}(3\cos^{2}\vartheta-1)/r^{3}\), with \(K_{\mathrm{exp}}=\mu_{0}\hbar\gamma_{\mathrm{n}}\gamma_{\mathrm{e}}/4\pi\) and electron spin polarization \(P\) (\(\approx\)10%). Then, using the experimental parameters from Fig. 4, i.e., \(\Omega\)\(\approx\)\(50\,\mathrm{kHz}\), \(t_{\mathrm{acq}}=51\,\mu\mathrm{s}\) and, e.g., \(\theta=0.94\,\pi\) (\(t_{\mathrm{p}}=95\,\mu\mathrm{s}\)), we can estimate the crossing radius \(r_{c}\)\(\approx\)\(2.7\,\mathrm{nm}\)\(\times\sqrt[3]{|3\,\cos^{2}\vartheta-1|}\). For more details see SI, Sec. S7.2.
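As a quick back-of-the-envelope check of this angular dependence (taking the 2.7 nm prefactor from the estimate above):

```python
import numpy as np

# numerical check of r_c ~ 2.7 nm * |3 cos^2(theta) - 1|^(1/3)
magic = np.arccos(1 / np.sqrt(3))            # magic angle, ~54.74 deg
theta = np.array([0.0, magic, np.pi / 2])    # polar angle w.r.t. B0
r_c = 2.7 * np.cbrt(np.abs(3 * np.cos(theta) ** 2 - 1))
# -> [3.40, 0.00, 2.70] nm: largest along the NV axis, vanishing at the
#    magic angle, and 2.7 nm in the equatorial plane
```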
Figure 11: **Waiting time simulations.** (a) Time-evolution curves of the total \(\hat{\mathbf{x}}\)-polarization for different waiting times. After \(t_{\mathrm{wait}}\) the dissipation acting on the site with \(r=r_{\mathrm{NV}}\) is switched on. For increased waiting time, the corresponding zero-crossing time increases since polarization spreads diffusively during \(t_{\mathrm{wait}}\), diminishing the effect of dissipation, which acts most strongly at the position of the NV (\(r_{\mathrm{NV}}=0\)). We use an initial domain wall state with a total of \(N_{+}+N_{-}=31\) initially polarized spins, where the \(N_{+}=13\) (panel (_i_), blue) and \(N_{+}=11\) (panel (_ii_), orange) spins closest to \(r_{\mathrm{NV}}\) have \(p_{r}=0.6\) and the remaining \(N_{-}=31-N_{+}\) spins have \(p_{r}=-0.2\) (here, \(\rho_{r}\propto 1+p_{r}I_{r}^{x}\)). The dissipation parameters are \(\gamma_{z}=\gamma_{+}=\gamma_{-}=0.5J\), and the other simulation parameters are as in SI Fig. S15. **(b)** The zero-crossing time \(t_{\mathrm{zc}}\) of the curves in (a), measured from \(t_{\mathrm{wait}}\), as a function of \(t_{\mathrm{wait}}\). **(c)** The absolute value of the minimum attained value of the total \(\hat{\mathbf{x}}\)-polarization of the curves in (a) as a function of the waiting time, normalized with respect to \(t_{\mathrm{wait}}=0\). In addition to the domain wall initial states, we also perform a simulation with a uniformly polarized initial state with \(N_{-}=0\) (\(N_{+}=31\)) (red curve, (iii)).
Supplementary Information: Nanoscale engineering and dynamical stabilization of mesoscopic spin textures
Kieren Harkins,\({}^{1,*}\) Christoph Fleckenstein,\({}^{2,*}\) Noella D'Souza,\({}^{1,*}\) Paul M. Schindler,\({}^{3,*}\) David Marchiori,\({}^{1,*}\) Claudia Artiaco,\({}^{2}\) Quentin Reynard-Feytis,\({}^{1}\) Ushoshi Basumallick,\({}^{1}\) William Beatrez,\({}^{1}\) Arjun Pillai,\({}^{1}\) Matthias Hagn,\({}^{1}\) Aniruddha Nayak,\({}^{1}\) Samantha Breuer,\({}^{1}\) Xudong Lv,\({}^{1}\) Maxwell McAllister,\({}^{1}\) Paul Reshetikhin,\({}^{1}\) Emanuel Druga,\({}^{1}\) Marin Bukov,\({}^{3}\) and Ashok Ajoy,\({}^{1,4,5}\)
\({}^{1}\) Department of Chemistry, University of California, Berkeley, Berkeley, CA 94720, USA.
\({}^{2}\) Department of Physics, KTH Royal Institute of Technology, SE-106 91 Stockholm, Sweden.
\({}^{3}\) Max Planck Institute for the Physics of Complex Systems, Nothnitzer Str. 38, 01187 Dresden, Germany.
\({}^{4}\) Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA.
\({}^{5}\) CIFAR Azrieli Global Scholars Program, 661 University Ave, Toronto, ON M5G 1M1, Canada.
###### Contents
* S1 Summary
* S2 Dissipation sources in the experiment
* S2.1 Nuclear relaxation in transverse and longitudinal directions
* S2.2 Indirect evidence for electron dissipation on the \({}^{13}\)C nuclear spins
* S3 Prethermal signal decays at a wide range of flip angles
* S4 Experiments probing \(\hat{\mathbf{z}}\) polarization
* S5 Hamiltonian engineering: variation of \(t_{\text{zc}}\) with experimental parameters
* S5.1 Experiments with changing hyperpolarization time
* S5.2 Experiments with different orientations
* S5.3 Temperature dependence of spin texture
* S6 Theory model
* S6.1 Closed system
* S6.2 Open system
* S7 Effective Hamiltonian analysis
* S7.1 Exact diagonalization simulation
* S7.2 Derivation of Floquet Hamiltonians
* S7.2.1 Spin locking for state engineering (\(\theta\neq\pi\))
* S7.2.2 Spin locking for Hamiltonian engineering (\(\theta\approx\pi\))
* S7.3 Analysis based on the Eigenstate Thermalization Hypothesis
* S7.4 Estimating the crossing radius \(r_{c}\)
* S7.5 Simplifications
* S8 One-dimensional approximate quantum dynamics
* S8.1 Introduction to the LITE algorithm
* S8.2 Energy diffusion around \(\theta\approx\pi\): Hamiltonian engineering
* S8.2.1 Toy model Hamiltonian
* S8.2.2 Initial states
* S8.2.3 Energy diffusion
* S8.2.4 Constant on-site potential \(\phi\)

## S1 Summary
Section S5 examines changes in \(t_{\text{zc}}\) to probe the sensitivity of formed spin texture to experimental parameters. Increasing hyperpolarization time in Fig. S7 does not significantly change the location of \(t_{\text{zc}}\), showcasing the robustness of Hamiltonian Engineering to different initial states.
Decreasing the temperature in Fig. S9 diminishes the effect of electron dissipation on the system, preserving the generated spin texture for longer and moving \(t_{\text{zc}}\) to later times in the experimental data. In Fig. S5, changing the effective Floquet drive Rabi frequency, by changing the pulse duty cycle while fixing \(\theta\), shifts \(t_{\text{zc}}\) as anticipated from theory.
Changing sample orientation creates composite behavior at kick angle \(\pi\) qualitatively identical to other Hamiltonian Engineering experiments (compare Fig. S8Aiii and Biii). The multi-dip behavior seen in Fig. S8Bii is consistent with domain boundaries forming from NV centers inequivalently aligned with respect to the applied magnetic field. Lastly, changing the pulse transmit offset frequency (TOF) shifts the Rabi curve of the system by a finite phase, yet zero crossing behavior can appear before or after the bulk \(\pi\) value for a given TOF (Fig. S6).
Section S6 details the theory model for dipolar-coupled \({}^{13}\)C nuclear spins and nitrogen-vacancy (NV) center electrons. The system is described as a closed system with a hierarchical energy scale separation, allowing for independent manipulation of the \({}^{13}\)C nuclei and NV electrons. The Hamiltonian captures the couplings between spins and the external magnetic field. Additionally, the open system nature of the experimental setup is discussed in Sec. S6.2, where spins and NV centers couple to a phonon bath, resulting in decoherence and dissipation. An effective Lindblad master equation is derived, considering the singular coupling limit, which describes the dissipative dynamics of the spins only. The dissipative terms in the Lindblad equation account for dephasing and dissipation effects operational in experiments.
In Sec. S7, we investigate in more detail the dynamics of the \({}^{13}\)C spin system coupled to the NV center. By considering the fast dynamics of the NV spin, we replace the NV operators with their expectation values. This leads to an effective on-site potential for the \({}^{13}\)C spins. We analyze the system's evolution under a kick sequence and derive the effective Hamiltonian using Floquet's theorem. We consider two cases: spin locking with a kick angle different from \(\pi\) and near \(\pi\). In the former case, the effective Hamiltonian conserves the \(\hat{\mathbf{x}}\)-polarization, while it is not conserved in the latter case, and the strong NV-induced potential leads to a spatially inhomogeneous effective Hamiltonian. Exact numerical simulations confirm the validity of the derived effective Hamiltonians. In Sec. S7.3, we discuss the dynamics of the \({}^{13}\)C system within the Eigenstate Thermalization Hypothesis (ETH). The system is expected to (ultimately) thermalize to an infinite temperature state due to the absence of energy conservation. However, at high driving frequencies, a pre-thermal plateau emerges before full thermalization. We find that in the \(\theta\)=\(\pi\) case, the local polarization follows the local on-site potentials induced by the NV center, which can lead to a sign-inversion of the local polarization, and hence also of the integrated polarization. In Sec. S7.4, we estimate the \(r_{c}\) domain boundary where the local polarization changes sign. Finally, in Sec. S7.5, we present two simplified, effective Hamiltonians for \(\theta\)\(\neq\)\(\pi\) and \(\theta\)\(\approx\)\(\pi\) that aid in a qualitative understanding of the dynamics.
Section S8 first introduces the approximate time-evolution algorithm used to perform the quantum dynamics of the simplified one-dimensional toy models used in the rest of this Section. In particular, Sec. S8.1 summarizes the main features of the local-information time-evolution (LITE) algorithm. LITE is designed to investigate the out-of-equilibrium transport of short-range systems. In contrast to similar algorithms [REF:], it preserves local constants of motion. LITE decomposes the system into subsystems and solves the von Neumann equation for each subsystem in parallel. Importantly, it can also simulate open quantum systems described by the Lindblad master equation. In Sec. S8.2, we apply LITE to simulate an effective one-dimensional short-range toy model for the case \(\theta\)=\(\pi\). Such a toy model is derived from the experimental three-dimensional long-range model within the approximation of sparse density of spins. Thus, the one-dimensional model retains only the dominant nearest-neighbor mutual coupling and the space-dependent on-site potential generated by the NV center. For \(\theta\)\(\approx\)\(\pi\), the energy diffusion and the behavior of the total \(\hat{\mathbf{x}}\)-polarization are analyzed, showing a sign inversion of polarization in the presence of a space-dependent potential. The diffusion of energy in the inhomogeneous systems is explored, revealing a slowing down of energy spread that can be explained in terms of a spatially dependent diffusion constant. Dissipation effects are then introduced. It is shown that dissipation favors energy diffusion, inducing the emergence of Gaussian-like energy distributions. The presence of dissipation does not alter the possibility of observing sign inversion in the total \(\hat{\mathbf{x}}\)-polarization. In Sec. S8.3, the dynamics of the one-dimensional spin system obtained for a kick angle around \(\pi/2\) is investigated. Unlike the case of a kick angle close to \(\pi\), the total (net) \(\hat{\mathbf{x}}\)-polarization is conserved in addition to energy. This leads to the diffusion of polarization throughout the system, making it impossible to detect polarization gradients using global measurements. Only by introducing dissipation can a sign inversion of the integrated polarization be induced. The effects are demonstrated using numerical simulations obtained within the LITE algorithm. In Secs. S8.4 and S8.5, we highlight the main differences between the two kick angles analyzed and summarize the results.
In Sec. S9, we discuss the role of dimensionality and interaction range in the simplified one-dimensional versions of the long-range three-dimensional quantum system used in the experiments. We argue that, although our simplified models may not capture all the details of the out-of-equilibrium dynamics, we expect them to provide qualitative insights into the three-dimensional system. We discuss the effects of dimensionality on Floquet prethermalization and equilibration dynamics within the prethermal plateau. We also consider the possible effects of many-body localization and the impact of spin-spin couplings and angular dependence.
In Sec. S10, we perform classical simulations on a three-dimensional diamond lattice to complement the previous analysis. The classical simulation allows us to study larger systems and reach longer time scales, although quantum correlations are neglected. We analyze the spin locking regime near \(\theta\approx\pi\) and observe the formation of a spatially inhomogeneous local polarization profile. The classical simulation confirms the predictions from the analytical and numerical results obtained for one-dimensional toy models. The simulations show that the local polarization at late times follows the applied local potential up to an overall constant, which is related to the inverse temperature. The classical simulation results provide further support for the formation of a robust spatially inhomogeneous local polarization profile in the experimental system.
## S2 Dissipation sources in the experiment
### Nuclear relaxation in transverse and longitudinal directions
In this discussion, we elucidate the mechanisms for \({}^{13}\)C nuclear relaxation for different initial states. Let us begin by examining Fig. S1A, which illustrates the \({}^{13}\)C dephasing when prepared in a superposition state, \(\propto\)\(I_{x}\). In the absence of the applied Floquet drive, the measured signal rapidly decays in \(T_{2}^{*}\)\(\sim\)1.5ms. This is primarily attributable to interspin interactions, specifically, the formation of many-body spin states invisible under inductive readout. The magnitude of the internuclear dipolar coupling can be estimated from this decay as \(\langle J\rangle\)=660 Hz.
Moving on to Fig. S1B, we consider the spins prepared in the \(\rho_{z}\propto I_{z}\) state. In this case, the corresponding (\(T_{1}\)) relaxation is found to be remarkably long, with \(T_{1}\)\(>\)1 hr even at room temperature. Relaxation arises from spin-flipping noise perpendicular to \(\hat{\mathbf{z}}\), stemming from the spectral density component that matches the nuclear Larmor frequency (\(\omega_{L}\)=\(\gamma_{n}B_{0}\)). Phononic contributions likely play a dominant role in these decay channels; their inherently weak nature results in the long \(T_{1}\). Conversely, to assess the contribution of electronic dissipation to this relaxation, it is worth noting that the corresponding spectral density is centered at (\(T_{1e}\))\({}^{-1}\), which is orders of magnitude separated from \(\omega_{L}\). Hence, this contribution can be considered to be extremely weak. This is supported by experiments in Fig. 3 of the main paper.
Lastly, we direct our attention to \(T_{2}^{\prime}\) relaxation, as shown in Fig. S1C. Here spins are prepared in a superposition state \(\rho_{I}\)\(\propto\)\(I_{x}\) and subjected to Floquet driving, which preserves them along \(\hat{\mathbf{x}}\). However, this effective Hamiltonian is only accurate to leading order in a Magnus expansion, and higher-order terms can induce heating to infinite temperature. Relaxation additionally arises from \(\hat{\mathbf{z}}\)-oriented noise spectrally matched to the energy gap in the rotating frame, \(\Omega_{\text{eff}}\)=\(\Omega(t_{p}/\tau)\), where the latter factor is the pulsing duty cycle. In our experiments, (\(T_{1e}\))\({}^{-1}\)\(\sim\)100 kHz, and the corresponding noise spectral density exhibits a significant contribution at \(\Omega_{\text{eff}}\). Consequently, the electron can play a role as a _"dissipator"_ for the spins.
Note that while the lattice contains both NV and P1 center electrons, the NV center is present at the same location \(r\)=0 for every \({}^{13}\)C ensemble considered while the P1s are randomly positioned. Thus, upon ensemble averaging, the P1s contribute to only a background rate of \(T_{2}^{\prime}\) relaxation, while the NV-driven relaxation is discernible in a spatially dependent manner.
### Indirect evidence for electron dissipation on the \({}^{13}\)C nuclear spins
In previous work, we presented preliminary evidence supporting the role of the electron spin as a nanoscale dissipator for neighboring nuclear spins [62]. Here, we provide a concise summary of these findings. Specifically, we observed that the relaxation profiles of \({}^{13}\)C spins in the \(T_{2}^{\prime}\) regime appear to lengthen when a greater degree of hyperpolarization (achieved by employing longer \(\tau_{\star}\) durations) is injected into the spins. This is shown in Fig. S2, where the colorbar represents \(\tau_{\star}\). We interpreted this behavior by considering that, with longer \(\tau_{\star}\) durations, the polarization has more time to diffuse within the lattice, thus reaching nuclei less affected by relaxation originating from the NV center. Instead, the observation of sharp zero-crossings in Fig. 2 of the paper provides more _direct_ evidence for this, and allows it to be exploited for the readout of spin texture.
## S3 Prethermal signal decays at a wide range of flip angles
We present here an analysis of spin-lock decay behavior for a wide range of flip angles, [0, 1.5\(\pi\)], as a comparison to the results in the main text, which focused on the proximity of \(\theta\)=\(\pi\) (Fig. 4) and \(\theta\)=\(\pi/2\) (Fig. 2). Experiments here (Fig. S3) were conducted at 115 K, similar to conditions in Fig. 4. Fig. S3A shows the integrated signal \(S_{\text{int}}\) for the range of angles, and Fig. S3B-D shows representative slices of signal \(S\) at specific values of \(\theta\) in three ranges (shaded). A full movie of the dataset is available at the YouTube link in Ref. [63].
Data in Fig. S3B,D illustrate that far from \(\pi\), the spin-lock profiles exhibit slow decays at \(T_{2}^{\prime}\) that follow stretched exponential behavior. In the regime away from \(\pi\), there is no inversion of the spin signal with changing \(\theta\). However, as observed in Fig. S3C (also Fig. 4), in the proximity of \(\pi\), there are sharp zero crossings corresponding to the formation of spin-shell texture. Overall, these results demonstrate unexpected sign inversions for the case of CPMG pulse trains, which, to our knowledge, have not been reported previously.
## S4 Experiments probing \(\hat{\mathbf{z}}\) polarization
In the experiments describing shell formation through Hamiltonian Engineering (Fig. 4 of the main paper), we observed an apparent change in the polarization direction from \(\hat{\mathbf{x}}\) to \(-\hat{\mathbf{x}}\), which was attributed to the thermalization of spins under the electron gradient induced potential. In this section, we aim to provide additional evidence to support this. Specifically, we conducted measurements probing the \(\hat{\mathbf{z}}\) component of the spin vector at various time points \(t_{c}\) along the experimental traces in Fig. 4B. This allows us to examine whether the observed zero-crossings in Fig. 4 are trivially a result of the spin vector tilting towards the \(\hat{\mathbf{z}}\) axis, where the spins become unobservable.
To further investigate this, we performed additional experiments as depicted in Fig. S4. The spins are subject to the Floquet drive as before, but it is interrupted at time \(t\)=\(t_{c}\) with \(\pi/2\) pulses applied in either the \(\hat{\mathbf{x}}\) direction (Fig. S4A) or the \(\hat{\mathbf{y}}\) direction (Fig. S4B). These pulses reorganize the spin populations from the \(\hat{\mathbf{y}}\)-\(\hat{\mathbf{z}}\) or \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{z}}\) planes into the \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{y}}\) plane at \(t_{c}\). Setting \(t_{c}\)=\(t_{\mathrm{zc}}\), we can precisely probe the \(\hat{\mathbf{z}}\) component at the zero-crossing point. Moreover, by varying \(t_{c}\), we can track the \(\hat{\mathbf{z}}\) component throughout the observed dynamics in Fig. S4A-B.
Specifically, Fig. S4A(i) and Fig. S4B(i) display data obtained at \(t_{c}\)=\(t_{\mathrm{zc}}\) slices for both cases. Notably, no significant \(\hat{\mathbf{z}}\) signal was observed, with the residual signal primarily attributed to the frequency offset of the pulses employed in these experiments. To further support this observation, the inset Fig. S4B(ii) illustrates the variation of the \(\hat{\mathbf{z}}\) amplitude as a function of \(t_{c}\). It reveals that the \(\hat{\mathbf{z}}\) component undergoes negligible change with \(t\). Consequently, Fig. S4 provides compelling evidence that the zero-crossing observed in Fig. 4 arises from the thermalization dynamics of the spins, rather than from the polarization vector deviating away from the \(\hat{\mathbf{x}}\)-\(\hat{\mathbf{y}}\) plane.
## S5 Hamiltonian engineering: variation of \(t_{\text{zc}}\) with experimental parameters
appears in the 70 s experiment time. This is consistent with effectively shifting the position of the \({}^{13}\)C slice corresponding to \(r_{c}\). We find that the \(\theta\)=\(\pi\) slice occurs _after_ the parabola-like \(t_{\text{zc}}\) curve observed in Fig. 4C for off-resonant pulses. This again points to shifting \(r_{c}\) via the resonance offset field. Nevertheless, the physics underlying the formation and thermalization of the spin shell remains unchanged in this case.
### Experiments with changing hyperpolarization time
We examine the relationship between results displayed in Fig. 4 of the main paper and the amount of hyperpolarization injected into the \({}^{13}\)C nuclear spins. Specifically, Fig. S7 describes experiments varying the amount of time \(\tau_{\text{z}}\) for which hyperpolarization is injected. Temperature is held constant at 115 K, the flip angle here is 160\({}^{\circ}\), and the shuttling time is \(t_{s}\)=90 s.
The amplitude signal obtained in these experiments (Fig. S7B) shows the expected zero crossing and subsequent phase inversion. The zero crossing time \(t_{\text{zc}}\) does not significantly change with increased hyperpolarization time \(\tau_{\text{z}}\). This stands in contrast to the movement of the zero crossing with variations in the flip angle (Fig. 4) or temperature (Fig. S9). These results support the fact that spin texture engineered via **Hamiltonian Engineering** is qualitatively independent of the initial state of nuclear spins.
### Experiments with different orientations
Here we examine the effect of changing the diamond sample orientation on the results described in Fig. 4 of the main paper. We compare two different sample orientations, where the NV center families have different directions relative to \(B_{0}\). Fig. S8A-B shows the DNP-measured EPR spectra of the NV centers in both cases. For experiments similar to Fig. 4B, we find qualitatively identical development of spin-shell texture and zero-crossings in both cases, as described in the colorplots in Fig. S8Aiii,Biii. The physics underlying Fig. 4 can therefore be considered independent of the exact sample orientation.
### Temperature dependence of spin texture
We now consider the effect of temperature on the observed spin texture signal via **Hamiltonian Engineering** (Fig. 4 of the main paper). A key advance enabled by the instrumentation (Fig. 6) introduced in this paper is the ability to study the role of temperature (in the 77 K to room-temperature range), which previously has been shown to have a strong effect on rates of electronic relaxation \(T_{\text{1e}}\) (of both NV and P1 centers).
Fig. S9 presents experiments conducted at six representative temperatures. To ensure consistency, we maintain the same nuclear Rabi frequency for all experiments. Results indicate that the time at which the zero-crossing occurs (\(t_{\text{zc}}\)) increases as the temperature decreases. This can be seen as a direct reflection of the increase in the NV center \(T_{\text{1e}}\), which in turn decreases the strength of the dissipation acting on the \({}^{13}\)C nuclear spins (see also Sec. S6 B). We note that Refs. [64, 65] found that, for samples with NV center densities comparable to the one employed here, \(T_{\text{1e}}\) increases sharply close to 100K and can exceed 1 s under these conditions.
## S6 Theory model
### Closed system
As outlined in the main text, the system of interest is given by dipolar coupled \({}^{13}\)C nuclear spins with the (rotating-frame) Hamiltonian
\[\mathcal{H}_{\text{dd}}=\sum_{k<\ell}b_{k\ell}\left(3I_{k}^{z}I_{\ell}^{z}-\mathbf{I}_{k}\cdot\mathbf{I}_{\ell}\right),\] (S 1)
where \(\mathbf{I}_{k}=\left(I_{k}^{x},I_{k}^{y},I_{k}^{z}\right)^{T}\), the spin-\(1/2\) operators \(I_{k}^{\eta}=\sigma_{k}^{\eta}/2\) describe the nuclear spins, and \(\sigma^{\eta}\) (\(\eta\in\{x,y,z\}\)) are the Pauli matrices. The couplings between different \({}^{13}\)C spins depend strongly on their distance and on the angle with the \(\mathbf{\hat{z}}\)-axis via \(b_{k\ell}=J_{\text{exp}}(3\cos^{2}(\vartheta_{k\ell})-1)/r_{k\ell}^{3}\), where \(\cos(\vartheta_{k\ell})\)=\(\mathbf{B}\cdot\mathbf{r}_{k\ell}/(|\mathbf{B}||\mathbf{r}_{k\ell}|)\); \(\mathbf{r}_{k\ell}\) is the inter-spin vector, \(\mathbf{B}=(0,0,B)^{T}\) is the externally applied magnetic field, and \(J_{\text{exp}}\)=\(\mu_{0}\hbar\gamma_{n}^{2}/4\pi\), with the \({}^{13}\)C nuclear gyromagnetic ratio \(\gamma_{n}\) and the vacuum permeability \(\mu_{0}\). The \({}^{13}\)C spins relevant for the experiment are randomly placed on a diamond lattice in the proximity of NV centers. The NV centers are approximately described by an electronic two-level system, which is itself coupled to all \({}^{13}\)C nuclear spins via dipole-dipole interactions:
\[\mathcal{H}_{\text{NV}} = \epsilon_{\text{NV}}S^{z}\,,\] (S 2) \[\mathcal{H}_{\text{NV},^{13}C} = \sum_{j}K_{j}\left(3S^{z}I_{j}^{z}-\mathbf{S}\cdot\mathbf{I}_{j}\right).\] (S 3)
Here \(S^{\eta}=\sigma^{\eta}/2\) with \(\eta\in\{x,y,z\}\) are (pseudo-)spin-\(1/2\) operators describing the two-level system of the NV center. \(K_{j}=K_{\text{exp}}(3\cos^{2}(\vartheta_{j})-1)/r_{j}^{3}\), with the vector \(\mathbf{r}_{j}\) between the NV center and the \(j\)-th \({}^{13}\)C nuclear spin and the corresponding angle
\(\cos(\vartheta_{j})=\mathbf{B}\cdot\mathbf{r}_{j}/(|\mathbf{B}||\mathbf{r}_{j}|)\) between \(\mathbf{r}_{j}\) and the external magnetic field \(\mathbf{B}\), and \(K_{\text{exp}}\)=\(\mu_{0}\hbar\gamma_{n}\gamma_{e}/4\pi\), where \(\gamma_{e}\) is the gyromagnetic ratio of the electron.
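For concreteness, the geometric dependence of these couplings is easy to tabulate numerically. The following minimal Python sketch evaluates \(b_{k\ell}\) and \(K_{j}\) for randomly placed \({}^{13}\)C spins around an NV center at the origin; prefactors and positions are illustrative placeholders, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative prefactors in arbitrary units; experimentally J_exp and K_exp
# are fixed by the gyromagnetic ratios as defined above.
J_exp, K_exp = 1.0, 100.0

N = 20
pos = rng.uniform(-3.0, 3.0, size=(N, 3))  # random 13C positions (nm)

def angular_factor(r_vec):
    """(3 cos^2(theta) - 1) for the angle between r_vec and the z-axis (B || z)."""
    cos_t = r_vec[2] / np.linalg.norm(r_vec)
    return 3.0 * cos_t**2 - 1.0

# Hyperfine couplings K_j between the NV (at the origin) and each 13C spin.
K = np.array([K_exp * angular_factor(p) / np.linalg.norm(p)**3 for p in pos])

# Dipolar couplings b_kl between 13C pairs.
b = np.zeros((N, N))
for k in range(N):
    for l in range(k + 1, N):
        r = pos[k] - pos[l]
        b[k, l] = J_exp * angular_factor(r) / np.linalg.norm(r)**3

# Spins close to the NV feel |K_j| much larger than typical |b_kl|.
print("max |K_j| =", np.abs(K).max(), "  max |b_kl| =", np.abs(b).max())
```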
Note that the system obeys the hierarchy of energy scales \(\epsilon_{\text{NV}}\gg K\gg J\), where \(J\) and \(K\) are characteristic energy scales associated with the Hamiltonians of Eq. (S 1) and (S 3). The separation of energy scales is important as it allows us to independently kick (i.e., apply strong short pulses to) the \({}^{13}\)C spins while leaving the NV two-level system unaffected. Moreover, it allows us to consider the singular-coupling limit in order to derive an effective Lindblad master equation for the \({}^{13}\)C spins coupled to the NV center (see below). The kick Hamiltonian of interest, created from an external microwave field at the resonant Larmor frequency of the external magnetic field, is given in the rotating frame by
\[\mathcal{H}_{x}(t)=\begin{cases}\Omega\sum_{j}I_{j}^{x}&n\tau<t<n\tau+t_{\text {p}},\ \ n\in\mathbb{N}\\ 0&\text{otherwise}\end{cases},\] (S 4)
with the driving period \(\tau\) and pulse width duration \(t_{\text{p}}\).
For the combined system of NV and \({}^{13}\)C nuclear spins, it is useful to perform an interaction-picture transformation with \(\mathcal{H}_{\text{dd}}+\mathcal{H}_{\text{NV},^{13}\text{C}}\) as the interaction Hamiltonian. Using the rotating-wave approximation, we readily obtain
\[\widetilde{\mathcal{H}}_{\text{tot}}=\mathcal{H}_{\text{dd}}+\sum_{j}2K_{j}S^{z}I_{j}^{z}\,,\] (S 5)
in the rotating frame with respect to \(\exp(-it\mathcal{H}_{\text{NV}})\).
### Open system
In the experimental platform, neither the NV center nor the \({}^{13}\)C spins are completely isolated: they are coupled to a bath of phonons in the diamond lattice, which causes decoherence and dissipation. A comprehensive theory of the system should therefore take these effects into consideration.
Following the rotating-wave approximation, Eq. (S 5), the microscopic Hamiltonian of the total system composed of the NV center and the \({}^{13}\)C spins reads
\[\mathcal{H}_{\text{tot}} =\mathcal{H}_{\text{dd}}+\mathcal{H}_{\text{NV}}+\sum_{j}2K_{j}S^{z}I_{j}^{z}\] (S 6) \[=\sum_{j<k}b_{jk}\left(3I_{j}^{z}I_{k}^{z}-\mathbf{I}_{j}\cdot\mathbf{I}_{k}\right)+\epsilon_{\text{NV}}S^{z}+\sum_{j}2K_{j}S^{z}I_{j}^{z},\] (S 7)
in the lab frame. The first term represents the Hamiltonian of the \({}^{13}\)C spins, the second is the Hamiltonian of the NV center, and the last is the effective interaction between the two. As mentioned, the total system obeys the following hierarchy of energy scales:
\[\epsilon_{\text{NV}}\gg K\gg J,\] (S 8)
which implies that the timescale intrinsic to the NV center is much shorter than any other timescale of the problem. This allows us to assume that the NV center is strongly coupled to the thermal phonon bath while the \({}^{13}\)C spins are only weakly coupled to it. Therefore, we also assume that the NV center is always in thermal equilibrium with the phonons. For simplicity, we consider the NV at infinite temperature: \(\rho_{\text{NV}}(t)=\frac{1}{2}\mathds{1}_{\text{NV}}\), for all times \(t\); however, our results can be easily generalized to any temperature.
The thermal-equilibrium assumption for the NV allows us to derive an effective quantum master equation for the reduced system of the \({}^{13}\)C spins by tracing out the NV center (and, indirectly, the phonons to which it is coupled) in the form of a Lindblad master equation. The latter is the most general Markovian master equation. It describes the equation of motion of a reduced system in contact with a thermal bath which is memoryless, i.e., whose timescale is much shorter than any other timescale in the problem, as the NV center in our system. Moreover, the hierarchy of energy scales (S 8) indicates the validity of the so-called _singular coupling limit_, that enormously simplifies the microscopic derivation of the Lindblad equation [66; 67; 68]. Notice that previous works [61] have investigated the opposite scenario in which one is interested in integrating out the nuclear spins to derive a master equation for the NV electron only. From Eq. (S 8), it is clear that this opposite scenario generates a highly non-Markovian master equation for the NV electron.
The derivation of the Lindblad equation for the \({}^{13}\)C spins proceeds as follows. Let us rewrite the total system Hamiltonian (S 6) in the more general form [69]
\[\mathcal{H}_{\text{tot}}=\mathcal{H}_{{}^{13}\text{C}}\otimes\mathbbm{1}_{\text {NV}}+\alpha^{-2}\mathbbm{1}_{{}^{13}\text{C}}\otimes\mathcal{H}^{\prime}_{ \text{NV}}+\alpha^{-1}V^{\prime},\] (S 9)
where we have renormalized the bath and interaction Hamiltonians as \(\mathcal{H}^{\prime}_{\text{NV}}=\alpha^{2}\mathcal{H}_{\text{NV}}\) and \(V^{\prime}=\sum_{j}2K^{\prime}_{j}S^{z}I^{z}_{j}\) with \(K^{\prime}_{j}=\alpha K_{j}\). The parameter \(\alpha\) is the inverse of the coupling strength; in the singular coupling limit we eventually take \(\alpha\to 0\). Equation (S 9) highlights the hierarchy of energy scales of our model (S 6).
We start by considering the Nakajima-Zwanzig equation [66; 68]
\[\frac{d}{dt}\mathcal{P}\tilde{\rho}(t) = \alpha^{-1}\mathcal{P}\mathcal{V}(t)\mathcal{P}\tilde{\rho}(t)+ \alpha^{-1}\mathcal{P}\mathcal{V}(t)\mathcal{G}(t,0)\mathcal{Q}\tilde{\rho}(0)\] (S 10) \[+\alpha^{-2}\int_{0}^{t}du\mathcal{P}\mathcal{V}(t)\mathcal{G}(t, u)\mathcal{Q}\mathcal{V}(u)\mathcal{P}\tilde{\rho}(u),\]
where \(\tilde{\cdot}\) denotes the time-evolved operator in the interaction picture; \(\mathcal{P}\) and \(\mathcal{Q}\) are two orthogonal projection operators (i.e., \(\mathcal{P}^{2}=\mathcal{P}\), \(\mathcal{Q}^{2}=\mathcal{Q}\), and \(\mathcal{P}\mathcal{Q}=\mathcal{Q}\mathcal{P}=0\)) given by \(\mathcal{P}\rho=\operatorname{Tr}_{\text{NV}}(\rho)\otimes\rho_{\text{NV}}\) and \(\mathcal{Q}\rho=(\mathds{1}-\mathcal{P})\rho\); \(\mathcal{V}(t)\left(\cdot\right)=-i[\tilde{V}^{\prime}(t),\cdot]\); \(\mathcal{G}(t,u)=\mathcal{T}e^{\int_{u}^{t}dt^{\prime}\mathcal{Q}\mathcal{V}(t^{\prime})}\) is the propagator with \(\mathcal{T}\) the time-ordering operator.
The integro-differential equation (S 10) is exact and describes the equation of motion of the relevant subspace \(\mathcal{P}\tilde{\rho}(t)\) in the interaction picture. Unfortunately, it is usually as difficult to solve as the von Neumann equation describing the dynamics of the total system. Hence, we need to make a few more assumptions to proceed further: First, we assume that
\[\mathcal{P}\mathcal{V}(t)\mathcal{P}=0,\] (S 11)
which corresponds to \(\operatorname{Tr}_{\text{NV}}\bigl(\tilde{V}^{\prime}(t)\rho_{\text{NV}}\bigr)=0\). The Hamiltonian (S 6) fulfils this condition since \(\operatorname{Tr}_{\text{NV}}\bigl(S^{z}\rho_{\text{NV}}\bigr)=\operatorname{Tr}_{\text{NV}}\bigl(S^{z}/2\bigr)=0\). Nonetheless, if the condition is not fulfilled one can shift the system Hamiltonian \(\mathcal{H}_{{}^{13}\text{C}}\) such that it is satisfied [68]. Second, we assume that
\[\rho(0)=\rho_{{}^{13}\text{C}}(0)\otimes\rho_{\text{NV}}(0),\] (S 12)
which implies that we have control over the system of the \({}^{13}\)C spins (for example, we can prepare them in a pure state at \(t=0\)). This assumption is known as the Born approximation and it is necessary to obtain a universal dynamical map [68].
Assumptions (S 11) and (S 12) make the first and second term on the right-hand side of Eq. (S 10) vanish; the latter now becomes
\[\frac{d}{dt}\mathcal{P}\tilde{\rho}(t)=\alpha^{-2}\int_{0}^{t}du\mathcal{K}(t, u)\mathcal{P}\tilde{\rho}(u),\] (S 13)
where \(\mathcal{K}(t,u)=\mathcal{P}\mathcal{V}(t)\mathcal{G}(t,u)\mathcal{Q}\mathcal{V }(u)\) is the memory kernel. Since the NV is always in thermal equilibrium, the memory kernel becomes homogeneous: \(\mathcal{K}(t,u)=\mathcal{K}(t-u)\).
Equation (S 13) is in general non-Markovian, as the state of the system at time \(t\) depends on all the states at former times from \(0\) to \(t\). By definition, a Markovian master equation for the system is obtained if the kernel \(\mathcal{K}(t-u)\) behaves as a delta function with respect to \(\tilde{\rho}(u)\). This is verified in the singular coupling limit, where the hierarchy of energy scales (S 8) implies that the typical timescale at which the kernel vanishes, \(\tau_{\text{NV}}\), is much smaller than the typical variation timescale of the system, \(\tau_{{}^{13}\text{C}}\); thus, \(\tau_{\text{NV}}/\tau_{{}^{13}\text{C}}\to 0\) and \(\mathcal{K}(t-u)\propto\delta(t-u)\). This can be seen by integrating Eq. (S 13) and taking the zeroth-order expansion in \(1/\alpha\) of \(\mathcal{K}(t-u)\) [70]; we obtain
\[\mathcal{P}\tilde{\rho}(t)= \mathcal{P}\tilde{\rho}(0)\] (S 14) \[+\alpha^{-2}\int_{0}^{t}ds\int_{0}^{s}du\mathcal{P}\mathcal{V}(s )\mathcal{V}(u)\mathcal{P}\tilde{\rho}(u)+\mathcal{O}(\alpha^{-3}),\]
which is equivalent to
\[\tilde{\rho}_{{}^{13}\text{C}}(t)=\tilde{\rho}_{{}^{13}\text{C}}(0)\] \[+\alpha^{-2}\sum_{nm}\int_{0}^{t}ds\int_{0}^{s}du\,\frac{4K^{\prime}_{n}K^{\prime}_{m}}{r_{n}^{3}r_{m}^{3}}\left(C_{nm}(s-u)\bigl[\tilde{I}^{z}_{m}(u)\tilde{\rho}_{{}^{13}\text{C}}(u),\tilde{I}^{z}_{n}(s)\bigr]\right.\] \[\left.+C^{*}_{nm}(s-u)\bigl[\tilde{I}^{z}_{n}(s),\tilde{\rho}_{{}^{13}\text{C}}(u)\tilde{I}^{z}_{m}(u)\bigr]\right)+\mathcal{O}(\alpha^{-3}),\] (S 15)
where the correlation functions are defined as
\[C_{nm}(s-u)=\operatorname{Tr}\bigl{(}\tilde{S}^{z}(s-u)S^{z}\rho_{\text{NV}} \bigr{)}.\] (S 16)
By Fourier-decomposing the NV operator \(\tilde{S}^{z}(s-u)\) and taking into account the factor \(\alpha^{-2}\) in the free NV evolution, we get
\[C_{nm}(s-u)=\int_{-a}^{a}d\omega\,e^{-\frac{i\omega(s-u)}{\alpha^{2}}}\operatorname{Tr}\bigl(S^{z}(\omega)S^{z}\rho_{\text{NV}}\bigr),\] (S 17)
where the integration limit \(a\) denotes the spectral support. We see that, when \(\alpha\to 0\), the integral in Eq. (S 17) tends to a delta function: \(C_{nm}(s-u)\propto\delta(s-u)\). Thus, by making the change of variable \(w=\alpha^{-2}u\), and taking the limit \(\alpha\to 0\), Eq. (S 15) becomes a Markovian master equation for the \({}^{13}\)C spins; in the Schrödinger picture, it reads
\[\frac{d}{dt}\rho_{{}^{13}\text{C}}(t)=-i\bigl[\mathcal{H}_{{}^{13}\text{C}}+\mathcal{H}_{\text{LS}},\rho_{{}^{13}\text{C}}(t)\bigr]\] \[+\sum_{nm}\frac{4K^{\prime}_{n}K^{\prime}_{m}}{r_{n}^{3}r_{m}^{3}}\gamma_{nm}\left(I^{z}_{m}\rho_{{}^{13}\text{C}}(t)I^{z}_{n}-\frac{1}{2}\{I^{z}_{n}I^{z}_{m},\rho_{{}^{13}\text{C}}(t)\}\right),\] (S 18)
where the Lamb shift Hamiltonian is
\[\mathcal{H}_{\text{LS}}=\sum_{nm}L_{nm}I^{z}_{n}I^{z}_{m}.\] (S 19)
Here \(\gamma_{nm}\) and \(L_{nm}\) represent the Hermitian and anti-Hermitian components of the time integral of the bath correlation functions (S 17):
\[\int_{0}^{\infty}ds\;C_{nm}(s)=\frac{\gamma_{nm}}{2}+iL_{nm}\] (S 20)
Hence, by inserting Eq. (S 17), we have \(\gamma_{nm}\propto 1\) and \(L_{nm}=0\). By absorbing constants of order unity into \(K_{n}^{\prime}K_{m}^{\prime}\), we arrive at the final expression for the \({}^{13}\)C master equation:
\[\frac{d}{dt}\rho_{{}^{13}\mathrm{C}}(t)=-i\left[\mathcal{H}_{{}^{ 13}\mathrm{C}},\rho_{{}^{13}\mathrm{C}}(t)\right]\] \[+\sum_{nm}\frac{K_{n}^{\prime}K_{m}^{\prime}}{r_{n}^{3}r_{m}^{3}} \left(I_{m}^{z}\rho_{{}^{13}\mathrm{C}}(t)I_{n}^{z}\!-\!\frac{1}{2}\{I_{n}^{z}I _{m}^{z},\rho_{{}^{13}\mathrm{C}}(t)\}\right).\] (S 21)
A few comments are in order. The hierarchy of energy scales (S 8) and, thus, the validity of the singular-coupling-limit approximation for this model greatly simplify the derivation of the Lindblad master equation. In particular, the Lindbladian term on the second line of Eq. (S 21) is determined only by the bath Hamiltonian \(\mathcal{H}_{\mathrm{NV}}\) and the interaction Hamiltonian \(\alpha V^{\prime}\) of Eq. (S 9), while it is completely agnostic of the system Hamiltonian \(\mathcal{H}_{{}^{13}C}\). This is particularly useful if the system can be governed by different unitary dynamics \(\mathcal{H}_{{}^{13}C}\), e.g., in the prethermal regimes determined by different kick angles (see Secs. S8.2 and S8.3.3). Moreover, we note that the jump operators in Eq. (S 21) act as dephasing operators if the spins are polarized in the \(\hat{\mathbf{z}}\) direction, while they generate both dephasing and dissipation for a system that is polarized in the \(\hat{\mathbf{x}}\) direction. This result is in agreement with the experimental observations. In particular, it is at the origin of the non-decaying behavior of the normalized signal \(S\) as a function of \(t_{\mathrm{wait}}\) for \(\tau_{-}\)=0 observed in Fig. 3E of the main text. Indeed, when \(\tau_{-}\)=0 the whole system is positively polarized along the \(\hat{\mathbf{z}}\) axis; during \(t_{\mathrm{wait}}\), therefore, neither diffusion nor dissipation affects the system, giving a uniform behavior for any \(t_{\mathrm{wait}}\). Finally, we note that the strength of the dissipative terms \(K_{n}^{\prime}K_{m}^{\prime}/(r_{n}^{3}r_{m}^{3})\) decays faster with the distance from the NV center than the dipole interaction does. This allows us to consider the simplified dissipative one-dimensional toy model studied in Sec. S8.
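To make the structure of Eq. (S 21) concrete, the following sketch integrates it for a short chain with illustrative couplings; all numerical values are placeholders, not the experimental parameters. Note that a rank-one rate matrix \(g_{nm}=c_{n}c_{m}\) makes the double sum mathematically equivalent to a single collective jump operator \(L^{z}=\sum_{n}c_{n}I_{n}^{z}\), which the code exploits.

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [id2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L = 4
Ix = [site_op(sx, j, L) for j in range(L)]
Iy = [site_op(sy, j, L) for j in range(L)]
Iz = [site_op(sz, j, L) for j in range(L)]

# Nearest-neighbour secular dipolar Hamiltonian with unit couplings.
H = sum(3 * Iz[j] @ Iz[j + 1]
        - (Ix[j] @ Ix[j + 1] + Iy[j] @ Iy[j + 1] + Iz[j] @ Iz[j + 1])
        for j in range(L - 1))

# Rank-one rates g_nm = c_n c_m with c_n decaying away from the (imaginary)
# NV at site 0; the double sum collapses to one collective jump operator.
c = 1.0 / (1.0 + np.arange(L))**3
Lz = sum(c[n] * Iz[n] for n in range(L))

def rhs(t, v):
    rho = v.reshape(2**L, 2**L)
    d = -1j * (H @ rho - rho @ H)
    d += Lz @ rho @ Lz - 0.5 * (Lz @ Lz @ rho + rho @ Lz @ Lz)
    return d.ravel()

# x-polarized product initial state (p = 1).
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = plus
for _ in range(L - 1):
    psi = np.kron(psi, plus)
rho0 = np.outer(psi, psi.conj())

sol = solve_ivp(rhs, (0, 2.0), rho0.ravel(), t_eval=[0.0, 1.0, 2.0])
rho_T = sol.y[:, -1].reshape(2**L, 2**L)
# x polarization decays (dephasing + dissipation), as discussed above.
print("total x polarization:", sum(np.trace(rho_T @ Ix[n]).real for n in range(L)))
```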
## S7 Effective Hamiltonian analysis
Since the NV system experiences fast dynamics compared to the \({}^{13}\)C spins, we neglect the NV spin degree of freedom and replace it by its mean value. Hence, we replace the operator \(S^{z}\) in Eq. (S 5) by its thermal expectation value \(P=P(T)\), which depends on the temperature \(T\), thus leading to an effective on-site potential \(\eta_{j}=2K_{j}P\) for the \({}^{13}\)C nuclear spins. Thereby, we also neglect the effect of dissipation.
The kick sequence (see main text Fig. 1B) at stroboscopic times \(n\tau\!\approx\!n(t_{\mathrm{acq}}\!+\!t_{\mathrm{p}})\), \(n\in\mathds{N}\), can then be written as
\[U=e^{-i\mathcal{H}t_{\mathrm{acq}}}e^{-i\mathcal{H}_{\mathrm{SL}}t_{\mathrm{p}}}\;,\] (S 22)
with
\[\mathcal{H} =\mathcal{H}_{\mathrm{dd}}\!+\!\mathcal{H}_{\mathrm{pot}}\;,\] \[\mathcal{H}_{\mathrm{SL}} =\mathcal{H}_{x}\!+\!\mathcal{H}_{\mathrm{pot}}\;,\] (S 23) \[\mathcal{H}_{\mathrm{pot}} =\sum_{j}\eta_{j}I_{j}^{z}\;.\]
In Eq. (S 22) we additionally neglect the dipole-dipole coupling \(\mathcal{H}_{\mathrm{dd}}\) during the kicks, since its magnitude is much smaller than that of \(\mathcal{H}_{x}\), i.e., \(\|\mathcal{H}_{\mathrm{dd}}\|\ll\|\mathcal{H}_{x}\|\). However, we retain the effective on-site potential during the kicks, since it can be of comparable strength at short distances from the NV; see also Fig. S10 (A) for a schematic of the on-site potential and other energy scales. Spins within a radius of \(\leq 1.7\,\mathrm{nm}\), indicated as a shaded area in Fig. S10 (A), are so far detuned that they do not contribute to the bulk dynamics and also cannot be measured by NMR experiments.
The kicked evolution in Eq. (S 22) corresponds to a periodic (Floquet) evolution with period \(\tau\). Therefore, we can exploit Floquet's theorem, which states that stroboscopic dynamics (\(t=n\tau,\,n\in\mathds{N}\)) are described by an effective Hamiltonian
\[U(n\tau)=U(\tau)^{n}=U_{F}^{n}\] (S 24)
with
\[U_{F}=e^{-i\mathcal{H}t_{\mathrm{acq}}}e^{-i\mathcal{H}_{\mathrm{SL}}t_{\mathrm{p}}}\equiv e^{-i\mathcal{H}_{F}\tau}\;.\] (S 25)
Although the exact effective Hamiltonian \(\mathcal{H}_{F}\) is, in general, a highly non-local object, analytical progress can be made by considering a high-frequency (small \(\tau\)) expansion [71] up to some
order \(O(\tau^{n})\),
\[\mathcal{H}_{F}=\sum_{n=0}^{\infty}\tau^{n}\mathcal{H}_{F}^{[n]}=\mathcal{H}_{F}^ {(n)}+O(\tau^{n+1})\,\] (S 26)
valid up to some finite number of drive cycles. For sufficiently small period \(\tau\) we expect that this Hamiltonian is a good description of the prethermal plateau [7, 8, 24, 25, 26, 34, 71, 72] observed in the experiment. Any deviations from the effective Hamiltonian are expected to result in Floquet heating to a featureless infinite temperature state at long times [40, 73].
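For a small system, the exact Floquet Hamiltonian defined by Eq. (S 25) can be extracted from a matrix logarithm and compared with the leading-order term of Eq. (S 26). Below is a minimal two-spin sketch with illustrative parameters; the on-site potential is omitted for brevity, so this is not the full experimental model.

```python
import numpy as np
from scipy.linalg import expm, logm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

# Two-spin dipolar Hamiltonian, Eq. (S 1) with b = 1, and the kick
# generator of Eq. (S 4) (here without the Rabi prefactor).
H_dd = 3 * np.kron(sz, sz) - (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
H_x = np.kron(sx, I2) + np.kron(I2, sx)

t_acq, t_p, Omega = 0.1, 0.02, 10.0   # illustrative; theta = Omega * t_p
tau = t_acq + t_p
U_F = expm(-1j * H_dd * t_acq) @ expm(-1j * Omega * H_x * t_p)  # Eq. (S 25)

# Exact Floquet Hamiltonian from the matrix log (principal branch is fine
# for small tau); it is Hermitian up to numerical error.
H_F = 1j * logm(U_F) / tau
print(np.linalg.norm(H_F - H_F.conj().T))
# Leading-order (time-average) term of the high-frequency expansion.
H_F0 = (H_dd * t_acq + Omega * H_x * t_p) / tau
print(np.linalg.norm(H_F - H_F0))  # O(tau) deviation
```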
Before proceeding further, we outline the rest of this section and summarize the main results. We start with a description in Sec. S7.1 of the numerical algorithm used to support the analytical findings obtained in the rest of this section. Then, in Sec. S7.2 we will derive the effective Hamiltonian \(\mathcal{H}_{F}^{(0)}\) to lowest order in the driving period \(\tau\). The derived effective Hamiltonians serve as the starting point of all other types of numerical and theoretical analysis.
Using a toggling frame expansion, we derive two effective Hamiltonians capturing the Floquet evolution at stroboscopic times for the two regimes of kick angle, \(\theta\neq\pi\) and \(\theta\approx\pi\), in the presence of the hyperfine splitting field \(\eta_{j}\). In agreement with earlier findings [74, 75], for \(\theta\neq\pi\) we find that the effective Hamiltonian conserves the \(\hat{\mathbf{x}}\)-polarization, while the hyperfine splitting \(\eta_{j}\) only leads to a quantitative tilt of the conserved axis, \(\hat{\mathbf{x}}\rightarrow\hat{\mathbf{n}}\). In the case \(\theta\approx\pi\), this conservation law is broken and the hyperfine splitting induces an effective spatially inhomogeneous potential dominating the relaxation dynamics of the polarization.
In Sec. S7.3 we analyze the long-time dynamics generated by the derived effective Hamiltonians. We show that, for finite systems, the long-time expectation values are well described by ETH-like arguments. In particular, we find that, in the \(\theta=\pi\) case, the local polarization follows the local on-site potential induced by the electron, which can lead to a sign inversion of the local polarization and hence also of the integrated polarization. We close this section by introducing simplified toy-model effective Hamiltonians; they capture the main physics but, due to their reduced complexity, make the qualitative analysis easier to understand.
### Exact diagonalization simulation
Throughout this section, we support our findings by numerically exact full Floquet simulations. However, these simulations are limited to small system sizes of \(L\)=16 spins. As we anticipate that the distance from the NV plays a crucial role, we consider in the following a one-dimensional geometry to maximize the radial extension. Concretely, in this section we consider spins, labeled by \(j=1,\,\ldots,\,L\), at positions on a one-dimensional chain described by \(x_{j}=x_{0}+aj+\delta x_{j}\), with lattice spacing \(a=1\), offset distance \(x_{0}=5\), and independent identically distributed random displacements \(\delta x_{j}\) drawn from a normal distribution with zero mean and standard deviation \(\sigma=0.01\,a\). However, if not explicitly stated otherwise, we consider the full Floquet evolution described by Eq. (S 23), i.e. the spins interact via long-range interactions which decay as \(1/r^{3}\) in the spin-spin distance \(r\). In addition we choose \(\Omega/J=10\), \(\delta\omega/J=0\), \(Jt_{\mathrm{acq}}=0.1\), and \(t_{\mathrm{p}}\) is chosen such that \(\Omega t_{\mathrm{p}}=\theta\) reproduces the desired angle \(\theta\), where \(J\) is the median coupling described below.
The median coupling \(J\) serves as an experimentally observable measure of the energy scale of the system, determined as the inverse of the \(1/e\) decay time of an \(\hat{\mathbf{x}}\)-polarized initial state in the absence of spin locking. Therefore, we can match numerics qualitatively with experimental results by re-expressing our quantities in terms of this time scale.
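A sketch of the chain geometry and couplings used in these simulations is given below. As a simple stand-in for the dynamically defined \(J\), we take the median nearest-neighbour coupling; this shortcut is an assumption made here for brevity, not the definition used in the text.

```python
import numpy as np

rng = np.random.default_rng(1)
L, a, x0, sigma = 16, 1.0, 5.0, 0.01
# Positions x_j = x_0 + a*j + delta_x_j with small random displacements.
x = x0 + a * np.arange(1, L + 1) + rng.normal(0.0, sigma, size=L)

# Long-range couplings decaying as 1/r^3 in the spin-spin distance (J_exp = 1).
b = np.zeros((L, L))
for j in range(L):
    for k in range(j + 1, L):
        b[j, k] = 1.0 / abs(x[j] - x[k])**3

J = np.median(np.diag(b, k=1))  # proxy for the median coupling J
print("median nearest-neighbour coupling J =", J)
```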
### Derivation of Floquet Hamiltonians
A common technique to derive the lowest-order Floquet Hamiltonian is the so-called _Floquet Magnus expansion_[76] which here is equivalent to using the Baker-Campbell-Hausdorff (BCH) formula [77]:
\[\exp(A)\exp(B)=\exp(A+B)+O(||A||\cdot||B||)\,\] (S 27)
which is a good approximation if \(||A||\), \(||B||\ll 1\). However, with \(\|\mathcal{H}_{\mathrm{SL}}t_{\mathrm{p}}\|>1\), as \(\Omega t_{\mathrm{p}}>1\), this assumption is not met; moreover, all powers \(\mathcal{H}_{\mathrm{SL}}^{n}\), \(n=0,\,1,\,\ldots\), contribute with comparable magnitude. Therefore, we cannot directly apply the BCH formula (S 27) but would formally need to resum an infinite number of contributions stemming from the kicks. To this end, we will perform a so-called toggling frame transformation.
Notice also that, in the vicinity of the NV centre, the magnetic on-site potential contribution from the NV is strong compared to the dipole-dipole couplings between the nuclear spins. Since these terms commute, \([\mathcal{H}_{\mathrm{dd}},\,\mathcal{H}_{\mathrm{pot}}]\)=0, we can separate them into two exponentials, \(e^{-i\mathcal{H}t}=e^{-i\mathcal{H}_{\mathrm{dd}}t}e^{-i\mathcal{H}_{\mathrm{pot}}t}\); hence, we can combine all strong contributions into a single one-particle unitary
\[U_{\mathrm{SP}}\equiv e^{-i\mathcal{H}_{\mathrm{pot}}t_{\mathrm{acq}}}e^{-i\mathcal{H}_{\mathrm{SL}}t_{\mathrm{p}}}\equiv e^{-i\mathcal{H}_{\mathrm{SP}}t_{\mathrm{acq}}}\,,\] (S 28)
where we defined \(\mathcal{H}_{\text{SP}}t_{\text{acq}}{=}\sum_{j}\theta_{j}\hat{\mathbf{n}}_{j} \cdot\mathbf{I}_{j}\) with
\[\theta_{j}= 2\arccos\left[\cos(\alpha_{j}/2)\cos(\eta_{j}t_{\text{acq}}/2)\right.\] (S 29) \[-\left.\frac{\eta_{j}t_{\text{p}}}{\alpha_{j}}\sin(\alpha_{j}/2) \sin(\eta_{j}t_{\text{acq}}/2)\right],\]
and \(\alpha_{j}{=}t_{\text{p}}\sqrt{\Omega^{2}+\eta_{j}^{2}}\). The components of the normalized direction vector \(\hat{\mathbf{n}}_{j}=n_{j}^{\text{x}}\hat{\mathbf{x}}+n_{j}^{\text{y}}\hat{ \mathbf{y}}+n_{j}^{\text{z}}\hat{\mathbf{z}}\) are given by
\[n_{j}^{x} =\frac{\Omega t_{\text{p}}}{\alpha_{j}}\frac{\sin(\alpha_{j}/2)\cos(\eta_{j}t_{\text{acq}}/2)}{\sin(\theta_{j}/2)},\] (S 30) \[n_{j}^{y} =-\frac{\Omega t_{\text{p}}}{\alpha_{j}}\frac{\sin(\alpha_{j}/2)\sin(\eta_{j}t_{\text{acq}}/2)}{\sin(\theta_{j}/2)},\] \[n_{j}^{z} =\frac{\eta_{j}t_{\text{p}}}{\alpha_{j}}\frac{\sin(\alpha_{j}/2)\cos(\eta_{j}t_{\text{acq}}/2)}{\sin(\theta_{j}/2)}+\frac{\cos(\alpha_{j}/2)\sin(\eta_{j}t_{\text{acq}}/2)}{\sin(\theta_{j}/2)}.\]
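Equations (S 29) and (S 30) define a proper axis-angle pair; a quick numerical check (with illustrative parameter values) confirms that \(\hat{\mathbf{n}}_{j}\) comes out normalized:

```python
import numpy as np

def kick_axis_angle(Omega, t_p, eta, t_acq):
    """Combined one-cycle single-particle rotation, Eqs. (S 29)-(S 30)."""
    alpha = t_p * np.sqrt(Omega**2 + eta**2)
    c = (np.cos(alpha / 2) * np.cos(eta * t_acq / 2)
         - (eta * t_p / alpha) * np.sin(alpha / 2) * np.sin(eta * t_acq / 2))
    theta = 2 * np.arccos(np.clip(c, -1.0, 1.0))
    s = np.sin(theta / 2)
    nx = (Omega * t_p / alpha) * np.sin(alpha / 2) * np.cos(eta * t_acq / 2) / s
    ny = -(Omega * t_p / alpha) * np.sin(alpha / 2) * np.sin(eta * t_acq / 2) / s
    nz = ((eta * t_p / alpha) * np.sin(alpha / 2) * np.cos(eta * t_acq / 2)
          + np.cos(alpha / 2) * np.sin(eta * t_acq / 2)) / s
    return theta, np.array([nx, ny, nz])

theta, n = kick_axis_angle(Omega=10.0, t_p=0.3, eta=2.0, t_acq=0.1)
print(theta, np.linalg.norm(n))  # the axis is normalized to 1
```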
#### S7.2.1 Spin locking for state engineering (\(\theta\neq\pi\))
We can account for the strong kicks by transforming to the toggling frame considering the unitary evolution over \(N\) cycles
\[U^{N} =\prod_{\ell=1}^{N}e^{-it_{\text{acq}}\mathcal{H}_{\text{dd}}}U_{\text{SP}}\] (S 31) \[=U_{\text{SP}}^{N}\prod_{\ell=1}^{N}U_{\text{SP}}^{-\ell}e^{-it_{\text{acq}}\mathcal{H}_{\text{dd}}}U_{\text{SP}}^{\ell}\] \[=U_{\text{SP}}^{N}\prod_{\ell=1}^{N}e^{-it_{\text{acq}}\tilde{\mathcal{H}}_{\text{dd},\ell}}\,\]
where in the second line we introduced \(N\) identities \(U_{\text{SP}}^{-k}U_{\text{SP}}^{k}{=}\mathds{1}\) and in the last line we defined the rotated Hamiltonians \(\tilde{\mathcal{H}}_{\text{dd},\ell}{=}U_{\text{SP}}^{-\ell}\mathcal{H}_{\text{dd}}U_{\text{SP}}^{\ell}\).
Notice that \(\left\|\tilde{\mathcal{H}}_{\text{dd},\ell}\right\|{=}\|\mathcal{H}_{\text{dd}}\|\) since they are related by a unitary transformation. Therefore, all contributions in the final expression in Eq. (S 31) are small, so we can apply the BCH formula (S 27) to obtain
\[\mathcal{H}_{\text{eff}}^{\theta\neq\pi}=\sum_{j=1}^{L}\frac{(\theta_{j}N)\,\text{mod}\,(2\pi)}{N}\,\hat{\mathbf{n}}_{j}\cdot\mathbf{I}_{j}+\mathcal{H}_{F,\text{dd}}^{(0)}\,\] (S 32)
where \(\mathcal{H}_{F,\text{dd}}^{(0)}{=}N^{-1}\sum_{\ell=1}^{N}\tilde{\mathcal{H}}_{\text{dd},\ell}\). In the following, we will evaluate the second term in Eq. (S 32) explicitly in the limit of a large number of cycles \(N{\to}\infty\). To this end, let us first write the dipole-dipole term as
\[\mathcal{H}_{\text{dd}}=\sum_{k<l}b_{kl}\,\mathbf{I}_{k}^{T}\mathbf{D}\,\mathbf{I}_{l}\] (S 33)
where \({}^{T}\) denotes transpose, and we introduced the diagonal matrix \(\mathbf{D}=(-\hat{\mathbf{x}}\hat{\mathbf{x}}^{T}-\hat{\mathbf{y}}\hat{\mathbf{y}}^{T}+2\hat{\mathbf{z}}\hat{\mathbf{z}}^{T})\). Then, the action of \(U_{\text{SP}}\) amounts to the matrix-vector product:
\[U_{\text{SP}}^{-\ell}\mathbf{I}_{j}U_{\text{SP}}^{\ell}=\mathbf{r}(\ell\theta_{j}, \hat{\mathbf{n}}_{j})\mathbf{I}_{j}\,,\] (S 34)
with the 3\(\times\)3 rotation matrix \(\mathbf{r}(\theta,\hat{\mathbf{n}})\) rotating about the axis \(\hat{\mathbf{n}}\) by the angle \(\theta\). Hence, we have \(N^{-1}\sum_{\ell=1}^{N}\tilde{\mathcal{H}}_{\text{dd},\ell}{=}\sum_{k<l}b_{kl}\,\mathbf{I}_{k}^{T}\mathbf{M}_{kl}\mathbf{I}_{l}\), with the matrix \(\mathbf{M}_{kl}=N^{-1}\sum_{\ell=1}^{N}\mathbf{r}^{T}(\ell\theta_{k},\hat{\mathbf{n}}_{k})\mathbf{D}\,\mathbf{r}(\ell\theta_{l},\hat{\mathbf{n}}_{l})\).
In order to evaluate \(\mathbf{M}_{kl}\), we make use of the Rodrigues representation of rotation matrices:
\[\mathbf{r}(\theta,\hat{\mathbf{n}})=\hat{\mathbf{n}}\hat{\mathbf{n}}^{T}{+}\! \cos(\theta)(1{-}\hat{\mathbf{n}}\hat{\mathbf{n}}^{T}){+}\!\sin(\theta)\epsilon( \hat{\mathbf{n}})\,\] (S 35)
where \(\epsilon_{ij}(\hat{\mathbf{n}}){=}\sum_{k}\epsilon_{ikj}\hat{\mathbf{n}}_{k}\) and \(\epsilon_{ijk}\) is the fully antisymmetric Levi-Civita symbol. Thus, \(\mathbf{M}_{kl}\) reads as
\[\mathbf{M}_{kl} =\frac{1}{N}\sum_{\ell=1}^{N}\Bigl(\left[\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T}+\cos(\ell\theta_{k})(1-\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T})+\sin(\ell\theta_{k})\epsilon(\hat{\mathbf{n}}_{k})\right]^{T}\] (S 36) \[\quad\times\mathbf{D}\left[\hat{\mathbf{n}}_{l}\hat{\mathbf{n}}_{l}^{T}+\cos(\ell\theta_{l})(1-\hat{\mathbf{n}}_{l}\hat{\mathbf{n}}_{l}^{T})+\sin(\ell\theta_{l})\epsilon(\hat{\mathbf{n}}_{l})\right]\Bigr).\]
The sums of the \(\sin\) and \(\cos\) contributions in \(\mathbf{M}_{kl}\) can be evaluated as
\[\mathcal{G}_{s}(N,\theta) =\frac{1}{N}\sum_{j=1}^{N}\sin(j\theta)=\frac{\sin(N\theta/2)}{N\sin(\theta/2)}\sin((N+1)\theta/2),\] (S 37) \[\mathcal{G}_{c}(N,\theta) =\frac{1}{N}\sum_{j=1}^{N}\cos(j\theta)=\frac{\sin(N\theta/2)}{N\sin(\theta/2)}\cos((N+1)\theta/2)\,.\]
Using the relation \(\lim_{N{\to}\infty}\mathcal{G}_{s,c}(N,\theta)=0\) for \(\theta\neq 2\pi k\), \(k\in\mathbb{N}\), all contributions appearing in Eq. (S 36) which are not proportional to unity, \(\cos(\theta_{j})\cos(\theta_{k})\), or \(\sin(\theta_{j})\sin(\theta_{k})\) vanish in the large \(N\) limit.
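The decay of \(\mathcal{G}_{s,c}\) with \(N\) is easily verified numerically; a minimal check for an arbitrary \(\theta\neq 2\pi k\):

```python
import numpy as np

def G_s(N, theta):
    j = np.arange(1, N + 1)
    return np.sin(j * theta).sum() / N

def G_c(N, theta):
    j = np.arange(1, N + 1)
    return np.cos(j * theta).sum() / N

theta = 0.7  # any theta away from multiples of 2*pi
for N in (10, 100, 1000, 10000):
    print(N, G_s(N, theta), G_c(N, theta))  # both vanish as 1/N
```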
In summary, in the large \(N\) limit, we find the following expression for the leading-order Floquet Hamiltonian:
\[\mathcal{H}_{\text{eff}}^{\theta\neq\pi}=\sum_{k}\frac{(\theta_{k}N)\,\text{mod}\,(2\pi)}{N}\,\hat{\mathbf{n}}_{k}\cdot\mathbf{I}_{k}+\sum_{k<l}b_{kl}\,\mathbf{I}_{k}^{T}\mathbf{M}_{kl}^{(0)}\mathbf{I}_{l}\,,\] (S 38)
with
\[\mathbf{M}_{kl}^{(0)}= \hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T}\mathbf{D}\,\hat{\mathbf{n}}_{l}\hat{\mathbf{n}}_{l}^{T}\] (S 39) \[+\mathcal{G}_{c}(\Delta_{kl}\theta)(1-\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T})\mathbf{D}(1-\hat{\mathbf{n}}_{l}\hat{\mathbf{n}}_{l}^{T})\] \[+\mathcal{G}_{c}(\Delta_{kl}\theta)\,\epsilon(\hat{\mathbf{n}}_{k})^{T}\mathbf{D}\,\epsilon(\hat{\mathbf{n}}_{l})\,.\]
Notice that Eq. (S 38) is invariant under the basis transformation \(U(\theta){=}\exp(-i\theta\sum_{k}\hat{\mathbf{n}}_{k}\cdot\mathbf{I}_{k})\) for any \(\theta\). This can be immediately checked by applying the transformation rules
\[\left(1{-}\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T}\right)\mathbf{I}_{k} \overset{U(\theta)}{\longrightarrow}\cos(\theta)\left(1{-}\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T}\right)\mathbf{I}_{k}+\sin(\theta)\,\epsilon(\hat{\mathbf{n}}_{k})\mathbf{I}_{k}\,,\]
and analogously for \(\epsilon(\hat{\mathbf{n}}_{k})\mathbf{I}_{k}\), while \(\hat{\mathbf{n}}_{k}\hat{\mathbf{n}}_{k}^{T}\mathbf{I}_{k}\) is left invariant. Hence, the tilted total polarization \(\widetilde{I}_{x}=\sum_{k}\hat{\mathbf{n}}_{k}\cdot\mathbf{I}_{k}\) is conserved by Eq. (S 38). In the
presence of the NV-induced on-site potential (\(\eta_{j}{\neq}0\)), the conserved quantity is locally tilted away from the \(\hat{\mathbf{x}}\)-axis close to the NV, see Fig. S10(b) and Fig. S11(c).
In Fig. S11(a,b) we compare the dynamics generated by the lowest-order Floquet Hamiltonian \(\mathcal{H}_{F}^{(0)}\) derived above against the exact kicked quantum simulation performed for \(L{=}16\) spins; in particular, we monitor the conservation of the effective axis and of the effective Hamiltonian. We find that both quantities are conserved up to errors which scale down parametrically with \(t_{\text{acq}}\), as expected from the regime of validity of the lowest-order approximation.
In summary, the presence of the NV-induced on-site potential \(\eta_{j}\) only leads to minor quantitative corrections to the spin-locking sequence far away from \(\theta{=}\pi\). In the next section, we show that this is not true for spin locking in the vicinity of \(\theta{=}\pi\), where the situation is much more intriguing.
#### S7.2.2 Spin locking for Hamiltonian engineering (\(\theta\approx\pi\))
The analysis performed in Sec. S7.2.1 breaks down as the spatially-dependent kick angle \(\theta_{j}\), see Eq. (S 29), approaches \(\pi\), since we can no longer assume \(\lim_{N\to\infty}\mathcal{G}_{s,c}(N,\theta){=}0\). In this section, we will analyze this case in detail.
Let us first focus on the fine-tuned case \(\theta{=}\pi\). Notice that, for \(\theta{=}\pi\) the kick unitary squares to the identity, \(U_{\text{kick}}^{2}=\mathds{1}\). Therefore, the toggling frame unitary from Eq. (S31),
\[U^{N}=U_{\text{SP}}^{N}\prod_{n=1}^{N}U_{\text{SP}}^{-n}e^{-i\mathcal{H}_{\text{dd}}t_{\text{acq}}}U_{\text{SP}}^{n}\,,\]
reduces to
\[U^{N}=\prod_{n=1}^{N/2}e^{-i\mathcal{H}_{\text{dd}}t_{\text{acq}}}U_{\text{SP}}^{\dagger}e^{-i\mathcal{H}_{\text{dd}}t_{\text{acq}}}U_{\text{SP}}\,.\]
Thus, at exactly \(\theta{=}\pi\) the lowest order Floquet Hamiltonian is simply given by
\[\tilde{H}\equiv\frac{1}{2}\Big{(}\mathcal{H}_{\text{dd}}{+}U_{\text{SP}}^{ \dagger}\mathcal{H}_{\text{dd}}U_{\text{SP}}\Big{)}\,.\] (S40)
If \(\theta{\approx}\pi\) but not exactly \(\pi\), we may split the unitary \(U_{\text{SP}}{=}\exp(-i\delta\theta\hat{\mathbf{n}}\cdot\mathbf{I})\exp(-i\pi\hat{ \mathbf{n}}\cdot\mathbf{I})\) into a \(\theta{=}\pi\) and a \(\delta\theta{=}(\theta{-}\pi)\) contribution. Notice that the \(\delta\theta\)-contribution is small by assumption. Hence, to lowest order \(O(\delta\theta,J\tau)\) we may include this contribution into the dipole-dipole Hamiltonian \(\exp(-i\mathcal{H}_{\text{dd}}t_{\text{acq}}){\to}\exp(-i\big{[}\mathcal{H}_{ \text{dd}}t_{\text{acq}}{+}\delta\theta\hat{\mathbf{n}}\cdot\mathbf{I}\big{]}){+} O(\delta\theta,J\tau)\). Thus, the lowest-order toggling frame expansion reads as
\[\mathcal{H}_{\text{eff}}^{\theta{=}\pi}=\tilde{H}{+}\sum_{k=1}^{L}\mathbf{\phi}_{ \text{eff},k}\cdot\mathbf{I}_{k}\,,\] (S41)
where \(\mathbf{\phi}_{\text{eff},k}=\delta\theta_{k}\hat{\mathbf{n}}_{k}/\tau\).
Let us emphasize that around \(\theta{=}\pi\) the effective Hamiltonian does not conserve the (tilted) total net polarization \(\sum_{k=1}^{L}\mathbf{\phi}_{\text{eff},k}\cdot\mathbf{I}_{k}\). This is in stark contrast to the results for spin locking far away from \(\theta{=}\pi\), see discussion after Eq. (S39).
In Fig. S12 we compare the exact Floquet evolution at \(\theta{\approx}\pi\) against the effective Hamiltonian evolution (S 41) for a system of \(L{=}16\) spins. The effective Hamiltonian dynamics capture the exact Floquet evolution for all observed times. At even longer times we expect Floquet heating to become dominant, leading to a heat death of the Floquet system; notice that this does not happen in the dynamics generated by the effective Hamiltonian shown in Fig. S12.
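Both statements, \(U_{\text{kick}}^{2}=\mathds{1}\) at \(\theta=\pi\) and the lowest-order form of Eq. (S 40), can be verified numerically for two spins. A minimal sketch with an illustrative on-site potential (all parameter values are placeholders):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

Ix = [np.kron(sx, I2), np.kron(I2, sx)]
Iz = [np.kron(sz, I2), np.kron(I2, sz)]
Izz = Iz[0] @ Iz[1]
H_dd = 3 * Izz - (np.kron(sx, sx) + np.kron(sy, sy) + Izz)
H = H_dd + 0.8 * Iz[0] + 0.3 * Iz[1]   # illustrative on-site potential eta_j

U_kick = expm(-1j * np.pi * (Ix[0] + Ix[1]))   # theta = pi kick about x
# Squares to the identity (single-spin phases of -1 cancel for even L).
print(np.linalg.norm(U_kick @ U_kick - np.eye(4)))

t = 0.01
U2 = expm(-1j * H * t) @ U_kick.conj().T @ expm(-1j * H * t) @ U_kick
H_tilde = 0.5 * (H + U_kick.conj().T @ H @ U_kick)   # Eq. (S 40)
print(np.linalg.norm(U2 - expm(-2j * H_tilde * t)))  # O(t^2), i.e. small
```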
### Analysis based on the Eigenstate Thermalization Hypothesis
Any generic many-body system is expected to thermalize according to the eigenstate-thermalization hypothesis (ETH) [1, 2, 3]. As a periodically driven system does not conserve energy, it is expected to thermalize to a featureless, infinite temperature state \(\propto\mathds{1}\) at long times [78, 79, 80]. However, heating is suppressed at high driving frequency, leading to a separation of time scales in the high-frequency regime and to the build-up of a so-called prethermal plateau [40, 73]: the system first (pre-)thermalizes with respect to the low-order effective Hamiltonian, before eventually fully thermalizing at long times towards the infinite temperature state.
In this section, we analyze the prethermal plateau characterized by the effective Hamiltonians derived in Sec. S7.2. ETH suggests that if a system evolves under some generic unitary evolution \(U(t)\), and this unitary preserves some local operators \(\{O_{j}\}_{j=1}^{n_{O}}\), then the system is expected to thermalize to a state which is locally equivalent to
\[\rho(\lambda_{1},\,\dots,\,\lambda_{n_{O}})=e^{\sum_{j=1}^{n_{O}}\lambda_{j}O_{j}}/Z,\] (S 42)
where \(Z=\text{Tr}(e^{\sum_{j=1}^{n_{O}}\lambda_{j}O_{j}})\). The parameters \(\lambda_{1}\),..., \(\lambda_{n_{O}}\) are Lagrange multipliers fixed by the initial conditions using the self-consistency relation
\[\left<O_{j}\right>_{\psi_{0}}\overset{!}{=}\text{Tr}\bigl(\rho(\lambda_{1},\,\ldots,\,\lambda_{n_{O}})O_{j}\bigr)\,,\]
where \(\left|\psi_{0}\right>\) is the initial state.
_Thermalization at \(\theta\neq\pi\)._ - In the regime \(\theta\neq\pi\), both the effective Hamiltonian \(\mathcal{H}_{F}^{(0)}\) and the rotated polarization \(\widetilde{I}_{x}\) are (prethermally, i.e., quasi-) conserved quantities, as \(\left[\widetilde{I}_{x},\mathcal{H}_{F}^{(0)}\right]=0\) (see discussion around Eq. (S 39)). Therefore, using Eq. (S 42) expectation values of local observables in the (pre-)thermal plateau are obtained from
\[\rho_{\theta\neq\pi}(\beta,\mu)=e^{-\beta\mathcal{H}_{F,\text{ad}}^{(0)}+\mu \widetilde{I}_{x}}/Z\,,\] (S 43)
where we collected all contributions \(\propto\!\!\widetilde{I}_{x}\) and with \(Z\)=\(\text{Tr}(e^{-\beta\mathcal{H}_{F,\text{ad}}^{(0)}+\mu\widetilde{I}_{x}})\). The parameters \(\beta\) and \(\mu\) are fixed by the energy and polarization of the initial state \(\left|\psi_{0}\right>\):
\[\left<\mathcal{H}_{F}^{(0)}\right>_{\psi_{0}} =\text{Tr}\Bigl\{\mathcal{H}_{F}^{(0)}\rho_{\theta\neq\pi}\Bigr\}\] (S 44) \[\left<\widetilde{I}_{x}\right>_{\psi_{0}} =\text{Tr}\Bigl\{\widetilde{I}_{x}\rho_{\theta\neq\pi}\Bigr\}.\]
In particular, the local polarizations \(\left<I_{n}^{\alpha}\right>\), \(n=1\),..., \(L\) and \(\alpha=x\), \(y\), \(z\), at long times are expected to be described by the thermal expectation value
\[\mathcal{I}_{n}^{\alpha}=\left<I_{n}^{\alpha}\right> =\text{Tr}\bigl{\{}I_{n}^{\alpha}\rho_{\theta\neq\pi}(\beta,\mu) \bigr{\}}\] (S 45) \[\approx\text{Tr}\bigl{\{}I_{n}^{\alpha}e^{\mu\widetilde{I}_{x}}/Z \Bigr{\}}\] \[=\mathbf{\hat{n}}_{n}^{\alpha}\tanh(\mu/2)\,,\]
where we used, in the second line, that \(\beta\) is expected to be small, \(\beta\ll 1\). Further, we exploit the tracelessness of all non-trivial Pauli strings:
\[\text{Tr}(\sigma^{\mu}\otimes\!\sigma^{\nu}\otimes\ldots\sigma^{\rho})\neq 0 \Longleftrightarrow\sigma^{\mu}=\sigma^{\nu}=\cdots=\sigma^{\rho}= \mathds{1}\,.\] (S 46)
Therefore, the spins also locally align with the rotated axis \(\mathbf{\hat{n}}_{n}\), i.e.
\[\frac{\langle\boldsymbol{I}_{n}\rangle}{\|\langle\boldsymbol{I}_{n}\rangle\|}=\mathbf{\hat{n}}_{n}\,.\] (S 47)
In Fig. S11(c) we compare the numerically exact Floquet dynamics at long times against the results expected from the ETH analysis, Eq. (S 47). We find excellent agreement between the numerical Floquet evolution and the analytical prediction.
_Thermalization around \(\theta=\pi\)._ - In the regime \(\theta\approx\pi\) only the Hamiltonian \(\mathcal{H}_{\text{eff}}^{\theta=\pi}\) is conserved in the prethermal plateau. Thus, the (pre-)thermal steady state for a finite system at long times is expected to be locally equivalent to the thermal density matrix
\[\rho_{\pi}(\beta)=e^{-\beta\mathcal{H}_{\text{eff}}^{\theta=\pi}}/Z\,,\] (S 48)
where \(Z=\text{Tr}(e^{-\beta\mathcal{H}_{\text{eff}}^{\theta=\pi}})\). The inverse temperature \(\beta\) is determined due to (prethermal) energy conservation by the energy expectation in the initial state \(\left|\psi_{0}\right>\):
\[\left<\psi_{0}\right|\mathcal{H}_{\text{eff}}^{\theta=\pi}\left|\psi_{0}\right> \overset{!}{=}\text{Tr}\Bigl{(}\mathcal{H}_{\text{eff}}^{\theta=\pi}\rho_{\pi}( \beta)\Bigr{)}.\] (S 49)
Notice that this so-called temperature is an intrinsic property of the evolution and the initial state, and is not related to the actual temperature the experiment is operated at.
Using a high-temperature, \(\beta\bigl{\|}\mathcal{H}_{\text{eff}}^{\theta=\pi}\bigr{\|}\ll 1\), expansion of the density matrix \(\rho_{\pi}\approx(\mathds{1}-\beta\mathcal{H}_{\text{eff}}^{\theta=\pi})/Z\), where now \(Z=\text{Tr}(\mathds{1})=2^{L}\), in Eq. (S 48) we can compute the local polarization
\[\mathcal{I}_{n}^{\alpha} =\text{Tr}\bigl{(}\rho_{\pi}I_{n}^{\alpha}\bigr{)}\] (S 50) \[\approx-\beta\text{Tr}\Bigl{(}\mathcal{H}_{\text{eff}}^{\theta=\pi }I_{n}^{\alpha}\Bigr{)}/Z\] \[=-\beta\phi_{\text{eff},n}^{\alpha}/2\] \[\propto\phi_{\text{eff},n}^{\alpha}\,,\]
where in the last equality we used the definition of \(\mathcal{H}_{\text{eff}}^{\theta=\pi}\), Eq. (S 41), and the tracelessness of all non-trivial Pauli strings, Eq. (S 46). In addition, the inverse temperature can be obtained similarly from Eq. (S 49):
\[\beta=-\frac{\left<\psi_{0}\right|\mathcal{H}_{\text{eff}}^{\theta=\pi}\left| \psi_{0}\right>}{\text{Tr}\Bigl{(}\mathcal{H}_{\text{eff}}^{\theta=\pi}\mathcal{H }_{\text{eff}}^{\theta=\pi}\Bigr{)}/Z}\,.\] (S 51)
In Fig. S12(c) we compare the exact long-time steady states for numerically exact Floquet and Hamiltonian evolution with the thermal expectation value Eq. (S 50). We find excellent agreement between all three methods. In particular, let us emphasize that, due to \(\phi_{\text{eff},n}^{\alpha}\propto\delta\theta_{n}=\theta_{n}-\pi\), the effective magnetic on-site potential acting on the spins changes sign around \(\theta_{n}=\pi\) as a function of the distance from the NV. Therefore, since \(\left<I_{n}^{\alpha}\right>\propto\phi_{\text{eff},n}^{\alpha}\), see Eq. (S 50), the final polarization profile is inhomogeneous in space and can exhibit both positive and negative regions although the initial state is fully positively polarized. This is a key result of this work as it allows for engineering desired non-homogeneous states via Hamiltonian engineering. We estimate the location of this sign change, the crossing radius \(r_{\text{c}}\), in the next section.
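The high-temperature estimates of Eqs. (S 49)-(S 51) are straightforward to evaluate for a small chain. The sketch below uses the simplified Hamiltonian of Eq. (S 55) with an assumed sign-changing field \(\phi_{k}\); all numerical values are illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [id2] * L
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L, J = 6, 1.0
Ix = [site_op(sx, j, L) for j in range(L)]
Iy = [site_op(sy, j, L) for j in range(L)]
Iz = [site_op(sz, j, L) for j in range(L)]
phi = np.linspace(1.0, -1.0, L)  # assumed field changing sign along the chain

H = sum(J * (3 * Iz[j] @ Iz[j + 1]
             - (Ix[j] @ Ix[j + 1] + Iy[j] @ Iy[j + 1] + Iz[j] @ Iz[j + 1]))
        for j in range(L - 1))
H = H + sum(phi[k] * Ix[k] for k in range(L))

# Fully x-polarized initial state |psi_0>.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = plus
for _ in range(L - 1):
    psi = np.kron(psi, plus)

Z = 2**L
E0 = np.real(psi.conj() @ H @ psi)
beta = -E0 / (np.trace(H @ H).real / Z)                          # Eq. (S 51)
pol = [-beta * np.trace(H @ Ix[n]).real / Z for n in range(L)]   # Eq. (S 50)
print("beta =", beta)
print("prethermal <I_n^x> =", np.round(pol, 4))  # profile ~ -beta*phi_n/4
```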
### Estimating the crossing radius \(r_{\text{c}}\)
In the previous section we have shown that combining Floquet engineering with ETH-like arguments leads to a (pre-)thermal steady state which has a spatially inhomogeneous polarization profile with positive and negative contributions. In particular, the polarization profile follows the effective on-site potential, \(\mathcal{I}_{n}^{\alpha}\propto\phi_{\text{eff},n}^{\alpha}\), which in turn is determined by the microscopic on-site potential \(\eta_{n}\) induced by the NV electron and the kick angle \(\theta\) via Eq. (S 29). Therefore, by tuning the kick angle \(\theta\) close to \(\theta=\pi\) we can engineer states with robust domain-wall-like spatial polarization profiles. The boundary of the domain wall is given by the crossing radius \(r_{\text{c}}\), which defines the distance from the NV at which the spatial profile has zero polarization
\[\phi(r=r_{\text{c}})\equiv 0\,.\] (S 52)
It is directly obtained from setting \(||\phi_{\text{eff}}(r)||=0\), i.e. \(\delta\theta(r=r_{\text{c}})=0\). Using Eq. (S 29) we find the implicit equation for the crossing radius
\[\tan\biggl{(}\frac{t_{\text{p}}\sqrt{\Omega^{2}+\eta(r)^{2}}}{2}\biggr{)}\tan \biggl{(}\frac{\eta(r)t_{\text{acq}}}{2}\biggr{)}=\frac{\sqrt{\Omega^{2}+\eta(r)^{ 2}}}{\eta(r)}\,.\] (S 53)
Note that in 3D the on-site potential \(\eta=\eta(r,\,\vartheta)\) has a radial and an angular dependence. However, if the detuning \(\delta\) vanishes, which we generally assume to be the case, the angular dependence of the crossing radius simply reduces to a multiplicative factor, \(r_{\rm c}(\vartheta)=r_{\rm c,0}\times\sqrt[3]{|3\cos^{2}(\vartheta)-1|}\), where \(r_{\rm c,0}\) is determined from setting \(|3\cos^{2}(\vartheta)-1|=1\).
Using the parameters in the experiment, Fig. 4, i.e. the Rabi frequency \(\Omega\approx 50\,\mathrm{kHz}\) and \(t_{\rm acq}\approx 51\,\mu\mathrm{s}\), we can estimate the crossing radius for the different pulse widths \(t_{\rm p}\), i.e. different kick angles \(\theta=\Omega t_{\rm p}\), which lead to a spin-polarization inversion. The results are shown in Fig. S13. There we also estimated the average number of spins \(N_{r<r_{\rm c}}\) within the crossing radius, using a \({}^{13}\)C-spin density of \(\approx\!1/\mathrm{nm}^{3}\). For example, using \(\theta=0.94\,\pi\) we find a crossing radius of \(r_{\rm c}(\vartheta)\approx 2.8\,\mathrm{nm}\times\sqrt[3]{|3\cos^{2}(\vartheta)-1|}\), encompassing on average \(\approx\!150\) spins. Let us emphasize that the results above are not limited to finite-size systems and are also expected to hold in the thermodynamic limit.
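Numerically, it is more robust to solve the smooth condition \(\cos(\theta(r)/2)=0\) from Eq. (S 29) than the tangent form (S 53). A minimal root-finding sketch follows; the hyperfine prefactor \(K_{\text{exp}}\) and the bracketing grid are illustrative assumptions, so the printed radius is not the experimental estimate.

```python
import numpy as np
from scipy.optimize import brentq

Omega = 2 * np.pi * 50e3            # Rabi frequency, ~50 kHz (rad/s)
t_acq = 51e-6                        # acquisition window, ~51 us
t_p = 0.94 * np.pi / Omega           # pulse width for a theta ~ 0.94 pi kick
K_exp = 2 * np.pi * 1e6              # assumed hyperfine prefactor, rad/s nm^3

def cos_half_theta(eta):
    """cos(theta/2) from Eq. (S 29); theta = pi where this vanishes."""
    alpha = t_p * np.sqrt(Omega**2 + eta**2)
    return (np.cos(alpha / 2) * np.cos(eta * t_acq / 2)
            - (eta * t_p / alpha) * np.sin(alpha / 2) * np.sin(eta * t_acq / 2))

# Scan for the first sign change, then refine with brentq.
etas = np.linspace(1e3, 2e6, 20001)
g = cos_half_theta(etas)
idx = np.nonzero(np.sign(g[:-1]) != np.sign(g[1:]))[0][0]
eta_c = brentq(cos_half_theta, etas[idx], etas[idx + 1])

r_c = (K_exp / eta_c) ** (1 / 3)     # eta(r) = K_exp / r^3 on resonance
print("eta_c =", eta_c / (2 * np.pi), "Hz  ->  r_c =", r_c, "nm")
```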
### Simplifications
In this section we repeatedly found that the non-equilibrium dynamics for intermediate times are well-captured by effective Hamiltonians, in particular Eqs. (S38) and (S41), derived using a high-frequency expansion. Therefore, in the following, we consider only time-evolution generated by these time-independent effective Hamiltonians. The corresponding results are expected to describe the intermediate time dynamics well but ignore any Floquet heating effects. However, Floquet heating will only lead to an onset of slow homogeneous decay of all local observables at times beyond the prethermal plateau.
Moreover, in the following, we will consider simplified versions of the effective Hamiltonians derived above. While the details of Eqs. (S 38) and (S 41) are needed to derive quantitative results, like the crossing radius, Eq. (S 52), their key qualitative properties can be described by simplified models. In particular, the key property of Eq. (S 38) is the conservation of a (tilted) global polarization, see discussion around Eq. (S 39). However, the tilt leads to only minor quantitative changes in the exact dynamics, so we can neglect it for a qualitative understanding of the dynamics. Analogously, the main properties of Eq. (S 41) consist in the absence of a conserved polarization axis and in the presence of a spatially inhomogeneous on-site potential which can take positive and negative values and is not conserved by the Hamiltonian.
Therefore, in what follows, we will study the simplified effective Hamiltonians
\[\mathcal{H}_{\rm eff}^{\theta\,\neq\,\pi}\approx\sum_{j<k}J_{jk}\left(\frac{3 }{2}\left[I_{j}^{z}I_{k}^{z}\!+\!I_{j}^{y}I_{k}^{y}\right]\!-\!\mathbf{I}_{j}\! \cdot\!\mathbf{I}_{k}\right),\] (S54)
for the regime far away from \(\theta\approx\pi\), and
\[\mathcal{H}_{\rm eff}^{\theta=\pi}\approx\sum_{j<k}J_{jk}\left(3I_{j}^{z}I_{k}^{z}\!-\!\mathbf{I}_{j}\!\cdot\!\mathbf{I}_{k}\right)\!+\!\sum_{k}\phi_{k}I_{k}^{x}\] (S 55)
for \(\theta\approx\pi\), where we also dropped the \({}_{\rm eff}\) subscript on the on-site potential and removed all terms not pointing along the \(\hat{\mathbf{x}}\)-direction. Moreover, in the following we will not use the exact expression for \(\phi_{k}\) given below Eq. (S41), but rather use different approximations including the various properties of \(\phi_{k}\) without using the complicated exact expression.
## S8 One-dimensional approximate quantum dynamics
While the ETH analysis outlined in Sec. S7.3 is able to explain some aspects of the experimentally observed signatures, it fails, for instance, to explain the slow transient dynamics.
To fully understand the qualitative physics behind the experimentally observed signatures, we need a more comprehensive theory that takes into account the effects of both diffusion and dissipation. The description of diffusive effects requires large systems which typically exceed those accessible in an exact diagonalization (ED) approach by at least one order of magnitude. To circumvent the system size constraint imposed by ED, several algorithms were introduced in the context of approximate quantum simulations and dynamics [81, 82, 83, 84, 85, 86, 87].
In this work, we employ a recently developed algorithm, the local-information time-evolution (LITE) algorithm [44, 45], designed by some of the authors to deal with the out-of-equilibrium transport of one-dimensional systems. The crucial difference with respect to other algorithms is that LITE preserves all local constants of motion with a support below a specified truncation scale. In the following, we apply LITE to perform a series of numerical simulations using a one-dimensional short-range toy model akin to the true long-range three-dimensional experimental Hamiltonians of the different regimes (i.e., the different effective Hamiltonians found in Sec. S7).
In Subsec. S8.1 we give a brief introduction to the LITE algorithm. We then study the two approaches of Hamiltonian Engineering and State Engineering in great detail in Subsecs. S8.2 and S8.3, respectively. We close this section with a short comparison between the two cases, Subsec. S8.4, and a brief summary, Subsec. S8.5.
### Introduction to the LITE algorithm
Let us briefly outline the basic concepts behind the LITE algorithm. For a detailed introduction to the topic we refer the reader to Refs. [44; 45]. Our goal is to solve the von Neumann equation
\[\partial_{t}\rho=-i[\mathcal{H},\rho],\] (S 56)
for the density matrix \(\rho\) under a generic local Hamiltonian \(\mathcal{H}\). The algorithm uses a decomposition of the full system into smaller subsystems. Each subsystem is characterised by two indices, \(\ell\) and \(n\). The integer number \(\ell\) denotes the scale or range of the subsystem, i.e., one plus the number of neighboring sites within the subsystem under consideration. The integer or half-integer number \(n\) defines the center of the subsystem. For example, the subsystem \(C_{n}^{\ell}\) is defined by the \(\ell\)+1 spins centered around the physical lattice site \(n\). By virtue of partial trace operations, one can write the von Neumann equation for the subsystem \(C_{n}^{\ell}\) as
\[\partial_{t}\rho_{n}^{\ell} = -i\big[\mathcal{H}_{n}^{\ell},\rho_{n}^{\ell}\big]\] (S 57) \[-i\mathrm{Tr}_{L}\Big(\Big[\mathcal{H}_{n-1/2}^{\ell+1}-\mathcal{H}_{n}^{\ell},\rho_{n-1/2}^{\ell+1}\Big]\Big)\] \[-i\mathrm{Tr}_{R}\Big(\Big[\mathcal{H}_{n+1/2}^{\ell+1}-\mathcal{H}_{n}^{\ell},\rho_{n+1/2}^{\ell+1}\Big]\Big).\]
Here, \(\rho_{n}^{\ell}\) (\(\mathcal{H}_{n}^{\ell}\)) denotes the subsystem density matrix (subsystem Hamiltonian) associated with \(C_{n}^{\ell}\). Note that \(\mathrm{Tr}_{L}\) and \(\mathrm{Tr}_{R}\) denote partial trace operations over the leftmost (\(L\)) and the rightmost (\(R\)) spin, respectively. In Eq. (S 57) we have assumed a nearest neighbor Hamiltonian. To solve Eq. (S 57), we require knowledge of the density matrices of the subsystems \(C_{n}^{\ell}\), \(C_{n+1/2}^{\ell+1}\), and \(C_{n-1/2}^{\ell+1}\). Thus, Eq. (S 57) is not closed, and to solve the equations of motion for the subsystem \(C_{n}^{\ell}\) we need to solve them for higher level subsystems as well. Therefore, the problem is as complex as the one in Eq. (S 56).
The LITE algorithm solves Eq. (S 57) at a subsystem scale \(\ell^{*}\) smaller than the original system size. This is done in a two-step approach: First, if quantum entanglement is only present at small scales \(<\ell^{*}\), thanks to Petz recovery maps [88], we can exactly recover the density matrices at scales \(>\ell^{*}\) from the density matrices at scale \(\ell^{*}\) and numerically solve Eq. (S 57) for a small time increment \(\delta t\). Over time, entanglement spreads and quantum mutual information builds up on increasing scales. Thus, to continue the time-evolution using the above recipe, the scale \(\ell^{*}\), which is the length at which we solve the equations of the kind of (S 57), has to be increased accordingly.
The important second step of the algorithm is activated when \(\ell^{*}\) has reached a maximum length scale \(\ell_{\mathrm{max}}\) (which is the largest value we are able to handle efficiently, given a fixed amount of computation power and time) and a finite portion \(q_{\mathrm{max}}\) of the total information has accumulated at \(\ell_{\mathrm{max}}\). Then, the algorithm removes mutual information at a truncation length \(\ell_{\mathrm{min}}<\ell_{\mathrm{max}}\) so that time evolution can be continued without further increasing \(\ell^{*}\), and the entanglement blockade can be (approximately) bridged. Importantly, this removal of quantum information has to be done so that the density matrices in the smaller subsystem, as well as the information currents, remain unaffected. In contrast to many other established algorithms (such as those based on the time-dependent variational principle [81; 82; 83; 84] or time-evolving block decimation [89; 90; 91; 92; 93]), all steps involved in the present algorithm preserve all local constants of motion up to scale \(\ell_{\mathrm{min}}\). This makes LITE particularly well suited to investigate hydrodynamics effects. The truncation length \(\ell_{\mathrm{min}}\) is the most important parameter of the algorithm, and should be chosen as large as possible (while keeping \(\ell_{\mathrm{min}}<\ell_{\mathrm{max}}\)); \(q_{\mathrm{max}}\), instead, has to be empirically chosen. For the model investigated in the present study, we find \(q_{\mathrm{max}}\sim 1\%\) as the optimal value. For a detailed introduction to the algorithm, we refer the reader to Ref. [44; 45].
The LITE algorithm can be straightforwardly generalized to open quantum systems described by the Lindblad master equation,
\[\partial_{t}\rho=-i[\mathcal{H},\rho]+\sum_{j}\gamma_{j}\left(L_{j}\rho L_{j}^ {\dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho\}\right),\] (S 58)
where \(L_{j}\) (\(L_{j}^{\dagger}\)) denote Lindblad jump operators describing the system-environment interaction, and \(\gamma_{j}\) are the respective coupling constants. Assuming on-site jump operators, the corresponding subsystem equation reads as
\[\partial_{t}\rho_{n}^{\ell} = -i\big[\mathcal{H}_{n}^{\ell},\rho_{n}^{\ell}\big]+\sum_{j\in C_{n}^{\ell}}\gamma_{j}\Big(L_{j}\rho_{n}^{\ell}L_{j}^{\dagger}-\frac{1}{2}\{L_{j}^{\dagger}L_{j},\rho_{n}^{\ell}\}\Big)\] (S 59) \[-i\mathrm{Tr}_{L}\Big(\Big[\mathcal{H}_{n-1/2}^{\ell+1}-\mathcal{H}_{n}^{\ell},\rho_{n-1/2}^{\ell+1}\Big]\Big)\] \[-i\mathrm{Tr}_{R}\Big(\Big[\mathcal{H}_{n+1/2}^{\ell+1}-\mathcal{H}_{n}^{\ell},\rho_{n+1/2}^{\ell+1}\Big]\Big).\]
The LITE algorithm removes information that is exclusively found at scales \(>\ell_{\mathrm{min}}\), while keeping all (sub-)subsystem density matrices on smaller scales and corresponding information currents fixed. Thus, at the time of the (first) removal, the state of the system on scales \(<\ell_{\mathrm{min}}\) coincides with the exact state of the system. However, removing information changes the information flow or, in other words, the time dependence of the information currents that flow between different scales. As the information flow might be accelerated or slowed down, this changes the microscopic dynamics at later times. The parameter that controls how much this distorted distribution of information affects expectation values of local observables is \(\ell_{\mathrm{min}}\). In the limit \(\ell_{\mathrm{min}}\to\infty\), the algorithm becomes exact. Therefore, the corresponding scaling in \(\ell_{\mathrm{min}}\) allows one to extract asymptotic values of local observables. At (intermediate) times \(t\sim\ell_{\mathrm{min}}/v_{LR}\) (with the Lieb-Robinson speed \(v_{LR}\) [94]), the distortion of the information distribution is expected to be the largest, and so are the deviations from the exact dynamics.
At late times, the state of the system on small scales can be well approximated by a local Gibbs state. This is the case also for an infinite system, where the corresponding thermalization time diverges, so a true steady state cannot be reached: The ongoing thermalization process translates into information currents flowing from small to large scales and applying the local Gibbs approximation [44] for different snapshots in time yields different results. In this regime, a removal of information at scales larger than those present in the Hamiltonian has only minor effects on the dynamics. When the system is finite and equilibrium is reached, no information currents flow between small and large scales and the local Gibbs approximation becomes exact. At this stage also the LITE algorithm becomes exact even at finite \(\ell_{\mathrm{min}}\) (up to errors imprinted into the dynamics at earlier times).
### Energy diffusion around \(\theta\approx\pi\): Hamiltonian engineering
The three-dimensional long-range Hamiltonian of Eq. (S 55) is not suitable to be analyzed with the LITE algorithm, since correlations are expected to appear on all length scales at very short times, and there are no Lieb-Robinson bounds restricting the spread of information and entanglement in the system. Furthermore, the three-dimensional nature of the experimental (effective) Hamiltonian poses a challenge. However, on a qualitative level, the relevant physical processes are not exclusively related to three-dimensional long-range systems but can also be observed in much simpler models. To obtain a qualitative picture within the constraints of the LITE algorithm, we condense the problem into an effective, numerically tractable one-dimensional short-range Hamiltonian that we subsequently use as a toy model.
#### S8.2.1 Toy model Hamiltonian
Let us assume a very sparse density of \({}^{13}\)C spins so that for any \({}^{13}\)C spin there is exactly one other spin with a dominant mutual coupling. In that case, we can reduce the three-dimensional long-range model to an effective one-dimensional short-range nearest-neighbour model keeping only the dominant couplings
\[\mathcal{H}_{\pi}=\sum_{k}\Big[J_{k}\Big(3I_{k}^{z}I_{k+1}^{z}-\mathbf{I}_{k}\cdot\mathbf{I}_{k+1}\Big)+\phi_{k}I_{k}^{x}\Big].\] (S 60)
While this might appear as a very crude approximation at first sight, it is indeed a valid approximation in related systems (see for example Ref. [26]). To mimic the random spin positions of \({}^{13}\)C atoms in the original model, we use \(J_{k}=J_{0}+W_{k}\), where \(W_{k}\in[-W,W]\) is a uniformly distributed random number.
We emphasize that the goal of this section is not to find a comprehensive quantitative agreement with the experimental results; rather, we aim to understand the fundamental physical processes that can potentially have a qualitative influence on the experiment: While quantitative details of the many-body dynamics might differ drastically between long and short-range systems in low and high dimensions, the qualitative behavior we discuss below applies in all cases provided basic principles, such as ergodicity, hold.
For the short-range nearest-neighbor model of Eq. (S 60) we now approximately solve the von Neumann equation using the LITE algorithm introduced in Refs. [44, 45].
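To make the model concrete, the following sketch constructs \(\mathcal{H}_{\pi}\) of Eq. (S 60) exactly for a small chain in plain Python. This is not the LITE algorithm itself (which works on subsystem density matrices); the chain length, disorder strength and on-site field are illustrative values, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, J0, W, phi0 = 8, 1.0, 0.1, 0.3            # illustrative chain length and couplings

# spin-1/2 operators I^x, I^y, I^z
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, k, n):
    """Embed a single-site operator `op` at site k of an n-site chain."""
    out = np.array([[1.0]])
    for m in range(n):
        out = np.kron(out, op if m == k else np.eye(2))
    return out

Jk = J0 + rng.uniform(-W, W, size=N - 1)     # disordered couplings J_k = J0 + W_k
H = np.zeros((2**N, 2**N), dtype=complex)
for k in range(N - 1):
    # J_k (3 I^z_k I^z_{k+1} - I_k . I_{k+1}) = J_k (2 I^z I^z - I^x I^x - I^y I^y)
    H += Jk[k] * (2 * site_op(sz, k, N) @ site_op(sz, k + 1, N)
                  - site_op(sx, k, N) @ site_op(sx, k + 1, N)
                  - site_op(sy, k, N) @ site_op(sy, k + 1, N))
for k in range(N):
    H += phi0 * site_op(sx, k, N)            # on-site term phi_k I^x_k (constant here)

assert np.allclose(H, H.conj().T)            # sanity check: H is Hermitian
```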
#### S8.2.2 Initial states
By virtue of hyperpolarization, the experimentally prepared initial state resembles a mixed product state with a finite polarization of \({}^{13}\)C nuclei in the proximity of an NV center (see Fig. S14). The absence of long-range correlations in combination with their asymptotic nature makes such states well-suited as initial states in the LITE algorithm. In analogy to the experiment we thus choose an initial state of the form
\[\rho_{\text{init}}=\frac{\mathds{1}_{2}}{2}\otimes\frac{\mathds{1}_{2}}{2} \otimes\cdots\otimes\frac{\mathds{1}_{2}}{2}\otimes\rho_{\text{p}}\otimes\frac {\mathds{1}_{2}}{2}\otimes\cdots\otimes\frac{\mathds{1}_{2}}{2}\otimes\frac{ \mathds{1}_{2}}{2},\] (S 61)
where
\[\rho_{\text{p}}=\bigotimes_{\begin{subarray}{c}j\in\{\text{region}\\ \text{of polarization}\}\end{subarray}}\frac{1}{2}\Big{(}\mathds{1}_{2}+p\sigma_{j}^{x }\Big{)}\] (S 62)
describes the initially polarized subsystem with an \(\hat{\mathbf{x}}\)-polarization \(p\) for each spin in the region of polarization. \(\mathds{1}_{2}\) is an identity matrix of dimension 2. In the experiment, hyperpolarization induces polarization of nuclear \({}^{13}\)C spins in the vicinity of NV-centers while spins far away from NV-centers can be assumed to be in an infinite-temperature mixed state. The initial state in Eq. (S 61) is thus similar to the initial state of the experiment: here the (imaginary) NV-center is located at the center of the polarized region defined by \(\rho_{\text{p}}\) (Eq. (S 62), see also Fig. S14).
Note that infinite-temperature density matrices (\(\propto\mathds{1}\)) are invariant under time evolution with any Hamiltonian. Thus, on short timescales all the dynamics generated from time evolving the state of Eq. (S 61) is expected to happen close to the region defined by \(\rho_{\text{p}}\); subsystems far away from \(\rho_{\text{p}}\) remain in an infinite-temperature (time-invariant) state. Thus, initial states of the form of Eq. (S 61) in principle allow us to investigate an (effectively) infinitely extended system. In this case, the (effective) system size is adaptive and updated during time evolution (using the routines in LITE), capturing only those parts which are sufficiently different from the time-invariant infinite-temperature state.
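For completeness, here is a minimal sketch of the initial state of Eqs. (S 61)-(S 62) as an explicit density matrix for a small chain; \(N\), \(N_{p}\) and \(p\) are illustrative values only.

```python
import numpy as np

N, Np, p = 8, 3, 0.6                          # chain length, polarized sites, polarization
sx = np.array([[0, 1], [1, 0]])               # Pauli sigma^x

rho_site_pol = 0.5 * (np.eye(2) + p * sx)     # (1/2)(1 + p sigma^x), Eq. (S 62)
rho_site_inf = 0.5 * np.eye(2)                # infinite-temperature site

start = (N - Np) // 2                         # put the polarized region at the center
rho = np.array([[1.0]])
for k in range(N):
    site = rho_site_pol if start <= k < start + Np else rho_site_inf
    rho = np.kron(rho, site)

assert np.isclose(np.trace(rho), 1.0)         # a valid density matrix
# each polarized site carries <I^x> = p/2 for spin-1/2; it vanishes elsewhere
```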
#### S8.2.3 Energy diffusion
Assuming no dissipative processes and disregarding the slow heating due to the Floquet drive (which is a good approximation within the prethermal plateau, see Sec. S7), the total energy of the (closed) quantum system described by Eq. (S 60) is (quasi-) conserved, and the system undergoes equilibration. Evidently, an initial state of the form of Eq. (S 61) possesses a spatially
dependent distribution of energy density: if the total energy of \(\rho_{\rm init}\) is non-zero with respect to \(\mathcal{H}_{\pi}\) (which is the case when \(\phi_{k}\) is non-zero in the region of polarization), then the energy density is initially localized in the region of polarization.
To define a local energy, let us rewrite the Hamiltonian \(\mathcal{H}_{\pi}\) as a sum of local terms
\[\mathcal{H}_{\pi}=\sum_{k}h_{k+1/2}\equiv\sum_{n}h_{n},\] (S 63)
where
\[h_{k+1/2} = J_{k}\left(3I_{k}^{z}I_{k+1}^{z}-\mathbf{I}_{k}\cdot\mathbf{I}_{k+1} \right)+\frac{1}{2}\phi_{k}I_{k}^{x}+\frac{1}{2}\phi_{k+1}I_{k+1}^{x}\] (S 64)
with \(k\) being a physical site index. The expectation value of \(h_{k+1/2}\) can be interpreted as the energy located at the two sites \(k\) and \(k+1\), i.e., the energy of the subsystem with center \(n=k+1/2\). Thus, in the system under consideration (S 60), \(h_{n}\equiv\mathcal{H}_{n}^{1}\).
Equation (S 63) allows us to define the variance of the energy distribution as
\[\sigma_{E}^{2}=\sum_{n}(n-\overline{n})^{2}\frac{\langle h_{n} \rangle}{\langle H\rangle},\] (S 65)
where \(\overline{n}=\sum_{n}n\langle h_{n}\rangle/\langle H\rangle\) can be seen as the spatial expectation value of energy. \(\sigma_{E}^{2}\) tracks the spread of the local energy distribution and, thus, contains essential information about on-going equilibration processes. For example, in diffusive systems, the diffusion equation predicts a linear growth of \(\sigma_{E}^{2}\) with time, \(\sigma_{E}^{2}\propto t\) (or correspondingly for the standard deviation \(\sigma_{E}\propto\sqrt{t}\)). By contrast, in ballistic systems, \(\sigma_{E}^{2}\) is expected to grow as \(\sigma_{E}^{2}\sim t^{2}\). Thus, \(\sigma_{E}^{2}\) might be used to distinguish distinct energy transport regimes in the out-of-equilibrium dynamics of many-body systems [44, 87].
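As an illustration of how Eq. (S 65) is used in practice, the following sketch evaluates \(\sigma_{E}^{2}\) from a profile of local energies and extracts the transport exponent from its growth in time; the Gaussian profile below is synthetic stand-in data, not simulation output.

```python
import numpy as np

def energy_variance(n, h):
    """sigma_E^2 of Eq. (S 65): spatial variance of the local energies <h_n>."""
    E = h.sum()                       # total energy <H>
    nbar = (n * h).sum() / E          # energy-weighted center
    return ((n - nbar) ** 2 * h).sum() / E

n = np.arange(-200, 201)
times = np.array([1.0, 2.0, 4.0, 8.0])
var = []
for t in times:
    sigma = np.sqrt(t)                # diffusive spreading, sigma_E = sqrt(D t), D = 1
    h = np.exp(-n**2 / (2 * sigma**2))
    var.append(energy_variance(n, h))

# slope of log(var) vs log(t): ~1 for diffusion, ~2 for ballistic transport
alpha = np.polyfit(np.log(times), np.log(var), 1)[0]
print(f"transport exponent alpha = {alpha:.2f}")   # -> ~1.00
```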
#### S8.2.4 Constant on-site potential \(\phi\)
In Fig. S15 we show the time evolution of the total (or net) \(\hat{\mathbf{x}}\)-polarization \(I_{x}=\sum_{k}I_{k}^{x}\) as well as the corresponding value of \(\sigma_{E}^{2}\) for the weakly-disordered Hamiltonian of Eq. (S 60) with constant on-site potential \(\phi_{k}=\phi\neq 0\). The initial state is of the form of Eq. (S 61). The polarized region at \(t=0\) extends over \(N_{p}=11\) spins with a polarization \(p=0.6\) per spin. We keep the disorder in \(J_{k}\) small to avoid many-body localization effects [95], which reflects the regime of the experiment. There are two notable points in Fig. S15: (i) apart from deviations at intermediate times \(J\tau\sim 1-10\), the truncation length-scale \(\ell_{\rm min}\) has little effect on the late time dynamics of short-range observables. In this regime, the total \(\hat{\mathbf{x}}\)-polarization essentially takes a steady value. (ii) The energy, on the other hand, continues to spread at a rate \(\sigma_{E}^{2}\propto t\), as expected for a diffusive system. The observed scaling also indicates that many-body localization effects are not present in the current parameter regime (related models indeed develop many-body localization in the strongly disordered regime [95, 96]). Note that whether the energy spread happens ballistically, superdiffusively, or diffusively is not of significant importance for our main result. The primary observation is that the energy density spreads in the system in the first place; the particular scaling at which this happens might influence the corresponding timescales, though. However, particularly in light of the enormous differences to the actual experimental model, we do not expect to quantitatively capture the values measured in the experiment.
Yet, we can get an idea of the relevant physical mechanisms. In that respect, analyzing the late-time state (\(Jt>10\) in Fig. S15) is very instructive: The initial state in Eq. (S 61) has a finite \(\hat{\mathbf{x}}\)-polarization and a finite energy \(E\) - since \(\phi\) is non-zero in the region of initial polarization. At late times, the standard deviation of the energy distribution grows as \(\sigma_{E}\sim\sqrt{t}\) (see Fig. S15), as expected for diffusive processes. We can thus expect the energy distribution to resemble a Gaussian in this regime:
\[\mathrm{Tr}\left[h_{n}\rho(t)\right]\approx\mathrm{Tr}\left[\mathcal{H}_{\pi}\,\rho_{\rm init}\right]p_{E}(n,t),\] (S 66)
where
\[p_{E}(n,t)=\frac{1}{\sigma_{E}(t)\sqrt{2\pi}}\exp\left(-\frac{(n- n_{\rm NV})^{2}}{2\sigma_{E}^{2}(t)}\right).\] (S 67)
We explicitly added the time dependence in \(\rho(t)\) and \(\sigma_{E}(t)\) to emphasize that the system is _not_ in a steady state. In particular, this implies that there is some non-zero information (and information currents) on all lengthscales.
Nevertheless, on small scales (compared to the scales present in \(\mathcal{H}_{\pi}\)) we can approximate the state \(\rho(t)\) as an \(\ell\)-local Gibbs state
\[\rho_{n}^{\ell}(t) = \mathrm{Tr}_{\overline{C_{n}^{\ell}}}[\rho(t)]\approx\frac{1}{Z_{n}^{\ell}}\exp\left(-\beta_{n}(t)\mathcal{H}_{n}^{\ell}\right)\] (S 68) \[= \frac{\mathds{1}_{\mathcal{D}_{n}^{\ell}}}{\mathcal{D}_{n}^{\ell}}-\frac{\beta_{n}(t)\mathcal{H}_{n}^{\ell}}{\mathcal{D}_{n}^{\ell}}+\mathcal{O}(\beta_{n}^{2}),\]
with the subsystem partition function \(Z_{n}^{\ell}=\mathrm{Tr}\left(\exp(-\beta_{n}(t)\mathcal{H}_{n}^{\ell})\right)\); \(\rho_{n}^{\ell}(t)\) is the density matrix at time \(t\) in the subsystem \(\mathcal{C}_{n}^{\ell}\), defined
by \(\ell+1\) spins centered around \(n\). Likewise, \(\mathcal{H}_{n}^{\ell}\) is the subsystem Hamiltonian (of the full Hamiltonian \(\mathcal{H}_{\pi}\)) associated with subsystem \(C_{n}^{\ell}\). \(\mathrm{Tr}_{\overline{C_{n}^{\ell}}}\) is a partial trace over the complementary subsystem \(\overline{C_{n}^{\ell}}\). \(\mathcal{D}_{n}^{\ell}=2^{\ell+1}\) is the dimension of the subsystem \(C_{n}^{\ell}\).
In the second line of Eq. (S 68), we assumed \(\beta_{n}(t)\ll 1\) (and \(Z_{n}^{\ell}\approx\mathcal{D}_{n}^{\ell}\)). Using Eq. (S 68) in Eq. (S 66), we obtain an expression for the local inverse temperature [97]
\[\beta_{n}(t)\approx\frac{-4}{\frac{3}{2}J_{n}^{2}+\frac{1}{2}(\phi_{n}^{2}+\phi_{n+1}^{2})}\mathrm{Tr}[\mathcal{H}_{\pi}\rho_{\mathrm{init}}]\;p_{E}(n,t).\] (S 69)
Equation (S 68) produces the (almost) exact expectation values for local observables on scales less than, or at most equal to, those present in the Hamiltonian itself. However, the subsystem density matrix \(\rho_{n}^{\ell}(t)\) from Eq. (S 68) does not reproduce the same dynamics observed at time \(t\): if we stop the dynamics at time \(t\), replace the actual state of the system by the snapshot approximation of Eq. (S 68) at time \(t\), and restart the evolution with the exchanged state, then the subsequent dynamics differs from the one created by the actual state [25,72]. The differences arise as the local Gibbs state of Eq. (S 68) is a maximum-entropy state with just enough information to reproduce the correct expectation values of local constants of the motion and information decays rapidly on larger scales. In the actual time-evolved state at time \(t\) this is not the case (i.e., the state is not a maximum-entropy state at time \(t\)). Consequently, the local Gibbs state is incapable of capturing correct expectation values on scales larger than those present in the Hamiltonian.
Note that this analysis is subtly different from the ETH discussion employed in Sec. S7.3, which is based on equilibrium arguments. In contrast, here the system is inherently in a non-steady state as energy continues to diffuse. Nonetheless, expectation values of local observables can yield steady values, as we now show. Applying the local Gibbs approximation, the total \(\hat{\mathbf{x}}\)-polarization is given by
\[\mathcal{I}_{x}(t) = \mathrm{Tr}\left[\left(\sum_{n}I_{n}^{x}\right)\rho(t)\right] \approx\sum_{n}-\beta_{n}(t)\frac{\mathrm{Tr}\big[I_{n}^{x}\mathcal{H}_{n}^{\ell}\big]}{\mathcal{D}_{n}^{\ell}}\] (S 70) \[= \mathrm{Tr}[\mathcal{H}_{\pi}\rho_{\mathrm{init}}]\sum_{n}\frac{\phi_{n}}{\frac{3}{2}J_{n}^{2}+\frac{1}{2}(\phi_{n}^{2}+\phi_{n+1}^{2})}p_{E}(n,t).\]
For vanishing disorder \(J_{n}\to J_{0}\) and a constant on-site potential \(\phi_{n}\)\(\rightarrow\)\(\phi\), \(\mathcal{I}_{x}\) becomes steady:
\[\mathcal{I}_{x}=\frac{\phi}{\frac{3}{2}J_{0}^{2}+\phi^{2}}\bigg(N_{p}\,p\phi-\frac{J_{0}}{2}p^{2}(N_{p}-1)\bigg),\] (S 71)
where we used \(\mathrm{Tr}[\mathcal{H}_{\pi}\rho_{\mathrm{init}}]=\big(N_{p}\,p\phi-\frac{J_{0}}{2}p^{2}(N_{p}-1)\big)/2\) with \(N_{p}\) the number of initially polarized spins. The steady value of \(\mathcal{I}_{x}\) is indicated by the dashed-dotted line in Fig. S15 (a) for \(J_{n}=J_{0}\). Note that the data shown in Fig. S15 are obtained using disordered \(J_{n}\), whence the small deviations.
In the (three-dimensional, long-range) experimental system spin-spin interaction terms average to zero and yield no contribution to the initial state energy (see Sec. S6.1). In contrast, in our one-dimensional short-range toy model, we obtain contributions from the spin-spin interaction terms to the initial energy \(\propto Jp^{2}\). Thus, in the following, we consider the limit \(p\to 0\) and keep only terms up to linear order in \(p\).
#### S8.2.5 Space-dependent on-site potential \(\phi_{n}\)
To analyze a site-dependent on-site potential \(\phi_{n}\) we can immediately adapt almost all of the above analysis. In fact, apart from the last step, the derivation of the previous section is completely general. As outlined in Sec. S7, the remaining effective on-site potential is expected to approximately scale as
\[\phi_{n}\propto\frac{1}{|n-n_{\mathrm{NV}}|^{3}}-\delta\theta,\] (S 72)
where \(n_{\mathrm{NV}}\) is the location of the NV-center [98]. To avoid dealing with divergences, we truncate the potential at \(\phi_{\mathrm{max}}-\delta\theta\) as \(n\to n_{\mathrm{NV}}\) (see Fig. S16):
\[J^{-1}\phi_{n}=\begin{cases}\frac{1}{|n-n_{\mathrm{NV}}|^{3}}-\delta\theta,& \text{if }\frac{1}{|n-n_{\mathrm{NV}}|^{3}}<J^{-1}\phi_{\mathrm{max}}.\\ J^{-1}\phi_{\mathrm{max}},&\text{otherwise}.\end{cases}\] (S 73)
Plugging \(\phi_{n}\) in Eq. (S 70) and assuming \(\sigma_{E}=\sqrt{Dt}\), we obtain the absolute value of the total net polarization over time in the presence of an on-site potential of the form in Eq. (S 73). Figure S17 displays the obtained time evolution curve for \(D=1\). \(\delta\theta\) is chosen positive so that far away from \(n_{\mathrm{NV}}\), we obtain \(\lim_{|n-n_{\mathrm{NV}}|\rightarrow\infty}\phi_{n}/J=-\delta\theta\). Initially, the energy density is localized in the regime where \(\phi_{n}>0\); hence, we find a positive total net polarization. As time progresses an increasing amount of energy diffuses into the regime where \(\phi_{n}<0\), leading to a decrease and, eventually, a sign inversion of the total net polarization. The timescale of the crossing depends sensitively on the initial state and the diffusion constant \(D\). Notably, at late times when almost all energy is located in the region with \(\phi_{n}<0\), the total net polarization reaches a steady value given by
\[\lim_{t\rightarrow\infty}\mathcal{I}_{x}(t)=-\frac{J\delta\theta}{\frac{3}{2}J^{2}+J^{2}\delta\theta^{2}}\mathrm{Tr}[\mathcal{H}_{\pi}\rho_{\mathrm{init}}].\] (S 74)
For large values of the offset \(\delta\theta\), the late-time steady value scales as \(\sim 1/\delta\theta\).
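The mechanism behind Fig. S17 can be reproduced with a few lines of Python: evaluate Eq. (S 70) with the truncated potential of Eq. (S 73) and a diffusive Gaussian \(p_{E}(n,t)\) with \(\sigma_{E}=\sqrt{Dt}\). All parameter values below are illustrative.

```python
import numpy as np

J, dtheta, phi_max, D, n_NV = 1.0, 0.2, 5.0, 1.0, 0.0
n = np.arange(-400, 401, dtype=float)

# Eq. (S 73): 1/|n - n_NV|^3 - dtheta, capped at phi_max near the NV
with np.errstate(divide="ignore"):
    raw = 1.0 / np.abs(n - n_NV) ** 3
phi = np.where(raw < phi_max, raw - dtheta, phi_max) * J

E0 = 1.0                              # Tr[H rho_init] > 0 (energy of the initial state)

def I_x(t):
    sigma = np.sqrt(D * t)
    p_E = np.exp(-(n - n_NV) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    # np.roll gives phi_{n+1}; the wrap-around at the far edge is negligible
    weight = phi / (1.5 * J**2 + 0.5 * (phi**2 + np.roll(phi, -1) ** 2))
    return E0 * (weight * p_E).sum()

for t in [1, 10, 100, 1000]:
    print(t, I_x(t))                  # positive at early t, sign inversion at late t
```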
Evidently, the steady value of the polarization is not only controlled by \(\delta\theta\) but also by the total initial energy. Assuming a fixed initial polarization per spin \(p\) as well as a fixed region of polarization, and given Eq. (S 73), the energy of the initial state
is given by
\[\text{Tr}[\mathcal{H}_{\pi}\rho_{\text{init}}] = \frac{p}{2}\sum_{\begin{subarray}{c}\text{region of}\\ \text{polarization}\end{subarray}}\left(\frac{1}{|n-n_{\text{NV}}|^{3}}-\delta\theta\right)+\mathcal{O}(p^{2})\] (S 75) \[= \frac{p}{2}(c-N_{p}\delta\theta)+\mathcal{O}(p^{2}),\]
where \(c=\sum\limits_{\begin{subarray}{c}\text{region of}\\ \text{polarization}\end{subarray}}\frac{1}{|n-n_{\text{NV}}|^{3}}\) is a positive constant, and \(N_{p}\) is the total number of spins in the region of polarization. The late-time steady value thus has two zeros, cf. Eq. (S 74): \(\delta\theta=0\) and \(\delta\theta=c/N_{p}\). Clearly, the second zero depends strongly on the extent of the initially polarized region. The two zeros constrain the regime where we expect to find a diffusion-induced sign inversion (see Fig. S18). As \(N_{p}\) increases, the \(\delta\theta\) window, where the sign inversion can be observed, shrinks.
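A small sketch of the resulting sign-inversion window, using Eqs. (S 74)-(S 75): the window is bounded by \(\delta\theta=0\) and \(\delta\theta=c/N_{p}\) and shrinks with growing \(N_{p}\). For simplicity the site at \(n_{\rm NV}\) is excluded from the sum (the text instead truncates the potential at \(\phi_{\rm max}\)); the region sizes are illustrative.

```python
import numpy as np

n_NV = 0
for Np in [5, 11, 21]:
    # polarized region of Np sites centered on n_NV, excluding the NV site itself
    sites = np.arange(n_NV - Np // 2, n_NV + Np // 2 + 1)
    sites = sites[sites != n_NV]
    c = np.sum(1.0 / np.abs(sites - n_NV) ** 3)
    print(f"N_p = {Np:3d}: sign-inversion window 0 < delta_theta < {c / Np:.3f}")
# the window shrinks as N_p grows, as in Fig. S18
```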
In the experiment, \(N_{p}\) is related to the hyperpolarization time: longer hyperpolarization times imply that more \({}^{13}\)C spins further away from the NV-center are partially polarized.
#### S8.2.6 Polarization gradients
A very interesting aspect emerging from the above analysis is that the local polarization of a given spin at position \(n\) (to leading order in \(\phi_{n}\)) is given by
\[\mathcal{I}_{n}^{x}(t)=\frac{\phi_{n}}{\frac{3}{2}J_{n}^{2}+\frac{1}{2}(\phi_{n }^{2}+\phi_{n+1}^{2})}\text{Tr}[\mathcal{H}_{\pi}\rho_{\text{init}}]p_{E}(n,t)\] (S 76)
In particular, in the limit of a small on-site potential \(\phi_{n}\) we obtain a linear dependence
\[\mathcal{I}_{n}^{x}(t)=\frac{2\phi_{n}}{3J_{n}^{2}+\left(a\frac{\partial\phi_{ n}}{\partial n}\right)^{2}}p_{E}(n,t)\text{Tr}[\mathcal{H}_{\pi}\rho_{\text{ init}}]+\mathcal{O}(\phi_{n}^{2}),\] (S 77)
where we used \(\phi_{n+1}=\phi_{n}+a\frac{\partial\phi_{n}}{\partial n}+\mathcal{O}(a^{2})\) with the lattice constant \(a\). The local polarization at site \(n\) is expected to be proportional to the on-site potential \(\phi_{n}\). The latter is induced by the nearby presence of the NV center and is expected to vary on a length scale of nanometers which in turn implies a polarization gradient imposed on a nanometer lengthscale.
In Fig. S19 we show the distribution of the polarization for three different snapshots in time. The zero-crossing points (in space) that separate the positively polarized from the negatively polarized regime are found from Eq. (S 73) setting \(\phi_{n}=0\). Note that this crossing point is time-independent and only determined by the applied on-site potential. As time progresses, spins further away from \(n_{\text{NV}}\) develop negative polarization, while simultaneously spins close to \(n_{\text{NV}}\) lose polarization (recall that the total net polarization is not conserved; thus these processes can happen in unequal proportions). Depending on the energy of the initial state (and more precisely, on the sign of the effective temperature in the prethermal plateau), a total negative polarization can develop over time and serve as a non-fine-tuned indicator for the presence of a nanoscale polarization gradient.
#### S8.2.7 Energy diffusion in inhomogeneous systems
Let us first assume a constant \(\phi_{n}=\phi\). As discussed in Sec. S8.2.4, in this case energy spreads diffusively in the system with a spatially uniform diffusion constant \(D\neq 0\). In the limit \(\phi\to\infty\) the Hamiltonian \(\mathcal{H}_{\pi}\) reduces to a single-particle problem with vanishing energy diffusion, \(D\to 0\). For increasing values of \(\phi\), we thus expect a decrease of \(D\).
Let us now introduce the spatial dependence of \(\phi_{n}\) given in Eq. (S 73). Far away from \(n_{\rm NV}\) we expect to find a (spatially constant) diffusion constant \(D_{\rm asympt}\). Likewise, close to \(n_{\rm NV}\) (assuming the cut-off depicted in Fig. S16) we have \(D_{n_{\rm NV}}\). Since the asymptotic value \(\delta\theta\ll\phi_{n_{\rm NV}}/J\), we expect \(D_{n_{\rm NV}}<D_{\rm asympt}\). The necessary interpolation between these two values implies a spatially dependent diffusion constant. Such a diffusion problem can be recast in a Fokker-Planck equation, which has no flat steady-state solution as \(D\to D(n)\).
Figure S20 shows the energy distribution as a function of time for three different truncation values of \(\phi_{\rm max}\) at fixed \(\delta\theta\). For \(\phi_{\rm max}/|J_{0}|\gg 1\) (panel (a)), the distribution of energy is significantly distorted compared to the Gaussian distribution of Eq. (S 67). In particular, the spread of energy slows down significantly with increasing time. When lowering \(\phi_{\rm max}\), this effect is reduced and the spread of energy continues even at late times (panels (b) and (c)). Since our simulations can only reach finite times, we cannot rule out the possibility that the spread of energy slows down in (b) and (c) as well, and eventually comes to a halt with a space-dependent steady state. The slowing down observed in the closed one-dimensional short-range toy model resembles similar effects discussed in recent papers on Stark many-body localization [99, 100, 101]. As shown in the next section, this effect is mitigated by dissipation.
Finally, let us notice that in the experimental three-dimensional long-range system, the (here) observed slowdown of diffusion is expected to be less pronounced as the relative importance of the pinning potential is drastically reduced. In that case, we expect that energy spreads diffusively and the late time steady value analysis of Sec. S8.2.5 remains valid.
#### S8.2.8 Dissipation
In Section S6.2, we discussed the impact of decoherence and dissipation on the \({}^{13}\)C spins, which are influenced by the phonon bath generated by the diamond lattice and mediated by the NV center. In the singular coupling limit, we derived the explicit form of the Lindblad master equation, which exhibits two crucial characteristics for our subsequent analysis. First, the Lindblad jump operators induce both dephasing and dissipation on the \({}^{13}\)C spins when they are polarized along the \(\hat{\mathbf{x}}\) axis. Second, the coupling strength of the Lindblad jump operators decays more rapidly with distance from the NV center (\(\sim 1/r^{6}\)) compared to the dipole interaction (\(\sim 1/r^{3}\)). These properties enable us to simplify the effect of dissipation in our one-dimensional toy model used for the numerical simulations. Specifically, we only apply local jump operators for \(n=n_{\rm NV}\), which represents the spin closest to the hypothetical NV center in our simulations. Moreover, in Eq. (S 58) we employ the three jump operators
\[L_{+}=\frac{1}{2}\big(\sigma_{n_{\rm NV}}^{x}+i\sigma_{n_{\rm NV}}^{y}\big),\ \ L_{-}=\frac{1}{2}\big(\sigma_{n_{\rm NV}}^{x}-i\sigma_{n_{\rm NV}}^{y}\big),\ \ L_{z}=\sigma_{n_{\rm NV}}^{z},\] (S 78)
where \(L_{+}\) and \(L_{-}\) generate dissipation, while \(L_{z}\) is responsible for dephasing. Note that in an open quantum system, energy is not a globally conserved quantity: the Lindblad jump operators in Eq. (S 78) with coupling constants \(\gamma_{+}=\gamma_{-}=\gamma_{z}\), as implemented in the numerical simulations, act as an energy sink (see Fig. S21).
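For concreteness, here is a minimal NumPy sketch of the dissipator of Eq. (S 58) with the jump operators of Eq. (S 78), written for the single site \(n_{\rm NV}\); the coupling value is illustrative, and \(\gamma_{+}=\gamma_{-}=\gamma_{z}\) as in the simulations.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

L_plus = 0.5 * (sx + 1j * sy)        # raising operator: dissipation
L_minus = 0.5 * (sx - 1j * sy)       # lowering operator: dissipation
L_z = sz                             # dephasing

def dissipator(rho, gamma=0.1):
    """Lindblad dissipator: sum_j gamma (L rho L^dag - 1/2 {L^dag L, rho})."""
    out = np.zeros_like(rho)
    for L in (L_plus, L_minus, L_z):
        out += gamma * (L @ rho @ L.conj().T
                        - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return out

rho = 0.5 * (np.eye(2) + 0.6 * sx)   # an x-polarized spin, as in Eq. (S 62)
drho = dissipator(rho)
print(np.trace(drho))                # ~0: the Lindblad form preserves the trace
```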
Solving the respective subsystem Lindblad equation (S 58) with the jump operators of Eq. (S 78) and the on-site potential sketched in Fig. S16, yields the time-dependent energy distribution shown in Fig. S22.
Apart from the local jump operators at \(n=n_{\text{NV}}\), in Fig. S20 (a) and Fig. S22 we employ identical simulation parameters. Contrasting both figures, it is evident that dissipation changes the energy distribution, especially at late times: instead of having the bulk of the energy density accumulated around \(n=n_{\text{NV}}\) as in Fig. S20 (a), the distribution in Fig. S22 resembles a Gaussian distribution with standard deviation \(\sigma_{E}\sim\sqrt{Dt}\).
In Fig. S23 we show the time-evolution curves corresponding to Fig. S20 (a) and Fig. S22: notice that, while there is no sign-inversion of the total net polarization in case of a closed quantum system (where the majority of energy density remains localized around \(n=n_{\text{NV}}\)), the open quantum system develops negative polarization whose value asymptotically approaches the expected late time steady-state (dashed black line in Fig. S23):
\[\mathcal{I}_{x}(Jt\gg 1)=-\frac{J\delta\theta}{\frac{3}{2}J^{2}+J^{2}\delta \theta^{2}}\text{Tr}[\mathcal{H}_{\pi}\rho(t)].\] (S 79)
Equation (S 79) corresponds to Eq. (S 74) weighted with the fraction of energy left in the system at time \(t\); this is brought in by using the time-dependent density matrix \(\rho(t)\). The latter is justified whenever the diffusion timescales are much faster than the dissipation timescales, so that at every fixed time \(t\) the system can still be approximated by a local Gibbs state.
With dissipation acting only at \(n=n_{\text{NV}}\) we find a single sign inversion of the total \(\mathbf{\hat{x}}\)-polarization in a bounded region of \(\delta\theta\) values, which indeed coincides with the analytical analysis carried out in Secs. S8.2.3–S8.2.5. Fig. S24 shows the corresponding results. Note that the shape of the sign-inversion arc can depend on the relative weight of the different parameters. However, for the applied potential (Eq. (S 73)), we expect the upper asymptotic value to approach \(\delta\theta=0\) for any set of parameters and different initial states. This offers an experimentally accessible way to calibrate the \(\mathbf{\hat{x}}\)-kick angle at \(\pi\) (corresponding to \(\delta\theta=0\) in the toy model): an \(\mathbf{\hat{x}}\)-kick angle of \(\pi\) is expected to be insensitive to changes of the initial state. This holds true even in the presence of dissipation (decaying with the distance from the NV center). In contrast, the lower asymptotic value (here around \(-\delta\theta\approx-0.6\)) depends strongly on the initial state (see also the discussion of Sec. S8.2.5, in particular, Fig. S18) [102].
### S8.3 Polarization diffusion around \(\theta\approx\pi/2\): state engineering
In contrast to the \(\mathbf{\hat{x}}\)-kick angle close to \(\pi\), where the only globally (quasi-)conserved quantity is energy, at kick angles around \(\pi/2\) the corresponding effective Hamiltonian of Eq. (S 54) additionally preserves the total \(\mathbf{\hat{x}}\)-polarization. This significantly changes the relevant physics since, in this case, besides energy, also the \(\hat{\mathbf{x}}\)-polarization density (i.e., the spatially resolved \(\hat{\mathbf{x}}\)-polarization) diffuses in the system. Thus, a uniformly polarized initial state
cannot lead to a stable polarization gradient. Moreover, even for more complex initial states - where polarization gradients might be present - they cannot be detected by the experimentally available global measurements of the (quasi-)conserved energy and polarization. The only way to induce a sign inversion of the total net polarization is to break conservation of the total \(\hat{\mathbf{x}}\)-polarization by dissipation.
#### S8.3.1 Toy model Hamiltonian
Along the lines of Sec. S8.2.1, we use a one-dimensional short-range toy model Hamiltonian akin to the effective Hamiltonian of Eq. (S 54)
\[\mathcal{H}_{\pi/2}=\sum_{k}J_{k}\left(\frac{3}{2}\left(I_{k}^{z}I_{k+1}^{z}+I_{k}^{y}I_{k+1}^{y}\right)-\mathbf{I}_{k}\cdot\mathbf{I}_{k+1}\right).\] (S 80)
Once again, to mimic the random spin positions of \({}^{13}\)C atoms in the original model, we use \(J_{k}=J_{0}+W_{k}\), where \(W_{k}\in[-W,W]\) is a uniformly distributed random number.
#### S8.3.2 Initial states
Similar to Sec. S8.2.2, we work with asymptotically translationally invariant states akin to Eq. (S 61); however, in the region of polarization, the local polarization now depends on the site index, i.e., \(p\to p_{n}\). In practice, to mimic the experiment, we initialize a polarization profile that is positive close to \(n_{\mathrm{NV}}\) and negative further away from \(n_{\mathrm{NV}}\). The profile is shown in Fig. S25. We evolve this state with the Hamiltonian of Eq. (S 80) and perform numerical simulations using the LITE algorithm. Over time, the initial domain wall melts and diffuses through the system (see Fig. S25).
#### S8.3.3 Polarization diffusion
There is a striking difference between the dynamics generated by the \(\theta=\pi/2\) kick Hamiltonian of Eq. (S 80) and the \(\theta=\pi\) kick Hamiltonian of Eq. (S 60): with the total net polarization conserved, besides energy, also polarization diffuses through the system. As a consequence, even when starting from an initial state with a polarization dipole, there is no experimental way to resolve this spatial structure from the total net polarization (which is constant in time). Only if dissipation induced by the NV center is sufficiently strong to reduce the polarization close to \(n_{\mathrm{NV}}\) (here the positively polarized part), can the total net polarization change significantly over time; under these conditions, a sign inversion of the total net polarization, similar to the one discussed in Sec. S8.2, can be observed. These two scenarios are shown in Fig. S26. The dissipation is modeled using on-site dissipators according to Eq. (S 78) with a non-zero dissipation coupling constant only for \(n=n_{\mathrm{NV}}\).
### S8.4 Comparison between energy diffusion and polarization diffusion
The employed toy models of Sec. S8.2 and Sec. S8.3 can both be used to create the characteristic sign-inverting signature of the total net polarization for the respective parameter regimes and initial states. While seemingly similar, the underlying physics is fundamentally different in both cases: The toy model of Sec. S8.2 (which corresponds to the experimental system driven with \(\hat{\mathbf{x}}\)-kick angles close to \(\pi\)) has no globally conserved quantity other than energy. In this case, energy diffuses and, over time, a polarization gradient builds up; in particular, the boundary between negatively and positively polarized spins \(r_{c}\) remains constant in time. By contrast, the toy model investigated in Sec. S8.3 (which corresponds to the experimental system driven with \(\hat{\mathbf{x}}\)-kick angles of \(\pi/2\)) has the total \(\hat{\mathbf{x}}\)-polarization as an additional global (quasi-)conserved quantity. Thus, in this case, polarization diffuses in the system. This, in turn, implies that any artificially designed polarization domain wall is _not_ preserved in time, i.e., \(r_{c}=r_{c}(t)\) changes over time. The exact functional form of \(r_{c}(t)\), in this case, depends sensitively on the interplay of diffusion, dissipation, and the initial state.
Figure S27 shows a direct comparison between the dynamics of the different cases with and without dissipation. The right column depicts the evolution at an \(\hat{\mathbf{x}}\)-kick angle close to \(\pi\) (toy model of Sec. S8.2, Hamiltonian engineering approach): Using the applied potential given in Eq. (S 73), we can compute \(r_{c}\) (dashed vertical line in the right column of Fig. S27). As energy diffuses following the \(\sigma_{E}\propto\sqrt{t}\) scaling, the polarization gradient builds up with \(r_{c}\) being constant. Instead, in the left column panels we show the dynamics generated by an \(\hat{\mathbf{x}}\)-kick angle of \(\pi/2\) (toy model of Sec. S8.3, state engineering approach). In both panels (i.e., with and without dissipation) the position of the domain wall separating the positively and negatively polarized regimes depends strongly on time, as a result of polarization diffusion.
### S8.5 Summary
Using two numerically tractable one-dimensional short-range models akin to the three-dimensional long-range Hamiltonians present in the experiment, we have explored two different diffusion effects. While our toy model resembling the experimental regime of kick angles \(\sim\pi\) only has energy as a globally conserved quantity, the corresponding toy model for kick angles \(\sim\pi/2\) additionally has the total \(\hat{\mathbf{x}}\)-polarization as a conserved charge. This additional constant of motion changes the diffusive behaviour: polarization diffuses through the system and domain-walls are not stable - neither in space nor in time. In contrast, if energy is the only globally conserved quantity, polarization domain walls can build up in the system as energy diffuses. Such domain walls are then stable in space and time (within the duration of the prethermal plateau). We analyze this effect in detail by means of numerical and analytical methods: Assuming an initial state with a finite amount of energy whose density is located around the NV-center, which induces a spatially dependent on-site potential on \({}^{13}\)C nuclei, we show that the total net polarization can undergo a sign inversion while energy diffuses through the system. Depending on the initial state and the asymptotic value of the on-site single particle potential, the late-time steady-state polarization can either take a negative or positive value. Importantly, for the single-particle potentials considered here, there is always a finite range of asymptotic single-particle potentials for which a sign-inversion of the total net polarization is obtained.
Locally, the polarization of individual spins is proportional to the value of the single on-site potentials. For space-dependent on-site potentials, this implies polarization gradients, where the sign-inversion of the total net polarization might serve as a global indicator for the latter. To arrive at this result, we only assume diffusion of energy as well as a local Gibbs approximation of a given late-time state. Hence, it can straightforwardly be transferred to a three-dimensional model (as present in the experiment).
Model-specifically, we find that in our theoretically tractable one-dimensional short-range toy model, energy remains bound in regions with strong on-site potentials and diffusion slows down significantly. Some aspects that underlie this behaviour can be transferred over to the actual three-dimensional system with long-range couplings - such as a spatially-dependent diffusion constant; however, diffusive (or non-diffusive) properties of the experimental system cannot be deduced from this analysis, since localization effects are known to display a severe dependence on the dimensionality of the system and the range of interactions. Including dissipation - which is expected to appear in the experimental system as well - enhances the diffusive behaviour of the theoretical one-dimensional short-range model. The expected late-time value of the total \(\hat{\mathbf{x}}\)-polarization is then reduced by a factor accounting for the continuous loss of energy.
## S9 Role of dimensionality and interaction range
Numerically simulating long-range three-dimensional quantum systems with a non-vanishing linear extent is notoriously hard. However, the essential physics is often captured already by simpler models. In order to gain a qualitative understanding of the experimental system we therefore study simplified versions of the long-range three-dimensional system. In particular, we restrict ourselves to exact quantum dynamics of long-range one-dimensional small systems (Sec. S7) and approximate quantum dynamics of a short-range one-dimensional quasi-infinite system (Sec. S8).
In general, one cannot expect that these simplifications lead to _qualitatively_ similar results. Therefore, in this section we summarize possible caveats of the simplifications and which behaviour is expected to generalize to three dimensions.
### Floquet dynamics beyond the prethermal plateau
Let us briefly comment on the Floquet prethermalization of the considered models. It was found experimentally before [74] that the three-dimensional long-range interacting model has a stable prethermal plateau whose lifetime grows algebraically with the driving frequency, i.e., \(T_{\text{prethermal}}\propto\omega^{2}\). This is in contrast to the exponentially long lifetime of a short-range interacting system, \(T_{\text{prethermal}}\propto\exp(\omega/\text{const})\). However, the effective Hamiltonian analysis in Sec. S7 only relies on the existence of a prethermal plateau and not on its parametric scaling.
### Equilibration and thermalization dynamics within the prethermal plateau
In contrast to the discussion in Sec. S8, the experimental system is described by a three-dimensional long-range Hamiltonian (instead of a short-range one-dimensional one). While the
results derived in Sec. S8 are independent of the dimensionality of the problem Hamiltonian, they rely on the locality of the Hamiltonian.
In the three-dimensional experimental system, the locality of the Hamiltonian may be violated as spin-spin interactions decay with a critical exponent. However, as far as expectation values are concerned, most of the energy is still stored on local scales. To see this, let us decompose the long-range Hamiltonian into two parts
\[H=H_{r\leq\ell_{c}}+H_{r>\ell_{c}},\] (S 81)
where \(H_{r\leq\ell_{c}}\) contains all dipolar coupling terms of spins with a distance up to \(\ell_{c}\), while \(H_{r>\ell_{c}}\) covers the rest. At late times, when the (diffusive) spread of information has reached the system size, the state of the system becomes thermal, \(\rho(t\rightarrow\infty)=Z^{-1}\exp(-\beta H)\), with the partition function \(Z=\mathrm{Tr}[\exp(-\beta H)]\) and non-zero inverse temperature \(\beta\). Using \(\rho(t\rightarrow\infty)\), the energy found on scales larger than \(\ell_{c}\) is given by
\[\langle H_{r>\ell_{c}}\rangle(t\rightarrow\infty)=\mathrm{Tr}\left[H_{r>\ell _{c}}\rho(t\rightarrow\infty)\right]\approx\frac{-\beta}{D}\mathrm{Tr}\left[H_ {r>\ell_{c}}^{2}\right],\] (S 82)
with \(D\) being the dimension of the Hilbert space. Note that the mixed term \(\mathrm{Tr}\left[H_{r>\ell_{c}}H_{r\leq\ell_{c}}\right]=0\) vanishes as \(H_{r>\ell_{c}}H_{r\leq\ell_{c}}\) contains strings with at least two non-trivial Pauli operators and all non-trivial Pauli strings are traceless, see Eq. (S 46). The remaining trace evaluates to
\[\mathrm{Tr}\left[H_{r>\ell_{c}}^{2}\right]=D\sum_{\begin{subarray}{c}i<j\\ r_{ij}>\ell_{c}\end{subarray}}J_{\mathrm{exp}}^{2}\frac{(3\cos^{2}(\theta_{ij} )\!-\!1)^{2}}{r_{ij}^{6}}.\] (S 83)
Assuming uniform \({}^{13}\)C density and \(\ell_{c}\) to be much larger than the average inter-spin distance we can replace the sum with an integral. This results in
\[\langle H_{r>\ell_{c}}\rangle(t\rightarrow\infty)\approx-2\pi\beta\frac{4J_{ \mathrm{exp}}^{2}}{5}\sum_{i}\int_{r>\ell_{c}}\frac{\mathrm{d}r}{r^{4}}=\frac{ -8N\pi\beta J_{\mathrm{exp}}^{2}}{15\ell_{c}^{3}}\] (S 84)
with \(N\) the total number of spins. Up to corrections of order \(\mathcal{O}(1/\ell_{c}^{3})\) the energy is thus located on scales \(<\ell_{c}\):
\[\langle H\rangle(t\rightarrow\infty)=\langle H_{r\leq\ell_{c}}\rangle(t\rightarrow\infty)+\mathcal{O}(1/\ell_{c}^{3}).\] (S 85)
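As a quick numerical consistency check of the angular average used to pass from Eq. (S 83) to Eq. (S 84): the average of \((3\cos^{2}\theta-1)^{2}\) over the sphere is \(4/5\).

```python
import numpy as np

u = np.linspace(-1.0, 1.0, 2_000_001)          # u = cos(theta)
du = u[1] - u[0]
avg = np.sum((3 * u**2 - 1) ** 2) * du / 2.0   # (1/2) * integral over u in [-1, 1]
print(avg)                                     # -> ~0.8 = 4/5
```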
Similarly, at early times, for the initial state with finite polarization close to the NV-center (and featureless anywhere else), we have \(\langle H_{r>\ell_{c}}\rangle(t)=0\). Assuming diffusive transport, the late time value of Eq. (S 84) poses an upper bound on the energy found on scales \(>\ell_{c}\). Thus, as long as we are not interested in correlation functions that exceed the scale \(\ell_{c}\), we can neglect the energy stored at scales \(>\ell_{c}\) and approximate the full Hamiltonian of the system using only terms up to \(\ell_{c}\)
\[H\approx\sum_{n}h_{V_{n}^{\ell_{c}}},\] (S 86)
where \(h_{V_{n}^{\ell_{c}}}\) is the \(\ell_{c}\)-local Hamiltonian density in the volume \(V_{n}^{\ell_{c}}\) with center \(n\) and diameter \(\ell_{c}\). Similar arguments were derived in Ref. [103] in the context of Floquet heating.
Equation (S 86) allows us to repeat the steps outlined in Secs. S8.2.3–S8.2.6. In analogy to Eq. (S 69), this yields local inverse temperatures of the form
\[\beta_{n}(t)\approx\frac{\mathrm{Tr}\left[H\rho_{\mathrm{init}}\right]}{\mathrm{Tr}\left[h_{V_{n}^{\ell_{c}}}\,H\right]}\;p_{E}(n,t),\] (S 87)
where \(\mathrm{Tr}\left[h_{V_{n}^{\ell_{c}}}H\right]\) takes a non-zero positive value depending on \(n\) and \(\ell_{c}\). Following similar steps as derived in Secs. S8.2.3–S8.2.6, we obtain the local polarization of a given spin \(n\)
\[I_{n}^{x}(t)=\frac{\phi_{n}}{4}\frac{\mathrm{Tr}\left[H\rho_{\mathrm{init}}\right]}{\mathrm{Tr}\left[h_{V_{n}^{\ell_{c}}}\,H\right]}p_{E}(n,t).\] (S 88)
Finally, let us point out that interactions in the three-dimensional long-range system depend on the angle \(\vartheta\) between the lattice position vector and the applied magnetic field. This leads to an angular dependence in the couplings and in the spatially inhomogeneous on-site potential \(\phi_{n}\) induced by the NV electron spin, which cannot be equivalently modeled in one dimension. Moreover, the radial dependence of the exact profile does not follow a simple \(1/r^{3}\)-dependence, see the effective Hamiltonian analysis around Eq. (S 41) and Fig. S10(b) for the exact form.
The angular dependence in the spin-spin couplings has no qualitative impact on the observed dynamics. Nevertheless, it does change the details of the above analysis. For instance, the key results of diffusion and the fact that the effective potential profile will also be imprinted in the local polarization profile, Eq. (S 88), remain unchanged. However, angular-dependent spatially inhomogeneous on-site potentials lead to a quantitatively different local polarization profile following the local on-site potential, which is also found in the classical three-dimensional simulation, Sec. S10.
## S10 Three-dimensional classical simulations on a diamond lattice
In this section we complement the observations from the previous Secs. S7 and S8, performed for few-spin or infinite 1D systems, with a classical simulation of a finite 3D long-range system on a diamond lattice of \(L=1000\) randomly placed spins.
Performing a classical simulation allows us to reach considerably larger system sizes of hundreds of spins, enabling the study of 3D systems with a finite linear extent. At the same time, however, a classical simulation comes at the expense of neglecting all quantum correlations. The thermalization analysis in Sec. S7.3 and the approximate dynamics in Sec. S8 indicate that the system starts in a high-temperature state and remains close to a high-temperature state. This suggests that the dynamics may be described well classically.
While the classical simulation scales only linearly in the number of spins \(L\), the number of spins in a three-dimensional system scales cubically, \(L\propto r^{3}\), in the linear dimension \(r\). Thus, studying quasi-infinite systems, i.e., systems with a linear extension far exceeding the support of the initial state, is beyond reach even for the classical simulation. Therefore, we cannot study free diffusion in these systems. Hence, we restrict the study of three-dimensional systems to testing the predictions from the ETH analysis in Sec. S7.3 about the formation of a spatially inhomogeneous local polarization starting from a homogeneously polarized state in three dimensions.
As discussed before, it is sufficient to consider the Hamiltonian time evolution generated by the effective Hamiltonian (S 54) for intermediate times, since we are only interested in the prethermal properties.
### Classical simulation algorithm
_Equations of motion._ Notice that the effective Hamiltonian analysis in Sec. S7 was performed for a quantum system. However, the same analysis and arguments of the Floquet prethermal plateau are expected to remain valid also in the classical limit [104; 105]. The starting point for the classical equations of motion is the quantum Heisenberg equation of motion for the expectation values of the spin operators with respect to the three-dimensional Hamiltonian, Eq. (S 55),
\[\frac{\mathrm{d}}{\mathrm{d}t}\langle\mathbf{I}_{k}\rangle(t)=\big{\langle}i\big{[} \mathcal{H}_{\mathrm{eff}}^{\theta\approx\pi},\,\mathbf{I}_{k}\big{]}\big{\rangle} (t)=\big{\langle}\mathbf{I}_{k}\mathbf{\times}\mathbf{\nabla}_{\mathbf{I}_{k}}\mathcal{H}_{ \mathrm{eff}}^{\theta\approx\pi}\big{\rangle}(t)\] (S 89)
where \(\mathbf{I}_{k}=(I_{k}^{x},I_{k}^{y},I_{k}^{z})^{T}\) with spin index \(k\), and \(\times\) denotes the vector (cross) product in 3D. In the second line of Eq. (S 89) we used the spin algebra \([I_{k}^{\alpha},\,I_{l}^{\beta}]=i\delta_{kl}\sum_{\gamma}\epsilon_{\alpha\beta\gamma}I_{k}^{\gamma}\), with the fully anti-symmetric Levi-Civita symbol \(\epsilon_{\alpha\beta\gamma}\). The classical approximation amounts to a mean-field factorization \(\big\langle I_{k}^{\alpha}\,I_{l}^{\beta}\big\rangle\approx\big\langle I_{k}^{\alpha}\big\rangle\big\langle I_{l}^{\beta}\big\rangle\). Therefore, the classical equations of motion for the classical spin vectors \(\mathbf{I}_{k}=(\langle I_{k}^{x}\rangle,\langle I_{k}^{y}\rangle,\langle I_{k}^{z}\rangle)^{T}\) read
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{I}_{k}(t)=\mathbf{I}_{k}(t)\mathbf{\times}\mathbf{\nabla }_{\mathbf{I}_{k}}\mathcal{H}(\mathbf{I}_{k}(t))\,,\] (S 90)
where \(\mathcal{H}=\mathcal{H}_{\mathrm{eff}}^{\theta\approx\pi}(\mathbf{I}_{k})\) is a quadratic function of the spin expectation values and is obtained from the operator \(\mathcal{H}_{\mathrm{eff}}^{\theta\approx\pi}\) by replacing the spin operators with the classical vectors \(\mathbf{I}_{k}\).
Notice that Eq. (S 90) only contains \(2L\) independent degrees of freedom instead of the \(2^{L}\) of a quantum system of \(L\) spins, thus leading to an exponential reduction of degrees of freedom. Therefore, the classical simulation enables us to study much larger systems compared to the quantum simulation (\(L\approx 1000\)). Further notice that Eq. (S 90) is a non-linear equation of motion, as the right-hand side is quadratic in \(\mathbf{I}_{k}\), and must thus be solved iteratively. The classical chaotic dynamics renders time evolution up to very long times harder compared to the linear quantum equations.
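A minimal sketch of this classical time evolution, Eq. (S 90), specialized for brevity to the 1D toy Hamiltonian of Eq. (S 60) (the 3D long-range case only changes the mean field); a fixed-step RK4 integrator with illustrative parameters.

```python
import numpy as np

def mean_field(I, Jk, phi):
    """B_k = dH/dI_k for H = sum_k [J_k (3 I^z_k I^z_{k+1} - I_k.I_{k+1}) + phi_k I^x_k]."""
    B = np.zeros_like(I)
    B[:, 0] += phi                                  # on-site term along x
    coup = np.array([-1.0, -1.0, 2.0])              # 3 zz - I.I -> (-x, -y, +2z)
    B[:-1] += Jk[:, None] * coup * I[1:]
    B[1:] += Jk[:, None] * coup * I[:-1]
    return B

def rhs(I, Jk, phi):
    return np.cross(I, mean_field(I, Jk, phi))      # Eq. (S 90): dI_k/dt = I_k x grad H

def rk4_step(I, dt, Jk, phi):
    k1 = rhs(I, Jk, phi)
    k2 = rhs(I + 0.5 * dt * k1, Jk, phi)
    k3 = rhs(I + 0.5 * dt * k2, Jk, phi)
    k4 = rhs(I + dt * k3, Jk, phi)
    return I + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
L, dt = 50, 0.01                                    # illustrative size and step
Jk = 1.0 + rng.uniform(-0.1, 0.1, L - 1)
phi = 0.3 * np.ones(L)
I = rng.normal(size=(L, 3))
I /= np.linalg.norm(I, axis=1, keepdims=True)       # classical unit spins
for _ in range(1000):
    I = rk4_step(I, dt, Jk, phi)
print(np.linalg.norm(I, axis=1).max())              # spin length stays ~1 (small RK4 drift)
```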
_Initial state._ The initial state in the experiment is assumed to be well described by the density matrix \(\rho_{0}\approx(1+\sum_{k}\mu_{k}I_{k}^{x})/Z\), where \(\mu_{k}\) describes the local profile of the initial state and \(\mu_{k}\ll 1\) is sufficiently small such that \(\rho_{0}\) is a positive definite density matrix. To have a well defined density matrix for all values of \(\mu_{k}\), we may write \(\rho_{0}=\exp(\sum_{k}\mu_{k}I_{k}^{x})/Z\approx(1+\sum_{k}\mu_{k}I_{k}^{x})/Z\). Since this density matrix does not have any correlations between spins, i.e., \(\rho_{0}=\bigotimes_{k=1}^{L}\left[\exp(\mu_{k}I_{k}^{x})/Z_{k}\right]\), it can also be used as an initial state for the classical simulation. In particular, we use the heat-bath Monte Carlo algorithm [106] to sample individual classical spin configurations and average over a large number of configurations to obtain a thermal average. The algorithm works as follows. We first draw, for each individual spin \(k=1,\,\ldots,\,L\), the \(I_{k}^{x}\) component as \(I_{k}^{x}=\log(1+u[e^{2\mu_{k}}-1])/\mu_{k}-1\), where \(u\) is a random variable uniformly distributed on the unit interval. Then, the \(I_{k}^{y,z}\) components are drawn uniformly from a circle of radius \([1-(I_{k}^{x})^{2}]^{1/2}\). Finally, we compute the relevant (time-dependent) expectation values for each state (trajectory) and eventually average over many initial spin configurations. Averaging over many spin configurations, the \(I_{k}^{y,z}\) components average to zero, while the \(I_{k}^{x}\) component is biased towards positive values, leading to an \(\hat{\mathbf{x}}\)-polarized ensemble.
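The heat-bath sampling step just described is compact enough to sketch directly; note the \(-1\) offset maps the inverted CDF of \(p(x)\propto e^{\mu x}\) onto \(I_{k}^{x}\in[-1,1]\) for unit-length classical spins.

```python
import numpy as np

def sample_spin(mu, rng):
    """Draw a classical unit spin from the weight exp(mu * I^x)."""
    u = rng.random()
    if abs(mu) < 1e-12:
        Ix = 2 * u - 1                               # unbiased (infinite-temperature) limit
    else:
        Ix = np.log(1 + u * (np.exp(2 * mu) - 1)) / mu - 1
    r = np.sqrt(max(0.0, 1 - Ix**2))
    ang = 2 * np.pi * rng.random()
    return np.array([Ix, r * np.cos(ang), r * np.sin(ang)])

rng = np.random.default_rng(2)
mu = 0.8
spins = np.array([sample_spin(mu, rng) for _ in range(100000)])
print(spins.mean(axis=0))   # <I^x> -> coth(mu) - 1/mu ~ 0.256; transverse parts -> 0
```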
In the experiment the initial state is generated via hyperpolarization, i.e., the NV centers are polarized via a resonant drive and, due to dipole-dipole interactions, this polarization diffuses through the surrounding system of \({}^{13}\)C atoms. Therefore, the initial state likely possesses a spatial profile with parts of the system closer to the NV being polarized more strongly. We mimic this in the classical simulation by choosing a domain-wall-like state
\[\mu_{n}=\begin{cases}\mu&r\leq r_{\text{pol}}\,,\\ 0&r>r_{\text{pol}}\,,\end{cases}\] (S 91)
where spins within a given radius \(r_{\text{pol}}\) are polarized and spins beyond this radius are not polarized, see also initial state in Fig. S28A(iii).
_System geometry._ The systems of interest are the nuclear spins of \({}^{13}\)C atoms, which are randomly distributed on a diamond lattice with a lattice spacing \(a=0.357\,\text{nm}\) and a density of roughly \(n=0.5\,\%\) of lattice sites being occupied by \({}^{13}\)C atoms.
However, the \({}^{13}\)C spins within a radius of \(r<r_{\text{fc}}=1.7\,\text{nm}\), the so-called frozen core, are far-detuned from \({}^{13}\)C spins at larger radii and cannot be detected by the measurement procedure implemented in the experiment. Therefore, we consider only spins that have a minimal distance \(r_{\text{min}}>r_{\text{fc}}\) from the electron. Since we cannot simulate all \(\sim 10^{4}\) spins surrounding a single NV center, we further restrict ourselves to a region close to the chosen crossing radius, which we choose to be \(r_{\text{c}}(\theta)\approx 6.5\,\text{nm}\cdot\sqrt[3]{|3\cos^{2}(\theta)-1|}\), and we set \(r_{\text{min}}=3\,\text{nm}\).
Drawing the sites occupied by \({}^{13}\)C atoms on the diamond lattice randomly may lead to two or more \({}^{13}\)C atoms being in close proximity of each other, leading to interactions up to two orders of magnitude stronger than the median coupling. Such configurations can also occur in the experiment, and this strong coupling will dominate the short-time dynamics. However, the long-time dynamics are hardly impacted by such rare configurations. Since we are only interested in the long-time dynamics of the system, and simulating systems which possess a separation of time scales demands expensive high-precision simulations, we neglect lattice configurations where \({}^{13}\)C atoms are closer than some minimal distance \(d_{\text{min}}\) apart. We choose \(d_{\text{min}}=2\,a\), leading to interaction energy scales \(E_{\text{interaction}}\) comparable to the median coupling \(J\), i.e., \(E_{\text{interaction}}\leq 5\,J\). This entire procedure corresponds to an effective coarse-graining of time, neglecting short-time dynamics \(tJ\ll 1\).
In summary, we draw \(L\) spin positions on a diamond lattice of size \(L/n\) with inter-spin distance \(r_{ij}\geq 2a\) and distance from the NV center \(r\geq r_{\text{min}}=3\,\text{nm}>r_{\text{fc}}\). This ensures that we avoid the aforementioned large energy scales and populate a mean fraction \(n\) of lattice sites with \({}^{13}\)C spins. In the experiment the observed data correspond to an average over the environment of many NV centers. Therefore, we also average our results over many lattice configurations.
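A sketch of this placement procedure (random occupation at density \(n\), frozen-core exclusion, minimal pair distance); the box size and target spin number are illustrative, and for brevity the angular dependence of the crossing radius is ignored.

```python
import numpy as np

a, dens, r_min, d_min, L_target = 0.357, 0.005, 3.0, 2 * 0.357, 100   # nm, illustrative

# diamond lattice = fcc lattice + two-point basis
fcc = np.array([[0, 0, 0], [0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
basis = np.array([[0, 0, 0], [.25, .25, .25]])
cells = range(-10, 10)
sites = np.array([(np.array([i, j, k]) + f + b) * a
                  for i in cells for j in cells for k in cells
                  for f in fcc for b in basis])

rng = np.random.default_rng(3)
occ = sites[rng.random(len(sites)) < dens]           # occupy ~0.5% of lattice sites
occ = occ[np.linalg.norm(occ, axis=1) >= r_min]      # frozen-core exclusion around the NV

keep = []
for s in occ:                                        # greedy minimal-distance filter
    if all(np.linalg.norm(s - t) >= d_min for t in keep):
        keep.append(s)
    if len(keep) == L_target:
        break
print(len(keep), "spins placed")
```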
### Spin inversion in classical simulations
In the following, we study the spin locking regime near \(\theta\approx\pi\) using the classical simulation introduced above. As already mentioned, this enables us to simulate the full three-dimensional long-range interacting system for a large, yet limited number of spins. The results are shown in Fig. S28.
_Local polarization profile._ The initial state for the time evolution is a homogeneously polarized state with a polarization of \(I_{n}^{x}\approx 0.4\) per site, see Fig. S28(A)(i). As time progresses, the state develops a spatially inhomogeneous local polarization distribution, Fig. S28(A)(ii). Eventually, at late times, Fig. S28(A)(iii), the polarization is proportional to the applied effective on-site potential, Fig. S28(B), with the proportionality constant given by the inverse temperature \(\beta\); this is expected from the ETH analysis in Sec. S7.3. Therefore, the classical simulation also affirms the conjecture that the experimental observations are caused by the formation of a robust spatially inhomogeneous local polarization profile. In this case the polarization gradient around each NV encloses \(\sim\!1000\) spins and extends over a macroscopic distance of \(>10\,\text{nm}\), with \(\approx 16\,\%\) negatively polarized and \(\approx 84\,\%\) positively polarized spins. We found that the precise profile is strongly influenced by the choice of parameters, in particular the pulse duration \(t_{\text{p}}\), Rabi frequency \(\Omega\), and polarization of the electron \(P\).
_Integrated polarization._ As expected (see also Sec. S7.3), the signal \(S=\sum_{n=1}^{L}I_{n}^{x}\) is not conserved by the dynamics but decays slowly in time, see Fig. S28(C). In agreement with other simulation methods, the net polarization eventually inverts at late times as the energy spreads outwards and more parts of the outer region become negatively polarized. Notice that the negative signal is much weaker in magnitude, relative to the initial positive signal, than observed in other simulations or in the experiment. One possible explanation is the lack of dissipation: at all times the strong positive contribution at \(r<r_{\rm c}(\theta)\) does not decay and thus decreases the magnitude of the negative signal.
In summary, the classical simulation confirms in a three-dimensional long-range model the analytical and numerical results obtained for one-dimensional toy models in Secs. S8 and S7. In particular, the macroscopic, stable, spatially inhomogeneous profile formed at late times reflects the profile of the effective on-site potential induced on the nuclear spins by the interaction with the NV.
|
2302.12158 | Constraints on the amplitude of gravitational wave echoes from black
hole ring-down using minimal assumptions | Gravitational wave echoes may appear following a compact binary coalescence
if the remnant is an "exotic compact object" (ECO). ECOs are proposed
alternatives to the black holes of Einstein's general relativity theory and are
predicted to possess reflective boundaries. This work reports a search for
gravitational wave transients (GWTs) of generic morphology occurring shortly
after (<1s) binary black hole (BBH) mergers, therefore targeting all
gravitational wave echo models. We investigated the times after the ringdown
for the higher signal-to-noise ratio BBHs within the public catalog GWTC-3 by
the LIGO-Virgo-KAGRA collaborations (LVK). Our search is based on the
coherentWaveBurst pipeline, widely used in generic searches for GWTs by the
LVK, and deploys new methods to enhance its detection performances at low
signal-to-noise ratios. We employ Monte Carlo simulations for estimating the
detection efficiency of the search and determining the statistical significance
of candidates. We find no evidence of previously undetected GWTs and our
loudest candidates are morphologically consistent with known instrumental noise
disturbances. Finally, we set upper limits on the amplitude of GW echoes for
single BBH mergers. | Andrea Miani, Claudia Lazzaro, Giovanni Andrea Prodi, Shubhanshu Tiwari, Marco Drago, Edoardo Milotti, Gabriele Vedovato | 2023-02-23T16:43:25Z | http://arxiv.org/abs/2302.12158v2 | Constraints on the amplitude of gravitational wave echoes from black hole ring-down using minimal assumptions
###### Abstract
Gravitational wave echoes may appear following a compact binary coalescence if the remnant is an "exotic compact object" (ECO). ECOs are proposed alternatives to the black holes of Einstein's general relativity theory and are predicted to possess reflective boundaries. This work reports a search for gravitational wave transients (GWTs) of generic morphology occurring shortly after (\(\lesssim 1\,\mathrm{s}\)) binary black hole (BBH) mergers, therefore targeting all gravitational wave echo models. We investigated the times after the ringdown for the higher signal-to-noise ratio BBHs within the public catalog GWTC-3 by the LIGO-Virgo-KAGRA collaborations (LVK). Our search is based on the coherentWaveBurst pipeline, widely used in generic searches for GWTs by the LVK, and deploys new methods to enhance its detection performances at low signal-to-noise ratios. We employ Monte Carlo simulations for estimating the detection efficiency of the search and determining the statistical significance of candidates. We find no evidence of previously undetected GWTs and our loudest candidates are morphologically consistent with known instrumental noise disturbances. Finally, we set upper limits on the amplitude of GW echoes for single BBH mergers.
+
Footnote †: Correspondence email address: [email protected]
## I Introduction
At the time of writing, the LIGO [1] and Virgo [2] observatories have successfully detected about 90 gravitational wave transients (GWTs) [3; 4; 5], all associated to compact binary coalescences (CBCs). More than 90% of these GWTs are identified as generated by the merger of binary black hole (BBH) systems. Recently, this worldwide network of observatories has expanded to include the KAGRA detector [6], and it is preparing for its upcoming fourth observing run (O4). Investigating the nature of black holes (BHs) through GW astronomy is therefore a very hot topic in fundamental physics, especially in view of the so-called BH information paradox [7]. The LIGO-Virgo-KAGRA collaboration (LVK) has already published several results of tests of the general relativity theory (GR) [8; 9; 10; 11; 12], exploiting the GWTs emitted by BBHs.
Several recent papers [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] addressed the topic of _exotic compact objects_ (ECOs) [25]: possible compact objects (COs) alternative to the BHs predicted by Albert Einstein's GR theory. Examples of ECOs include wormholes [26], boson stars [27], gravastars [28], and fuzzballs [29]. These ECO models are characterized by different astrophysical properties, such as their constituent "matter", but they all share one physical characteristic: Planck-scale modifications of the BH event horizon due to quantum effects [17], or the presence of a surface of different nature [16; 30]. This feature would enable the emission of repeated GWTs occurring shortly after the BBH merger time, _echoes_ of the ECO remnant ringdown [13; 14; 31].
In this work, we report a systematic search for echo signals of generic morphology occurring after the merger-ringdown phase of BBH GWTs [32]. The detection performance of the method is demonstrated down to low signal-to-noise ratios (SNRs), and the results are practically independent of the echo signal morphology. We also provide upper limits on the strain amplitude at Earth of echoes for the loudest BBH GWTs in the LVK catalogs [3; 4; 5]. The search is based on the coherent WaveBurst pipeline (cWB) [33; 34; 35; 36; 37], widely used to search for generic short duration GWTs by the LVK [38; 39; 40].
Section II provides a brief review of GW echo models and discusses the main characteristics of the predicted echo signals. In section III we summarize the data analysis method, focusing on its novel features: in particular, the search for weak post-merger-ringdown GWTs, the simulations with software signal injections, and the construction of the confidence belt on the echoes' \(h_{\mathrm{rss}}\)[41] strain amplitude. Section IV reports the search results, including detection performances, checks of robustness to different echo morphologies, and upper limits on the echoes' \(h_{\mathrm{rss}}\). A comparative discussion with respect to two previously published echo searches [20; 23] is reported in section V. Conclusions are drawn in section VI.
## II Gravitational wave echoes
An ECO remnant would be characterized by an _inner barrier_ located at distance \(r_{\rm in}\) from the object's center [15; 16; 42], such that

\[r_{\rm in}=r_{\rm eh}+l\,. \tag{1}\]
Here, in eq.(1), \(r_{\rm eh}\) is the radius of the event horizon, while \(l\) is the _length correction_ to the would-be BH event horizon. This length correction is theorised to be extremely small [14; 16; 17], of the order of the Planck length (\(l_{\rm Planck}\sim 2\cdot 10^{-35}\)m). The inner barrier, together with the _outer barrier_, i.e., the effective potential barrier of the ECO, forms a cavity that traps the inbound GW radiation emitted by the CBC merger and ringdown. At each round-trip, a fraction of the trapped GW transient is radiated away, generating a train of GW pulses, called _echoes_[43].
Here we assume for simplicity a non-rotating ECO remnant, as done by many echo searches in the literature [17; 18; 19; 20; 21; 22; 23], since it is good enough for our method. A complete description of a possible echo template for a spinning ECO remnant, as expected from a compact binary coalescence, is provided in [14].
The main parameters characterizing the models of echoes are [20; 21] (see also figure 1):
* \(\Delta t_{\rm echo}\) : the time separation between subsequent pulses as measured by a distant observer. It corresponds to the round-trip travel time of the GW transient between the inner and outer barriers [16];
* \(t_{\rm echo}\) : the delay of the first echo pulse from the coalescence time of the binary. In general \(t_{\rm echo}\sim\Delta t_{\rm echo}\) apart from small effects related to the strong non-linearity close to the merger time;
* \(\gamma\) : the attenuation per round-trip, in terms of the GW amplitude ratio between subsequent echo pulses (\(0<\gamma<1\));
* \(A\) : the ratio between the first echo amplitude and the amplitude at merger time (\(0\leq A<1\)).
Following [15; 16], the theoretical prediction for \(\Delta t_{\rm echo}\) is clearly related to the space-time geometry outside the ECO remnant:
\[\Delta t_{\rm echo}\sim 2\int_{r_{\rm in}}^{r_{\rm out}}\frac{1}{\sqrt{F(r)B(r) }}dr \tag{2}\]
where, in eq. (2), \(F(r)\) and \(1/B(r)\)1 are the coefficient functions for the time and radial components of the metric in a spherically symmetric system, \(r_{\rm in}\) is the radius of the inner barrier and \(r_{\rm out}\) that of the outer barrier.
Footnote 1: The space-time geometry outside an ECO remnant can be described with the metric \(ds^{2}=-F(r)dt^{2}+(1/B(r))dr^{2}+r^{2}d\Omega^{2}\). Such a metric is used to generally describe a CO with spherical symmetry and matter localised only in the region \(r<r_{shell}\). Following Birkhoff’s theorem, in the region \(r>r_{shell}\) the Schwarzschild metric holds: \(F(r)=B(r)=\left(1-\frac{2GM}{c^{2}r}\right)\).
Eq. 2 takes into account the effects of the redshift and the spatial curvature on the GW echo scattering process. The resulting approximate expression for the time separation is [15; 16]:
\[\Delta t_{\rm echo}\approx 54\left(\frac{n}{4}\right)M_{30}\left[1-0.001\ln \left(\frac{l/l_{\rm Planck}}{M_{30}}\right)\right]\,{\rm ms}\,. \tag{3}\]
Here, \(n\) is a parameter of the order of unity that accounts for the structure of the ECO [16; 19], and \(M_{30}\equiv M/30M_{\odot}\), with \(M\) the final mass of the remnant. Therefore, a measurement of \(\Delta t_{\rm echo}\) would provide information on the theorized nature of the ECO through the parameters \(n\) and \(l\), related to the compactness of the ECO [13]. According to eq.(3), typical values for the echoes' time separation are \(\Delta t_{\rm echo}\in(30,400)\) ms for BBH mergers whose total mass lies in \((10,100)\,M_{\odot}\), like most of those detected during O1, O2 and O3 by the LV Collaborations [3; 4; 5].
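As a rough numerical check of eq. (3), the sketch below evaluates \(\Delta t_{\rm echo}\) for a given remnant mass (the function name and defaults are illustrative assumptions, not part of any published analysis code):

```python
import math

L_PLANCK = 1.616e-35  # Planck length [m]

def dt_echo_ms(mass_msun, n=4, l=L_PLANCK):
    """Approximate echo time separation of eq. (3), in milliseconds."""
    m30 = mass_msun / 30.0
    return 54.0 * (n / 4.0) * m30 * (1.0 - 0.001 * math.log((l / L_PLANCK) / m30))

# A ~60 solar-mass remnant gives dt_echo_ms(60) ~ 108 ms, inside the
# (30, 400) ms range quoted above for the detected BBH population.
```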
### Signal proxy for echoes
The generic template we use to mimic echo signals \(h_{\rm echo}(t)\) is a double sine-Gaussian (SGE) pulse \(h_{\rm echo}(t)=h_{\rm SGE}(t)+\gamma\cdot h_{\rm SGE}(t+\Delta t_{\rm echo})\) with \(h_{\rm SGE}(t)\)[44]:
\[h_{\rm SGE}(t)=h_{0}\,e^{-\frac{(t-t_{0})^{2}}{\tau^{2}}}\cos(2\pi f_{0}t+\phi _{0})\,. \tag{4}\]
Figure 1: Simulated inspiral-merger-ringdown-echoes (IMRE) GW transient signal vs. time at one detector: the red line is the whitened reconstructed signal strain \(h(t)\); the grey line is the whitened data. The CBC signal peaks at the merger time (60.42 s) and is followed by echoes, whose main parameters are visualized.
In eq.(4), \(h_{0}\) is the signal amplitude, \(t_{0}\) the central time of the SGE, \(\tau\) the half-time duration of the pulse, \(f_{0}\) and \(\phi_{0}\) its central frequency and phase respectively. The values we select for these parameters are:
* \(h_{0}\) is defined as \(h_{0}=A\cdot h_{\rm max}\), where \(h_{\rm max}\) is the GW amplitude at the merger. In our simulations, \(A\) is randomly selected for each injection from a uniform distribution \(0<A<1\) (see III.3).
* \(\gamma=0.5\), so that the second echo contributes \(1/3\) of the injected SNR. This is an intermediate condition on the concentration of the signal in time and makes it possible to study the reconstruction of a weaker echo, separately from the first.
* \(\tau=20\,\rm ms\) and \(f_{0}=140\,\rm Hz\), are close to expectations for the typical mass range of BBH mergers in GWTC-3.
* \(\phi_{0}=0\). This does not impact the results, since the search method is agnostic to the signal phase in each pulse.
* \(t_{\rm echo}=300\,\rm ms\) and \(\Delta t_{\rm echo}=300\,\rm ms\) are intermediate values for the investigated BBH mergers (see section IV.1) according to eq. 3.
Furthermore, the sky location of the echo signal proxy is the same as the BBH GWT.
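For concreteness, a minimal sketch of the double sine-Gaussian proxy of eq. (4) with the parameter choices listed above; the function names and the sampling rate are our own illustrative assumptions:

```python
import numpy as np

def sine_gaussian(t, h0, t0, tau, f0, phi0=0.0):
    """Single sine-Gaussian pulse, eq. (4)."""
    return h0 * np.exp(-((t - t0) ** 2) / tau**2) * np.cos(2 * np.pi * f0 * t + phi0)

def echo_proxy(t, h_max, A, t_echo=0.3, dt_echo=0.3, gamma=0.5, tau=0.02, f0=140.0):
    """Double sine-Gaussian echo proxy: a first pulse of amplitude A * h_max
    centered t_echo after the merger, plus a second pulse attenuated by gamma."""
    h0 = A * h_max
    return (sine_gaussian(t, h0, t_echo, tau, f0)
            + sine_gaussian(t, gamma * h0, t_echo + dt_echo, tau, f0))

# One second of post-merger strain sampled at 4096 Hz,
# with amplitudes expressed in units of the merger peak.
t = np.arange(0.0, 1.0, 1.0 / 4096.0)
h = echo_proxy(t, h_max=1.0, A=0.2)
```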
## III Search methods
This section describes the methods developed to search for generic GWTs after BBH mergers, such as echo signals. The analysis is based on cWB methods and comprises Monte Carlo simulations to tune the search and interpret the results in terms of gravitational wave echoes. We call this new analysis _cWB echo signal (ES) search_.
### Coherent WaveBurst
Coherent WaveBurst [36; 37] is a data analysis pipeline searching for generic GWT signals in the data from the LVK GW detectors network [45; 46; 47]. Designed to operate without a specific waveform model, cWB first identifies coincident excess power in the multi-resolution time-frequency (TF) representations of the detectors' strain data [35]. Then, for the selected events, cWB reconstructs the source sky location and the signal waveform of each GW candidate by means of a constrained maximum likelihood method [34].
To be robust against non-stationary detector noise, cWB employs signal-independent vetoes, reducing the initial high rate of the excess power triggers. The primary selection cut is on the network correlation coefficient \(c_{\rm c}\)[33], defined as:
\[c_{\rm c}=\frac{E_{\rm c}}{E_{\rm c}+E_{\rm null}}\,, \tag{5}\]
which is informative on the coherence of a signal among the detectors of the network. Here, \(E_{\rm c}\) and \(E_{\rm null}\)[33; 34; 48] are the coherent and the null energy of the signal. The algorithm also combines all the data streams into one coherent statistic \(\eta_{\rm c}\)[33], which is used for ranking the detected events and is defined as:
\[\eta_{\rm c}=\sqrt{\frac{c_{\rm c}\cdot E_{\rm c}}{N-1}}\,, \tag{6}\]
with \(N\) the number of detectors in the network.
Typically, for a GW signal \(c_{\rm c}\sim 1\) while for instrumental glitches \(c_{\rm c}\ll 1\). By setting a threshold value on \(c_{\rm c}\), it is possible to reconstruct events with a lower or higher probability of being genuine GW signals.
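In code form, the two network statistics of eqs. (5) and (6) are straightforward; this is a sketch with our own function names, not the actual cWB implementation:

```python
import math

def network_correlation(E_c, E_null):
    """Network correlation coefficient c_c, eq. (5)."""
    return E_c / (E_c + E_null)

def coherent_statistic(E_c, E_null, n_detectors):
    """Ranking statistic eta_c, eq. (6)."""
    c_c = network_correlation(E_c, E_null)
    return math.sqrt(c_c * E_c / (n_detectors - 1))

# A coherent two-detector event dominated by correlated energy:
print(network_correlation(E_c=100.0, E_null=5.0))                 # ~0.95
print(coherent_statistic(E_c=100.0, E_null=5.0, n_detectors=2))   # ~9.8
```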
In the LVK analyses, different cWB searches are used depending on the target GWT. In a previous work, cWB was used to investigate post-merger GW emission in a configuration more sensitive to the chirping morphology of CBC signals [49]. Currently, the most general cWB search is the all-sky burst search [38; 39; 40], with a proven ability to detect the broadest variety of GW signal morphologies. Our search method is based on this cWB instance, the same version used in the LVK O3 analysis [36; 40]; it is thus more agnostic than [49], and its sensitivity for post-ringdown signals is improved with respect to all the other cWB searches.
### Searching for echoes
Due to the expected nature of echoes, the cWB all-sky burst search is modified to select more pixels with a low energy content, scattered over a wider than usual time span (see appendix A). Triggering and final thresholds are decreased and, to group different pulses (i.e. the BBH merger and the echo-like signals) into a single event, we increase the maximum time separation between disjoint clusters of pixels which define a single event. Specifically, the \(\eta_{\rm c}\) threshold is decreased from 5.0 to 3.5, and the \(T_{\rm gap}\) parameter [37] is increased up to 2 s. Also, the whitening [50] of the data is performed using a different TF map resolution, to decrease the leakage of the ring-down signal of the remnant into the subsequent TF pixels. Indeed, while the cWB all-sky burst search performs the whitening in the TF map with the best frequency resolution, typically \(\Delta f=1\,\rm Hz\) and \(\Delta t=0.5\,\rm s\), here we adopt a better time resolution, using pixels with a time width of \(\Delta t=0.125\,\rm s\) and \(\Delta f=4\,\rm Hz\).
Figure 2: This plot shows the segmentation of the analyzed time following up an event (red line). The pale blue opaque area is representative of the blind time \(\Delta t_{\rm blind}\), and the light blue transparent area after it highlights the post-merger window (PMW).
The search uses the BBH GWT as trigger and focuses on a user-defined post-merger time interval, called _post-merger window_ (PMW), see figure 2. The PMW starts at time \(t_{\rm start}^{\rm PMW}\), defined as
\[t_{\rm start}^{\rm PMW}=t_{\rm coa}+\Delta t_{\rm blind}\,, \tag{7}\]
where \(t_{\rm coa}\) is the coalescence time of the BBH system and \(\Delta t_{\rm blind}\) a user-defined blind time. The blind time's purpose is to mask the ring-down of the BBH signal; its choice will be discussed later, in III.4. Limiting the ES search to a PMW reduces the noise contribution in estimating the energy content of the post-merger phase of the BBH system, without penalising the capability to detect possible echo signals. We set the PMW time duration, \(\Delta t^{\rm PMW}\), equal to 1 s. This time width is enough to include the first \(\sim\)2-4 echo pulses when the ECO remnant mass is compatible with the average remnant mass (\(\sim(50,70)\,M_{\odot}\)) of the BBH systems detected by the LVK collaboration [51], according to eq.(3).
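The window definition of eq. (7) reduces to a one-line helper (names and the default blind time are illustrative; any \(\Delta t_{\rm blind}\) in the tested range discussed in III.4 would do):

```python
def post_merger_window(t_coa, dt_blind=0.2, dt_pmw=1.0):
    """Return (start, end) of the post-merger window, eq. (7), in seconds."""
    t_start = t_coa + dt_blind
    return t_start, t_start + dt_pmw
```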
Within the PMW, the main statistical parameters we compute are the network correlation coefficient, \(c_{\rm c}^{\rm PMW}\), analogous to \(c_{\rm c}\) (see eq.(5)), and the network signal-to-noise ratio of the data, \(\rm SNR^{\rm PMW}\), defined as

\[{\rm SNR^{PMW}}=\sqrt{\sum_{k=1}^{N}\sum_{j\in J}(x^{k}_{\rm rec}[j])^{2}}\,, \tag{8}\]

where \(J\) is the set of the data pixels corresponding to the times inside \(\Delta t^{\rm PMW}\), and \(x^{k}_{\rm rec}[j]\) is the whitened reconstructed data in detector \(k\).
While cWB can work with arbitrary detector networks, the ES search deployed here is run only over the network of the two LIGO detectors (L, Livingston [1], and H, Hanford [1]), which picks up most of the GWTs' SNR.
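A compact sketch of eq. (8), assuming the whitened reconstructed data are available as one array per detector on a common time grid (the array layout and names are our assumptions):

```python
import numpy as np

def snr_pmw(x_rec, times, t_start, t_end):
    """Network SNR inside the post-merger window, eq. (8).

    x_rec : (n_detectors, n_samples) whitened reconstructed data
    times : (n_samples,) sample times shared by all detectors
    """
    in_window = (times >= t_start) & (times < t_end)
    return float(np.sqrt(np.sum(np.asarray(x_rec)[:, in_window] ** 2)))
```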
### Monte Carlo estimators
The ES search follows a two-track scheme: the background (BGK) analysis, and the signal (SIG) analysis. Both analyses are off-source experiments, meaning that the data do not include the times corresponding to the detected GW signals. The ES search is separately performed for each BBH GWT considered.
The **background (BGK) analysis** is used to estimate the noise statistics for the null hypothesis in the PMW. We create a set of off-source software signal injections over the data stream using waveform templates of the specific BBH event under study. These templates are randomly selected from the CBC waveform posterior samples [51], provided by the Parameter Estimation (PE) methods for the considered GW event. The signals are injected widely separated, i.e. one every 600 s, to avoid systematic effects in the analysis. This simulation is representative of the null hypothesis since, by construction, we inject only a BBH coalescence and no post-ringdown signals are present.
The **signal (SIG) analysis** enables the measurement of the sensitivity of the ES search to signals within the PMW. The injected BBH GWTs are the same as the BGK analysis with, in addition, the injection of _secondary signals_ after each BBH merger according to the echo model of section II.1. Different morphologies of secondary signals have been tested as well.
This double simulation scheme is depicted in figure 3. The data used for all studies are real data available at the GW open science centre of the LVK collaboration, see [51].
These two analyses allow us to study the detection probability, DP, and the false alarm probability, FAP, as functions of the reconstructed \(\rm SNR^{\rm PMW}_{rec}\). Their definition is the following:
\[\rm DP=\frac{EV_{SIG}(SNR^{\rm PMW}_{rec}\geq th_{snr})}{EV} \tag{9}\] \[\rm FAP=\frac{EV_{BGK}(SNR^{\rm PMW}_{rec}\geq th_{snr})}{EV}\,,\]
where \(\rm EV_{SIG}\) and \(\rm EV_{BGK}\) are the numbers of detected events above threshold in the PMW from the SIG and BGK distributions, EV is the total number of injected signals, and \(th_{snr}\) is the threshold value on \(\rm SNR^{PMW}\).
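Both probabilities in eq. (9) reduce to empirical tail fractions of the simulated \(\rm SNR^{PMW}_{rec}\) distributions; a sketch:

```python
import numpy as np

def dp_and_fap(snr_sig, snr_bkg, th_snr):
    """Empirical detection and false alarm probabilities, eq. (9)."""
    dp = float(np.mean(np.asarray(snr_sig) >= th_snr))
    fap = float(np.mean(np.asarray(snr_bkg) >= th_snr))
    return dp, fap

# Sweeping th_snr over a grid yields the (FAP, DP) pairs that trace the
# ROC curves used for the tuning described in section III.4.
```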
Figure 3: Flowchart of the echo signal (ES) search. Once the cWB all-sky burst search detects a BBH event, the cWB ES search can be run as a follow-up. The search runs two parallel studies on a common data selection: the background (BGK) and the signal (SIG), and computes all the statistical estimators described in section III.2. The BBH primary signal injections are randomly picked from the PE samples distribution for that event.
### Tuning of the analysis
The purpose of this tuning is the optimisation of the ES search performance before investigating the on-source GW events results. The tuning of the ES search is done by analysing the simulations related to the GW150914 event [52; 53; 54].
Thanks to the DP and FAP measurements it is possible to build the receiver operating characteristic (ROC) curves to study the ES search performances while tuning its parameters: the cWB production thresholds as well as the unique features of this search, \(\Delta t_{\rm blind}\), \(\Delta t^{PMW}\), and \(c_{\rm c}^{\rm PMW}\). The chosen configuration of the analysis is the one that maximises the DP for low values of FAP, in the interval \(0.5\%\leq FAP\leq 5\%\). This region corresponds to the events which possess low to medium \(\rm SNR_{\rm rec}^{PMW}\), typically \(5\leq\rm SNR_{\rm rec}^{PMW}\leq 8\).
The list of tested parameters and their final values are reported in appendix A. More specifically, BKG simulations show that the statistical properties of the noise background are independent of the choice of \(\Delta t_{\rm blind}\) within the range \((0.04,0.4)\,\rm s\); therefore, any \(\Delta t_{\rm blind}\) in this range can be freely selected for the cWB ES search. Below this range, the noise level starts to increase as \(\Delta t_{\rm blind}\) gets shorter, due to residual leakage from the primary BBH GWT signal into the PMW. The duration of the PMW window, \(\Delta t^{PMW}\), affects, as expected, the mean \(\rm SNR^{PMW}\) from the BKG analysis: the longer \(\Delta t^{PMW}\), the larger the noise in the PMW. We selected \(\Delta t^{PMW}=1\,\rm s\) in order to collect more echo pulses in the PMW.
### Inference of confidence intervals
The cWB ES search can set confidence intervals on the \(h_{\rm rss}\)[41]
\[h_{\rm rss}=\sqrt{\int_{t\in\rm PMW}(\mid h_{+}(t)\mid^{2}+\mid h_{\times}(t) \mid^{2})dt} \tag{10}\]
of signals consistent with the on-source data in the PMW. Studying the bivariate distribution of the \(h_{\rm rss}\) of the injected post-merger signals and of the recovered \(\rm SNR_{\rm rec}^{PMW}\) from the SIG and BKG simulations (figure 4), one can build the confidence belt by measuring the distribution of \(\rm SNR_{\rm rec}^{PMW}\) as a function of the injected \(h_{\rm rss}\), \(h_{\rm rss}^{\rm inj}\)[55]. This is approximately achieved by introducing a binning in \(h_{\rm rss}^{\rm inj}\) that preserves a minimum of a few hundred samples per bin from the SIG analysis, allowing us to target a confidence belt coverage of 95%. For the special case \(h_{\rm rss}^{\rm inj}=0\), the background, we exploit the full statistics of the BKG simulation. This belt is then used to set the 95% confidence interval on \(h_{\rm rss}\) as a function of the \(\rm SNR^{PMW}\) value measured on-source, \(\rm SNR_{\rm ON}^{PMW}\).
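The belt construction can be sketched as follows; this is a simplified illustration with our own names and binning, while the actual analysis follows [55]:

```python
import numpy as np

def confidence_belt(h_inj, snr_rec, snr_bkg, bin_edges, coverage=0.95):
    """For each h_rss^inj bin, the central `coverage` interval of SNR^PMW_rec;
    the h_inj = 0 (background) column uses the full BKG statistics."""
    h_inj, snr_rec = np.asarray(h_inj), np.asarray(snr_rec)
    q = (1.0 - coverage) / 2.0
    belt = {0.0: np.quantile(snr_bkg, [q, 1.0 - q])}
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (h_inj >= lo) & (h_inj < hi)
        if sel.sum() >= 100:  # keep a minimum number of SIG samples per bin
            belt[0.5 * (lo + hi)] = np.quantile(snr_rec[sel], [q, 1.0 - q])
    return belt

def h_rss_interval(belt, snr_on):
    """h_rss values whose SNR acceptance interval contains the on-source value;
    the maximum acts as the 95% upper limit h_rss^UL."""
    ok = [h for h, (lo, hi) in belt.items() if lo <= snr_on <= hi]
    return (min(ok), max(ok)) if ok else None
```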
## IV Results
Using the search tuning described in III.4, we investigated a subset of 33 BBH events from the BBH detections of the LVK collaboration [3; 4; 5]. The subset comprises all the BBH events that possess a network SNR greater than 10 in the cWB search for generic GWTs [38; 39; 40]2. The selection is motivated by the reasonable expectation that the signal amplitude of echoes is such that \(A\ll 1\), since no signal with an amplitude comparable to that of the merger has been observed after the ring-down phase of a BBH GW emission. The list of investigated BBH events and related main results is given in table 1.
Footnote 2: The network SNR recovered by cWB is consistent with the one recovered by template searches for these loud BBH events.
### Robustness of cWB ES search
We tested the robustness of the cWB ES search against variations of the injected signals in the SIG analyses (see section III.3) for a few BBH GWT cases. By changing the delay time \(t_{\rm echo}\) and the time separation \(\Delta t_{\rm echo}\) of the two pulses of the signal proxy defined in section II.1, the detection probability at FAP=5% remains unaffected as long as both pulses occur inside the analyzed time window, the PMW. Therefore, the off-source results reported in this work can be considered valid as long as \(\Delta t^{PMW}=1\,\rm s\) and \(\Delta t_{\rm blind}\) is included in the tested range (0.04, 0.4) s, regardless of the choice \(t_{\rm echo}=\Delta t_{\rm echo}=0.3\) s which we adopted in the SIG analyses of all BBH GWTs.
Figure 4: Confidence belt for the echo’s injected amplitude \(h_{\rm rss}\), \(h_{\rm rss}^{\rm inj}\), vs the reconstructed \(\rm SNR_{rec}^{PMW}\) in the PMW for GW150914. The blue region corresponds to 95% coverage. The on-source 95% confidence interval in terms of the \(h_{\rm rss}\) is set by the intersection between the vertical line at the on-source value \(\rm SNR_{\rm ON}^{PMW}\sim 0.37\) (red line) and the blue region. The y-axis values are in units of \(10^{-21}/\sqrt{Hz}\).
\begin{table}
\begin{tabular}{l c|c|c|c|c|c|c} \hline \hline \multicolumn{8}{c}{List of analysed BBH events} \\ \hline
Run - GW name & App. & \(t_{\rm coa}\) & \({\rm SNR}_{\rm net}\) & \(h_{\rm rss}^{50\%}\cdot\frac{10^{-23}}{\sqrt{\rm Hz}}\) & \(t_{\rm echo}\) [ms] & \({\rm SNR}_{\rm ON}^{\rm PMW}\) & p-value\({}_{\rm ON}\) \\ \hline
O1 - GW150914 & 1 & 1126259462.421 & 24.4 & \(2.79\pm 0.02\) & \(227^{+12}_{-11}\) & 0.4 & \(0.819\pm 0.006\) \\
O1 - GW151012 & 1 & 1128678900.467 & 10.0 & \(2.57\pm 0.03\) & \(127^{+39}_{-14}\) & \(\leq 0.1\) & \(0.53\pm 0.01\) \\
O1 - GW151226 & 1 & 1135136350.668 & 13.1 & \(2.70\pm 0.03\) & \(73^{+23}_{-14}\) & \(\leq 0.1\) & \(0.79\pm 0.01\) \\
O2 - GW170104 & 1 & 1167559936.619 & 13.0 & \(2.52\pm 0.01\) & \(176^{+18}_{-14}\) & \(\leq 0.1\) & \(0.888\pm 0.004\) \\
O2 - GW170608 & 1 & 1180922494.501 & 14.9 & \(2.63\pm 0.01\) & \(63^{+12}_{-2}\) & 0.5 & \(0.069\pm 0.004\) \\
O2 - GW170729 & 1 & 1185389807.346 & 10.2 & \(2.53\pm 0.01\) & \(287^{+53}_{-37}\) & 2.5 & \(0.041\pm 0.003\) \\
O2 - GW170809 & 1 & 1186302519.758 & 12.4 & \(2.40\pm 0.02\) & \(203^{+19}_{-14}\) & \(\leq 0.1\) & \(0.56\pm 0.02\) \\
O2 - GW170814 & 1 & 1186741861.533 & 15.9 & \(2.51\pm 0.02\) & \(191^{+12}_{-9}\) & 0.3 & \(0.450\pm 0.007\) \\
O2 - GW170823 & 1 & 1187529256.501 & 11.5 & \(2.50\pm 0.02\) & \(236^{+36}_{-27}\) & \(\leq 0.1\) & \(0.835\pm 0.006\) \\
O3a - GW190408\_181802 & 2 & 1238782700.279 & 14.7 & \(1.82\pm 0.01\) & \(147^{+14}_{-10}\) & 0.2 & \(0.320\pm 0.007\) \\
O3a - GW190412 & 2 & 1239082262.165 & 18.9 & \(1.82\pm 0.01\) & \(134^{+14}_{-14}\) & 0.2 & \(0.295\pm 0.007\) \\
O3a - GW190512\_180714 & 2 & 1241719652.435 & 12.3 & \(1.69\pm 0.02\) & \(123^{+14}_{-12}\) & \(\leq 0.1\) & \(0.54\pm 0.02\) \\
O3a - GW190513\_205428 & 2 & 1241816086.800 & 12.3 & \(1.83\pm 0.01\) & \(185^{+29}_{-21}\) & \(\leq 0.1\) & \(0.879\pm 0.004\) \\
O3a - GW190517\_055101 & 2 & 1242107479.848 & 10.2 & \(1.80\pm 0.02\) & \(213^{+33}_{-32}\) & \(\leq 0.1\) & \(0.52\pm 0.01\) \\
O3a - GW190519\_153544 & 2 & 1242315362.418 & 12.0 & \(1.84\pm 0.01\) & \(365^{+45}_{-50}\) & 0.4 & \(0.140\pm 0.004\) \\
O3a - GW190521 & 2 & 1242442967.471 & 14.4 & \(1.74\pm 0.01\) & \(568^{+133}_{-81}\) & 0.2 & \(0.569\pm 0.006\) \\
O3a - GW190521\_074359 & 2 & 1242459857.456 & 24.4 & \(1.73\pm 0.01\) & \(256^{+23}_{-16}\) & 4.4 & \(0.760\pm 0.006\) \\
O3a - GW190602\_175927 & 2 & 1243533585.093 & 12.1 & \(1.98\pm 0.04\) & \(402^{+64}_{-54}\) & 0.2 & \(0.468\pm 0.007\) \\
O3a - GW190701\_203306 & 2 & 1246048404.578 & 11.6 & \(1.84\pm 0.01\) & \(326^{+41}_{-32}\) & 6.4 & \(0.0015\pm 0.0006\) \\
O3a - GW190706\_222641 & 2 & 1246487219.361 & 12.3 & \(1.82\pm 0.01\) & \(358^{+66}_{-49}\) & 0.3 & \(0.149\pm 0.005\) \\
O3a - GW190814 & 2 & 1249852257.009 & 22.2 & \(1.82\pm 0.01\) & \(91^{+4}_{-3}\) & 0.13 & \(0.850\pm 0.004\) \\
O3a - GW190828\_063405 & 2 & 1251009263.781 & 16.0 & \(1.82\pm 0.01\) & \(197^{+26}_{-15}\) & 0.2 & \(0.205\pm 0.008\) \\
O3a - GW190915\_235702 & 1 & 1252627040.693 & 13.1 & \(1.88\pm 0.02\) & \(205^{+26}_{-22}\) & 0.2 & \(0.017\pm 0.004\) \\
O3a - GW190929\_012149 & 1 & 1253755327.505 & 9.9 & \(1.86\pm 0.02\) & \(367^{+122}_{-92}\) & 0.1 & \(0.147\pm 0.008\) \\
O3b - GW191109\_010717 & 2 & 1257296855.783 & 17.3 & \(1.85\pm 0.01\) & \(387^{+65}_{-54}\) & \(\leq 0.1\) & \(0.714\pm 0.008\) \\
O3b - GW191204\_171526 & 2 & 1259514944.087 & 17.5 & \(2.05\pm 0.05\) & \(68^{+6}_{-4}\) & 0.1 & \(0.31\pm 0.02\) \\
O3b - GW191215\_223052 & 2 & 1260484270.995 & 11.2 & \(1.69\pm 0.02\) & \(148^{+18}_{-15}\) & \(\leq 0.1\) & \(0.48\pm 0.02\) \\
O3b - GW191222\_033537 & 2 & 1261020955.347 & 12.5 & \(1.82\pm 0.01\) & \(272^{+55}_{-36}\) & \(\leq 0.1\) & \(0.771\pm 0.007\) \\
O3b - GW191230\_180458 & 2 & 1261764316.898 & 14.4 & \(1.77\pm 0.08\) & \(296^{+61}_{-40}\) & \(\leq 0.1\) & \(0.31\pm 0.04\) \\
O3b - GW200219\_094415 & 2 & 12661406
\end{tabular}
\end{table}
Moreover, we tested the sensitivity of the cWB ES search to widely different morphologies of post-ringdown signals, by performing additional SIG analyses. Figure 5 shows the DP at \({\rm FAP}=5\%\) as a function of the injected \({\rm SNR}^{PMW}\) for different central frequencies of the SGE echo signal proxy (see section II.1), for a single pulse made by a BBH merger waveform and for a single burst of white noise. The resulting performances are almost identical within uncertainties, which is an expected outcome due to the general nature of the cWB search (see section III.1). The slight decrease in performances when injecting white noise burst (WNB) signals in the PMW is mostly related to their wider frequency band.
### Detection probability
We discuss here the detection probability measurements for the echo signal proxy described in section II.1, with the requirement of \({\rm FAP}=5\%\). Figure 6 shows the DP as a function of the \(h_{\rm rss}\) injected inside the PMW for a subset of GWTs from the three LVK observing runs (O1, O2, O3). The visible improvement towards smaller \(h_{\rm rss}\) comes from the progressive enhancement of the detectors' sensitivities over time. Between the O1 and O2 observing runs, the typical \(h_{\rm rss}\) at 50% DP decreases from \(\sim 2.7\cdot 10^{-23}/\sqrt{\rm Hz}\) to \(\sim 2.5\cdot 10^{-23}/\sqrt{\rm Hz}\). A more significant decrease in the \(h_{\rm rss}\) at 50% DP can be seen from O2 to O3, from average values of \(\sim 2.5\cdot 10^{-23}/\sqrt{\rm Hz}\) to \(\sim 1.8\cdot 10^{-23}/\sqrt{\rm Hz}\), corresponding to an improvement of about 28%. Column 6 of table 1 reports the resulting \(h_{\rm rss}\) values which ensure 50% DP with FAP 5% for all the studied GWTs.
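The \(h_{\rm rss}^{50\%}\) values quoted in table 1 follow from interpolating the measured DP curves of figure 6; schematically (assuming a DP curve that increases monotonically with \(h_{\rm rss}^{\rm inj}\)):

```python
import numpy as np

def h_rss_at_dp(h_inj_grid, dp_curve, target=0.5):
    """Interpolate a measured DP-vs-h_rss curve at the target detection probability."""
    return float(np.interp(target, dp_curve, h_inj_grid))
```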
The coherent WaveBurst ES search explores a significantly lower range of \(h_{\rm rss}\) values with respect to the cWB all-sky search for short-duration bursts [40]. For the latter, the best results in terms of \(h_{\rm rss}\) values at DP=50% among the tested signal morphologies has been achieved in O3 for a single pulse SGE, \(Q=100\), \(f_{0}=235\) Hz, reaching \(h_{\rm rss}=8\cdot 10^{-23}/\sqrt{\rm Hz}\) at a FAR of one per 100 years. Here instead, with a more dispersed signal, the double pulse SGE, \(Q=8.8\), \(f_{0}=140\) Hz, the average \(h_{\rm rss}\) values at DP=50% in O3 reaches \(\sim 1.9\cdot 10^{-23}/\sqrt{\rm Hz}\), but at a much higher FAR of 2 per year, estimated by multiplying the FAP by the rate of the investigated BBH GWTs.
### On-source p-value
The on-source (ON) data for each BBH GWT are analyzed using the same cWB ES search configuration as in the SIG and BGK analyses (see sec. III.3). By comparing the ON results with their BGK distributions we can estimate the p-value of \({\rm SNR}^{\rm PMW}_{\rm ON}\) for each BBH GWT:
\[{\rm p-value}_{\rm ON}=\frac{{\rm EV}_{\rm BGK}({\rm SNR}^{\rm PMW}_{\rm rec} \geq{\rm SNR}^{\rm PMW}_{\rm ON})}{{\rm EV}}\,, \tag{11}\]
where \({\rm SNR}^{\rm PMW}_{\rm ON}\) is the on-source reconstructed SNR inside the PMW, EV is the total number of BKG instances, and \({\rm EV}_{\rm BGK}\) is the number of BKG instances with \({\rm SNR}^{\rm PMW}_{\rm rec}\) above the ON value. A low p-value points to \(\mathrm{SNR}_{\mathrm{ON}}^{\mathrm{PMW}}\) lying on the high-energy tail of the \(\mathrm{SNR}_{\mathrm{rec}}^{\mathrm{PMW}}\) distribution for the null hypothesis. Columns 8 and 9 of table 1 list the \(\mathrm{SNR}_{\mathrm{rec}}^{\mathrm{ON}}\) and the p-value\({}_{\mathrm{ON}}\) for each BBH GWT. Figure 7 reports the p-value for each investigated GWT, ranked from the lowest to the highest. These estimates are based on the BKG analyses performed over approximately one calendar month of data around each BBH GWT. We set an a priori threshold on the false discovery rate [56], FDR \(<0.1\), to select the p-values hinting at a rejection of the null hypothesis. These cases are then the object of deeper follow-up studies.
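A sketch of the empirical p-value of eq. (11), with a binomial error bar like those quoted in table 1, and of the FDR-based candidate selection (we assume the standard Benjamini-Hochberg procedure for [56]):

```python
import numpy as np

def on_source_p_value(snr_bkg, snr_on):
    """Empirical p-value of eq. (11), with its binomial uncertainty."""
    snr_bkg = np.asarray(snr_bkg)
    p = float(np.mean(snr_bkg >= snr_on))
    err = float(np.sqrt(p * (1.0 - p) / snr_bkg.size))
    return p, err

def benjamini_hochberg(p_values, fdr=0.10):
    """Indices of the p-values selected at false discovery rate `fdr`."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    thresholds = fdr * np.arange(1, p.size + 1) / p.size
    passed = np.nonzero(p[order] <= thresholds)[0]
    return order[: passed.max() + 1] if passed.size else np.array([], dtype=int)
```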
Figure 6: Plot of the detection probability as a function of the \(h_{\rm rss}^{\rm inj}\) of the echo signal for a selection of BBH GWTs from each observing run of the LVK. \(h_{\rm rss}^{\rm inj}\) units: \(10^{-23}/\sqrt{\rm Hz}\). The sensitivity improvement over time is clearly visible.
Figure 5: Detection probability (DP) at \({\rm FAP}=5\%\) as a function of \({\rm SNR}^{\rm PMW}_{\rm inj}\) for different morphologies of simulated post-ringdown signals: a high mass (\(80-80M_{\odot}\)) BBH coalescence (blue), trains of two elliptically polarised sine-Gaussian pulses as described in II.1 with different central frequencies \(f_{0}=80,140,200,400\) Hz (orange, green, red, and violet respectively) and a single pulse of white noise, WNB, of duration \(\sim 0.02\) s, central frequency 150 Hz and bandwidth 100 Hz (brown). These results refer to GW150914.
Two GW events, GW190701 and GW200224, show an interesting \(\mathrm{SNR}_{\mathrm{rec}}^{\mathrm{ON}}\) and their p-values pass the a priori FDR threshold. In both cases, the morphological information of the outliers reconstructed inside the PMW (see appendix B) points to a dominant contribution by known instrumental disturbances in the frequency range \((16,40)\,\mathrm{Hz}\)[57; 58]. These noise disturbances are known to often occur as a train of more pulses with a quasi-regular time separation. This feature is especially evident in our analysis of GW200224 (see appendix B.2) and can affect our p-values estimates, since it violates the assumption of uniformly random occurrence times and of independence of each noise pulse. Therefore, one can expect, at the very least, an underestimation of the uncertainties of our p-values.
We checked for systematic errors in the p-values of GW190701 and GW200224 by changing the off-source injection times of the BBH GWTs inside the BKG analysis. In particular, we repeated the BKG analysis using only 4096 s of data around the GWT time. The new _local_ p-value estimates are
\[\mathrm{GW190701:\quad p\text{-value}_{\mathrm{ON}}^{\mathrm{local}}=0.004\pm 0.002} \tag{12}\]
\[\mathrm{GW200224:\quad p\text{-value}_{\mathrm{ON}}^{\mathrm{local}}=0.007\pm 0.002}\,, \tag{13}\]
also reported in figure 11, in appendix B. In the case of GW200224, the discrepancy between the estimates points to large systematic effects, including a significant bias of the p-value, which weakens its initial statistical significance. As for GW190701, the local p-value estimate is also higher than the initial one, though it may still be compatible within the stated statistical uncertainties.
Further statistical checks and more morphological tests on GW190701 and GW200224 are reported in appendix B. Among these checks, the most important observation is that the reconstructed frequency spectrum of both candidates does not match any expectation from echo models [17]. We conclude that these two outliers are not plausible candidates for echo signals and are very likely instrumental disturbances.
For all the other GWTs, our p-value estimates lie well above our FDR attention threshold, and their distribution is well described by the empirical BGK model. Therefore, our work does not reject the null hypothesis, confirming what was previously reported by different search methods:
* the generic echo search of [23], which estimated p-values in the post-ringdown of the GWTs detected in observing runs O1 and O2 [24] and in O3b [12];
* the template-based searches [20; 22], which provided p-value estimates for O1 GWTs plus GW170104.
We discuss the comparison of performances with the cWB ES search in section V.
### Upper limits on \(h_{\mathrm{rss}}\) of echoes
The confidence belt construction procedure requires SIG analyses with extended statistics. Therefore, we prioritised the GWTs detected by the cWB all-sky burst search with \(\mathrm{SNR}_{\mathrm{net}}\geq 15\), adding a few more GWTs close to this threshold to sample more uniformly the total mass range of the detected BBH mergers. All confidence intervals result in upper limits on the \(h_{\mathrm{rss}}\) of the echo signals, \(h_{\mathrm{rss}}^{\mathrm{UL}}\) (see table 2). Typical upper limit values lie in the \(h_{\rm rss}\) range \(2\)–\(3\times 10^{-23}/\sqrt{\rm Hz}\) at 95% coverage.
The ratios between \(h_{\mathrm{rss}}^{\mathrm{UL}}\) and the \(h_{\mathrm{rss}}\) of the primary BBH GWT, \(h_{\mathrm{rss}}^{\mathrm{BBH}}\), are also reported in table 2. These ratios are our measured amplitude upper limits in relative terms, though their connection to the echo's \(A\) parameter (see section II) depends on the actual morphologies of the echo models and of the primary BBH GWT. In the approximation that the merger-ringdown and each echo pulse share similar morphologies (e.g. similar central frequency and number of cycles), the reported \(h_{\rm rss}\) ratios can be considered equivalent to upper limits on \(A\). They would be conservative upper limits in case more echo pulses occur within the PMW. It is clear from table 2 that the GWTs providing the more stringent upper limits on the relative echo amplitude are typically the louder ones, constraining the \(h_{\rm rss}\) ratios to \(\lesssim 1/8\) for \({\rm SNR}_{\rm net}\gtrsim 22\).
Figure 7: Ordered p-values of \(\mathrm{SNR}_{\mathrm{ON}}^{\mathrm{PMW}}\) for the null hypothesis as measured by BKG analyses, blue dots with statistical uncertainties. The **red dashed line** corresponds to expected values for the null hypothesis, or to a false discovery rate \(\mathrm{FDR}\!=\!50\%\). The **orange dashed line** corresponds to \(\mathrm{FDR}=\!10\%\) and the **orange filled area** highlights the region in which the \(\mathrm{FDR}<10\%\), used to select candidates.
These results can be used to set observational constraints on echo models, and allow projecting the expected sensitivity to echoes as the detectors improve.
## V Comparison with previous searches for echoes
Most of the published searches for echoes have analyzed O1 data [19; 20; 21; 22; 24], focusing in particular on the GW150914 event [21; 22]. Here we provide some comments on the performances of the cWB ES search with respect to previously reported methods, being aware, however, that a full comparison of performances would require additional coordinated simulations which are computationally costly and beyond the scope of this paper. We focus on a previous model-independent search using simulated data [23], on a template-based search on GW150914 data [20], and on a very recent model-dependent analysis by [59].
**Model-independent search method by Tsang et al.**[23]. This general search method for echoes has been tested on two simulated data sets, assuming the two LIGO detectors at design sensitivity [60] and two noise models: pure stationary Gaussian and glitch-contaminated stationary Gaussian. The authors state that echo signals are confidently detectable above SNR \(=12\). This is also true for our cWB ES search, which at \({\rm SNR}^{\rm PMW}=12\) preserves a 100% detection probability at a false alarm probability of 5%, on real data. Tsang et al. also show that for echoes with \({\rm SNR}=8\), their false alarm probability is \(\sim 6\%\) for the Gaussian noise model, while for the glitch-contaminated noise it increases to \(\sim 30\%\). At the same SNR value and over real detector data, the cWB ES search achieves a detection probability \(\geq 95\%\), again at a false alarm probability of 5%. This partial comparison supports the conclusion that our cWB ES search is more competitive in the low SNR range. Moreover, our off-source simulations clearly show that the data are not compliant with a stationary Gaussian noise model in the low SNR range of interest in the proximity of most BBH GWTs.
**Model-dependent search method by Westerweck et al.**[20]. This template-based search has been deployed on real data analyzing four BBH GWTs (including GW150914) and does not find violations of the null hypothesis. It estimates the p-values of the results by using different noise instantiations close to the GWT times, a method similar to our BKG analysis. The sensitivity of this search is instead assessed by injecting echo waveforms on simulated Gaussian noise which preserves the actual power spectral density of the LIGO detectors at the GWT detections. Figures 2 and 5 in [20] show that the peak amplitudes of echoes detach from the noise fluctuations starting from \(h_{\rm p}\simeq 2\cdot 10^{-22}\). In actual noise, our search achieves 50% detection probability with a false alarm probability of 5% for a peak amplitude of the assumed echo waveform \(h_{\rm p}\sim 2.3\cdot 10^{-22}\) for GW150914, as estimated from our more general result in terms of \(h_{\rm rss}^{50\%}\) (see table 1). Therefore, we conclude that the sensitivity of the cWB ES search is at least competitive with that of this template-based search on this specific echo model. We remark that the implementation of the model-dependent search uses a template bank and requires subtraction of the detected GWT trigger from the data prior to matched filtering; such steps add complexity with respect to the cWB ES search.
**Model-dependent analysis by Abedi** [59]. Another systematic search for a specific echo model has been very recently reported by Abedi [59]. This search analyses 65 GWTs from the LVK catalog of compact binary coalescences. The method assumes Gaussian noise close to each GW event. The main result reported is an upper limit on the echo amplitude \(A\), found to be \(A\leq 0.42\) at a 90% credible interval, under the assumption that \(A\) is equal for all analyzed events. In addition, the Bayes factor reported for GW190521 stands out as an outlier, suggesting a preference for post-merger echoes over the null hypothesis. In our study, GW190521 shows an on-source p-value equal to \(0.569\pm 0.006\), suggesting that the data in the PMW are compatible with noise. Moreover, our relative upper limit on the \(h_{\rm rss}\) amplitude ratio at 95% coverage is 0.23 for GW190521 and is as low as 0.13 for the loudest GWTs.
\begin{table}
\begin{tabular}{l|c|c|c} \multicolumn{4}{c}{Upper limits on echoes amplitude} \\ \hline
Run - GW name & \({\rm SNR}_{\rm net}\) & \(h_{\rm rss}^{\rm UL}\cdot\frac{10^{-23}}{\sqrt{\rm Hz}}\) & \(\frac{h_{\rm rss}^{\rm UL}}{h_{\rm rss}^{\rm BBH}}\) \\ \hline
O1 - GW150914 & 24.4 & 3.4 & 0.13 \\
O2 - GW170608 & 14.9 & 3.0 & 0.39 \\
O2 - GW170814 & 15.9 & 2.5 & 0.17 \\
O3a - GW190408\_181802 & 14.7 & 2.5 & 0.27 \\
O3a - GW190412 & 18.9 & 2.2 & 0.20 \\
O3a - GW190521 & 14.4 & 2.0 & 0.23 \\
O3a - GW190521\_074359 & 24.4 & 2.3 & 0.16 \\
O3a - GW190814 & 22.2 & 2.0 & 0.13 \\
O3a - GW190828\_063405 & 16.0 & 2.1 & 0.14 \\
O3b - GW191109\_010717 & 17.3 & 2.0 & 0.23 \\
O3b - GW191204\_171526 & 17.5 & 2.0 & 0.27 \\
O3b - GW200311\_115853 & 17.8 & 2.4 & 0.23 \\ \hline
O3b - GW200224\_222234\({}^{\dagger}\) & 20.0 & \(\sim 3.2\) & 0.15 \\
\end{tabular}
\end{table}
Table 2: List of the BBH GWTs selected for setting confidence intervals on the echo’s amplitude. They are a subset of the loudest ones listed in table 1. The columns report: GWT name; network SNR, \({\rm SNR}_{\rm net}\), of the GWT; upper limit in terms of \(h_{\rm rss}\) of possible echo candidates inside the PMW, \(h_{\rm rss}^{\rm UL}\); relative upper limit defined as the ratio between the \(h_{\rm rss}^{\rm UL}\) and the \(h_{\rm rss}\) of the related primary GWT, \(h_{\rm rss}^{\rm BBH}\). \({\dagger}\): this GWT event is affected by a loud instrumental noise in the PMW (see appendix B.2).
## VI Conclusions
This paper describes a search for secondary gravitational wave transients of generic morphology which may occur shortly after the ringdown phase of a primary signal from a Compact Binary Coalescence. The analysis method is developed on top of the coherent WaveBurst pipeline: it uses the primary GWT as a trigger and follows up the coherent response of the interferometric gravitational wave detectors on a selectable time window, defined with respect to the merger time.
The scientific motivation for this work is the search for gravitational wave echoes after binary black hole mergers. Such echoes are expected if the final remnant object is not a standard black hole of the general relativity theory, either because the event horizon is not fully absorbing or because the remnant is an exotic compact object larger than the would-be event horizon. The detection performances of the current search are described in terms of the \(h_{\rm rss}\) strain amplitude and are rather independent of the signal waveform and spectrum within a wide signal class. Therefore, as long as any echo pulse occurs inside the selected time window, from 0.2 to 1.2 s after the merger, the reported results can be interpreted in terms of any echo model.
The analysis of the loudest 33 BBH mergers detected during the O1, O2 and O3 observing runs by the LIGO, Virgo and KAGRA collaborations is consistent with null results (see table 1), so no evidence of echo signals is found. This search provides separate results for single BBH mergers. The off-source characterization of the detection efficiency vs false alarm probability and the estimation of the p-values of candidates are performed using thousands of real detector noise instantiations. Therefore, the results do not rely on an a priori noise model and point out that the actual noise statistics are far from Gaussian in most cases, even at low SNR. The search also provides a morphological reconstruction of candidates and, for the first time, confidence intervals on the \(h_{\rm rss}\) amplitude of gravitational wave echoes. The latter turn out to be upper limits, typically ranging in the interval \(2\)–\(3\times 10^{-23}/\sqrt{\rm Hz}\) in terms of \(h_{\rm rss}\) (see table 2).
The two loudest candidates found occur after GW190701 and GW200224. These candidates are also the only ones featuring low enough p-values to require further follow-up investigations. Their morphological reconstruction clearly points to the dominating presence of known pulsating instrumental noise disturbances at low frequencies, occurring in both LIGO detectors, and they are by far inconsistent with any published model of echoes. The pseudo-regular cadence of these disturbances is the likely cause of a systematic error in our initial p-value estimates.
To our knowledge, this search delivers the highest and most general sensitivity to the possible presence of gravitational wave echoes. In preparation for the fourth observing run of the LVK Collaboration, we plan to extend our search also to BNS and NS-BH GWT signals.
Remarkably, this search can be easily modified to study other science cases of interest in current GW astronomy, because of its unmodelled nature and its adaptability. Possible examples of application fields are the investigation of memory effects [61; 62; 63; 64; 65], precursors to highly eccentric BBHs [66; 67; 68; 69; 70], or micro-lensing effects [71; 72]. These examples share the predicted presence of a weak GW feature close to the coalescence time of the primary CBC GWT signal.
###### Acknowledgements.
The authors would like to thank Andrea Maselli, Francesco Salemi, and Patrick Sutton for their constructive inputs. We also acknowledge useful discussions with Sophie Bini, Alessandro Martini, and Andrea Virtuoso. This research used data, software and web tools from the Gravitational Wave Open Science Center, a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. Andrea Miani thankfully acknowledges the grant provided by the EGO Consortium EGO-DIR-56-2021 and the University of Trento. Shubhanshu Tiwari is supported by the Swiss National Science Foundation (SNSF) Ambizione Grant Number: PZ00P2-202204.
## Appendix A cWB all-sky burst search vs ES search
Table 3 lists the cWB [36; 37] parameters (first column) and the threshold values that can be tuned in a cWB search, comparing the configuration of the cWB all-sky burst search [38; 39; 40] (second column) to that of the cWB ES search (third column). The different tuning of the \(\eta_{\mathrm{c}}\), T\({}_{\mathrm{gap}}\), and SUBRHO thresholds is motivated by the need to grasp lower SNR triggers in the search for echoes, while keeping the false alarms under control. Additional configuration parameters are defined in the cWB ES search: the time width of the PMW, \(\Delta t^{\mathrm{PMW}}\); a blind time after the coalescence time, \(t_{\mathrm{blind}}\); the fraction of correlated energy in the PMW, \(c_{\mathrm{c}}^{\mathrm{PMW}}\).
## Appendix B Followup of loudest candidates
From the analysis of the p-values of the BBH GWTs (see section IV.3), two events are selected for deeper investigations since they are consistent with a FDR \(\leq 10\%\): GW190701 and GW200224. Estimating the p-values on a different, more local set of noise instantiations results in higher p-values, which points to some systematic bias in our estimating procedure. Nevertheless, these two local p-values are still the only ones \(\leq 1\%\), further motivating the following deeper investigations on GW190701 and GW200224.
The morphological study of the PMW on-source event allows us to gather information about the reconstructed SNR of the energy excess, its arrival time, mean frequency, and the reconstructed waveform. Additional tests have been deployed as well, such as performing a single detector analysis of the on-source morphology, with the subtraction of the primary BBH waveform. The information from the morphological studies is then compared with the theoretical expectations of echo models (see section II) and with the known noise disturbances.
### GW190701
Figure 8a shows the reconstructed strain signal waveform of GW190701 in the L1 detector. Here, the BBH signal is the smallest bump on the left, while the two bumps on its right are the post-merger energy excesses. Among them, the most interesting one is the second (at time \(\sim 168.86\,\mathrm{s}\)), since it is the one falling inside our PMW. The post-merger candidate shows a higher strain and a longer time duration, around \(\geq 100\,\mathrm{ms}\), with respect to the BBH event, and no echo models are consistent, to our knowledge, with these features.
Figure 8: On top, Fig. 8a, is plotted the strain amplitude waveform of GW190701 and its post-merger as a function of time for the L1 detector. Its merger time is around \(\sim 168.56\,\mathrm{s}\). On the bottom, Fig. 8b reports the strain amplitude of GW200224 and its post-merger as a function of time for the L1 detector. Its merger time is around \(\sim 182.40\,\mathrm{s}\). In both scenarios, the glitches providing the energy excesses found by the ES search are clearly visible.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline \multicolumn{3}{c}{configuration parameters} \\ \hline Parameters & All-Sky O3 search & ES search \\ \hline bpp & 0.001 & 0.001 \\ subnet & 0.5 & 0.5 \\ \(c_{\mathrm{c}}\) & 0.5 & 0.5 \\ \(\eta_{\mathrm{c}}\) & 5.0 & 3.5 \\ \(A_{\mathrm{core}}\) & 1.7 & 1.7 \\ \(T_{\mathrm{gap}}\) & 0.2 s & 2.0 s \\ \(F_{\mathrm{gap}}\) & 128.0 Hz & 128.0 Hz \\ SUBRHO & 5.5 & 3.5 \\ SUBNET & 0.1 & 0.1 \\ PMW & not used & \(\Delta t^{\mathrm{PMW}}=1\,\mathrm{s}\) \\ \(t_{\mathrm{blind}}\) & not used & \(t_{\mathrm{blind}}\geq 40\,\mathrm{ms}\) \\ \(c_{\mathrm{c}}^{\mathrm{PMW}}\) & not used & \(c_{\mathrm{c}}^{\mathrm{PMW}}\geq 0.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the cWB main production thresholds between all-sky burst search [40] (second column) and ES search (third column).
The entire on-source event (BBH + PM signals) has an overall SNR content around SNR \(\sim 12.9\), with \(\rho\sim 4.8\) and \(c_{\rm c}\sim 0.57\), an unusually low value for an event with such an SNR. Figures 9a and 9b, the two TF maps of the event for each detector, show that in the L1 detector, after the BBH event, there are three post-merger energy excesses, at times \(\sim 168.64\) s, \(\sim 168.86\) s, and \(169.97\) s, while in the post-merger of H1 there is only one clear energy excess at \(\sim 168.84\) s. This energy distribution asymmetry explains the low value of the correlation coefficient \(c_{\rm c}\), suggesting that a noise realisation is the preferred explanation for such an observation, since it does not match up with echo signal predictions. Furthermore, the bottom row of figure 9 shows the network on-source likelihood TF map (fig. 9d). At time \(\sim 168.55\) s there is the chirping cluster of pixels representing the GW190701 event, going from frequencies around \(\sim 40\) Hz up to \(\sim 150\) Hz, while in the post-merger, at time \(\sim 168.84\) s, the energy excess is clearly visible. It has a central frequency around \(f_{0}\sim[30-40]\) Hz, which is not a frequency range expected for echoes: they should possess frequencies similar to or higher than the BBH merger one [17].
Finally, figure 9e shows the on-source likelihood TF map after the BBH subtraction for the single L1 detector configuration. A repetition of energy excesses both before and after the GW190701 event (\(\sim 168.55\) s) is visible, with similar frequencies and morphologies, hinting at possible noisy features polluting the detector data.
### GW200224
To study the post-merger on-source energy excess detected in GW200224 we deploy the same strategy as for GW190701. Figure 8b shows the on-source strain waveform of the entire event, with the BBH being the small signal on the left. The time duration of the PM signal, \(\sim 400\) ms, as well as its time distance of \(\sim 1\) s from the merger time of the BBH, do not match the theoretical predictions of echo signals. Following eq.(3), \(t_{\rm echo}\) is predicted to be \(\sim\) ms after \(t_{\rm coa}\). Moreover, the TF map of the on-source event, figure 10a, shows that the mean frequency of this PM excess of energy is around \(40\) Hz, well below the expected frequency values for echoes.
Figures 10a and 10b, the TF maps of the event in L1 and H1 respectively, show that the PM signal is present only in the L1 detector, while in H1 such a high energy excess is not reconstructed. Since the two LIGO detectors are nearly aligned and are sensitive to the same GW polarisation, such an energy imbalance between the detectors is suspicious for real astrophysical events.
We proceed by subtracting from the GW200224 on-source event the best PE model describing that same BBH event; then, on the subtracted data, we run the single detector ES search. The result is displayed in figure 10e. Here, no undetected energy excess other than the investigated one appears, suggesting that we are not in a scenario similar to the single detector analysis of GW190701. The energy outlier has an SNR \(\sim 10.4\), while the overall SNR of the BBH signal plus the post-merger excess of energy is equal to \(\sim 16.8\) (in single detector mode).
Figure 9: In this figure: plots 9a and 9b show the time-frequency map of the GW190701 event in the L1 and H1 detectors respectively. Plot 9c shows the event in the L1 detector once the best template of GW190701 among its posterior samples is subtracted from the data. Plot 9d shows the reconstructed maximum likelihood of the event for the LH network, while plot 9e displays the same quantity for a single detector search (L1) after the best GW190701 template is subtracted from the data.
### cWB ES search with 32 Hz mitigation
The PMW on-source morphologies hint at possible data pollution by a glitch family identified in the frequency range \((16,40)\,\rm Hz\) [57; 58]. Therefore, we repeated the ES search for these two GWTs by including a specific single detector data filter [73; 74] that estimates the power oscillations within the frequency range \((16,40)\,\rm Hz\) and attenuates them. We label this analysis the 32 Hz-ES search, to differentiate it from the standard ES search. The on-source null-hypothesis p-values measured when the noise around 32 Hz is mitigated are:
\[\text{GW190701:}\quad\text{p-value}_{\text{ON}}^{32\,\text{Hz}}=0.024\pm 0.002 \tag{1}\]
\[\text{GW200224:}\quad\text{p-value}_{\text{ON}}^{32\,\text{Hz}}=0.003\pm 0.001 \tag{2}\]
and they are plotted in figure 11 as the violet dots. This noise mitigation rules out the post-merger event candidate of GW190701, while for the PM of GW200224 the p-value is still within the FDR \(\leq 10\%\).
This study, together with the morphological investigation of the PMW energy excesses of GW190701 and GW200224 (see appendix B.1 and B.2), shows that it is reasonable to assume they are non-stationary noise features polluting the data, especially affecting the L detector. These noise transients possess a central frequency around \((30,40)\,\rm Hz\) and have a time duration (\(\sim\) hundreds of ms) around one order of magnitude greater than the one expected for echo signals (\(\sim\) tens of ms).
Figure 10: In this figure: plots 10a and 10b show the time-frequency map of the GW200224 event in the L1 and H1 detectors respectively. Plot 10c shows the event in the L1 detector once the best template of GW200224 among its posterior samples is subtracted from the data. Plot 10d shows the reconstructed maximum likelihood of the event for the LH network, while plot 10e displays the same quantity for a single detector search (L1) after the best GW200224 template is subtracted from the data.